First order necessary conditions of optimality for the two dimensional tidal dynamics system
Manil T. Mohan
Department of Mathematics, Indian Institute of Technology Roorkee-IIT Roorkee, Haridwar Highway, Roorkee, Uttarakhand 247667, India
Received September 2019 Revised August 2020 Published November 2020
Fund Project: M. T. Mohan is supported by INSPIRE Faculty Award-IFA17-MA110
In this work, we consider the two dimensional tidal dynamics equations in a bounded domain and address some optimal control problems, such as minimization of the total energy and minimization of the dissipation of energy of the flow. We also examine another interesting control problem, similar to the data assimilation problems in meteorology, of obtaining the unknown initial data using optimal control techniques, when the system under consideration is the tidal dynamics system. For these cases, different distributed optimal control problems are formulated as the minimization of suitable cost functionals subject to the controlled two dimensional tidal dynamics system. The existence of an optimal control as well as the first order necessary conditions of optimality for such systems are established, and the optimal control is characterized via the adjoint variable. We also establish the uniqueness of the optimal control in a small time interval.
Keywords: Tidal dynamics system, first order necessary conditions of optimality, Pontryagin's maximum principle, optimal control.
Mathematics Subject Classification: Primary: 49J20; Secondary: 35Q35, 49K20.
Citation: Manil T. Mohan. First order necessary conditions of optimality for the two dimensional tidal dynamics system. Mathematical Control & Related Fields, doi: 10.3934/mcrf.2020045
MSJ-IHÉS Joint Workshop on "Noncommutativity"; Hayashibara Forum on "Singularities"
November 15–18 and 20–23, 2006 | Institut des Hautes Études Scientifiques, Bures-sur-Yvette, France
Noncommutativity and Singularities: Proceedings of French–Japanese symposia held at IHÉS in 2006
Editor(s) Jean-Pierre Bourguignon, Motoko Kotani, Yoshiaki Maeda, Nobuyuki Tose
Adv. Stud. Pure Math., 55: 363pp. (2009). DOI: 10.2969/aspm/05510000
This volume consists of selected papers on recent trends and results in the study of various groups of diffeomorphisms, including mapping class groups, from the points of view of algebraic and differential topology, as well as dynamical ones involving foliations and symplectic or contact diffeomorphisms. Most of the authors were invited speakers or participants of the International Symposium on Groups of Diffeomorphisms 2006, which was held at the University of Tokyo (Komaba) in September 2006.
This volume is dedicated to Professor Shigeyuki Morita on the occasion of his 60th birthday. We believe that the scope of this volume well reflects Shigeyuki Morita's mathematical interests. We hope this volume will inspire not only specialists in these fields but also a wider audience of mathematicians.
Advanced Studies in Pure Mathematics, Volume 55
Rights: Copyright © 2009 Mathematical Society of Japan
First available in Project Euclid: 28 November 2018
Digital Object Identifier: 10.2969/aspm/05510000
Primary: 00B15
Mathematical Society of Japan
Volume 55 title pages
Advanced Studies in Pure Mathematics Vol. 55 (2009).
Jean-Pierre Bourguignon, Motoko Kotani, Yoshiaki Maeda, Nobuyuki Tose
MSJ-IHÉS Joint Workshop on "Noncommutativity", 15–18 November 2006 – Program
Hayashibara Forum on "Singularities", 20–23 November 2006 – Program
From Pitman's theorem to crystals
Philippe Biane
Advanced Studies in Pure Mathematics Vol. 55, 1-13 (2009). https://doi.org/10.2969/aspm/05510001
We describe an extension of Pitman's theorem on Brownian motion and the three dimensional Bessel process to several dimensions. We show how this extension is suggested by considering a random walk on a noncommutative space, and is connected with crystals and Littelmann paths.
Singularities and self-similarity in gravitational collapse
Tomohiro Harada
Advanced Studies in Pure Mathematics Vol. 55, 15-30 (2009). https://doi.org/10.2969/aspm/05510015
Einstein's field equations in general relativity admit a variety of solutions with spacetime singularities. Numerical relativity has recently revealed the properties of somewhat generic spacetime singularities. It has been found that in a variety of systems self-similar solutions can describe asymptotic or intermediate behaviour of more general solutions in an approach to singularities. The typical example is the convergence to an attractor self-similar solution in gravitational collapse. This is closely related to the cosmic censorship violation in the spherically symmetric collapse of a perfect fluid. The self-similar solution also plays an important role in critical phenomena in gravitational collapse. The critical phenomena are understood as the intermediate behaviour around a critical self-similar solution. We see that the convergence and critical phenomena are understood in a unified manner in terms of attractors of codimension zero and one, respectively, in renormalisation group flow.
Horospherical geometry in the hyperbolic space
Shyuichi Izumiya
This is a survey article on recent results on the "horospherical geometry" in the hyperbolic space. Detailed arguments for the results have appeared or will appear in several different articles.
Instanton counting and the chiral ring relations in supersymmetric gauge theories
Hiroaki Kanno
We compute topological one-point functions of the chiral operator $\mathrm{Tr}\ \varphi^k$ in the maximally confining phase of $U(N)$ supersymmetric gauge theory. These chiral one-point functions are of particular interest from the viewpoint of the gauge/string theory correspondence, since they are related to the equivariant Gromov–Witten theory of $\mathbf{P}^1$. By considering the power sums of Jucys–Murphy elements in the class algebra of the symmetric group, we derive a combinatorial identity that leads to relations among the chiral one-point functions. Using the operator formalism of free fermions, we also compute the vacuum expectation value of the loop operator $\langle \mathrm{Tr}\ e^{it\varphi}\rangle$, which gives the generating function of the one-point functions.
Superconformal field theory and operator algebras
Yasuyuki Kawahigashi
We present an operator algebraic approach to superconformal field theory and a classification result in this framework. This is based on joint work with S. Carpi and R. Longo.
Iterating the hessian: a dynamical system on the moduli space of elliptic curves and dessins d'enfants
Patrick Popescu-Pampu
Each elliptic curve can be embedded uniquely in the projective plane, up to projective equivalence. The hessian curve of the embedding is generically a new elliptic curve, whose isomorphism type depends only on that of the initial elliptic curve. In this way one obtains a rational map from the moduli space of elliptic curves to itself, which we call the hessian dynamical system. We compute it in terms of the $j$-invariant of elliptic curves. We deduce that, seen as a map from a projective line to itself, it has 3 critical values, which correspond to the point at infinity of the moduli space and to the two elliptic curves with special symmetries. Moreover, it sends the set of critical values into itself, which shows that all its iterates have the same set of critical values. One thus obtains a sequence of dessins d'enfants. We describe an algorithm allowing one to construct this sequence.
Quantum Birkhoff normal forms and semiclassical analysis
San Vũ Ngọc
Advanced Studies in Pure Mathematics Vol. 55, 99-116 (2009). https://doi.org/10.2969/aspm/05510099
The goal of this text is to motivate an effective version of the quantum Birkhoff normal form, which gives precise spectral asymptotics, even in the presence of resonances, in the semiclassical limit. Microlocal analysis, via pseudo-differential operators and Toeplitz operators, is used in order to achieve this.
On geometric analogues of Iwasawa main conjecture for a hyperbolic threefold
Ken-ichi Sugiyama
Advanced Studies in Pure Mathematics Vol. 55, 117-135 (2009). https://doi.org/10.2969/aspm/05510117
We will discuss a relation between a special value of the Ruelle–Selberg L-function of a unitary local system on a hyperbolic threefold of finite volume and the Alexander invariant. The philosophy of our results is based on the Iwasawa Main Conjecture in number theory.
Partial regularity and its application to the blow-up asymptotics of parabolic systems modelling chemotaxis with porous medium diffusion
Yoshie Sugiyama
Upper curvature bounds and singularities
Takao Yamaguchi
The purpose of this note is to describe recent developments in the study of the local structure of singular spaces with curvature bounded above, due to Lytchak and Nagano [16], [17] for the characterization of topological manifolds, and due to Kleiner, Nagano, Shioya and Yamaguchi [15] for the general characterization of 2-dimensional spaces.
High-order processing of singular data
Yosef Yomdin, Gal Zahavi
This paper provides a survey of some recent results (mostly in [68, 71, 72, 73, 74]) concerning high-order representation and processing of singular data. We present these results from a certain general point of view which we call a "Model-Net" approach: this is a method of representation and processing of various types of mathematical data, based on the explicit recovery of the hierarchy of data singularities. As an example we use a description of singularities and normal forms of level surfaces of "product functions" recently obtained in [68, 34], and on this basis describe in detail the structure of the Model-net representation of such surfaces.
Then we discuss a "Taylor-net" representation of smooth functions, consisting of a net of Taylor polynomials of a prescribed degree $k$ (or $k$-jets) of the function stored at a certain grid in its domain. Following [72, 74] we present results on the stability of Hermite fitting, which is the main tool in the acquisition of Taylor-net data.
Next we present (following [71, 73, 74]) a method for the numerical solution of PDEs based on the Taylor-net representation of the unknown function. We extend this method also to the case of the Burgers equation near a formed shock wave.
Finally, we briefly discuss (following [28, 56]) the problem of non-linear acquisition of Model-nets from measurements, as well as some additional implementations of the Model-net approach.
An analogue of Serre fibrations for $C^*$-algebra bundles
Siegfried Echterhoff, Ryszard Nest, Herve Oyono-Oyono
We study an analogue of Serre fibrations in the setting of $C^*$-algebra bundles. We derive in this framework a Leray–Serre type spectral sequence. We investigate a class of examples which generalise, on one hand, principal bundles with an $n$-torus as structural group and, on the other hand, non-commutative tori.
Spontaneous partial breaking of $\mathcal{N} = 2$ supersymmetry and the $U(N)$ gauge model
Kazuhito Fujiwara, Hiroshi Itoyama, Makoto Sakaguchi
We briefly review properties of the $\mathcal{N} = 2$ $U(N)$ gauge model composed of $\mathcal{N} = 1$ superfields. This model can be regarded as a low-energy effective action of $\mathcal{N} = 2$ Yang–Mills theory equipped with electric and magnetic Fayet–Iliopoulos terms. In this model, the $\mathcal{N} = 2$ supersymmetry is spontaneously broken to $\mathcal{N}= 1$, and the Nambu–Goldstone fermion comes from the overall $U(1)$ part of $U(N)$ gauge group. We also give $\mathcal{N} = 1$ supermultiplets appearing in the vacua. In addition, we give a manifestly $\mathcal{N} = 2$ symmetric formulation of the model by employing the unconstrained $\mathcal{N} = 2$ superfields in harmonic superspace. Finally, we study a decoupling limit of the Nambu–Goldstone fermion and identify the origin of the fermionic shift symmetry with the second, spontaneously broken supersymmetry.
An analogue of the space of conformal blocks in $(4k + 2)$-dimensions
Kiyonori Gomi
Based on projective representations of smooth Deligne cohomology groups, we introduce an analogue of the space of conformal blocks to compact oriented $(4k + 2)$-dimensional Riemannian manifolds with boundary. For the standard $(4k + 2)$-dimensional disk, we compute the space concretely to prove that its dimension is finite.
The quantum Knizhnik–Zamolodchikov equation and non-symmetric Macdonald polynomials
Masahiro Kasatani, Yoshihiro Takeyama
We construct special solutions of the quantum Knizhnik–Zamolodchikov equation on the tensor product of the vector representation of the quantum algebra of type $A_{N-1}$. They are constructed from non-symmetric Macdonald polynomials through the action of the affine Hecke algebra.
Local Gromov–Witten invariants of cubic surfaces
Yukiko Konishi
We compute local Gromov–Witten invariants of cubic surfaces via nef toric degeneration.
Flop invariance of the topological vertex
Satoshi Minabe
A table of $\theta$-curves and handcuff graphs with up to seven crossings
Hiromasa Moriuchi
We enumerate all the $\theta$-curves and handcuff graphs with up to seven crossings by using algebraic tangles and prime basic $\theta$-polyhedra. Here, a $\theta$-polyhedron is a connected graph embedded in a 2-sphere, whose two vertices are 3-valent, and the rest are 4-valent. There exist twenty-four prime basic $\theta$-polyhedra with up to seven 4-valent vertices. We can obtain a $\theta$-curve and handcuff graph diagram from a prime basic $\theta$-polyhedron by substituting algebraic tangles for their 4-valent vertices.
A quantization of the sixth Painlevé equation
Hajime Nagoya
The sixth Painlevé equation has the affine Weyl group symmetry of type $D_{4}^{(1)}$ as a group of Bäcklund transformations and is written as a Hamiltonian system. We propose a quantization of the sixth Painlevé equation with the extended affine Weyl group symmetry of type $D_{4}^{(1)}$.
Lagrangian fibrations and theta functions
Yuichi Nohara
It is known that holomorphic sections of an ample line bundle $L$ (and its tensor power $L^k$) on an Abelian variety $A$ are given by theta functions. Moreover, a natural basis of the space of holomorphic sections is related to a certain Lagrangian fibration of $A$. We study projective embeddings of $A$ given by the basis for $L^k$, and show that moment maps of toric actions on the ambient projective spaces, restricted to $A$, approximate the Lagrangian fibration of $A$ for large $k$. The case of Kummer variety is also discussed.
Generalized Q-functions and UC hierarchy of B-type
Yuji Ogawa
We define a generalization of Schur's Q-function for an arbitrary pair of strict partitions, which is called the generalized Q-function. We prove that all the generalized Q-functions solve a series of non-linear differential equations called the UC hierarchy of B-type (BUC hierarchy). We furthermore investigate the BUC hierarchy from the viewpoint of representation theory. We consider the Fock representation of the algebra of neutral fermions and establish the boson-fermion correspondence. Using this, we discuss the relationship between the BUC hierarchy and a certain infinite dimensional Lie algebra.
The space of triangle buildings
Mikaël Pichot
I report on recent work of Sylvain Barré and myself on the space of triangle buildings.
From a set-theoretic point of view the space of triangle buildings is the family of all triangle buildings (also called Bruhat–Tits buildings of type $\tilde{A}_2$) considered up to isomorphism. This is a continuum. We shall see that it provides new tools and a general framework for studying triangle buildings, which connects notably to foliation and lamination theory, quasi-periodicity of metric spaces, and noncommutative geometry.
This text is a general presentation of the subject and explains some of these connections. Several open problems are mentioned. The last sections set up the basis for an approach via $K$-theory.
Ends of metric measure spaces with nonnegative Ricci curvature
Masayoshi Watanabe
We prove that metric measure spaces with nonnegative Ricci curvature have at most two ends.
On ideal boundaries of some Coxeter groups
Saeko Yamagata
If a group acts geometrically (i.e., properly discontinuously, cocompactly and isometrically) on two geodesic spaces $X$ and $X'$, then an automorphism of the group induces a quasi-isometry $X \to X'$. We find a geometric action of a Coxeter group $W$ on a CAT(0) space $X$ and an automorphism $\phi$ of $W$ such that the quasi-isometry $X \to X$ arising from $\phi$ cannot induce a homeomorphism on the boundary of $X$, as it would in the case of Gromov-hyperbolic spaces.
On manifolds which are locally modeled on the standard representation of a torus
Takahiko Yoshida
This is an expository article on manifolds which are locally modeled on the standard representation of a torus and their classifications.
Surface Chemistry | Question Bank for Class 12 Chemistry
Get Surface Chemistry important questions for Board exams. Download or view the important question bank for Class 12 Chemistry below. These important questions will play a significant role in clearing your concepts of Chemistry. The question bank is designed keeping NCERT in mind, and the questions are updated with respect to the upcoming Board exams.
Q. Why are substances like platinum and palladium often used for carrying out electrolysis of aqueous solutions?
Ans. Platinum and palladium form inert electrodes, i.e., they are not attacked by the ions of the electrolyte or the products of electrolysis. Hence, they are used as electrodes for carrying out the electrolysis.
Q. Why are powdered substances more effective adsorbents than their crystalline forms?
Ans. Powdered substances have greater surface area as compared to their crystalline forms. Greater the surface area, greater is the adsorption.
Q. Why is it necessary to remove CO when ammonia is obtained by Haber's process ?
Ans. CO acts as a poison for the catalyst used in the manufacture of ammonia by Haber's process. Hence, it is necessary to remove it.
Q. Why is the ester hydrolysis slow in the beginning and becomes faster after some time ?
Ans. The ester hydrolysis takes place as follows: $\mathrm{CH_3COOC_2H_5 + H_2O \rightarrow CH_3COOH + C_2H_5OH}$ (ester + water gives acid + alcohol). The acid produced in the reaction acts as a catalyst (autocatalyst) for the reaction. Hence, the reaction becomes faster after some time.
Q. What is the role of desorption in the process of catalysis?
Ans. Desorption makes the surface of the solid catalyst free for fresh adsorption of the reactants on the surface.
Q. Give reason why a finely divided substance is more effective as an adsorbent ?
Ans. Adsorption is a surface phenomenon. Since finely divided substance has large surface area, hence, adsorption occurs to a greater extent.
Q. What is demulsification? Name two demulsifiers.
Ans. The decomposition of an emulsion into constituent liquids is called demulsification. Demulsification can be done by boiling or freezing.
Q. What do you understand by activity and selectivity of catalysts.
Ans. Activity of a catalyst refers to the ability of catalyst to increase the rate of chemical reaction. Selectivity of a catalyst refers to its ability to direct the reaction to give a specific product.
Q. Why does physisorption decrease with increase of temperature ?
Ans. Physisorption is an exothermic process. According to Le Chatelier's principle, if we increase the temperature, the equilibrium will shift in the backward direction, i.e., the gas is released from the adsorbed surface.
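For completeness, the same conclusion follows from a short free-energy argument (standard textbook reasoning added here, not part of the original answer). Since adsorption is exothermic and reduces the randomness of the gas molecules, $$\Delta G = \Delta H - T\Delta S, \qquad \Delta H < 0, \quad \Delta S < 0,$$ so adsorption remains spontaneous ($\Delta G < 0$) only as long as $|\Delta H| > T|\Delta S|$; raising the temperature eventually makes $\Delta G$ positive, and desorption is favoured.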
Q. What modification can you suggest in the Hardy Schulze law ?
Ans. According to Hardy Schulze law, the coagulating ion has charge opposite to that on the colloidal particles. Hence, the charge on colloidal particles is neutralized and coagulation occurs. The law can be modified to include the following : When oppositely charged sols are mixed in proper proportions to neutralize the charges of each other, coagulation of both the sols occurs.
Q. Why is it essential to wash the precipitate with water before estimating it quantitatively ?
Ans. Some amount of the electrolytes mixed to form the precipitate remains adsorbed on the surface of the particles of the precipitate. Hence, it is essential to wash the precipitate with water to remove the sticking electrolytes (or any other impurities) before estimating it quantitatively.
Q. Distinguish between the meaning of the terms adsorption and absorption. Give one example of each.
Ans. In adsorption, the substance is concentrated only at the surface and does not penetrate through the surface into the bulk of the adsorbent, while in absorption, the substance is uniformly distributed throughout the bulk of the solid. A distinction can be made between absorption and adsorption by taking the example of water vapour: water vapour is absorbed by anhydrous calcium chloride but adsorbed by silica gel.
Q. What role does adsorption play in heterogeneous catalysis?
Ans. In heterogeneous catalysis, the reactants are generally gases whereas the catalyst is a solid. The reactant molecules are adsorbed on the surface of the solid catalyst by physical adsorption or chemisorption. As a result, the concentration of the reactant molecules on the surface increases and hence the rate of reaction increases. Alternatively, one of the reactant molecules undergoes fragmentation on the surface of the solid catalyst, producing active species which react faster. The product molecules in either case have no affinity for the solid catalyst and are desorbed, making the surface free for fresh adsorption. This theory is called the adsorption theory.
Q. Give two chemical methods for the preparation of colloids.
Ans. Colloidal solutions can be prepared by chemical reactions involving double decomposition, oxidation, reduction and hydrolysis. (i) Double decomposition: A colloidal sol of arsenious sulphide is obtained by passing hydrogen sulphide into a solution of arsenious oxide in distilled water: $As_2O_3 + 3H_2S \rightarrow As_2S_3 + 3H_2O$. (ii) Oxidation: A colloidal solution of sulphur can be obtained by passing hydrogen sulphide into a solution of sulphur dioxide in water, or through a solution of an oxidizing agent (bromine water, nitric acid, etc.): $SO_2 + 2H_2S \rightarrow 3S + 2H_2O$, $H_2S + [O] \rightarrow H_2O + S$.
Q. How are the colloidal solutions classified, on the basis of physical states of the dispersed phase and dispersion medium ?
Ans. On the basis of the physical states of the dispersed phase and the dispersion medium, colloidal solutions are classified into eight types: solid in gas – aerosol (e.g., smoke, dust); liquid in gas – aerosol (e.g., fog, mist, cloud); solid in liquid – sol (e.g., paints, gold sol); liquid in liquid – emulsion (e.g., milk); gas in liquid – foam (e.g., froth, soap lather); solid in solid – solid sol (e.g., coloured gemstones); liquid in solid – gel (e.g., cheese, butter, jellies); gas in solid – solid foam (e.g., pumice stone, foam rubber).
Q. Describe a chemical method each for the preparation of sol of sulphur and platinum in water.
Ans. Preparation of sol of sulphur: A colloidal solution of sulphur can be obtained by passing hydrogen sulphide into a solution of sulphur dioxide in water, or through a solution of an oxidizing agent (bromine water, nitric acid, etc.): $SO_2 + 2H_2S \rightarrow 3S + 2H_2O$, $H_2S + [O] \rightarrow H_2O + S$. Preparation of platinum sol: It is prepared by electrical disintegration (Bredig's arc method). In this method, an electric arc is struck between electrodes of the metal immersed in the dispersion medium. The intense heat produced vaporizes some of the metal, which then condenses to form particles of colloidal size.
Q. Give four examples of heterogeneous catalysis.
Ans. When the catalyst exists in a phase different from that of the reactants, it is said to be a heterogeneous catalyst and the catalysis is called heterogeneous catalysis. (i) Manufacture of $NH_3$ from $N_2$ and $H_2$ by Haber's process using iron as catalyst: $N_2 + 3H_2 \xrightarrow{Fe} 2NH_3$. (ii) Manufacture of $CH_3OH$ from $CO$ and $H_2$ using a mixture of $Cu$, $ZnO$ and $Cr_2O_3$ as catalyst: $CO + 2H_2 \xrightarrow{Cu/ZnO/Cr_2O_3} CH_3OH$. (iii) Oxidation of $NH_3$ to $NO$ with $O_2$ using platinum gauze as catalyst in the Ostwald process: $4NH_3 + 5O_2 \xrightarrow{Pt} 4NO + 6H_2O$. (iv) Hydrogenation of oils to form vegetable ghee using finely divided nickel: Oil $+ H_2 \xrightarrow{Ni}$ Vegetable ghee.
Q. Describe some features of catalysis by zeolites
Ans. Features of catalysis by zeolites: (i) Zeolites are hydrated aluminosilicates which have a three-dimensional network structure containing water molecules in their pores. (ii) To use them as catalysts, they are heated so that the water of hydration present in the pores is lost and the pores become vacant. (iii) The size of the pores varies from 260 to 740 pm. Thus, only those molecules are adsorbed in these pores and catalysed whose size is small enough to enter them; hence, zeolites act as molecular sieves or shape-selective catalysts. An important catalyst used in the petroleum industry is ZSM-5 (zeolite sieve of molecular porosity 5). It converts alcohols into petrol by first dehydrating them to form a mixture of hydrocarbons.
Q. What is shape selective catalysis.
Ans. The catalytic reaction that depends upon the pore structure of the catalyst and the size of the reactant and product molecules is called shape-selective catalysis. Zeolites are good shape-selective catalysts because of their honeycomb-like structures. They are microporous aluminosilicates with a three-dimensional network of silicates in which some silicon atoms are replaced by aluminium atoms, giving an $Al$–$O$–$Si$ framework. The reactions taking place in zeolites depend upon the size and shape of the reactant and product molecules as well as upon the pores and cavities of the zeolites. They are found in nature and are also synthesised for catalytic selectivity. Zeolites are very widely used as catalysts in petrochemical industries for cracking of hydrocarbons and isomerisation. An important zeolite catalyst used in the petroleum industry is ZSM-5. It converts alcohols directly into gasoline (petrol) by dehydrating them to give a mixture of hydrocarbons.
Q. Give four uses of emulsions.
Ans. Uses of emulsions are : (1) In medicines : A wide variety of pharmaceutical preparations are emulsions. For example, emulsions of cod liver oil. These emulsified oils are easily acted upon by digestive juices in the stomach and, hence are readily digested. (2) Digestion of fats : Digestion of fats in the intestines is aided by emulsification. (3) In disinfectants : The disinfectants such as dettol give emulsions of the oil in water type when mixed with water. (4) In building roads : An emulsion of asphalt and water is used for building roads. In this way there is no necessity of melting the asphalt.
Q. What are micelles? Give an example of a micellar system.
Ans. There are some substances which at low concentrations behave as normal strong electrolytes, but at higher concentrations exhibit colloidal behaviour due to the formation of aggregates. The aggregated particles thus formed are called micelles. The formation of micelles takes place only above a particular temperature called the Kraft temperature $(T_k)$ and above a particular concentration called the critical micelle concentration (CMC). Surface active agents such as soaps and synthetic detergents belong to this class. For soaps, the CMC is $10^{-4}$ to $10^{-3}\ \mathrm{mol\ L^{-1}}$. Micelles may contain as many as 100 molecules or more.
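As a concrete illustration (a standard textbook example added here, not part of the original answer): sodium stearate, $\mathrm{C_{17}H_{35}COONa}$, dissociates in water as $\mathrm{C_{17}H_{35}COONa \rightarrow C_{17}H_{35}COO^- + Na^+}$; above the CMC the stearate ions aggregate with their long hydrocarbon tails pointing inward and the polar $-\mathrm{COO^-}$ heads on the surface in contact with water, forming a spherical ionic micelle.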
Q. Comment on the statement that "Colloid is not a substance but state of a substance".
Ans. The given statement is true. This is because the same substance may exist as a colloid under certain conditions and as a crystalloid under other conditions. For example, $NaCl$ in water behaves as a crystalloid, while in benzene it behaves as a colloid. Similarly, a dilute soap solution behaves like a crystalloid while a concentrated solution behaves as a colloid (called an associated colloid). It is the size of the particles which matters, i.e., the state in which the substance exists. If the size of the particles lies in the range 1 nm to 1000 nm, it is in the colloidal state.
Q. What is the difference between physical adsorption and chemisorption?
Ans. (i) Physical adsorption arises from weak van der Waals forces, whereas chemisorption involves the formation of chemical bonds. (ii) Physisorption is not specific in nature, while chemisorption is highly specific. (iii) Physisorption is reversible and decreases with rise in temperature, whereas chemisorption is irreversible and is initially favoured by a rise in temperature. (iv) The enthalpy of physisorption is low (about 20–40 kJ mol$^{-1}$), while that of chemisorption is high (about 80–240 kJ mol$^{-1}$). (v) Physisorption results in multimolecular layers, whereas chemisorption forms a unimolecular layer.
Q. What are the factors which influence the adsorption of gas on a solid?
Ans. Factors influencing the adsorption of a gas on a solid: (i) Nature of the gas: Under given conditions of temperature and pressure, easily liquefiable gases like $NH_3$, $HCl$, $CO_2$, etc. are adsorbed to a greater extent than permanent gases such as $H_2$, $O_2$, $N_2$, because the van der Waals (molecular) forces are more predominant in the former than in the latter. (ii) Nature of the adsorbent: Activated charcoal is the most common adsorbent for gases which can be easily liquefied; gases such as $H_2$, $O_2$ and $N_2$ are adsorbed on metals like $Ni$, $Pd$, etc. (iii) Specific area of the solid: The specific area of an adsorbing solid is the surface available for adsorption per gram of the adsorbent. The greater the specific area of the solid, the greater is its adsorbent power. (iv) Effect of pressure: The extent of adsorption increases with increase of pressure. (v) Effect of temperature: In physisorption, $x/m$ decreases with rise in temperature, while in chemisorption it slightly increases in the beginning and then decreases as the temperature rises. This initial increase is due to the fact that, like chemical reactions, chemisorption also requires activation energy.
Q. What is an adsorption isotherm? Distinguish between the Freundlich adsorption isotherm and the Langmuir adsorption isotherm.
Ans. Adsorption isotherm: It is a graph indicating the variation of the mass of the gas adsorbed per gram of the adsorbent $(x/m)$ with pressure $(p)$ at constant temperature. Distinction between the Freundlich and Langmuir adsorption isotherms: (i) The mathematical expressions representing adsorption are, for the Freundlich adsorption isotherm, $\frac{x}{m}=kp^{1/n}$, and for the Langmuir adsorption isotherm, $\frac{x}{m}=\frac{ap}{1+bp}$. (ii) The Langmuir adsorption isotherm is more general and is applicable at all pressures, while the Freundlich adsorption isotherm fails at high pressure; the latter can be deduced from the former. (iii) Langmuir adsorption postulates that adsorption is monomolecular, whereas Freundlich adsorption suggests that it is multimolecular.
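As a worked illustration of this distinction (standard limiting cases added here for clarity, not part of the original answer), the Langmuir expression reduces to simple forms at the pressure extremes, while the Freundlich expression is usually tested through its linearized logarithmic form: $$\frac{x}{m}=\frac{ap}{1+bp}\approx ap \ \ (bp\ll 1,\ \text{linear at low pressure}), \qquad \frac{x}{m}\approx \frac{a}{b} \ \ (bp\gg 1,\ \text{saturation at high pressure}),$$ $$\log\frac{x}{m}=\log k+\frac{1}{n}\log p \quad (\text{Freundlich: a straight line of slope } 1/n,\ 0<1/n<1).$$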
Q. What do you understand by activation of adsorbent? How is it achieved?
Ans. Activation of adsorbent implies increases in the adsorption power of the adsorbent. It involves increase in the surface area of the adsorbent and is achieved by following methods. $\bullet$ By finely dividing the adsorbent. $\bullet$ By removing the gases already adsorbed $\bullet$ By making the surface of adsorbent rough by chemical or mechanical methods.
Q. Discuss the effect of pressure and temperature on the adsorption of gases by solids.
Ans. Effect of pressure: The extent of adsorption increases with increase in pressure. The extent of adsorption is generally expressed as $x/m$, where $m$ is the mass of the adsorbent and $x$ is that of the adsorbate when equilibrium has been attained. A graph drawn between the extent of adsorption $\left(\frac{x}{m}\right)$ and the pressure $p$ of the gas at constant temperature is called an adsorption isotherm. Effect of temperature: The magnitude of adsorption decreases with increase in temperature. A graph drawn between the extent of adsorption $\left(\frac{x}{m}\right)$ and temperature $(T)$ at constant pressure is called an adsorption isobar. The physical adsorption isobar shows a decrease in $x/m$ with rise in temperature, while the chemisorption isobar shows an increase in $x/m$ in the beginning and then a decrease as the temperature rises. This initial increase is due to the fact that, like chemical reactions, chemisorption also requires activation energy.
Q. What are lyophilic and lyophobic sols ? give one example of each type.
Ans. (i) Lyophilic colloids : The word 'lyophilic' means liquid-loving. Colloidal sols directly formed by mixing substances like gum, gelatine, starch, rubber, etc., with a suitable liquid (the dispersion medium) are called lyophilic sols. An important characteristic of these sols is that if the dispersion medium is separated from the dispersed phase (say by evaporation), the sol can be reconstituted by simply remixing with the dispersion medium. That is why these sols are also called reversible sols. Furthermore, these sols are quite stable and cannot be easily coagulated. (ii) Lyophobic colloids : The word 'lyophobic' means liquid-hating. Substances like metals, their sulphides, etc., when simply mixed with the dispersion medium do not form the colloids. Their colloidal sols can be prepared only by special methods. Such sols are called lyophobic sols. These sols are readily precipitated (or coagulated) on the addition of small amounts of electrolytes, by heating or by shaking and hence, are not stable. Further, once precipitated, they do not give back the colloidal sol by simple addition of the dispersion medium. Hence, these sols are also called irreversible sols. Lyophobic sols need stabilising agents for their preservation.
Q. What is the difference between multimolecular and macromolecular colloids ? Give one example of each. How are associated colloids different from these two types of colloids ?
Ans. On dissolution, a large number of atoms or smaller molecules of a substance (diameter < 1 nm) aggregate together to form species having size in the colloidal range; the species thus formed are multimolecular colloids. Macromolecules in suitable solvents form solutions in which the size of the macromolecules themselves lies in the colloidal range; these are macromolecular colloids. Difference of associated colloids from these two types: Multimolecular colloids are formed by the aggregation of a large number of small atoms or molecules, such as $S_8$ in a sulphur sol. Macromolecular colloids contain molecules of large size, like starch, whose dimensions already lie in the colloidal range. Associated colloids are formed by electrolytes which dissociate into ions, and these ions associate together to form ionic micelles whose size lies in the colloidal range; soap is a common example of this type. Micelle formation occurs above a particular concentration (called the critical micellisation concentration) and above a particular temperature, called the Kraft temperature.
Q. Explain what is observed when (i) An electrolyte is added to ferric hydroxide sol (ii) An emulsion is subjected to centrifugation (iii) Direct current is passed through a colloidal sol (iv) A beam of light is passed through a colloidal solution.
Ans. (i) The positively charged colloidal particles of the $Fe(OH)_3$ sol get coagulated by the oppositely charged ions provided by the electrolyte. (ii) The constituent liquids of the emulsion separate out; in other words, demulsification occurs. (iii) On passing direct current, the colloidal particles move towards the oppositely charged electrode, where they lose their charge and get coagulated. (iv) Scattering of light by the colloidal particles takes place and the path of the light becomes illuminated. This is called the Tyndall effect.
Q. Explain the following terms : (1) Peptization, (2) Electrophoresis, (3) Coagulation,
Ans. (1) Peptization : Peptization may be defined as the process of converting a precipitate into colloidal sol by shaking it with dispersion medium in presence of a small amount of electrolyte. The electrolyte used for this purpose is called peptizing agent. (2) Electrophoresis : In this process colloidal particles move towards oppositely charged electrodes, get discharged and precipitated. (3) Coagulation : The stability of the lyophobic sol is due to the presence of charge on colloidal particles. If, some how, the charge is removed, the particles will come nearer to each other to form aggregate (or coagulate) and settle down.
Q. Explain the following terms with suitable examples (1) Gel (2) Aerosol and (3) Hydrosol
Ans. (1) Gel: It is a colloidal dispersion of a liquid in a solid; a common example is butter. (2) Aerosol: It is a colloidal dispersion of a liquid in a gas; a common example is fog. (3) Hydrosol: It is a colloidal sol of a solid in water as the dispersion medium; common examples are starch sol and gold sol.
Q. What are emulsions ? What are their different types ? Give one example of each type.
Ans. These are liquid-liquid colloidal systems, i.e., the dispersion of finely divided droplets in another liquid. If a mixture of two immiscible or partially miscible liquids is shaken, a coarse dispersion of one liquid in the other is obtained which is called emulsion. Generally, one of the two liquids is water. There are two types of emulsions. (i) Oil dispersed in water (O/W type) and (ii) Water dispersed in oil (W/O type ). In the first system, water acts as dispersion medium. Examples of this type of emulsion are milk and vanishing cream. In milk, liquid fat is dispersed in water. In the second system, oil acts as dispersion medium. Common examples of this type are butter and cream. Emulsions of oil in water are unstable and sometimes they separate into two layers on standing. For stabilisation of an emulsion, a third component called emulsifying agent is usually added. The emulsifying agent forms an interfacial film between suspended particles and the medium. The principal emulsifying agents of O/W emulsions are proteins, gums, natural and synthetic soaps, etc., and for W/O, heavy metal, salts of fatty acids, long chain alcohols, etc. Emulsions can be diluted with any amount of the dispersion medium. On the other hand, the dispersed liquid when mixed, forms a separate layer. The droplets in emulsions are often negatively charged and can be precipitated by electrolytes. They also show Brownian movement and Tyndall effect. Emulsions can be broken into constituent liquids by heating, freezing, etc.
Q. Action of soap is due to emulsification and micelle formation. Comment.
Ans. Cleansing action of soap: Washing action of soap is due to the emulsification of grease and taking it away in the water along with dirt or dust present on grease. Explanation : The cleansing action of soap can be explained keeping in mind that a soap molecule contains a non-polar hydrophobic group and a polar hydrophilic group. The dirt is held on the surface of clothes by the oil or grease which is present there. since oil or grease are not soluble in water, therefore, the dirt particles cannot be removed by simply washing the cloth with water. When soap is applied, the non-polar alkyl group dissolves in oil droplets while the polar $-C O O^{-} N a^{+}$ groups remain dissolved in water (Figure). In this way, each oil droplet is surrounded by negative charge. These negatively charged oil droplets cannot coagulate and a stable emulsion is formed. These oil droplets (containing dirt particles) can be washed away with water along with dirt particles.
Opt. Express 29, 36644–36659 (2021), https://doi.org/10.1364/OE.441488
Measurement of the biphoton second-order correlation function with analog detectors
D. A. Safronenkov, N. A. Borshchevskaya, T. I. Novikova, K. G. Katamadze, K. A. Kuznetsov, and G. Kh. Kitaeva
D. A. Safronenkov,1,* N. A. Borshchevskaya,1 T. I. Novikova,1 K. G. Katamadze,1,2 K. A. Kuznetsov,1 and G. Kh. Kitaeva1
1Lomonosov Moscow State University, 119991 Moscow, Russia
2Valiev Institute of Physics and Technology, Russian Academy of Sciences, 117218 Moscow, Russia
*Corresponding author: [email protected]
D. A. Safronenkov https://orcid.org/0000-0002-0207-2513
K. G. Katamadze https://orcid.org/0000-0002-7631-7341
K. A. Kuznetsov https://orcid.org/0000-0001-6075-7628
G. Kh. Kitaeva https://orcid.org/0000-0002-4860-9937
D. A. Safronenkov, N. A. Borshchevskaya, T. I. Novikova, K. G. Katamadze, K. A. Kuznetsov, and G. Kh. Kitaeva, "Measurement of the biphoton second-order correlation function with analog detectors," Opt. Express 29, 36644-36659 (2021)
Topics: Photodetectors, Quantum communications, Quantum detectors, Quantum efficiency, Single photon avalanche diodes, Single photon detectors
Original Manuscript: August 31, 2021
Revised Manuscript: October 14, 2021
Manuscript Accepted: October 15, 2021
An experimental scheme and data processing approaches are proposed for measuring, with analog photodetectors, the normalized second-order correlation function of the biphoton field generated under spontaneous parametric down-conversion. The obtained results are especially important for quantum SPDC-based technologies in the long-wavelength spectral ranges, where it is difficult to use a single-photon detector in at least one of the two biphoton channels. Methods of discrimination of the analog detection samples are developed to eliminate the negative influence of detection noise and to obtain quantitatively true values of both the correlation function and the detector quantum efficiency. The methods are demonstrated for the cases where two single-photon avalanche photodetectors are used in both SPDC channels, and where at least one single-photon detector is replaced by a photomultiplier tube which cannot operate in the photon-counting mode.
© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Quantum-correlated pairs of optical photons (biphotons) are widely used in modern quantum technologies, from quantum communication [1], computing [2] and metrology [3] to various types of quantum spectroscopy, imaging and sensing [4–11]. The spontaneous parametric down-conversion (SPDC) process provides a remarkable possibility to generate biphotons with the highest possible level of correlation. The normalized second-order correlation function g(2) is usually considered a quantitative measure of this level. Today, single-photon detectors and fast coincidence circuits are the key elements in the vast majority of quantum optical schemes. They are successfully used for monitoring the quantum properties of biphotons generated from the X-ray [12] to the optical range, up to the near-IR [13]. However, if the frequency of at least one photon from the biphoton pair goes down from the mid-IR to the terahertz (THz) frequency range, application of the coincidence technique becomes impossible. The reason lies in the great difficulty of creating THz receivers that can operate in single-photon or photon-number-resolving detection modes [14,15]. This greatly constrains the advancement of quantum-optical technologies to the terahertz range, and up to now all experimental progress in this direction has been concerned with detection of the optical signal radiation, which is assumed to be correlated with its THz idler counterpart [16–22].
In principle, one can measure the correlation function applying the oldest approach [23], based on measuring the correlation between the photocurrents of analog detectors instead of coincidences of single-photon detectors. This possibility was studied theoretically for optical [24] and optical-terahertz biphotons [25]. It was shown that if the noise contribution to the samples of the analog detectors is negligible, both methods should give the same result. However, the quantum contribution to the biphoton correlation function is significant at low photon fluxes, where, unfortunately, the effect of noise on the detector samples is usually quite large. Experiments show that this drastically decreases the result of g(2) measurement if it is done directly, by taking into account all the photocurrents from the initially obtained raw set of statistical data. Problems with noise are resolved, and biphoton correlations are successfully used, in calibrating the quantum efficiencies of those analog detectors that can in principle operate in single-photon detection modes, from photon-number-resolving detectors [26] up to ECCD cameras [27]. But the challenge of exact quantitative measurement of g(2) remains when the outputs of the analog detectors cannot be mapped onto integer photoelectron counts. The main goal of the present work was to develop a relevant experimental procedure for processing the analog samples and determining g(2) using analog detectors of this type. Since the true g(2) value must be known in advance, the study was carried out entirely in the optical range, in which the exact g(2) value could be pre-measured using the conventional photon-counting detection technique. A number of intuitively meaningful processing procedures were applied to the analog samples. Some of them were shown to give an artificially increased g(2) value. Finally, after quantitative comparison of the results with the expected true g(2) values in each case, the optimal approaches were selected. The proposed methods open the possibility of quantum optical measurements with a wide range of sensitive analog detectors which, however, cannot operate in photon-counting modes. They will be especially useful for developing quantum technologies at THz and mid-IR frequencies.
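As an illustration of the photocurrent-correlation approach, the following is a minimal sketch under our own assumptions, not the authors' actual processing code; the function name, threshold values and synthetic data are invented for the example. It estimates g(2) from two arrays of simultaneously recorded analog samples, with a simple threshold discrimination step to suppress noise-dominated samples:

```python
import numpy as np

def g2_from_analog(samples_s, samples_i, thr_s=0.0, thr_i=0.0):
    """Estimate g2 = <I_s I_i> / (<I_s> <I_i>) from paired analog samples.

    samples_s, samples_i : background-subtracted pulse areas (or photocurrents)
                           recorded in the signal and idler channels for the
                           same pump pulses, one entry per pulse.
    thr_s, thr_i         : discrimination thresholds; samples below threshold
                           are set to zero to reject noise-dominated events.
    """
    s = np.asarray(samples_s, dtype=float).copy()
    i = np.asarray(samples_i, dtype=float).copy()
    s[s < thr_s] = 0.0
    i[i < thr_i] = 0.0
    # Average of the product of simultaneous samples, normalized by the
    # product of the channel averages.
    return np.mean(s * i) / (np.mean(s) * np.mean(i))

# Synthetic test: analog responses proportional to a common pair number
# per pulse plus independent Gaussian noise in each channel.
rng = np.random.default_rng(0)
n = rng.poisson(0.1, size=100_000)                 # photon pairs per pulse
sig = 1.0 * n + rng.normal(0.0, 0.05, n.size)      # signal-channel samples
idl = 0.8 * n + rng.normal(0.0, 0.05, n.size)      # idler-channel samples
print(g2_from_analog(sig, idl, thr_s=0.5, thr_i=0.4))  # close to 1 + 1/<n> = 11
```

In a real measurement the detection noise is generally not zero-mean (dark counts, baseline offsets), so without discrimination it inflates the channel averages and pulls the estimate towards unity; the role of the thresholds in this sketch is to counteract exactly that effect.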
In the final Section 6 of this paper, the best of the proposed procedures is described and realized experimentally for all-optical biphotons using two photodetectors, one of which (a PMT module) cannot operate in the single-photon detection mode at all. The validity of the procedure is verified by comparing the obtained values of g(2) with the results of measuring g(2) in the photon-counting mode using two single-photon avalanche photodiodes (APDs). Apart from measuring the values of g(2), we calibrate the quantum efficiency of one of the APDs, first in a standard photon-counting scheme, secondly in an analog scheme with the same APDs, and finally in an analog scheme with the PMT. In all cases, the same biphoton field was characterized and used; its generation is described in Sec. 2. In Sec. 3 we recall some general aspects of the Klyshko method for SPDC-based calibration of the quantum efficiency of single-photon detectors [28–31], and consider a modification of this method when analog samples of these detectors are measured. Results of applying the standard photon-counting approach for measuring g(2) and the efficiencies of the APDs are presented in Sec. 4. In the next Sec. 5, we switch the electronic circuit to an analog detection mode and show that the same results are obtained for g(2) and the APD efficiency in the modified experimental scheme. Finally, in Sec. 6, one of the APDs is substituted by the PMT module, and we present a method of data processing which enables one to obtain the same (true) values of the biphoton correlation function g(2). As an additional marker of the validity of the proposed procedure, we also provide the results of SPDC calibration of the quantum efficiency of the APD when the analog PMT stands in the idler channel. It is shown that our method of g(2) characterization gives the same value of the APD quantum efficiency as in the case when there is a single-photon detector in the idler channel.
2. Generation of biphotons
We generate orthogonally polarized collinear frequency-degenerate biphotons using the type-II SPDC process in a 41.2°-cut beta barium borate (BBO) crystal pumped at 405 nm. A schematic of the optical part of the experimental setup is shown in Fig. 1(a).
Fig. 1. Schematics of the optical part of the experimental setup (a), and electronic circuits in case of photon counting (b) or analog (c) detection modes. L1, L2, Li, Ls, lenses; F1, F2, Fi, Fs, filters; PBS, polarization beam splitter; Di, Ds, detectors; MMF, multimode fiber.
As a pump source, we used a spatially single-mode diode laser with tunable power. A Faraday isolator was installed to prevent reflections from various optical elements from entering the laser diode, and a half-wave plate was used to control the laser polarization. To remove radiation at longer wavelengths, the laser beam was passed through the filter F1, which blocked wavelengths longer than 450 nm. The radiation was then focused by a 40 mm focal length quartz lens L1 onto a 1 mm thick BBO crystal. The filter F2, placed after the crystal in front of the second lens L2 (F = 45 mm), cut off the pump radiation by blocking wavelengths shorter than 600 nm. The transmitted part of the SPDC radiation was collimated by the lens L2 and incident on a polarizing beam splitter, which distributed the generated photons into the "signal" or "idler" channel according to their orthogonal polarizations.
To apply the Klyshko method for quantum efficiency calibration with high accuracy [28–32], unequal sets of spatio-temporal modes were chosen for detection in the two channels. The signal channel was wider in both spatial and frequency ranges; it collected the e-polarized SPDC photons. The signal radiation passed through an interference filter Fs with a center wavelength of 800 nm and a full width at half maximum of 40 nm. It was then focused by an 11 mm focal length lens Ls onto a multimode optical fiber with a 62.5 μm core, connected to the photodetector Ds. In the idler channel, the o-polarized SPDC radiation passed through an interference filter Fi with a center wavelength of 810 nm and a full width at half maximum of 10 nm. It was then focused by an 11 mm focal length lens Li onto a multimode optical fiber with a 50 μm core, connected to the photodetector Di.
In the course of measuring the radiation at the output of the signal and idler channels, three different photodetectors were interchanged at the positions Ds and Di. Two of them were single-photon avalanche Si photodetectors: APD#1 (a Laser Components COUNT NIR module with a dead time of 45 ns) and APD#2 (a homemade module with a dead time of 220 ns), both providing electrical pulses in the standard TTL format. The third one was a Hamamatsu H7422-50 photosensor module with a sensitive photomultiplier tube (PMT). The PMT could operate in an analog mode only.
The intensity transmission coefficient $K_s$ of the optical elements standing between the BBO crystal and the detector Ds in the signal channel was measured using the radiation of a CW diode laser at 808 nm; it was found that $K_s = 0.39$ for e-polarized photons. Remarkably, the intensity transmission coefficient measured by the same method for the idler channel is not relevant for our further consideration, since it does not account for the losses of those idler photons that are correlated with detected signal photons but do not pass through the narrower spectral and spatial selective elements in the idler channel.
3. Klyshko method of SPDC-based absolute quantum efficiency calibration
According to its general definition, the normalized biphoton second-order correlation function ${g^{(2)}}$ describes the correlation of the electric fields in the idler and signal channels. In the case of single-mode detection, it is related to the averages of the photon-number operators in the signal and idler modes as [33]
(1)$$g_1^{(2)} = \frac{\left\langle \hat{N}_i^j \hat{N}_s^j \right\rangle}{\left\langle \hat{N}_i^j \right\rangle \left\langle \hat{N}_s^j \right\rangle} = 2 + \frac{1}{N_j}. $$
Here, $j$ denotes the corresponding pair of single modes, and ${N_j}$ scales with the parametric gain coefficient ${G_{is}}$ as ${N_j} \sim \sinh^2 \sqrt{G_{is}}$ for phase-matched signal and idler modes. Equation (1) follows from the specific quantum nature of the SPDC process, where the numbers of generated signal and idler photons are the same in exactly correlated signal and idler modes, so that $\left\langle \hat{N}_i^j \right\rangle = \left\langle \hat{N}_s^j \right\rangle = N_j$. In our case, multimode fields with total numbers of photons $\left\langle N_a \right\rangle = \sum_j \left\langle \hat{N}_a^j \right\rangle \approx M \left\langle \hat{N}_a^j \right\rangle$ (here and below, $a = s,i$) are detected in the signal and idler channels, and the biphoton correlation function ${g^{(2)}}$ decreases [34,35] according to
(2)$${g^{(2)}} = \frac{\left\langle N_i N_s \right\rangle}{\left\langle N_i \right\rangle \left\langle N_s \right\rangle} = 1 + \frac{g_1^{(2)} - 1}{M}. $$
Here, $M = M_\bot M_{||}$ is the product of the numbers of temporal (longitudinal, $M_{||}$) and spatial (transverse, $M_\bot$) modes under detection in the broadest channel. In our setup, this corresponds to the signal channel, where the wider optical fiber and the wider spectral filter are inserted. In particular, the number of longitudinal modes is determined as the ratio $M_{||} = t_{det}/\tau_{coh}$ between the biphoton detection time $t_{det}$ and the coherence time of the signal photons under detection, $\tau_{coh} = 2\pi/\Delta\omega_s = \lambda_s^2/c\Delta\lambda_s$.
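As a rough numerical cross-check of this mode counting, the longitudinal mode number implied by the signal-filter and timing parameters quoted in this paper can be evaluated in a few lines (a Python sketch; the spatial factor $M_\bot$ depends on the fiber and focusing geometry and is not estimated here):

# Sketch: longitudinal mode number M_par = t_det / tau_coh for the signal-channel
# parameters quoted in the text (800 nm center wavelength, 40 nm FWHM filter,
# 8 ns coincidence window). The spatial factor M_perp is not computed here.
c = 3.0e8                    # speed of light, m/s
lambda_s = 800e-9            # signal center wavelength, m
delta_lambda = 40e-9         # filter bandwidth (FWHM), m
t_det = 8e-9                 # detection (coincidence) window, s

tau_coh = lambda_s**2 / (c * delta_lambda)   # coherence time, ~5.3e-14 s
M_par = t_det / tau_coh                      # ~1.5e5 longitudinal modes
print(f"tau_coh = {tau_coh:.2e} s, M_parallel = {M_par:.2e}")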
The mean number of photo-counts $\left\langle {{n_a}} \right\rangle$ detected by each photodetector within ${t_{det}}$ should be proportional to the mean number of generated SPDC photons, the optical transmission of the channel $K_a$, and the quantum efficiency of the detector ${\eta _a}$: $\left\langle {{n_a}} \right\rangle = {K_a}{\eta _a}\left\langle {{N_a}} \right\rangle$. The quantum efficiency of each detection channel does not affect the measured value of ${g^{(2)}}$. However, according to the Klyshko method [28–30], ${\eta _a}{K_a}$ can be determined by multiplying $({{g^{(2)}} - 1} )$ by the mean number of photo-counts detected in this channel. Following this approach, the absolute quantum efficiency of the detector placed in the signal channel is determined as
(3)$${\eta _s} = ({{g^{(2)}} - 1} )\cdot \left\langle {{n_s}} \right\rangle /{K_s}.$$
The number $\left\langle {{n_s}} \right\rangle$ can be easily determined if single-photon or photon-number-resolving detectors are used, but it has a more complex relation to the samples of other detector types.
Generally, any event of photon absorption by an intensity-sensitive detector leads to the formation of a short photocurrent surge $i_0^m$ at the detector's output. The number $n$ of such events during the time ${t_{det}}$ is directly determined by detection in a photon-counting mode. But an analog detector issues only the total current $i$ detected over ${t_{det}}$, which is equal to
(4)$$i = \sum\limits_{m = 1}^{n} i_0^m \tau_0^m / t_{det}. $$
Here, $z_0^m \equiv i_0^m\tau _0^m$ denotes the charge carried by each elementary current pulse, and $\tau _0^m$ is its duration. The statistical distribution of the elementary charges can be very different for different detector types [24,36], so that it is impossible to give a general recipe for estimating $\left\langle n \right\rangle$ from a measured value of $\left\langle i \right\rangle$. This makes it difficult to apply the Klyshko method to analog data in most cases. However, if the statistical distribution of the elementary charges $P(z_0^m)$ is narrow (i.e., its dispersion is small in comparison with the average value of the elementary single-photon charge $\left\langle {{z_0}} \right\rangle$), it does not significantly affect the distribution of currents $P(i)$. Then the relation between the photocurrent and the number of photo-counts can be described by the mean parameters of the elementary current pulses, so that
(4a)$$i = n\left\langle {{z_0}} \right\rangle /{t_{det}} \equiv n\left\langle {{i_0}} \right\rangle.$$
Here, $\left\langle {{i_0}} \right\rangle$ is the average single-photon current issued when only one photon is detected during ${t_{det}}$. We will show in the next sections that approximation (4a) is valid at least when a single-photon detector is used in the analog detection mode. In this case, the mean number of photo-counts $\left\langle n \right\rangle$ is simply determined after measuring $\left\langle i \right\rangle$ as $\left\langle n \right\rangle = \left\langle i \right\rangle /\left\langle {{i_0}} \right\rangle$, and the modified Klyshko approach can be applied to calibrate the quantum efficiency using the analog samples of such a detector.
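A minimal sketch of this modified Klyshko estimate is given below in Python; it assumes that approximation (4a) holds, that the noise contribution to the mean current has already been subtracted, and that the correlation function and the mean current refer to the same detection time. The function name and the numbers in the example call are ours and purely illustrative:

def eta_from_analog_samples(g2, mean_i_s, mean_i0, K_s):
    """Quantum efficiency via Eq. (3), with <n_s> replaced by <i_s>/<i_0> (Eq. 4a)."""
    n_s = mean_i_s / mean_i0      # mean number of photo-counts per detection time
    return (g2 - 1.0) * n_s / K_s

# Illustrative placeholder values only (not measured data):
print(eta_from_analog_samples(g2=61.0, mean_i_s=3.3e-3, mean_i0=1.0, K_s=0.39))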
4. Characterization of the biphoton correlation function and quantum efficiencies of APDs in the photon-counting mode
First, we applied the standard approach for characterizing both the biphoton normalized second-order correlation function ${g^{(2)}}$ and the quantum efficiencies of the single-photon APDs. During a sufficiently long data acquisition time (T∼1 s), we determined the total numbers of single-photon counts from the detectors placed in the signal (${m_s}$) and idler (${m_i}$) channels, and the number ${m_{cc}}$ of single-photon count coincidences. Measurements were done using the electronic circuit shown in Fig. 1(b). The time window of the coincidence scheme ${t_{cc}}$ corresponded to the detection time of the general theory, ${t_{det}} = {t_{cc}} = 8\,\textrm{ns}$. To account for the dark-noise counts of each detector, we measured the corresponding samples ${m_{s0}}$ and ${m_{i0}}$ with the pump radiation blocked. The mean numbers of photo-counts per detection time (${t_{cc}}$) were calculated as $\left\langle {{n_a}} \right\rangle = ({{m_a} - {m_{a0}}} ){t_{cc}}/T$ ($a = i,s$). The mean number of coincidences per detection time was calculated as $\left\langle {{n_{cc}}} \right\rangle = {m_{cc}}{t_{cc}}/T$, without taking into account the negligibly small number of dark-noise coincidences.
Applying the standard procedure with single-photon detectors, we determined ${g^{(2)}}$ experimentally, using digital samples of single-photon APDs as
(5)$${g^{(2)}} \equiv \frac{{\left\langle {{N_i}{N_s}} \right\rangle }}{{\left\langle {{N_i}} \right\rangle \left\langle {{N_s}} \right\rangle }} = \frac{{\left\langle {{n_{cc}}} \right\rangle }}{{\left\langle {{n_i}} \right\rangle \left\langle {{n_s}} \right\rangle }} \equiv \frac{{{m_{cc}}T}}{{({{m_i} - {m_{i0}}} )({{m_s} - {m_{s0}}} ){t_{cc}}}}. $$
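For concreteness, the estimator of Eq. (5) can be written out as a short Python function; the counter totals in the example call are placeholder numbers, not data from this experiment:

def g2_photon_counting(m_cc, m_i, m_i0, m_s, m_s0, T, t_cc):
    """g^(2) of Eq. (5) from the totals of singles (m_i, m_s), dark counts (m_i0, m_s0),
    coincidences (m_cc), acquisition time T and coincidence window t_cc."""
    return m_cc * T / ((m_i - m_i0) * (m_s - m_s0) * t_cc)

# Placeholder totals for illustration only:
print(g2_photon_counting(m_cc=4.0e3, m_i=2.1e4, m_i0=1.0e3,
                         m_s=4.1e5, m_s0=4.0e3, T=1.0, t_cc=8e-9))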
The results of measuring ${g^{(2)}}$ for SPDC at different pump powers are presented in Fig. 2 by filled red circles. The obtained values of ${g^{(2)}}$ rapidly grow as the pump power ${P_{pump}}$ is decreased. Exactly this dependence is predicted by Eqs. (1)-(2), since in this way we proportionally change the gain coefficient ${G_{is}}\sim {P_{pump}}$. The data in Fig. 2 were obtained when APD#1 was placed in the signal channel and APD#2 in the idler channel. When the detectors were swapped, the values of ${g^{(2)}}$ did not change within the experimental error, although the samples of each detector were completely different in the two cases. The vertical error bars indicate statistical uncertainties calculated by repeating the measurements ∼300 times. The horizontal error bars indicate the uncertainty of the pump power. Since the pump power was controlled by the laser diode current, this error increased significantly as the lasing threshold was approached.
Fig. 2. Values of the biphoton correlation function, determined experimentally at different powers of pump radiation by three detection systems, using APD#1 as a signal detector and APD#2 as an idler detector, both working in the photon counting mode (red circles), the same detectors operating in the analog mode (black open squares), APD#1 as a signal detector and PMT as an idler detector both operating in the analog mode (blue triangles). Solid line: theoretical approximation for the correlation function dependence on the pump power with a single scaling coefficient.
The quantum efficiencies of both APDs were measured with the detector under calibration placed in the signal channel. They were calculated in accordance with Eqs. (3) and (5):
(6)$${\eta _s} = \left( {g^{(2)}} - 1 \right) \cdot \frac{(m_s - m_{s0})\, t_{cc}}{T K_s}.$$
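Equation (6) translates directly into code in the same way; again, the numbers in the example call are placeholders used only to show the arithmetic:

def eta_photon_counting(g2, m_s, m_s0, t_cc, T, K_s):
    """Absolute quantum efficiency of the signal-channel detector, Eq. (6)."""
    return (g2 - 1.0) * (m_s - m_s0) * t_cc / (T * K_s)

# Placeholder values for illustration only:
print(eta_photon_counting(g2=61.6, m_s=4.1e5, m_s0=4.0e3, t_cc=8e-9, T=1.0, K_s=0.39))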
The results obtained at different pump powers for APD#1 are shown in Fig. 3 by red circles. As expected, within the experimental uncertainties the values of the quantum efficiency do not demonstrate any dependence on the pump power. Noticeably, the experimental accuracy was lower for the quantum efficiency of APD#1 than for APD#2. This is due to the higher dark-noise level of APD#2, which detected lower photon fluxes in the narrow idler channel when the quantum efficiency of APD#1 was measured. In both cases, the values of ${\eta _s}$ were lower than the specifications of the devices, which indicates that not all the losses in the signal channel were accounted for in the applied value of ${K_s}$. For example, there could be losses at the connection between the multimode fiber and the detector input window. However, full compliance with the specification data was not the direct goal of this measurement. For our study, it was important to obtain, without any changes in the optical scheme, the same values of the quantum efficiency when using the analog circuit.
Fig. 3. Quantum efficiencies determined experimentally at different powers of pump radiation for APD#1 1) using the photon counting detection system (filled red circles), 2) using the analog detection system with APD#2 in the idler channel (black open squares), 3) using the analog detection system with PMT in the idler channel (blue triangles).
5. Switching to analog detection mode with the same APDs
At the next step we changed the electronic circuit to detect "almost instantaneous" photocurrents from both detectors, ${i_s}$ and ${i_i}$. They were measured by two gated integrator and boxcar averager modules SR250 (Fig. 1(c)) and further converted into digital form by an analog-to-digital converter (ADC). An external pulse generator was used to trigger equal gates at each module with a 4 kHz repetition rate; the delay time between the two triggers was adjusted for the maximum of the measured biphoton correlation function. The detection time in this setup corresponded to the gate duration, ${t_{det}} = {t_{gate}}$, and could be changed from 2 ns to 15 µs. We selected different values of ${t_{gate}}$ exceeding the dead times of our detectors (from 220 ns and above). The number of detected modes changed correspondingly, as $M\sim {t_{gate}}$, and it was checked that the values of the measured biphoton correlation function vary in accordance with Eq. (2). The data on the statistical distributions of the values of ${i_s}$, ${i_i}$, and their product ${i_s}{i_i}$, detected within the same gate intervals, were collected within 20-30 min and analyzed on a PC.
Figure 4 shows examples of non-normalized histograms, which correspond to statistical distributions of the "instantaneous" signal currents $P({i_s})$ recorded by APD#1 at the same detection time ${t_{gate}}$=500 ns and different values of the laser pump power ${P_{pump}}$: 17.2 mW, 4.5 mW, 0.15 mW, and 0 mW. In each histogram one can see a noise peak in the region of the lowest currents. Only this peak remains when the pump radiation is blocked and no SPDC photons are incident on the detectors. When the pump power increases, the amplitude and width of the noise peak typically grow. Peaks at higher currents appear at nonzero ${P_{pump}}$; evidently, they correspond to the detection of one, two, or more elementary photo-counts during the gate time. When the pump power decreases, the position of each peak remains stable, but the number of observed peaks decreases together with the amplitude of each peak. The gentle left slope and the sharp right slope of each peak can be easily explained by considering the cases when only a part of an asymmetric single-photon current pulse falls at the beginning or end of the gate interval. Up to a possible constant shift along the horizontal axis, the photo-count maxima have coordinates equal to $p\left\langle {{i_{ph}}} \right\rangle$, where p is the ordinal number of the peak. The average current $\left\langle {{i_{ph}}} \right\rangle$ carried by a single photo-count of the detector is easily determined as the distance between two neighboring photo-count peaks. At the lowest pump powers only the first peak is observed; its amplitude gradually "drowns" in the noise as the photon flux decreases. If the detector is placed in the low-transmission idler channel, this effect appears earlier, at higher pump powers. Nevertheless, by successively placing APD#1 and APD#2 in the signal channel we determined the values of $\left\langle {{i_{ph}}} \right\rangle$ for each APD.
Fig. 4. Histograms of statistical distributions of "instantaneous" (averaged over 500 ns detection time) currents recorded from APD#1 in the signal channel at different laser pump powers, 17.2 mW (purple), 4.5 mW (blue), 0.15 mW (green), and 0 mW (red). The dashed lines indicate boundaries of the photo count peak regions taken for further processing.
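One possible way to automate the extraction of $\left\langle {{i_{ph}}} \right\rangle$ from such histograms is sketched below in Python; the binning, the noise cut-off, and the peak-finding thresholds are illustrative parameters of ours and would have to be tuned to the actual data:

import numpy as np
from scipy.signal import find_peaks

def mean_single_photon_current(samples, noise_cut, bins=500):
    """Estimate <i_ph> as the mean spacing between neighbouring photo-count peaks
    of the histogram of gated currents. 'noise_cut' excludes the noise peak near
    zero; all numerical thresholds here are illustrative, not tuned values."""
    counts, edges = np.histogram(np.asarray(samples, float), bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = centers > noise_cut
    peaks, _ = find_peaks(counts[keep], height=0.02 * counts[keep].max(), distance=5)
    if len(peaks) < 2:
        raise ValueError("need at least two resolved photo-count peaks")
    return float(np.mean(np.diff(centers[keep][peaks])))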
To measure the true correlation between the samples of the detectors, we first excluded from the statistical distributions the data in those current intervals where ${i_s}$ or ${i_i}$ fell outside the regions of the first and higher-order (if present) photon peaks. These regions are shaded in the examples presented in Fig. 4. The detector samples outside these regions were set equal to 0. By this discrimination we could not only eliminate the main part of the noise samples of each detector, but also substantially decrease the noise contribution to their correlation ${i_i}{i_s}$. After that, the averages $\left\langle {{i_a}} \right\rangle$ and $\left\langle {{i_i}{i_s}} \right\rangle$ were determined over the entire sample. Also, to exclude a possible residual noise contribution to the mean SPDC currents, we measured $\left\langle {{i_{a0}}} \right\rangle$ by the same treatment of the histograms recorded with the pump radiation blocked. The biphoton correlation function was then calculated as
(7)$$g_M^{(2)} \equiv \frac{{\left\langle {{N_i}{N_s}} \right\rangle }}{{\left\langle {{N_i}} \right\rangle \left\langle {{N_s}} \right\rangle }} = \frac{{\left\langle {{i_i}{i_s}} \right\rangle }}{{\left( {\left\langle {{i_i}} \right\rangle - \left\langle {{i_{i0}}} \right\rangle } \right)\left( {\left\langle {{i_s}} \right\rangle - \left\langle {{i_{s0}}} \right\rangle } \right)}}. $$
Here, the index "M" denotes that the number of detected modes was sufficiently higher (in ${t_{gate}}/{t_{cc}}$ times) than in case of photon-counting detection regime in the previous section. The noise value $\left\langle {{i_{a0}}} \right\rangle$ for APD#1 was so small that we did not observe any accidental noise contribution to correlation $\left\langle {{i_{i0}}{i_{s0}}} \right\rangle$. Also, there was practically no need to take it into account in the denominator of Eq. (7).
To compare the biphoton correlation functions detected by the analog and photon-counting setups, we recalculated $g_M^{(2)}$ into $g^{(2)}$, which corresponds to the same number of modes as in the previous experiment. This was done using the direct consequence of Eq. (2):
(8)$${g^{(2)}} = 1 + ({g_M^{(2)} - 1} )\frac{{{t_{gate}}}}{{{t_{cc}}}}. $$
Since neither the number of spatial modes nor the spectral intervals of detection changed, we accounted here only for the effect of the different detection times. The results for this reduced $g^{(2)}$, obtained at different values of the pump power, are shown in Fig. 2 by black open squares. Here and below, for all measurements in the analog regime, the vertical error bars indicate possible deviations calculated from the dispersions of the current samples in each statistical data set, including the set of current products ${i_i}{i_s}$. In the experiment, the gate time was ${t_{gate}}$=500 ns, longer than the dead times (220 ns and 45 ns) and the single-photon pulse durations (18 ns and 16 ns) of the APDs. Thus, the scale factor required for comparison with the photon-counting results was 500/8 = 62.5. It can be seen that the values of the correlation function $g^{(2)}$ obtained by the two detection methods with the two APDs coincide within the experimental errors. Both sets of experimental values of the correlation function were determined with APD#1 placed in the signal channel and APD#2 in the idler channel.
We also calculated the quantum efficiencies of APD#1 and APD#2, using the modified relation (6) in the form
(9)$${\eta _s} = ({g_M^{(2)} - 1} )\cdot \frac{{\left\langle {{i_s}} \right\rangle }}{{\left\langle {{i_{ph}}} \right\rangle {K_s}}}.$$
This relation agrees well with Eq. (31) obtained previously in the detailed theoretical study of absolute calibration of analog detectors using SPDC [24]. Eq. (31) of that work describes the relation between the correlation function of the detected photocurrents and the quantum efficiency of one of the analog detectors for the case when the elementary single-photon current pulses do not overlap. The only difference can appear when $g_M^{(2)}$ is not very large in comparison with 1, since Eq. (31) in [24] does not assume subtracting 1 from the correlation function before calculating the quantum efficiency. But this is not the case either in our experiments or under the condition of low parametric gain for which Eq. (31) was derived.
The results obtained using Eq. (9) for APD#1 are shown by black open squares in Fig. 3. It is seen that the experimental error grows when the pump power and, correspondingly, the SPDC photon fluxes decrease. The accuracy of the results can be further increased by collecting a more representative set of statistical data. However, quite good agreement is already noticeable between the values of the quantum efficiency obtained using the analog and the photon-counting calibration procedures. This testifies to the general applicability of the modified calibration method for measuring the quantum efficiency of single-photon APDs when they operate in an analog detection mode. Fortunately, at this stage we could isolate the main part of the analog detection noise by selecting the regions of the photon-count peaks in the histograms, owing to the single-photon response of our APDs.
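With the quantities introduced above, the analog calibration of Eq. (9) is a one-line formula; a Python sketch follows, with placeholder numbers in the example call:

def eta_analog(g2_M, mean_i_s, mean_i_ph, K_s):
    """Quantum efficiency of the signal-channel detector from analog samples, Eq. (9)."""
    return (g2_M - 1.0) * mean_i_s / (mean_i_ph * K_s)

# Placeholder values for illustration only:
print(eta_analog(g2_M=2.0, mean_i_s=0.2, mean_i_ph=1.0, K_s=0.39))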
6. Analog detection with PMT
At the final stage of the study we removed APD#2 from the analog electronic detection scheme, while APD#1 remained in the signal channel. Since we excluded APD#2, which had the longest dead time, we could take a smaller gate time, ${t_{gate}}$=100 ns. In the idler channel, APD#2 was replaced by the Hamamatsu H7422-50 module with PMT (Fig. 1(c)). The module could produce samples in the analog mode only; its sensitive element is unable to operate in a photon-counting regime, as the histograms in Fig. 5 clearly demonstrate. Figure 5(a) presents histograms of the PMT samples in the idler channel, taken at the same pump powers as the histograms of APD#1 in the signal channel in Fig. 4. In contrast to the case of single-photon APDs, in the PMT histograms it is impossible to unambiguously separate the samples related to the absorption of a certain number of photons from the pure noise samples. To study how large the noise contribution is, we considered the conditional distributions of the PMT samples. Figure 5(b) shows examples of such distributions, created from the PMT samples detected when
(1) no photons were detected in the signal channel by APD#1 (taking only those PMT samples which correspond to APD#1 samples from the interval below the first dashed line in Fig. 4; yellow color in Fig. 5(b)),
(2) the one-photon count peak was detected in the signal channel by APD#1 (taking only those PMT samples which correspond to APD#1 samples from the interval of the first peak in Fig. 4; cyan color in Fig. 5(b)),
(3) the two-photon count peak was detected in the signal channel by APD#1 (taking only those PMT samples which correspond to APD#1 samples from the interval of the second peak in Fig. 4; magenta color in Fig. 5(b)).
Fig. 5. Histograms of statistical distributions of "instantaneous" (averaged over 100 ns detection time) currents recorded from PMT in the idler channel. (a): unconditional distributions of total PMT samples at different laser pump powers, 17.2 mW (purple), 4.5 mW (blue), 0.15 mW (green), and 0 mW (red). (b): conditional distributions recorded at 17.2 mW pump power when no photons (yellow), one photon (cyan), and two photons (magenta) were detected in the signal channel by APD#1.
All the conditional histograms were obtained from the samples recorded at ${P_{pump}}$ = 17.2 mW and represent the corresponding parts of the total histogram shown in Fig. 5(a) by the purple color. One can see in Fig. 5(b) that the overwhelming majority of PMT samples are recorded when no photons were detected by APD#1 in the signal channel during the same gate time. Of course, this does not mean that there were no photons at all in the signal channel, since the quantum efficiency of APD#1 is below 100%. But the signal channel is wider. Thus, a rather significant part of the samples in the yellow histogram is still purely noise in nature, and these noise samples can have amplitudes within the whole range of detected PMT samples. Most probably, this is due to the wide spread of possible elementary single-photon currents ${i_{ph}}$ [36] generated in the PMT.
The presence of the noise contribution can noticeably decrease the experimentally measured correlation function. For example, a simple subtraction of the mean noise current $\left\langle {{i_{i0}}} \right\rangle$, measured with the pump radiation blocked (the black histogram in Fig. 6(a)), from the mean current $\left\langle {{i_i}} \right\rangle$ (measured over the total histograms, such as the red histogram in Fig. 6(a) recorded at 17.2 mW pump power) is not enough to obtain the true values of the biphoton correlation function. The noise contribution to the values of $\left\langle {{i_s}{i_i}} \right\rangle$ remains uncompensated in this case, and the recorded correlation will be lower than it actually is. This is demonstrated in Fig. 6(b), where the ${g^{(2)}}$ values obtained in this way are presented together with the results of ${g^{(2)}}$ characterization using the single-photon detection technique.
Fig. 6. (a): Histograms of PMT samples recorded when the pump was blocked (black color), and with 17.2 mW pump power (red color – total distribution, blue color - conditional distribution). (b): Values of the biphoton correlation function, determined experimentally by the photon counting circuit with two single-photon detectors APD#1 and APD#2 (red) and by the analog circuit with APD#1 and PMT (black points); no discrimination was applied to PMT samples.
Evidently, some PMT samples should be eliminated (i.e., replaced by zero) before processing the whole PMT statistical set. This procedure decreases the effective value of the PMT quantum efficiency but, in contrast to the noise contribution, it need not influence the measured value of ${g^{(2)}}$. We studied different ways of such discrimination of the PMT samples. The simplest one was to eliminate the low-level PMT samples. Indeed, the higher a single sample ${i_i}$ is, the lower should be the relative contribution of noise to this sample. Thus, by eliminating (i.e., replacing by zero) sufficiently low values of ${i_i}$, one can obtain higher values of ${g^{(2)}}$, up to the true level measured at both previous stages. This method yielded larger and larger ${g^{(2)}}$ values as the discrimination level ${i_{i,thr}}$ was increased, finally approaching the true value of ${g^{(2)}}$ obtained by the photon-counting technique for the same SPDC field. As an example, Fig. 7 shows this dependence calculated using the data set recorded at ${P_{pump}}$ = 17.2 mW; the red line corresponds to the true level of ${g^{(2)}}$. It is seen that, at the same time, the uncertainty of the obtained ${g^{(2)}}$ values gradually increases as the statistical set of samples is depleted. The uncertainties can be too high especially at low pump powers, and very long exposures would be necessary to decrease them. Nevertheless, this discrimination method could be very important when both detectors, the idler and the signal ones, lack the single-photon response property.
Fig. 7. Values of the biphoton correlation function, determined experimentally by the photon counting circuit (red curve) and by the analog circuit with discrimination of PMT samples below the cut-off PMT current ${i_{i,thr}}$. The pump power is 17.2 mW.
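A Python sketch of this threshold sweep is given below; the arrays, the threshold grid, and the choice to treat the pump-blocked PMT data with the same cut-off are our assumptions about the implementation:

import numpy as np

def g2_vs_pmt_threshold(s_apd, i_pmt, i_pmt_dark, thresholds, mean_s_apd0):
    """Recompute g_M^(2) (Eq. 7) while zeroing PMT samples below a cut-off current.
    's_apd' are APD#1 samples already discriminated to their photon-count peaks;
    'i_pmt_dark' are PMT samples recorded with the pump blocked."""
    s = np.asarray(s_apd, float)
    i_pmt = np.asarray(i_pmt, float)
    i_pmt_dark = np.asarray(i_pmt_dark, float)
    results = []
    for thr in thresholds:
        d = np.where(i_pmt >= thr, i_pmt, 0.0)
        d0 = np.where(i_pmt_dark >= thr, i_pmt_dark, 0.0)
        g2 = np.mean(s * d) / ((np.mean(d) - np.mean(d0)) * (np.mean(s) - mean_s_apd0))
        results.append((thr, g2))
    return results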
In our case there is a single-photon APD in the opposite SPDC channel, and the regions of pure noise samples and of samples with photon-induced contributions are clearly separated in its histogram. This provides an opportunity to perform an optimal discrimination of the PMT samples based on the samples of APD#1 in the signal channel. However, leaving only those PMT samples which were obtained when APD#1 detected photon counts can artificially increase the level of correlations. This is illustrated in Fig. 8. As an example, the bottom graph shows the histograms of APD#1 recorded at 17.2 mW pump power and with the pump blocked. Before calculating ${g^{(2)}}$, we discriminated (replaced by 0) those PMT samples that were obtained simultaneously with APD#1 samples below a certain threshold level ${i_{s,thr}}$. After that, the APD#1 samples were discriminated in the same way as in the previous section. The processed data, including all the zero values, were used for calculating all the distribution moments. Discrimination based on the same principle was applied when the pure noise data from the PMT were processed to obtain $\left\langle {{i_{i0}}} \right\rangle$ for substitution into Eq. (7). The upper graph shows the results of the ${g^{(2)}}$ evaluation using this algorithm with different values of ${i_{s,thr}}$. It is seen that the obtained ${g^{(2)}}$ values grow when the threshold level ${i_{s,thr}}$ is increased. This growth in the initial range of ${i_{s,thr}}$, from 0 up to 30-50, finally gives the true ${g^{(2)}}$ value detected with two single-photon detectors (the horizontal line in the upper graph). Remarkably, exactly this range corresponds to the APD#1 histogram detected in the absence of any incident photons (red color in the bottom graph). Further increase of ${i_{s,thr}}$ leads to an artificial increase of the measured ${g^{(2)}}$ above its true value. This can be explained by the growing presence of photon-count contributions in the corresponding ranges of the APD#1 histogram.
Fig. 8. Upper graph: Values of the biphoton correlation function, determined experimentally by the photon counting circuit (red line) and by the analog circuit with discrimination of PMT samples below the cut-off APD#1 current ${i_{s,thr}}$. Bottom graph: Histograms of APD#1 samples recorded when the pump was blocked (red), and with 17.2 mW pump power (violet – total distribution, black – within the photon counting peaks). The pump power is 17.2 mW.
Thus, our analysis shows that the discrimination of PMT samples according to the samples of the single-photon APD#1 in the opposite channel is optimal from the point of view of minimal statistical errors. The threshold level ${i_{s,thr}}$ should be taken at the end of the pure-noise APD histogram (recorded without any incident photons) to avoid systematic errors. Only a small part of all PMT samples is discriminated in this way (see the conditional distribution of PMT samples marked by blue color in Fig. 6(a)), but it enables the influence of the analog detector noise to be eliminated correctly. Otherwise, the obtained ${g^{(2)}}$ can be lower (due to non-eliminated noise) or higher (due to artificial procedure-induced correlations) than its true value. Apparently, using this method, we can eliminate the effect of pure electrical noise, which affects both the APD and PMT samples, without artificially imposing a superfluous photon correlation. The same values of the biphoton correlation function as in the previous cases, with single-photon APDs in both SPDC channels, are obtained with a fairly small uncertainty for all the considered pump power levels. The results of this method are presented in Fig. 2 by blue triangles. Evidently, the applied method of discrimination can be especially useful for detecting true biphoton correlations in strongly non-degenerate SPDC processes, when one of the biphoton frequencies falls into a range where there are no single-photon detectors.
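A minimal Python sketch of this heralding-type discrimination follows; the threshold value is an input that, as discussed above, should be taken at the end of the pure-noise APD histogram, and the function name is ours:

import numpy as np

def herald_discriminate(i_apd, i_pmt, apd_threshold):
    """Zero those PMT samples whose simultaneously recorded APD#1 sample lies below
    the threshold; the retained samples then enter Eq. (7) as before. The same rule
    is applied to the pump-blocked PMT data before computing its mean."""
    i_apd = np.asarray(i_apd, float)
    i_pmt = np.asarray(i_pmt, float)
    return np.where(i_apd >= apd_threshold, i_pmt, 0.0)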
By substituting the $g_M^{(2)}$ values obtained with the PMT into Eq. (9), we determined the quantum efficiency of APD#1. The values obtained in this way are shown in Fig. 3 together with the results of the other calibration procedures. Within the experimental error, they coincide with the results of calibration by the two previous methods. As is also seen from Fig. 3, the uncertainties of all three types of data increase when the pump power decreases. This effect, much more pronounced at pump powers below 3 mW, takes place mostly due to the increase of statistical errors in detecting the photon fluxes, which become weaker as the pump power decreases. Obviously, the statistical uncertainties can be decreased by increasing the total data acquisition time, as was also observed in our experiments. However, the pump-power dependences of the vertical errors in Fig. 3 tend to saturate at high pump powers. One can estimate the contribution of possible systematic errors to the total uncertainty budget by taking as an upper level the uncertainties detected at the highest pump power, 17.2 mW in our case. Reasoning in this way, we estimate the relative systematic error of measuring the quantum efficiency of APD#1 in all three detection regimes to be no more than 9%. It is noteworthy that application of the discrimination procedure proposed for the PMT gives a value of the quantum efficiency which coincides with much greater accuracy with the result of the direct application of the Klyshko method in the photon-counting regime.
We have developed an experimental scheme and proposed data processing approaches for measuring the normalized second-order correlation function ${g^{(2)}}$ of biphoton SPDC fields with analog photodetectors. The approaches are an alternative to the traditional method based on the use of single-photon detectors in a photon-counting scheme, and can be very important in experimental conditions when detectors capable of operating in the photon-counting mode are not available. At all steps of our study we compared the determined values of ${g^{(2)}}$, and, in parallel, of the quantum efficiency of one of the single-photon detectors, with the corresponding results of the conventional photon-counting technique.
The experimental scheme operates with two gated integrator boxcar modules which simultaneously determine the photocurrents from the detectors of the signal and idler SPDC radiation. These "almost instantaneous" currents (samples) are actually results of averaging over a short gate time. It is shown that this time corresponds to the time window of the coincidence scheme in the photon-counting approach. The gate time scales the number of detected spectral modes and, correspondingly, the value of the detected correlation function ${g^{(2)}}$ of the multimode SPDC field. However, the statistical distributions of the samples should be specially processed for calculating the exact ${g^{(2)}}$ values.
It is shown that high noise contributions to the analog samples can drastically decrease the determined ${g^{(2)}}$ below its true value. To avoid this effect it is necessary to apply a proper discrimination procedure to the raw sets of samples before calculating the moments of the sample distributions. During the discrimination procedure some noisy samples are set equal to zero in the set. In the case of an APD with single-photon response, the statistical distribution of the samples contains several well-resolved peaks corresponding to pure noise samples and to samples with one photon count during the gate time, two photon counts, etc. We found that true ${g^{(2)}}$ values with low uncertainty can be obtained if all the APD samples beyond the photon-count peaks are substituted by 0. For the analog PMT detector, the direct discrimination of low-amplitude samples and the heralding discrimination procedures, with cut-off levels corresponding to APD samples of different origin, were studied. It was found that some of them can lead to artificially increased ${g^{(2)}}$ values. The direct discrimination approach needs longer measurement times to obtain the final result with a satisfactory uncertainty, but it is applicable regardless of which type of detector is in the opposite SPDC channel. As the best method we recognize the heralding-type approach, which requires a detector with a single-photon response in the opposite channel. In this case, the PMT samples are discriminated or not depending on whether the amplitudes of the analog samples simultaneously recorded by the opposite detector are below or above a characteristic threshold value. The correct choice of the threshold value is shown to be an important point for successful application of the heralding approach; the threshold is determined from the pure-noise histogram of the single-photon detector samples recorded in complete darkness.
In general, discrimination can lead to underestimated quantum efficiencies of the detectors if the method of quantum calibration based on the correlation between signal and idler photons is applied. Fortunately, this does not concern the discrimination procedure proposed for the single-photon detectors inserted in the analog circuit. This is demonstrated for both cases, when an analog PMT and when another single-photon APD operates in the opposite SPDC channel.
The obtained results will be most relevant for applications of quantum SPDC technologies in the long-wavelength spectral ranges, where it is difficult to use single-photon detectors in at least one channel of the quantum-correlated biphoton field.
Russian Science Foundation (17-12-01134).
We are grateful to Maria Chekhova, Pavel Prudkovskii for stimulating discussions and to Anton Konovalov for the technical help.
The main part of the data underlying the results presented in this paper is available within the article; the more detailed part is not publicly available at this time but may be obtained from the authors upon reasonable request.
1. N. Gisin and R. Thew, "Quantum communication," Nat. Photonics 1(3), 165–171 (2007). [CrossRef]
2. F. Flamini, N. Spagnolo, and F. Sciarrino, "Photonic quantum information processing: a review," Rep. Prog. Phys. 82(1), 016001 (2019). [CrossRef]
3. V. Giovannetti, S. Lloyd, and L. Maccone, "Advances in quantum metrology," Nat. Photonics 5(4), 222–229 (2011). [CrossRef]
4. L. A. Lugiato, A. Gatti, and E. Brambilla, "Quantum imaging," J. Opt. B: Quantum Semiclass. Opt. 4(3), S176–S183 (2002). [CrossRef]
5. G.B. Lemos, V. Borish, G.D. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, "Quantum imaging with undetected photons," Nature 512(7515), 409–412 (2014). [CrossRef]
6. M. Genovese, "Real applications of quantum imaging," J. Opt. 18(7), 073002 (2016). [CrossRef]
7. P.-A. Moreau, E. Toninelli, T. Gregory, and M.J. Padgett, "Ghost Imaging Using Optical Correlations," Laser & Photonics Reviews 12(1), 1700143 (2018). [CrossRef]
8. A.V. Paterova and L.A. Krivitsky, "Nonlinear interference in crystal superlattices," Light: Sci. Appl. 9(1), 82 (2020). [CrossRef]
9. J.-Z Yang, M.-F. Li, X.-X. Chen, W.-K. Yu, and A.-N. Zhang, "Single-photon quantum imaging via single-photon illumination," Appl. Phys. Lett. 117(21), 214001 (2020). [CrossRef]
10. A. Nomerotski, M. Keach, P. Stankus, P. Svihra, and S. Vintskevich, "Counting of Hong-Ou-Mandel bunched optical photons using a fast pixel camera," Sensors 20(12), 3475 (2020). [CrossRef]
11. A.S. Clark, M. V. Chekhova, J.C.F. Matthews, J.G. Rarity, and R.F. Oulton, "Special Topic: Quantum sensing with correlated light sources," Appl. Phys. Lett. 118(6), 060401 (2021). [CrossRef]
12. A. Schori, D. Borodin, K. Tamasaku, and S. Shwartz, "Ghost imaging with paired x-ray photons," Phys. Rev. A 97(6), 063804 (2018). [CrossRef]
13. M. Arahata, Y. Mukai, B. Cao, T. Tashima, R. Okamoto, and S. Takeuchi, "Wavelength variable generation and detection of photon pairs in visible and mid-infrared regions via spontaneous parametric downconversion," J. Opt. Soc. Am. B 38(6), 1934–1941 (2021). [CrossRef]
14. J.O.D. Williams, J.A. Alexander-Webber, J.S. Lapington, M. Roy, I.B. Hutchinson, A.A. Sagade, M.-B. Martin, P. Braeuninger-Weimer, A. Cabrero-Vilatela, R. Wang, A. De Luca, F. Udrea, and S. Hofmann, "Towards a graphene-based low intensity photon counting photodetector," Sensors 16(9), 1351 (2016). [CrossRef]
15. P. M. Echternach, B. J. Pepper, T. Reck, and C. M. Bradford, "Single photon detection of 1.5 THz radiation with the quantum capacitance detector," Nat Astron 2(1), 90–97 (2018). [CrossRef]
16. G.K. Kitaeva, P.V. Yakunin, V.V. Kornienko, and A.N. Penin, "Absolute brightness measurements in the terahertz frequency range using vacuum and thermal fluctuations as references," Appl. Phys. B 116(4), 929–937 (2014). [CrossRef]
17. C. Riek, D. V. Seletskiy, A.S. Moskalenko, J.F. Schmidt, P. Krauspe, S. Eckart, S. Eggert, G. Burkard, and A. Leitenstorfer, "Direct sampling of electric-field vacuum fluctuations," Science 350(6259), 420–423 (2015). [CrossRef]
18. B. Haase, M. Kutas, F. Riexinger, P. Bickert, A. Keil, D. Molter, M. Bortz, and G. von Freymann, "Spontaneous parametric down-conversion of photons at 660 nm to the terahertz and sub-terahertz frequency range," Opt. Express 27(5), 7458 (2019). [CrossRef]
19. I.C. Benea-Chelmus, F.F. Settembrini, G. Scalari, and J. Faist, "Electric field correlation measurements on the electromagnetic vacuum state," Nature 568(7751), 202–206 (2019). [CrossRef]
20. K.A. Kuznetsov, E.I. Malkova, R.V. Zakharov, O.V. Tikhonova, and G.Kh. Kitaeva, "Nonlinear interference in strongly non-degenerate regime and Schmidt mode analysis," Phys. Rev. A 101(5), 053843 (2020). [CrossRef]
21. M. Kutas, B. Haase, P. Bickert, F. Riexinger, D. Molter, and G. von Freymann, "Terahertz quantum sensing," Sci. Adv. 6(11), 8065 (2020). [CrossRef]
22. T.I. Novikova, K.A. Kuznetsov, A.A. Leontyev, and G.Kh. Kitaeva, "Study of SPDC spectra to reveal temperature dependences for optical-terahertz biphotons," Appl. Phys. Lett. 116(26), 264003 (2020). [CrossRef]
23. R. Hanbury Brown and R. Q. Twiss, "Correlation between Photons in two Coherent Beams of Light," Nature 177(4497), 27–29 (1956). [CrossRef]
24. G. Brida, M. Genovese, I. Ruo-Berchera, M. Chekhova, and A. Penin, "Possibility of absolute calibration of analog detectors by using parametric downconversion: a systematic study," J. Opt. Soc. Am. B 23(10), 2185–2193 (2006). [CrossRef]
25. G.Kh. Kitaeva, A.A. Leontyev, and P.A. Prudkovskii, "Quantum correlation between optical and terahertz photons generated under multimode spontaneous parametric down-conversion," Phys. Rev. A 101(5), 053810 (2020). [CrossRef]
26. J. Perina, O. Haderka, A. Allevi, and M. Bondani, "Absolute calibration of photon-number-resolving detectors with an analog output using twin beams," Appl. Phys. Lett. 104(4), 041113 (2014). [CrossRef]
27. A. Avella, I. Ruo-Berchera, I.P. Degiovanni, G. Brida, and M. Genovese, "Absolute calibration of an EMCCD camera by quantum correlation, linking photon counting to the analog regime," Opt. Lett. 41(8), 1841–1844 (2016). [CrossRef]
28. D.N. Klyshko, "On the use of two-photon light for absolute calibration of photoelectric detectors," Sov. J. Quantum Electron. 10(9), 1112–1117 (1980). [CrossRef]
29. D.N. Klyshko, Photons and Nonlinear Optics (Gordon and Breach: New York, 1988).
30. A.N. Penin and A.V. Sergienko, "Absolute standardless calibration of photodetectors based on quantum two-photon fields," Appl. Opt. 30(25), 3582–3588 (1991). [CrossRef]
31. M. D. Eisaman, J. Fan, A. Migdall, and S. V. Polyakov, "Invited review article: Single-photon sources and detectors," Rev. Sci. Instrum. 82(7), 071101 (2011). [CrossRef]
32. S.V. Polyakov and A.L. Migdall, "High accuracy verification of a correlated photon-based method for determining photoncounting detection efficiency," Opt. Express 15(4), 1390–1407 (2007). [CrossRef]
33. C.T. Lee, "Nonclassical photon statistics of two-mode squeezed states," Phys. Rev. A 42(3), 1608–1616 (1990). [CrossRef]
34. O. A. Ivanova, T. S. Iskhakov, A. N. Penin, and M. V. Chekhova, "Multiphoton correlations in parametric down-conversion and their measurement in the pulsed regime," Quantum Electron. 36(10), 951–956 (2006). [CrossRef]
35. G. Brida, I.P. Degiovanni, M. Genovese, M.L. Rastello, and I. Ruo-Berchera, "Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera," Opt. Express 18(20), 20572 (2010). [CrossRef]
36. P. Prudkovskii, A. Leontyev, K. Kuznetsov, and G. Kitaeva, "Towards Measuring Terahertz Photon Statistics by a Superconducting Bolometer," Sensors 21(15), 4964 (2021). [CrossRef]
Spatiotemporal multi-disease transmission dynamic measure for emerging diseases: an application to dengue and zika integrated surveillance in Thailand
Chawarat Rotejanaprasert ORCID: orcid.org/0000-0003-2623-00771,2,
Andrew B. Lawson3 &
Sopon Iamsirithaworn4
New emerging diseases are public health concerns in which policy makers have to make decisions in the presence of enormous uncertainty. This is an important challenge in terms of emergency preparedness, requiring the operation of effective surveillance systems. A key concept for investigating the dynamics of infectious diseases is the basic reproduction number. However, it is difficult to apply in real situations because of its underlying theoretical assumptions.
In this paper we propose a robust and flexible methodology for estimating disease strength varying in space and time, using an alternative measure of disease transmission within the hierarchical modeling framework. The proposed measure is also extended to allow knowledge from related diseases to be incorporated, enhancing the performance of the surveillance system.
A simulation study was conducted to examine the robustness of the proposed methodology, and the results demonstrate that the proposed method allows robust estimation of the disease strength across simulation scenarios. A real data example of an integrative application to dengue and Zika surveillance in Thailand is provided. The real data example also shows that combining both diseases in an integrated analysis substantially decreases the variability of the model fit.
The proposed methodology is robust across several simulated scenarios of spatiotemporal transmission force and offers computational flexibility and practical benefits. This development has potential for broad applicability as an alternative tool for integrated surveillance of emerging diseases such as Zika.
The nature of infectious diseases has been changing rapidly in conjunction with dramatic societal and environmental changes. This is a substantial challenge in terms of emergency preparedness, requiring the implementation of a wide range of surveillance policies. The recent emergence of Zika outbreaks associated with birth defects prompted the World Health Organization (WHO) to declare a public health emergency of international concern in February 2016 [1]. Since then, there has been an explosion of research and planning as the global health community has turned its attention to understanding and controlling the Zika virus. However, important information needed to assess the global health threat from the virus is still lacking [2]. The behavior of an infectious disease is often difficult, and sometimes not feasible, to evaluate by conducting experiments with real communities. As a result, mathematical models explaining the transmission of infectious diseases are a valuable tool for planning disease-management policies.
An important question when a new emerging disease occurs concerns the disease transmission mechanism and how infectious the disease is. A key concept in epidemiology for indicating the scale and speed of spread in a susceptible population is the transmissibility of the infection, characterized by the basic reproduction number, R0. This quantity characterizes longer-term endemicity in a given population (an epidemic dies out when R0 < 1) [3]. An extensive range of estimation methods has been proposed (see examples [4, 5]). Although the basic reproduction number can be useful for understanding the transmissibility of an infectious disease, methods based on fitting deterministic transmission models are often difficult to use and generalize in practice due to context-specific assumptions which often do not hold [6, 7].
It is of practical importance to consider a computationally feasible and robust methodology to evaluate the force of infection. It has been proposed that tracking the time course of an epidemic can be partly achieved by estimating the effective (instantaneous) reproduction number [8, 9]. However, since contact rates among people may differ due to differences in the local environment, human behavior, vector abundance, and, potentially, interactions with other viruses, disease transmissibility will vary across locations as well. Although spatial heterogeneity has been considered (see examples [10, 11]), the reproduction numbers are estimated separately for single areas without accounting for spatial variation and overdispersion in the modeling. Due to the very limited information available when a new emerging infection initially occurs, it is natural to look for strategies that mirror relevant information. Surveillance systems have been operated singly for various types of infectious diseases; however, with multiple streams of geo-coded disease information available, it is important to be able to take advantage of the multivariate health data. The benefits of multivariate surveillance lie in the ability to observe concurrency of patterns of disease and to allow conditioning of one disease on others. To assist public health practitioners in assessing disease transmissibility in field settings, we thus develop the proposed methodology to allow for incorporating spatiotemporal knowledge from related diseases to enhance the performance of the surveillance system, which was not considered in previous studies.
The aim of this study is to develop a generic and robust methodology for estimating spatiotemporally varying transmissibility that can be instantaneously computed for each location and time within a user-friendly environment for real-time surveillance. Not only does our method have a practical interpretation with a theoretical foundation, but it can also be understood and applied by non-technical users. The proposed method is defined in the next section, with a simulation study to demonstrate the robustness of the methodology. A case study is also provided of an application to integrative surveillance of Dengue and Zika virus activities in Thailand.
Spatiotemporal measure of disease transmission
The basic reproduction number is one of the principal concepts widely used as an epidemiological measure of transmission potential; it is theoretically defined as the number of secondary infections produced by a single infectious individual in a susceptible population [3, 12]. However, owing to issues with its underlying theoretical assumptions, such as population susceptibility and the dynamic nature of infectious diseases, the basic reproduction number is difficult to apply in real situations. For instance, epidemics are rarely observed from the moment a new infection first enters a fully susceptible population; the disease may also have persisted without being detected, or detectable, for a period of time. This situation violates a primary assumption about the 'at risk' population and commonly arises for new emerging diseases such as Zika. Another example is that the nature of infectious diseases is dynamic and should be consistently monitored, whereas the basic reproduction number is defined over an infinite time horizon and thus fails to capture the dynamic behavior of infections. Therefore, we need to be cautious about the underlying model assumptions when applying the concept. Otherwise, it could lead to inappropriate disease-management policies or even uncontrollable outbreaks (see more discussion in [6, 7, 13]). In this paper we propose an alternative measure of spatiotemporally varying disease transmission, which we will call the surveillance reproduction number. Not affected by context-specific restrictions, this measure provides a practical interpretation that can be flexibly applied in many settings in infectious disease epidemiology. Moreover, this measure, which accounts for spatiotemporal heterogeneity, can robustly estimate the disease strength simultaneously for all areal units and time periods, and is ideally suited to emerging disease surveillance. To derive the proposed methodology, compartment modeling is reviewed as the foundation of our development.
There are various forms of compartmental models for infectious diseases (see examples in [5, 14]). One of the early modeling contributions is the Kermack-McKendrick model [15], a compartmental model with formulation of flow rates between disease stages of a population. A special case of the model is the well-known SIR (susceptible-infectious-recovered) model. A SIR model is usually used to describe a situation where a disease confers immunity against re-infection, to indicate that the passage of individuals is from the susceptible class S to the infective class I and to the removed class R. A common SIR model used to describe the disease at location i and time t can be specified as follows:
$$ {\displaystyle \begin{array}{l}\frac{dS_{it}}{dt}=-{a}_{it}{S}_{it}\\ {}\left(\frac{\partial }{\partial t}+\frac{\partial }{\partial l}\right){I}_{it}(l)={a}_{it}{S}_{it}-{b}_{it}{I}_{it}(l)\\ {}\frac{dM_{it}}{dt}={\int}_0^{\infty }{b}_{it}{I}_{it}(l) dl\end{array}} $$
where Iit(0) = aitSit. Denote the numbers of susceptible and recovered (removed) individuals by Sit and Mit. Note that we use M for the removed class to avoid notational confusion with the surveillance reproduction number that will be constructed later. Iit(l) is the number of individuals at location i and time t who have been infected for a time l, where l is the time elapsed since infection. bit(l) is the recovery rate at infection age l. ait is the disease transmissibility at location i and time t, defined as \( {a}_{it}={\int}_0^{\infty }{c}_{it}(l){I}_{it}(l)\, dl \), where cit(l) is the rate of secondary transmission per single infectious case. Although infectious disease modeling is usually described in a preferential sampling setting in which locations are spatially modeled, one should be aware of possible bias due to the selective sampling scheme [16, 17]. Alternatively, our methodology is developed in a conditional framework by conditioning the aggregated count on a fixed areal unit such as a county or health district.
The disease dynamic is assumed to follow a Poisson process such that the rate at which someone infected at time t − 1 generates new infections in time step t at location i is μit. The relationship between the incidence rate μit and the prevalence Iit is assumed to be Iit(l) = hit(l)μi,t − l for t − l > 0, where hit(l) > 0, l > 0, is a proportionality constant, and μit = Iit(0). The incidence rate, i.e., the rate at which susceptible individuals become infected, at location i and time t equals aitSit, that is, μit = aitSit. The transmissibility \( {a}_{it}={\int}_0^{\infty }{c}_{it}(l){I}_{it}(l)\, dl \) can be seen as the force of infection, the rate at which susceptible people become infected. For example, this quantity increases if a person with a respiratory disease does not maintain good hygiene during the course of infection, or decreases if that person rests in bed. Then we have that
$$ {\mu}_{it}={a}_{it}{S}_{it}={\int}_0^{\infty }{c}_{it}(l){S}_{it}(l){h}_{it}(l){\mu}_{it-l} dl. $$
Let ζit(l) = cit(l)Sit(l)hit(l). ζit(l) reflects the reproductive power or effective contact rate between infectious and susceptible individuals at calendar time t, location i and infected time l.
To define and develop the surveillance reproduction number Rs,it, we further assume that there exist two sets of functions: the set of surveillance reproduction numbers {Rs,it} and the set \( {\boldsymbol{G}}_{it}=\left\{\;{g}_{it}(l)\mid{\int}_0^{\infty }{g}_{it}(l)\ dl=1\;\right\} \) of distributional functions over the infectious time at each location, such that ζit(l) can be decomposed into a product of those functions, i.e., ζit(l) = Rs,it git(l). There are a number of functions in those sets satisfying these conditions. A non-trivial solution can be defined via
$$ {\int}_0^{\infty }{\zeta}_{it}(l)\; dl={\int}_0^{\infty }{R}_{s, it}{g}_{it}(l)\; dl={R}_{s, it}{\int}_0^{\infty }{g}_{it}(l)\; dl={R}_{s, it}. $$
However, this leads to the same issue as with the basic reproduction number: we usually do not know the number of susceptible people for a given location and time, which limits its usefulness in field settings for emerging diseases. Hence we define the surveillance reproduction number through the renewal relation \( {\mu}_{it}={\int}_0^{\infty }{\zeta}_{it}(l){\mu}_{it-l}\, dl \). That is,
$$ {\mu}_{it}={\int}_0^{\infty }{R}_{s, it}{g}_{it}(l){\mu}_{it-l} dl={R}_{s, it}{\int}_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl $$
and, therefore, we have that
$$ {R}_{s, it}=\frac{\mu_{it}}{\int_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl}. $$
Since \( {\int}_0^{\infty }{g}_{it}(l)\ dl=1 \), Rs,it can also be interpreted as the ratio of the current incidence rate to the total (weighted sum of) infectiousness of infected individuals. Because patients' information is often collected in a discrete fashion, Rs,it can be estimated as \( {R}_{s, it}\approx \frac{\mu_{it}}{\sum_{l=1}^L{g}_{it}(l){\mu}_{it-l}} \), where L is the maximum period of infection. Thus this quantity represents the force of infection as the number of secondary cases that each infected individual would generate, averaged over their infectious lifespan, at location i during time t. However, it is hard to derive incidence density rates due to the lack of monitoring of individual new cases and of the truly exposed population during a given time period and location. We therefore assume that \( {\mu}_{it}={h}_{it}^{\hbox{'}}{I}_{it} \), where \( {h}_{it}^{\hbox{'}}>0 \) is a proportionality constant between prevalence and incidence at calendar time t and location i. Then the surveillance reproduction number can be expressed as
$$ {R}_{s, it}=\frac{h_{it}^{\hbox{'}}{I}_{it}}{\int_0^{\infty }{g}_{it}(l){h}_{it-l}^{\hbox{'}}{I}_{it-l} dl}\approx \frac{I_{it}}{\int_0^{\infty }{g}_{it}(l){I}_{it-l} dl}. $$
Hence, to estimate the surveillance number with prevalence, the ratio of incidence and prevalence is assumed to be nearly constant over time. This is a limitation of our development. This assumption may not be appropriate for long duration diseases such as chronic conditions but rather suitable for infections with relatively short duration.
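As a minimal illustration (not part of the original paper, which implemented the model in WinBUGS), the Python sketch below computes the discrete prevalence-based estimate Rs,t ≈ It / Σl gl It−l for a single district; the arrays I and g are hypothetical weekly counts and a normalized infectiousness profile.

```python
import numpy as np

def surveillance_R(I, g):
    """Discrete estimate R_{s,t} ~ I_t / sum_{l=1}^{L} g_l * I_{t-l} for one areal unit.

    I : weekly case counts (prevalence) for a single district.
    g : infectiousness profile of length L, normalised to sum to one.
    """
    I = np.asarray(I, dtype=float)
    g = np.asarray(g, dtype=float)
    g = g / g.sum()                       # enforce the sum-to-one constraint
    L = len(g)
    R = np.full(len(I), np.nan)           # undefined for the first L weeks
    for t in range(L, len(I)):
        denom = sum(g[l - 1] * I[t - l] for l in range(1, L + 1))
        if denom > 0:
            R[t] = I[t] / denom
    return R

# toy example: three seed weeks followed by growth, with a hypothetical 3-week profile
print(np.round(surveillance_R([2, 1, 6, 8, 11, 15, 22, 30], [0.2, 0.5, 0.3]), 2))
```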
The proposed methodology has practical advantages over the traditional basic reproduction number. One is that our method is based on prevalence and is not affected by assumptions about susceptibility, which is often difficult or infeasible to know. Another is that, since our metric is dynamic (it does not rely on an infinite-horizon definition), it can be calculated sequentially, which makes it very suitable for monitoring disease strength, ideally for emerging diseases.
To account for spatiotemporal variation and overdispersion, μit is linked to a linear predictor consisting of local variables, such as environmental and demographic factors, and space-time random effects accounting for spatiotemporal heterogeneity: log(μit) = α + Xitβit + ui + vi + λt + δit. The correlated (ui) and uncorrelated (vi) spatial components have an intrinsic conditional autoregressive (ICAR) prior distribution and a zero-mean Gaussian distribution, respectively. In addition, there are a separate temporal random effect (λt) and a space-time interaction term (δit) in the linear predictor. The temporal effect is often described using an autoregressive prior distribution, thus allowing for a type of nonparametric temporal effect; here this is a random walk prior with one-unit lag. For the interaction term, the prior is usually assumed to be a zero-mean Gaussian distribution. The estimate of the reproduction number is, however, also dependent on the choice of the infectiousness profile, git(l), which is assumed to be log-normally distributed and standardized to sum to one.
Let yit be the number of new cases at location i and time t, with disease transmission modeled as a Poisson process. However, cases are usually reported at discrete times, such as weekly or monthly. Assuming the transmissibility remains constant over the time interval (t, t + 1], the incidence at location i and time t is Poisson distributed with mean μit. The full model specification is then as follows:
$$ {\displaystyle \begin{array}{l}{y}_{it}\sim Poisson\left({\mu}_{it}\right)\\ {}\log \left({\mu}_{it}\right)=\alpha +{\boldsymbol{X}}_{it}{\boldsymbol{\beta}}_{it}+{u}_i+{v}_i+{\lambda}_t+{\delta}_{it}\kern1em \\ {}\alpha \sim N\left(0,{\tau}_{\alpha}^{-1}\right);{\beta}_{it}\sim N\left(0,{\tau}_{\beta}^{-1}\right)\\ {}{u}_i\sim ICAR\left({\tau}_u^{-1}\right);{v}_i\sim N\left(0,{\tau}_v^{-1}\right)\\ {}{\lambda}_t\sim N\left({\lambda}_{t-1},{\tau}_{\lambda}^{-1}\right)\\ {}{\delta}_{it}\sim N\left(0,{\tau}_{\delta}^{-1}\right)\end{array}}\kern0.5em {\displaystyle \begin{array}{l}{R}_{s, it}=\frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}}\\ {}{g}_{il}=\frac{\exp \left({w}_{il}\right)}{\sum_{l=1}^L\exp \left({w}_{il}\right)}\\ {}{w}_{il}\sim N\left(0,{\tau}_w^{-1}\right)\\ {}{\tau}_{\ast}^{-1/2}\sim Unif\left(0,10\right).\end{array}} $$
To evaluate our proposed methodology, we simulate data without covariates in several situations with different magnitudes of transmissibility. The simulation map used as a basis for our evaluation is the district map of the province of Bangkok, Thailand. This province has 50 districts (i = 1–50) with a reasonably regular spatial distribution. The simulated incidence is generated for 20 weeks (t = 1–20) in four district groups with different levels of the reproduction number. Figure 1 displays maps showing the locations of the simulated Rs of each district group at weeks 5, 10, 15, and 20. The simulated incidence of each district group with its level of Rs is shown in Fig. 2, in which each dot represents a simulated value from a given simulation set. The first group (middle region in Fig. 1) is simulated with increasing magnitudes of transmission as Rs,it = 0.2 + (t × 0.15); that is, Rs,it increases by 0.15 in every time period. Exponentially increasing incidence is thus generated in this scenario to represent regions with an outbreak (left panel in Fig. 2). The second district group (western region in Fig. 1) is assumed to have decreasing magnitudes simulated as Rs,it = 4.0 − (t × 0.2). As can be seen in Fig. 2 (second panel from the left), the incidence in this scenario increases at the beginning due to the strongly positive force of infection but decreases afterwards. In the third scenario (eastern region in Fig. 1), Rs,it is assumed to equal 1.5 until week 12 and is reduced to 0.8 afterwards. This scenario represents the situation where an effective intervention is introduced to control an outbreak. The rest of the districts are assumed to have a constant low infection rate of Rs,it = 0.8 over the 20 time periods. The discrete weight wil is drawn from a normal distribution with mean 1.5 and standard deviation one. An infectious time L of 3 weeks is used to generate the incidence.
Bangkok maps of simulated Rs during weeks 5, 10, 15, and 20 (left-right)
Simulated incidence of districts in group 1 (increasing Rs), group 2 (decreasing Rs), group 3 (with a jump) and groups 4 (constant Rs = 0.8)
We generate 100 simulated incidence datasets, starting with the numbers of newly infected people set to 2, 1, and 6 for the first 3 weeks. For weeks t > 3, the new cases yit are sampled from a Poisson distribution for each location with mean \( {\mu}_{it}={R}_{s, it}{\sum}_{l=1}^3{g}_{il}{\mu}_{it-l} \). That is, \( {\mu}_{i1}=2;{\mu}_{i2}=1;{\mu}_{i3}=6;{\mu}_{it}={R}_{s, it}{\sum}_{l=1}^3{g}_{il}{\mu}_{it-l},t>3 \). The infectious time interval is also evaluated in the simulation study to examine the effect of different window sizes. We investigate the sensitivity of the window choice by assuming L = 2, 3, and 4 weeks, because the infectious period of arthropod-borne diseases such as Dengue and Zika usually lasts a couple of weeks [18]. The results displayed are from posterior sampling carried out in WinBUGS, a user-friendly software package, using MCMC with an initial burn-in period of 100,000 iterations, after which convergence of the MCMC chains was assessed.
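A minimal Python sketch of this data-generating recursion for one district is given below (the study itself used WinBUGS). It follows the stated set-up; the random seed and the printed scenario (district group 1) are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_district(Rs, L=3, seeds=(2, 1, 6), T=20):
    """Weekly counts y_t ~ Poisson(mu_t) with mu_t = R_{s,t} * sum_{l=1}^{L} g_l * mu_{t-l}.

    Rs    : callable returning the simulated reproduction number for week t (1-based).
    seeds : counts used as mu_1, mu_2, mu_3 before the recursion starts.
    """
    w = rng.normal(1.5, 1.0, size=L)             # raw weights, as in the simulation set-up
    g = np.exp(w) / np.exp(w).sum()              # standardised infectiousness profile
    mu = np.array(seeds, dtype=float)
    y = list(seeds)
    for t in range(L + 1, T + 1):
        m = Rs(t) * np.dot(g, mu[-1:-L - 1:-1])  # weighted sum over the last L weeks
        mu = np.append(mu, m)
        y.append(rng.poisson(m))
    return np.array(y)

# district group 1 of the simulation study: R_{s,t} = 0.2 + 0.15 * t (outbreak scenario)
print(simulate_district(lambda t: 0.2 + 0.15 * t))
```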
The simulated and corresponding estimated Rs for each district group with different infectious times are shown in Fig. 3. Our methodology recovers the constant surveillance reproduction number used for simulation in scenario 4. The steady changes in Rs are detected for both the increasing (scenario 1) and decreasing (scenario 2) forces of infection. It can also effectively identify a jump in transmissibility (scenario 3). Figure 4 (top row) displays the mean squared error (MSE) of the estimated surveillance numbers in all scenarios; the correct infectious time, L = 3, yields the most precise estimate (the least MSE).
Plots of the posterior estimated Rs of all district groups with infectious periods of 2 (top), 3 (middle), and 4 (bottom) weeks from all simulated datasets. The black lines show the estimated mean, with dashed lines showing the 95% credible interval. The grey lines display posterior realizations and the red lines are the true Rs used for simulation
Bar plots of MSE and MSPRE of the estimated and predicted reproduction number with different infectious times of four district groups
The estimate of the surveillance number also depends on the choice of the time window size L. However, it may not be feasible to know the true infection time in real situations. We therefore examine a loss-function metric which employs the predictive distribution to guide selection of the infectiousness profile. A commonly used loss function is the mean squared predictive error (MSPE) [19], comparing the observed data to the values predicted from a fitted model. However, we are interested in a loss function for the estimation of Rs. We thus propose another predictive loss function, the mean squared predictive reproduction error (MSPRE), to evaluate the model's predictive adequacy in terms of the reproduction number, defined as
\( {MSPRE}_{it}={\sum}_{n=1}^N{\left({R}_{s, itn}^y-{R}_{s, itn}^{y^{pred}}\right)}^2/N \), where \( {R}_{s, itn}^y=\frac{y_{it}}{\sum_{l=1}^L{g}_{inl}{y}_{it-l}} \), \( {R}_{s, itn}^{y^{pred}}=\frac{y_{itn}^{pred}}{\sum_{l=1}^L{g}_{inl}{y}_{i\left(t-l\right)n}^{pred}} \), N is the size of the posterior sample, and \( {y}_{itn}^{pred} \) is generated from the posterior predictive distribution at the n-th posterior draw after the burn-in period. Figure 4 (bottom row) presents the MSPRE for the district groups corresponding to different choices of the infection time window. We can see that the infectious time of 3 weeks has both the least MSE and the least MSPRE. Thus this metric can provide guidance on which time windows to consider in practice. The use of MSPRE will also be demonstrated in the case study provided later. It should be noted that the time elapsed since infection, l, could vary by individual. However, we model the aggregated count conditional on spatial units rather than at the individual level, so the infectious time is effectively averaged over an area. Given the sampling framework it is reasonable to assume a constant infectious time for the population. Nonetheless, it is also possible that the infectious time has a spatiotemporal distribution over the study area, perhaps dependent on environmental or demographic variables; such covariates should then be included in the model when available.
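Below is a minimal Python sketch (an addition for this edit, not code from the paper) of how MSPRE could be computed for one district from retained MCMC output. The array names y, y_pred, and g are hypothetical placeholders for the observed counts, posterior-predictive draws, and sampled infectiousness profiles.

```python
import numpy as np

def mspre(y, y_pred, g):
    """MSPRE for a single district, averaged over weeks t >= L.

    y      : observed weekly counts, shape (T,).
    y_pred : posterior-predictive draws, shape (N, T), one row per retained MCMC sample.
    g      : sampled infectiousness profiles, shape (N, L), each row summing to one.
    """
    y = np.asarray(y, dtype=float)
    N, T = y_pred.shape
    L = g.shape[1]
    per_week = []
    for t in range(L, T):
        sq_err = []
        for n in range(N):
            denom_obs = sum(g[n, l - 1] * y[t - l] for l in range(1, L + 1))
            denom_pred = sum(g[n, l - 1] * y_pred[n, t - l] for l in range(1, L + 1))
            if denom_obs > 0 and denom_pred > 0:
                sq_err.append((y[t] / denom_obs - y_pred[n, t] / denom_pred) ** 2)
        if sq_err:
            per_week.append(np.mean(sq_err))   # MSPRE_it, averaged over the N draws
    return np.mean(per_week)                   # single summary across weeks
```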
As presented, we have developed a robust methodology to estimate disease transmissibility varying across locations and time periods. Our method allows for computational flexibility and is not affected by conventional restrictions which are generally difficult to satisfy in real situations. However, due to the very limited knowledge available when a new emerging infection initially occurs, it is extremely challenging for policy makers to make decisions under enormous uncertainty. Therefore it is essential to consider analyses that integrate relevant information streams in order to develop the best disease-management plans possible. Hence, in the next section we extend the univariate framework to allow for incorporating knowledge from related diseases.
Multivariate surveillance reproduction number
Limitations in the availability of disease information constrain public health efforts to prevent and control outbreaks. Thus, utilizing the knowledge we have from other sources, such as related diseases, can in principle help improve the surveillance system. Dengue is one of the major arthropod-borne diseases in tropical and sub-tropical regions. The virus belongs to the genus Flavivirus and, like Zika, is primarily transmitted by Aedes mosquitoes. Similarity in virological characteristics, identification as etiologic agents of similar illnesses, and their co-infection suggest that these two Aedes mosquito-transmitted viruses can circulate in the same area, confirming the underlying potential for Zika to have a spreading pattern similar to Dengue [20, 21]. Therefore, in this section we develop a multivariate transmissibility measure allowing for the transfer of spatiotemporal knowledge between these two flaviviruses in order to maximize surveillance ability, which was not considered in the previous literature.
In the multi-disease surveillance setting, spatial data on multiple diseases are observed at each time period. We assume that yit and \( {y}_{it}^s \) are the numbers of new Zika and Dengue (with superscript s) cases, which are Poisson distributed with means μit and \( {\mu}_{it}^s \) for each area i and time t. Using a logarithmic link function, α and αs are the overall intercepts, and Xitβit and \( {\boldsymbol{X}}_{it}^s{\boldsymbol{\beta}}_{it}^s \) are the covariate predictors for the two diseases. In general, once multiple diseases are introduced into an analysis there is a need to consider relations between the diseases. This can be achieved in various ways. A basic approach is to consider cross-correlation between the diseases [22, 23]. There is a large literature on the specification of cross-disease models using Gaussian processes [24]. The multivariate conditional autoregressive (MCAR) model [25] and the shared component model [26] are the two primary approaches to modeling disease risk correlations across both spatial units and diseases. Here we use an extended version of the convolution model [27] to incorporate the diseases' correlation using multivariate spatially correlated, ui and \( {u}_i^s \), and non-correlated, vi and \( {v}_i^s \), random effects to account for unobserved confounders and spatial variation. To capture the temporal trend, a multivariate autoregressive prior distribution, which allows for sharing temporal information between the diseases, is assumed for the temporal random effects λt and \( {\lambda}_t^s \). In addition, there is a space-time interaction term for each disease, δit and \( {\delta}_{it}^s \), which are assumed to have Gaussian prior distributions. Finally, the infectivity profiles gil and \( {g}_{il}^s \) are jointly approximated by a standardized multivariate log-normal distribution. A multivariate extension of the reproduction number, Rms,it, incorporating information from both diseases can be defined as \( {R}_{ms, it}=\frac{\mu_{it}}{\int_0^{\infty }{g}_{it}(l){\mu}_{it-l} dl}\approx \frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}} \) and \( {R}_{ms, it}^s=\frac{\mu_{it}^s}{\int_0^{\infty }{g}_{it}^s(l){\mu}_{it-l}^s dl}\approx \frac{\mu_{it}^s}{\sum_{l=1}^L{g}_{il}^s{\mu}_{it-l}^s} \). The full specification of the joint model for Zika and Dengue is then as follows:
$$ {\displaystyle \begin{array}{l}{y}_{it}\sim Poisson\left({\mu}_{it}\right);{y}_{it}^s\sim Poisson\left({\mu}_{it}^s\right)\\ {}\log \left({\mu}_{it}\right)=\alpha +{X}_{it}{\beta}_{it}+{u}_i+{v}_i+{\lambda}_t+{\delta}_{it}\\ {}\log \left({\mu}_{it}^s\right)={\alpha}^s+{X}_{it}^s{\beta}_{it}^s+{u}_i^s+{v}_i^s+{\lambda}_t^s+{\delta}_{it}^s\\ {}\alpha \sim N\left(0,{\tau}_{\alpha}^{-1}\right);{\alpha}^s\sim N\left(0,{\tau}_{\alpha^s}^{-1}\right)\\ {}{\beta}_{it}\sim N\left(0,{\tau}_{\beta}^{-1}\right);{\beta}_{it}^s\sim N\left(0,{\tau}_{\beta^s}^{-1}\right)\\ {}\left[\begin{array}{c}{u}_i\\ {}{u}_i^s\end{array}\right]\sim MCAR\left({\sum}_u^{-1}\right);\left[\begin{array}{c}{v}_i\\ {}{v}_i^s\end{array}\right]\sim MVN\left(\begin{array}{c}0\\ {}0\end{array},{\sum}_v^{-1}\right)\\ {}\left[\begin{array}{c}{\lambda}_t\\ {}{\lambda}_t^s\end{array}\right]\sim MVN\left(\begin{array}{c}{\lambda}_{t-1}\\ {}{\lambda}_{t-1}^s\end{array},{\sum}_{\lambda}^{-1}\right)\end{array}}\kern0.5em {\displaystyle \begin{array}{l}{\delta}_{it}\sim N\left(0,{\tau}_{\delta}^{-1}\right);{\delta}_{it}^s\sim N\left(0,{\tau}_{\delta^s}^{-1}\right)\\ {}{R}_{ms, it}=\frac{\mu_{it}}{\sum_{l=1}^L{g}_{il}{\mu}_{it-l}};{R}_{ms, it}^s=\frac{\mu_{it}^s}{\sum_{l=1}^L{g}_{il}^s{\mu}_{it-l}^s}\\ {}{g}_{il}=\frac{\exp \left({w}_{il}\right)}{\sum_{l=1}^L\exp \left({w}_{il}\right)};{g}_{il}^s=\frac{\exp \left({w}_{il}^s\right)}{\sum_{l=1}^L\exp \left({w}_{il}^s\right)}\\ {}\left[\begin{array}{c}{w}_{il}\\ {}{w}_{il}^s\end{array}\right]\sim MVN\left(\begin{array}{c}0\\ {}0\end{array},{\Sigma}_w^{-1}\right)\\ {}{\tau}_{\ast}^{-1/2}\sim Unif\left(0,10\right).\end{array}} $$
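As an illustration of how the multivariate prior shares temporal information between the two diseases, the short Python sketch below draws the joint temporal effects (λt, λt^s) as a bivariate Gaussian random walk; the covariance values are hypothetical and are not taken from the fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def bivariate_random_walk(T, step_cov):
    """Joint temporal effects (lambda_t, lambda_t^s) as a bivariate Gaussian random walk."""
    steps = rng.multivariate_normal(mean=np.zeros(2), cov=step_cov, size=T)
    return np.cumsum(steps, axis=0)          # cumulative sums give the random walk

# strongly correlated weekly increments (assumed values, for illustration only)
step_cov = np.array([[0.05, 0.04],
                     [0.04, 0.05]])
lam = bivariate_random_walk(T=7, step_cov=step_cov)   # columns: Zika and Dengue effects
print(np.round(lam, 2))
```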
Application to dengue and Zika virus surveillance activities in Thailand
Dengue is endemic in Thailand, with peak transmission rates occurring in the rainy season, between June and September, all across the country but particularly in northeastern Thailand. Zika was first reported in Thailand in 2012, and the Bangkok Metropolitan Authority has been conducting regular screening tests on its residents since then. To demonstrate the performance of the proposed integrative method, we apply the multivariate surveillance number, Rms, to Dengue and Zika prevalence in Thailand. The cases of both diseases were from the province of Chanthaburi, consisting of 10 health districts, during July 10th until August 27th 2016, a total of 7 weeks. The information on case patients was reported by the public hospitals to the surveillance center. Note that the dengue patients included in this analysis were those diagnosed with either Dengue fever (DF) or Dengue hemorrhagic fever (DHF). The results displayed are based on the prevalence approximation of the surveillance number in (6) and posterior sampling carried out using WinBUGS software with an initial burn-in period of 100,000 iterations, after which convergence of the MCMC chains was assessed.
The estimates of the surveillance numbers are expected to depend on the choice of the size of infectious time l. The Aedes aegypti mosquito is the primary vector of Dengue. The virus is transmitted to humans through the bites of infected female mosquitoes. After virus incubation for 4–10 days, an infected mosquito is capable of transmitting the virus for the rest of its life. Infected symptomatic or asymptomatic humans are the main carriers and multipliers of the virus, serving as a source of the virus for uninfected mosquitoes. Patients who are already infected with the dengue virus can transmit the infection (for 4–5 days; maximum 12 days) via Aedes mosquitoes after their first symptoms appear. Zika is usually milder with symptoms lasting for several days to a week. People usually don't get sick enough to go to the hospital, and they very rarely die of Zika [18].
MSPRE is applied to guide the choice of infectious time for the model. Table 1 displays the MSPRE values for both diseases fitted with different sizes of the infectious time. The window size of 2 weeks fitted with the univariate model yields the least MSPRE for both Zika and Dengue. Though, from the clinical manifestation point of view, a window size of less than 2 weeks may be plausible for Zika, the result suggested by MSPRE is sensible when combined with the epidemiological knowledge that the incubation period and the virus lifespan in a mosquito can prolong the infectious period. Nonetheless, we have only weekly data and would recommend using a finer temporal scale when such data are available and appropriate. To evaluate performance, we compare the univariate and multivariate models weekly so that both models use the same set of data. Based on the guidance from MSPRE and the information discussed earlier, we thus choose the size of the infectious time to be 2 weeks for both diseases in the analysis. Because the infectious times are assumed to be 2 weeks, for simplicity we assume the weights of the serial intervals to have a Beta(1,1) prior distribution instead of a standardized log-normal distribution. All covariance matrices are assumed to be the fixed matrix \( \left[\begin{array}{cc}100& 0\\ {}0& 100\end{array}\right] \), which could, however, also be modeled with a Wishart distribution.
Table 1 MSPRE of Dengue and Zika fitted with the univariate model for different time windows
Table 2 presents DIC values obtained from the analysis with both the univariate and multivariate models during weeks 4–7. The DIC of the multivariate model is less than those of each disease fitted separately by the univariate model across all time periods. Moreover, pD, which can be seen as model complexity, is also much smaller in the case of the multivariate model. This suggests that pooling information from both diseases in the analysis essentially decreases variability in model fitting and provides a much better description of the spreading pattern of Zika and Dengue.
Table 2 DIC (pD) values for Dengue and Zika fitted with univariate and multivariate models during weeks 4–7
The estimated Rs and Rms describe the pattern of Dengue transmission similarly. However, the Rms of Dengue (bottom row, second column in Fig. 5), which also incorporates information on the Zika pattern through the integrative platform, gives a smaller transmission rate than Rs (middle row, second column in Fig. 5) at a district in the south during the week of August 7th – August 13th. This is because, during the same week, the Zika incidence in that district (first row, second column in Fig. 6) was decreasing relative to the previous week (first row, first column in Fig. 6). On the other hand, during the week of August 7th – August 13th, Dengue incidence was increasing relative to the previous week. Hence the Rms of Zika estimated from the multivariate model suggests a higher estimate of the disease strength than the univariate model. These results demonstrate that the proposed integrated model allows for transferring transmission knowledge between the related diseases to optimize surveillance ability.
Maps of weekly Dengue incidence (top), Rs (middle), and Rms (bottom) in Chantaburi during July 31st – August 27th 2016
Maps of weekly Zika incidence (top), Rs (middle), and Rms (bottom) in Chantaburi during July 31st – August 27th 2016
A new emerging disease can occur in one place and have the potential to spread globally. This is an important challenge in terms of emergency preparation, requiring the operation of surveillance systems. A traditional concept for studying the dynamics of infectious diseases is the basic reproduction number. However, the intuitive appeal of its theoretical interpretation can outlast its appropriateness if it is applied incautiously, so it is crucial to be aware of its caveats when adopting that measure. Otherwise, it could form an inappropriate backbone for disease-management policy. Alternatively, we present a robust and flexible methodology for estimating spatiotemporally varying reproduction numbers. Free of the issues of context-specific assumptions, our method provides more practical advantages and can be used to simultaneously estimate disease transmissibility for each location and time within a user-friendly environment for real-time assessment of new emerging diseases.
To evaluate our method, we simulate data in several situations with different magnitudes of transmission and sizes of the infectious period. The simulation results suggest that the proposed method allows robust estimation of the surveillance reproduction number used for simulation across the simulated scenarios. Though the simulation study suggests the method does not suffer much from the choice of infection window size, MSPRE may be helpful in providing guidance on the choice of infectious period in practice. Due to the limited information available when a new emerging infection first occurs, the univariate framework is extended to allow for incorporating knowledge from related diseases in order to maximize surveillance capability. A case study is provided of an integrative application to Dengue and Zika surveillance in Thailand.
A significant portion of arbovirus incidence (e.g., Zika) is underestimated due to asymptomatic infection that does not present any clinical symptoms [28]. Nevertheless, the contribution of asymptomatic reservoirs to the overall disease burden has not been well quantified, which introduces considerable uncertainty into modeling studies of disease transmission dynamics and control strategies. Policy and practice on case detection and reporting of Dengue and Zika are critical factors due to the nature of these diseases, which have a high proportion of asymptomatic infection. Therefore, novel surveillance tools, such as integrated surveillance, should be developed and applied to improve estimates of disease incidence, especially for infections with many asymptomatic cases such as Dengue and Zika.
The significance of our development lies in the advantages of multivariate surveillance: the ability to borrow strength across diseases and to allow conditioning of one disease on others. When applying the multivariate framework, the choice of relevant diseases should be epidemiologically and clinically sound. Studies indicate the underlying possibility for Zika to have a spreading pattern similar to Dengue [2, 20]. We hence extend our method to allow for integrating related diseases' information and demonstrate its performance in the example of Dengue and Zika surveillance in Thailand. The data example shows that combining both diseases in an integrated analysis essentially decreases the variability of model fitting. The result suggests that the proposed integrative platform, which allows for transferring transmission knowledge between related diseases sharing a similar etiology, not only can enhance the estimation of transmissibility but also helps explain the spreading patterns of Zika and Dengue much better. This is a significant benefit of the proposed multi-disease measure for improving surveillance ability.
Though the proposed method demonstrates robust performance, it should be noted that these data present considerable clinical and epidemiological complexity. In this work, prevalence information is used in the model due to the difficulties of disease investigation, which implies the assumption that the ratio of incidence to prevalence is nearly constant over time. This is a limitation of our development. This assumption may not be appropriate for chronic diseases but is rather suitable for infections with relatively short duration. There is a further need for studies of the persistence of virus circulation and of ecological factors, including characterization of immunological cross-reactivity, which could shorten or prolong the epidemic [29]. Across both clinical and ecological studies, it is also important to evaluate the effect of host, viral, and vector relationships for a fuller understanding of the disease mechanism [18]. However, the proposed methodology can serve as a flexible platform to incorporate those potential epidemiologic and ecologic determinants of disease risk as they become available.
New emerging diseases are public health crises in which policy makers have to make decisions in the presence of massive uncertainty. As presented, the proposed methodology is robust in several simulated scenarios of transmission force, with computational flexibility and practical benefits. This development is thus ideally suited to surveillance applications for new emerging diseases such as Zika. To further prevent and control new emerging infections, we must have a fuller understanding of the modes of transmission, which is currently lacking. In such a context, it is natural to look for strategies that mirror those applied to relevant related diseases. By transferring information from diseases sharing a similar etiology, such as Dengue, our multivariate framework can successfully integrate knowledge and hence improve the surveillance system effectively. Therefore, in the current situation, where there are threats from new infections, a robust and flexible platform is essential and needs to be readily prepared in order to rapidly gain an understanding of a new disease's transmission mechanism and counter local and global health concerns.
The data that support the findings of this study were obtained from the Thai Bureau of Epidemiology, but restrictions apply to the availability of these data, which were used with permission for the current study, and are therefore not publicly available. However, data may be available from the authors upon reasonable request and with permission of the Thai Bureau of Epidemiology.
DF:
Dengue fever
DHF:
Dengue hemorrhagic fever
DIC:
Deviance Information Criterion
ICAR:
Intrinsic Conditional Autoregressive model
MCAR:
Multivariate Conditional Autoregressive model
MCMC:
Markov Chain Monte Carlo
MSE:
Mean Squared Error
MSPE:
Mean Squared Predictive Error
MVN:
Multivariate Normal distribution
R 0 :
Basic reproduction number
R ms :
Multivariate surveillance reproduction number
R s :
Surveillance reproduction number
Lessler J, Chaisson LH, Kucirka LM, Bi Q, Grantz K, Salje H, Cummings DA. Assessing the global threat from Zika virus. Science. 2016;353(6300).
Ferguson NM, et al. Countering the Zika epidemic in Latin America. Science. 2016;353(6297):353–4.
Diekmann O, Heesterbeek JAP. Mathematical epidemiology of infectious diseases: model building, analysis, and interpretation. Chichester: Wiley; 2000.
Dietz K. The estimation of the basic reproduction number for infectious diseases. Stat Methods Med Res. 1993;2(1):23–41.
Brauer F. Compartmental models in epidemiology. In Mathematical epidemiology. Berlin, Heidelberg: Springer; 2008. (pp. 19-79).
Li J, Blakeley D, Smith RJ. The failure of R0. Comput Math Methods Med. 2011;2011:527610.
Heffernan JM, Smith RJ, Wahl LM. Perspectives on the basic reproductive ratio. J R Soc Interface. 2005;2(4):281–93.
Fraser C. Estimating individual and household reproduction numbers in an emerging epidemic. PLoS One. 2007;2(8):e758.
Cori A, et al. A new framework and software to estimate time-varying reproduction numbers during epidemics. Am J Epidemiol. 2013;178(9):1505–12.
Keeling MJ. The effects of local spatial structure on epidemiological invasions. Proc R Soc Lond B Biol Sci. 1999;266(1421):859–67.
Chowell G, et al. Estimation of the reproduction number of dengue fever from spatial epidemic data. Math Biosci. 2007;208(2):571–89.
Nishiura H. Correcting the actual reproduction number: a simple method to estimate R0 from early epidemic growth data. Int J Environ Res Public Health. 2010;7(1):291–302.
Roberts M. The pluses and minuses of R0. J R Soc Interface. 2007;4(16):949–61.
Hethcote HW. The mathematics of infectious diseases. SIAM Rev. 2000;42(4):599–653.
Kermack WO, McKendrick AG. A contribution to the mathematical theory of epidemics. Proceedings of the royal society of london. Series A, Containing papers of a mathematical and physical character. 1927;115(772):700–21.
Diggle PJ, Menezes R, Su Tl. Geostatistical inference under preferential sampling. J R Stat Soc Ser C Appl Stat. 2010;59(2):191–232.
Gelfand AE, Sahu SK, Holland DM. On the effect of preferential sampling in spatial prediction. Environmetrics. 2012;23(7):565–78.
Hamel R, Liégeois F, Wichit S, Pompon J, Diop F, Talignani L, Missé D. Zika virus: epidemiology, clinical features and host-virus interactions. Microbes and Infection. 2016;18(7-8):441-9.
Gelfand AE, Ghosh SK. Model choice: a minimum posterior predictive loss approach. Biometrika. 1998;85(1):1–11.
Dupont-Rouzeyrol M, et al. Co-infection with Zika and dengue viruses in 2 patients, New Caledonia, 2014. Emerg Infect Dis. 2015;21(2):381–2.
Cardoso CW, et al. Outbreak of exanthematous illness associated with Zika, chikungunya, and dengue viruses, Salvador, Brazil. Emerg Infect Dis. 2015;21(12):2274.
Lawson A. Statistical Methods in Spatial Epidemiology. Somerset: Wiley; 2013.
Lawson A, et al. Handbook of Spatial Epidemiology. Boca Raton (Fla.): Chapman & Hall/CRC; 2016.
Banerjee S, Carlin B, Gelfand A. Hierarchical modeling and analysis for spatial data. Boca Raton (Fla.): Chapman & Hall/CRC; 2015.
Gelfand AE, Vounatsou P. Proper multivariate conditional autoregressive models for spatial data analysis. Biostatistics. 2003;4(1):11–5.
Knorr-Held L, Best NG. A shared component model for detecting joint and selective clustering of two diseases. J R Stat Soc A Stat Soc. 2001;164(1):73–85.
Besag J, York J, Mollié A. Bayesian image restoration, with two applications in spatial statistics. Ann Inst Stat Math. 1991;43(1):1–20.
Moghadas SM, et al. Asymptomatic transmission and the dynamics of Zika infection. Sci Rep. 2017;7(1):5829.
Dejnirattisai W, et al. Dengue virus sero-cross-reactivity drives antibody-dependent enhancement of infection with zika virus. Nat Immunol. 2016;17(9):1102–8.
We would like to thank Dr. Saranath Lawpoolsri for assistance with the epidemiological interpretation. We are also thankful for constructive suggestions from reviewers to improve our manuscript.
This research was supported by the new researcher grant of Mahidol University and ICTM grant from the Faculty of Tropical Medicine. The funding body had no role in the design or analysis of the study, interpretation of results, or writing of the manuscript.
Department of Tropical Hygiene, Faculty of Tropical Medicine, Mahidol University, Ratchathewi, Bangkok, 10400, Thailand
Chawarat Rotejanaprasert
Mahidol-Oxford Tropical Medicine Research Unit, Faculty of Tropical Medicine, Mahidol University, Bangkok, 10400, Thailand
Department of Public Health Sciences, Medical University of South Carolina, Charleston, SC, 29425, USA
Andrew B. Lawson
Department of Disease Control, Ministry of Public Health, Nonthaburi, 11000, Thailand
Sopon Iamsirithaworn
All authors contributed to the conceptual design of the study. CR developed the statistical methodology with critical input from AL and SI. CR completed all statistical analyses and drafted the manuscript. SI was responsible for clinical revision and improvements of the manuscript. CR and AL contributed to the manuscript editing. All authors have read and approved the final manuscript.
Correspondence to Chawarat Rotejanaprasert.
The research was approved by the ethics committee of the Faculty of Tropical Medicine, Mahidol University.
Rotejanaprasert, C., Lawson, A.B. & Iamsirithaworn, S. Spatiotemporal multi-disease transmission dynamic measure for emerging diseases: an application to dengue and zika integrated surveillance in Thailand. BMC Med Res Methodol 19, 200 (2019) doi:10.1186/s12874-019-0833-6
Spatiotemporal
Data analysis, statistics and modelling
|
CommonCrawl
|
Global well-posedness for KdV in Sobolev spaces of negative index
Colliander, James and Keel, Markus and Staffilani, Gigliola and Takaoka, Hideo and Tao, Terence (2001) Global well-posedness for KdV in Sobolev spaces of negative index. Electronic Journal of Differential Equations, 2001 (26). pp. 1-7. ISSN 1072-6691. http://resolver.caltech.edu/CaltechAUTHORS:COLejde01
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:COLejde01
The initial value problem for the Korteweg-deVries equation on the line is shown to be globally well-posed for rough data. In particular, we show global well-posedness for initial data in $H^s(\mathbb{R})$ for -3/10 < s.
© 2001 Southwest Texas State University. Submitted January 31, 2001. Published April 27, 2001. J.E.C. is supported in part by an N.S.F. Postdoctoral Research Fellowship. M.K. is supported in part by N.S.F. Grant DMS 9801558. G.S. is supported in part by N.S.F. Grant DMS 9800879 and by a Terman Award. T.T. is a Clay Prize Fellow and is supported in part by grants from the Packard and Sloan Foundations.
Korteweg-de Vries equation, nonlinear dispersive equations, bilinear estimates
CaltechAUTHORS:COLejde01
http://resolver.caltech.edu/CaltechAUTHORS:COLejde01
http://www.emis.ams.org/journals/EJDE/Volumes/2001/26/abstr.html
|
CommonCrawl
|
The Journal of Mathematical Neuroscience
Geometry of color perception. Part 2: perceived colors from real quantum states and Hering's rebit
M. Berthier (ORCID: orcid.org/0000-0002-3298-4011)1
The Journal of Mathematical Neuroscience volume 10, Article number: 14 (2020) Cite this article
Inspired by the pioneer work of H.L. Resnikoff, which is described in full detail in the first part of this two-part paper, we give a quantum description of the space \(\mathcal{P}\) of perceived colors. We show that \(\mathcal{P}\) is the effect space of a rebit, a real quantum qubit, whose state space is isometric to Klein's hyperbolic disk. This chromatic state space of perceived colors can be represented as a Bloch disk of real dimension 2 that coincides with Hering's disk given by the color opponency mechanism. Attributes of perceived colors, hue and saturation, are defined in terms of Von Neumann entropy.
"The structure of our scientific cognition of the world is decisively determined by the fact that this world does not exist in itself, but is merely encountered by us as an object in the correlative variance of subject and object" [1].
On the mathematics of color perception
The mathematical description of human color perception mechanisms is a longstanding problem addressed by many of the most influential figures of the mathematical physics [1–4]. The reader will find an overview of the main historical contributions at the beginning of [5] where H.L. Resnikoff points out that the space, which we denote by \({\mathcal{P}}\), of perceived colors is one of the very first examples of abstract manifold mentioned by B. Riemann in his habilitation [6], "a pregnant remark". As suggested by H. Weyl [1], it is actually very tempting to characterize the individual color perception as a specific correlative interaction between an abstract space of perceived colors and an embedding space of physical colors. This raises the question of defining intrinsically, in the sense of Riemannian geometry, the space of perceived colors from basic largely accepted axioms. These axioms, which date back to the works of H.G. Grassmann and H. Von Helmholtz [2, 7], state that \({\mathcal{P}}\) is a regular convex cone of dimension 3. It is worth noting that convexity reflects the property that one must be able to perform mixtures of perceived colors or, in other words, of color states [8]. What makes the work of H.L. Resnikoff [5] particularly enticing is the remarkable conclusions that he derives by adding the sole axiom that \({\mathcal{P}}\) is homogeneous under the action of the linear group of background illumination changes [9]. We will discuss in Sect. 6 the relevance of this statement. To the best of our knowledge, this axiom, which involves an external context, has never been verified by psychophysical experiments. It endows \({\mathcal{P}}\) with the rich structure of a symmetric cone [10]. With this additional axiom, and the hypothesis that the distance on \({\mathcal{P}}\) is given by a Riemannian metric invariant under background illumination changes, H.L. Resnikoff shows that \({\mathcal{P}}\) can only be isomorphic to one of the two following Riemannian spaces: the product \(\mathcal{P}_{1}=\mathbb{R}^{+}\times \mathbb{R}^{+}\times \mathbb{R}^{+}\) equipped with a flat metric, namely the Helmholtz–Stiles metric [11], and \(\mathcal{P}_{2}=\mathbb{R}^{+}\times \operatorname{SL}(2,\mathbb{R})/ \operatorname{SO}(2,\mathbb{R})\) equipped with the Rao–Siegel metric of constant negative curvature [12, 13]. Let us recall that the quotient \(\operatorname{SL}(2,\mathbb{R})/\operatorname{SO}(2,\mathbb{R})\) is isomorphic to the Poincaré hyperbolic disk \(\mathcal{D}\). The first space is the usual metric space of the colorimetry, while the second one seems to be relevant to explain psychophysical phenomena such as the ones described by H. Yilmaz in [14] and [15] or physiological mechanisms such as the neural coding of colors of R. and K. de Valois [16, 17]. In the sequel, we focus on the latter.
A quantum glance at color perception
The starting point of this work originates from the second part of [5] dedicated to Jordan algebras. Contrary to H.L. Resnikoff we suppose at first that the perceived color space \({\mathcal{P}}\) can be described from the state space of a quantum system characterized by a Jordan algebra \({\mathcal{A}}\) of real dimension 3 [18–21]. This is our only axiom, see Sect. 2.1 for motivations. Jordan algebras are non-associative commutative algebras that have been classified by P. Jordan, J. Von Neumann, and E. Wigner [22] under the assumptions that they are of finite dimension and formally real. They are considered as a fitting alternative to the usual associative noncommutative algebraic framework for the geometrization of quantum mechanics [23–25]. Not so surprisingly in view of what precedes, \(\mathcal{A}\) is necessarily isomorphic to one of the two following Jordan algebras: the algebra \(\mathbb{R}\oplus \mathbb{R}\oplus \mathbb{R}\) or the algebra \(\mathcal{H}(2,\mathbb{R})\) of symmetric real 2 by 2 matrices. It appears that the two geometric models of H.L. Resnikoff can be recovered from this fact by simply taking the positive cone of \(\mathcal{A}\). The Jordan algebra \(\mathcal{H}(2,\mathbb{R})\) carries a very special structure, being isomorphic to the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\). It can be seen as the non-associative algebra linearly spanned by the unit 1 and a spin system of the Clifford algebra of \(\mathbb{R}^{2}\) [21, 26]. The main topic of this work is to exploit these structures to highlight the quantum nature of the space \(\mathcal{P}\) of perceived colors. Actually, the quantum description that we propose gives a precise meaning to the relevant remark of [27], p. 539: "This underlying mathematical structure is reminiscent of the structure of states (i.e., density matrices) in quantum mechanics. The space of all states is also convex-linear, the boundary consists of pure states, and any mixed state can be obtained by a statistical ensemble of pure states. In the present case, the spectral colors are the analogs of pure states".
Although the geometry of the second model \(\mathcal{P}_{2}\) of H.L. Resnikoff is much richer than the geometry of the first model \(\mathcal{P}_{1}\), very few works are devoted to the possible implications of hyperbolicity in color perception. One of the main objectives of this contribution is to show that the model \(\mathcal{P}_{2}\) is perfectly adapted to explain the coherence between the trichromatic and color opponency theories. We show that the space \(\mathcal{P}\) is the effect space of a so-called rebit, a real quantum qubit, whose state space \(\mathcal{S}\) is isometric to the hyperbolic Klein disk \(\mathcal{K}\). Actually, \(\mathcal{K}\) is isometric to the Poincaré disk \(\mathcal{D}\), but its geodesics are visually very different, being the straight chords of the unit disk. Klein geometry appears naturally when considering the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\) and the 3-dimensional Minkowski future lightcone \(\mathcal{L}^{+}\) whose closure is the state cone of the rebit. We show that the chromatic state space \(\mathcal{S}\) can be represented as a Bloch disk of real dimension 2 that coincides with the Hering disk given by the color opponency mechanism. This Bloch disk is an analog, in our real context, of the Bloch ball that describes the state space of a two-level quantum system of a spin-\(\frac{1}{2}\) particle. The dynamics of this quantum system can be related to the color information processing that results from the activity rates of the four types of spectrally opponent cells [17], see Sect. 7.1. The spectrally opponent interactions in primates are usually considered to be performed by ganglion and lateral geniculate nucleus cells which are very similar with regards to color processing [17, 28].
Following this quantum interpretation, we give precise definitions of the two chromatic attributes of a perceived color, hue and saturation, in terms of Von Neumann entropy.
As explained by P.A.M. Dirac in [29], p. 18, physical phenomena justify the need for considering complex Hilbert spaces in quantum mechanics. Alternatively, the structures we deal with in the sequel are real, and we may consider the space \(\mathcal{P}\) as a nontrivial concrete example of an effect space of a real quantum system. The reader will find more information on real-vector-space quantum theory and its consistency regarding optimal information transfer in [30].
Finally, since the spin factors and the corresponding Clifford algebras share the same representations (and the same squares), one may envisage to adapt to the present context the tools developed in [31] for the harmonic analysis of color images.
Outline of the paper
We introduce in Sect. 2 the mathematical notions that are used to recast the description of the perceived color space geometry into the quantum framework. We begin by explaining the motivations and meaning of the trichromacy axiom which is the cornerstone of our approach. Section 3 is devoted to quantum recalls. It mainly contains the material needed to describe the state space of the so-called rebit, i.e., the two-level real quantum system. Section 4 contains results on Riemannian geometry. The objective of this section is to show that Klein's geometry, or equivalently Hilbert's geometry, is well adapted to quantum states, contrary to the Poincaré geometry used by H.L. Resnikoff. We propose in Sect. 5 to interpret perceived colors as quantum measurement operators. This allows us in particular to give mathematically sound colorimetric definitions. Section 6 is devoted to some results on group actions and homogeneity in relation with the supplementary axiom of H.L. Resnikoff. Finally, in Sect. 7, we discuss some consequences of our work in relation with the neural color coding and relativistic models of respectively R. and K. de Valois, and H. Yilmaz. We also point out some perspectives regarding the links between MacAdam ellipses and Hilbert's metric.
Mathematical preliminaries
We introduce in this section the mathematical notions needed in the sequel. They mainly concern the properties of Jordan algebras. The reader will find more detailed information on this subject in [18, 20, 21, 32] or in the seminal work of P. Jordan [33].
The trichromacy axiom
Before going into detail, we give some explanations in order to justify the mathematical approach adopted. Following the axioms of H.G. Grassmann and H. Von Helmholtz [2, 7], the space of perceived colors is a regular convex cone of real dimension 3. Such a geometrical structure does not carry enough information to allow relevant developments. The main idea of Resnikoff's work is to enrich this structure by requiring that the cone of perceived colors be homogeneous [9]. It appears that if we add one more property, namely self-duality, this cone becomes a symmetric cone [10]. The fundamental remark is that being symmetric the cone of perceived colors can be considered as the set of positive observables of a formally real Jordan algebra \(\mathcal{A}\). This is precisely the statement of the famous Koecher–Vinberg theorem [32]. Since the cone is of real dimension 3, the algebra is also of real dimension 3. Using the classification theorem of P. Jordan, J. Von Neumann, and E. Wigner [22], one can check that the algebra \(\mathcal{A}\) is necessarily isomorphic either to the Jordan algebra \(\mathbb{R}\oplus \mathbb{R}\oplus \mathbb{R}\) or to the Jordan algebra \(\mathcal{H}(2,\mathbb{R})\) of symmetric real 2 by 2 matrices. In consequence, adding the self-duality property, Resnikoff's classification stems from the classification theorem of P. Jordan, J. Von Neumann, and E. Wigner.
The point of view that we wish to put forward is that perceived colors must be described by measurements through some state-observable correspondence. This is formalized in Sect. 5.1 with the notion of quantum effects. In order to emphasize this point of view, we formulate our starting trichromacy axiom as follows: the Grassmann–Von Helmholtz cone of perceived colors is the positive cone of a formally real Jordan algebra of real dimension 3.
Jordan algebras and symmetric cones
A Jordan algebra \(\mathcal{A}\) is a real vector space equipped with a commutative bilinear product \(\mathcal{A}\times \mathcal{A}\longrightarrow \mathcal{A}\), \((a,b)\longmapsto a\circ b\), satisfying the following Jordan identity:
$$ \bigl(a^{2}\circ b\bigr)\circ a=a^{2}\circ (b\circ a) . $$
This Jordan identity ensures that the powers of any element a of \(\mathcal{A}\) are well defined (\(\mathcal{A}\) is power associative in the sense that the subalgebra generated by any of its elements is associative). Since a sum of squares of observables should vanish only when all of them vanish, one requires that if \(a_{1}, a_{2},\ldots, a_{n}\) are elements of \(\mathcal{A}\) such that
$$ a_{1}^{2}+a_{2}^{2}+\cdots +a_{n}^{2}=0 , $$
then \(a_{1}=a_{2}=\cdots =a_{n}=0\). The algebra \(\mathcal{A}\) is then said to be formally real. This property endows \(\mathcal{A}\) with a partial ordering: \(a\leq b\) if and only if \(b-a\) is a sum of squares, and therefore the squares of \(\mathcal{A}\) are positive. Formally real Jordan algebras of finite dimension are classified [22]: every such algebra is the direct sum of so-called simple Jordan algebras. Simple Jordan algebras are of the following types: the algebras \(\mathcal{H}(n,\mathbb{K})\) of hermitian matrices with entries in the division algebra \(\mathbb{K}\) with \(\mathbb{K}=\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\) (the algebra of quaternions), the algebra \(\mathcal{H}(3,\mathbb{O})\) of hermitian matrices with entries in the division algebra \(\mathbb{O}\) (the algebra of octonions), and the spin factors \(\mathbb{R}\oplus \mathbb{R}^{n}\), with \(n\geq 0\). The Jordan product on \(\mathcal{H}(n,\mathbb{K})\) and \(\mathcal{H}(3,\mathbb{O})\) is defined by
$$ a\circ b=\frac{1}{2}(ab+ba) . $$
Note that contrary to the usual matrix product, this product is effectively commutative. The spin factors form "the most mysterious of the four infinite series of [simple] Jordan algebras" [34]. They were introduced for the first time under this name by D.M. Topping [35] and are defined as follows. The spin factor \(J(V)\) of a given n-dimensional real inner product space V is the direct sum \(\mathbb{R}\oplus V\) endowed with the Jordan product
$$ (\alpha +\mathbf{v})\circ (\beta +\mathbf{w})=\bigl(\alpha \beta +\langle { \mathbf{v}},{ \mathbf{w}}\rangle +\alpha {\mathbf{w}}+\beta {\mathbf{v}}\bigr) , $$
where α and β are reals, and v and w are vectors of V. The following result is well known [34].
Proposition 1
Let \(\mathbb{K}\) be the division algebra \(\mathbb{R}\), \(\mathbb{C}\), \(\mathbb{H}\), or \(\mathbb{O}\). The spin factor \(J(\mathbb{K}\oplus \mathbb{R})\) is isomorphic to the Jordan algebra \(\mathcal{H}(2,\mathbb{K})\).
An explicit isomorphism is given by the map
$$\begin{aligned}& \phi : \mathcal{H}(2,\mathbb{K})\longrightarrow J(\mathbb{K}\oplus \mathbb{R}), \end{aligned}$$
$$\begin{aligned}& \begin{pmatrix} \alpha +\beta & x \\ x^{*} & \alpha -\beta \end{pmatrix}\longmapsto (\alpha +x+\beta ) , \end{aligned}$$
with x in \(\mathbb{K}\) and \(x^{*}\) the conjugate of x. □
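To see this isomorphism at work in the real case \(\mathbb{K}=\mathbb{R}\), here is a minimal numerical sketch (Python with NumPy; the helper names jordan_h2, jordan_spin, and to_spin are ours and purely illustrative). It checks on random symmetric matrices that the map of Proposition 1 intertwines the Jordan product of \(\mathcal{H}(2,\mathbb{R})\) with the spin factor product.

```python
import numpy as np

def jordan_h2(X, Y):
    """Jordan product on H(2,R): symmetrized matrix product."""
    return 0.5 * (X @ Y + Y @ X)

def to_spin(X):
    """phi: [[a+b, x], [x, a-b]] -> (a, (x, b)), an element of R + R^2."""
    a = 0.5 * (X[0, 0] + X[1, 1])
    b = 0.5 * (X[0, 0] - X[1, 1])
    x = X[0, 1]
    return a, np.array([x, b])

def jordan_spin(p, q):
    """Spin factor product (alpha+v) o (beta+w) = (alpha*beta + <v,w>) + (alpha*w + beta*v)."""
    (a, v), (b, w) = p, q
    return a * b + v @ w, a * w + b * v

rng = np.random.default_rng(0)
for _ in range(100):
    X = rng.normal(size=(2, 2)); X = 0.5 * (X + X.T)   # random symmetric matrices
    Y = rng.normal(size=(2, 2)); Y = 0.5 * (Y + Y.T)
    lhs = to_spin(jordan_h2(X, Y))
    rhs = jordan_spin(to_spin(X), to_spin(Y))
    assert np.isclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1])
print("phi intertwines the two Jordan products on 100 random pairs")
```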
Now, we focus on the algebra \(\mathcal{H}(2,\mathbb{R})\) and the spin factor \(J(\mathbb{R}\oplus \mathbb{R})\), both being isomorphic to \(\mathbb{R}\oplus \mathbb{R}^{2}\). The latter is equipped with the Minkowski metric [36]
$$ (\alpha +\mathbf{v})\cdot (\beta +\mathbf{w})=\alpha \beta -\langle { \mathbf{v}},{ \mathbf{w}}\rangle , $$
where α and β are reals, and v and w are vectors of \(\mathbb{R}^{2}\). It turns out that Proposition 1 has a fascinating reformulation: the algebra of observables of a 2-dimensional real quantum system is isomorphic to the 3-dimensional Minkowski spacetime. Let us recall that the lightcone \({\mathcal{L}}\) of \(\mathbb{R}\oplus \mathbb{R}^{2}\) is the set of elements \(a=(\alpha +\mathbf{v})\) that satisfy
$$ a\cdot a=0 , $$
and that a light ray is a 1-dimensional subspace of \(\mathbb{R}\oplus \mathbb{R}^{2}\) spanned by an element of \({\mathcal{L}}\). Every such light ray is spanned by a unique element of the form \((1+\mathbf{v})/2\) with v a unit vector of \(\mathbb{R}^{2}\). Actually, the space of light rays coincides with the projective space \(\mathbb{P}_{1}(\mathbb{R})\). In other words, we have the following result.
Proposition 2
There is a one-to-one correspondence between the light rays of the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\) and the rank one projections of the Jordan algebra \({\mathcal{H}}(2,\mathbb{R})\).
The correspondence is given by
$$ (1+\mathbf{v})/2\longmapsto \frac{1}{2} \begin{pmatrix} 1+v_{1} & v_{2} \\ v_{2} & 1-v_{1}\end{pmatrix}, $$
where \(\mathbf{v}=v_{1}e_{1}+v_{2}e_{2}\) is a unit vector of \(\mathbb{R}^{2}\). □
We will see in the next section that this result has a meaningful interpretation: there is a one-to-one correspondence between the light rays of the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\) and the pure state density matrices of the algebra \(\mathcal{H}(2,\mathbb{R})\).
The positive cone \(\mathcal{C}\) of the Jordan algebra \(\mathcal{A}\) is the set of its positive elements, namely
$$ \mathcal{C}=\{a\in \mathcal{A}, a> 0\} . $$
It can be shown that \(\mathcal{C}\) is the interior of the positive domain of \(\mathcal{A}\) defined as the set of squares of \(\mathcal{A}\). The convex cone \(\mathcal{C}\) is symmetric: it is regular, homogeneous, and self-dual [10]. The positive cone \(\mathcal{H}^{+}(2,\mathbb{R})\) of the algebra \(\mathcal{H}(2,\mathbb{R})\) is the set of positive-definite symmetric matrices.
Quantum preliminaries
This section is devoted to describing the state space of the real analog of the usual complex qubit. The so-called rebit is a two-level real quantum system whose Hilbert space is \(\mathbb{R}^{2}\).
Recalls on state spaces
The positive cone \(\mathcal{C}\) is the set of positive observables. A state of \(\mathcal{A}\) is a linear functional
$$ \langle \cdot \rangle :\mathcal{A}\longrightarrow \mathbb{R} $$
that is nonnegative: \(\langle a\rangle \geq 0\), \(\forall a\geq 0\), and normalized: \(\langle 1\rangle =1\). Given an element a of \(\mathcal{A}\), we denote by \(L(a)\) the endomorphism of \(\mathcal{A}\) defined by \(L(a)(b)=a\circ b\) and \(\operatorname{Trace} (a)\) its trace, i.e., \(\operatorname{Trace} (a):=\operatorname{Trace} (L(a))\). Since \(\mathcal{A}\) is formally real, the pairing
$$ \langle a,b\rangle =\operatorname{Trace}\bigl(\mathrm{L}(a) (b) \bigr)=\operatorname{Trace}(a\circ b) $$
is a real-valued inner product and one can identify any state with a unique element ρ of \(\mathcal{A}\) by setting
$$ \langle a\rangle =\operatorname{Trace}(\rho \circ a) , $$
where \(\rho \geq 0\) and \(\operatorname{Trace}(\rho )=1\). Such ρ, for \(\mathcal{A}=\mathcal{H}(2,\mathbb{R})\), is a so-called state density matrix [8]. Formula (13) gives the expectation value of the observable a in the state with density matrix ρ.
Regarding Proposition 1, the positive state density matrices of the algebra \(\mathcal{H}(2,\mathbb{R})\) are in one-to-one correspondence with the elements of the future lightcone
$$ \mathcal{L}^{+}=\bigl\{ a=(\alpha +\mathbf{v}), \alpha > 0, a\cdot a> 0\bigr\} $$
that are of the form \(a=(1+\mathbf{v})/2\), with \(\Vert {\mathbf{v}}\Vert \leq 1\). One way to qualify states is to introduce the Von Neumann entropy [8, 37]. It is given by
$$ S(\rho )=-\operatorname{Trace}(\rho \log \rho ) . $$
It appears that \(S(\rho )=0\) if and only if ρ satisfies \(\rho \circ \rho =\rho \). The zero entropy state density matrices characterize pure states that afford a maximum of information. Among the other state density matrices, one is of particular interest. It is given by \(\rho _{0}=\mathrm{Id}_{2}/2\) or \(\rho _{0}=(1+0)/2\) (\(\mathrm{Id}_{2}\) is the identity matrix) and is characterized by
$$ \rho _{0}=\mathop{\operatorname{argmax}}_{\rho } S(\rho ) . $$
The mixed state with density matrix \(\rho _{0}\) is called the state of maximum entropy, \(S(\rho _{0})\) being equal to log2. It provides the minimum of information. Using (13), we have
$$ \langle a\rangle _{0}=\frac{\operatorname{Trace}(a)}{2} . $$
Now, an observable a acts on the state \(\langle \cdot \rangle _{0}\) by the formula
$$ a:\langle \cdot \rangle _{0}\longmapsto \langle a\circ \cdot \rangle _{0} = \langle \cdot \rangle _{0,a} . $$
Since for any state ρ the element 2ρ is an observable, we get
$$ \langle a\rangle _{0,2\rho }=\langle 2\rho \circ a\rangle _{0}=\operatorname{Trace}( \rho \circ a)=\langle a\rangle $$
for all observable a. This means that any state with density matrix ρ can be obtained from the state of maximal entropy with density matrix \(\rho _{0}\) using the action of the observable 2ρ.
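This mechanism is easy to test numerically. The sketch below (Python/NumPy; the helper names are ours and purely illustrative, and we use the ordinary matrix trace, which is the one relevant for density matrices) checks that acting with the observable 2ρ on the state of maximal entropy reproduces the expectation values \(\operatorname{Trace}(\rho \circ a)\).

```python
import numpy as np

def jordan(a, b):
    """Jordan product on H(2,R)."""
    return 0.5 * (a @ b + b @ a)

rho0 = 0.5 * np.eye(2)                       # maximal-entropy state density matrix

def random_state(rng):
    """Random density matrix (1/2)(Id + v.sigma) with ||v|| <= 1."""
    v = rng.normal(size=2)
    v *= rng.uniform(0, 1) / np.linalg.norm(v)
    return 0.5 * np.array([[1 + v[0], v[1]], [v[1], 1 - v[0]]])

rng = np.random.default_rng(1)
for _ in range(100):
    rho = random_state(rng)
    a = rng.normal(size=(2, 2)); a = 0.5 * (a + a.T)     # random observable
    lhs = np.trace(jordan(rho0, jordan(2 * rho, a)))      # <a>_{0, 2 rho}
    rhs = np.trace(jordan(rho, a))                        # <a> in the state rho
    assert np.isclose(lhs, rhs)
print("acting with 2*rho on the maximal-entropy state reproduces Trace(rho o a)")
```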
The two-level real quantum system
The pure state density matrices of the algebra \(\mathcal{A}=\mathcal{H}(2,\mathbb{R})\) are of the form
$$ \frac{1}{2} \begin{pmatrix} 1+v_{1} & v_{2} \\ v_{2} & 1-v_{1}\end{pmatrix} , $$
where \(\mathbf{v}=v_{1}e_{1}+v_{2}e_{2}\) is a unit vector of \(\mathbb{R}^{2}\). They are in one-to-one correspondence with the light rays of the 3-dimensional Minkowski spacetime, see Proposition 2.
A classical representation of quantum states is the Bloch body [38]. An element ρ of \(\mathcal{H}(2,\mathbb{R})\) is a state density matrix if and only if it can be written as follows:
$$ \rho (v_{1},v_{2})=\frac{1}{2}( \mathrm{Id}_{2}+\mathbf{v}\cdot \sigma )=\frac{1}{2}( \mathrm{Id}_{2}+v_{1} \sigma _{1}+v_{2} \sigma _{2}) , $$
where \(\sigma =(\sigma _{1},\sigma _{2})\) with
$$ \sigma _{1}= \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix},\qquad \sigma _{2}= \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix} , $$
and \(\mathbf{v}=v_{1}e_{1}+v_{2}e_{2}\) is a vector of \(\mathbb{R}^{2}\) with \(\Vert {\mathbf{v}}\Vert \leq 1\). The matrices \(\sigma _{1}\) and \(\sigma _{2}\) are Pauli-like matrices. In the usual framework of quantum mechanics, that is, when the observable algebra is the algebra \(\mathcal{H}(2,\mathbb{C})\) of 2 by 2 hermitian matrices with complex entries, the Bloch body is the unit Bloch ball in \(\mathbb{R}^{3}\). It represents the states of the two-level quantum system of a spin-\(\frac{1}{2}\) particle, also called a qubit. In the present context, the Bloch body is the unit disk of \(\mathbb{R}^{2}\) associated with a rebit. We give now more details on this system using the classical Dirac notations [29], bra, ket, etc. Let us denote by \(|u_{1}\rangle \), \(|d_{1}\rangle \), \(|u_{2}\rangle \), and \(|d_{2}\rangle \) the four state vectors defined by
$$ |u_{1}\rangle = \begin{pmatrix} 1 \\ 0\end{pmatrix} ,\qquad |d_{1}\rangle = \begin{pmatrix} 0 \\ 1\end{pmatrix} ,\qquad |u_{2}\rangle =\frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1\end{pmatrix} ,\qquad |d_{2}\rangle =\frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 1\end{pmatrix} . $$
The Pauli-like matrices decompose on these state vectors as
$$ \sigma _{1}= \vert u_{1}\rangle \langle u_{1} \vert - \vert d_{1}\rangle \langle d_{1} \vert ,\qquad \sigma _{2}= \vert u_{2} \rangle \langle u_{2} \vert - \vert d_{2}\rangle \langle d_{2} \vert . $$
The state vectors \(|u_{1}\rangle \) and \(|d_{1}\rangle \), resp. \(|u_{2}\rangle \) and \(|d_{2}\rangle \), are eigenstates of \(\sigma _{1}\), resp. \(\sigma _{2}\), with eigenvalues 1 and −1.
Using polar coordinates \(v_{1}=r\cos \theta \), \(v_{2}=r\sin \theta \), we can write \(\rho (v_{1},v_{2})\) as follows:
$$\begin{aligned} \rho (r,\theta ) =&\frac{1}{2} \begin{pmatrix} 1+r\cos \theta & r\sin \theta \\ r\sin \theta & 1-r\cos \theta \end{pmatrix} \end{aligned}$$
$$\begin{aligned} =&\frac{1}{2} \bigl\{ (1+r\cos \theta ) \vert u_{1} \rangle \langle u_{1} \vert +(1-r \cos \theta ) \vert d_{1}\rangle \langle d_{1} \vert +(r\sin \theta ) \vert u_{2} \rangle \langle u_{2} \vert \end{aligned}$$
$$\begin{aligned} &{}-(r\sin \theta ) \vert d_{2}\rangle \langle d_{2} \vert \bigr\} . \end{aligned}$$
This gives, for instance,
$$\begin{aligned}& \rho (1,0)= \vert u_{1}\rangle \langle u_{1} \vert = \begin{pmatrix} 1 & 0 \\ 0& 0 \end{pmatrix} , \end{aligned}$$
$$\begin{aligned}& \rho (1,\pi )= \vert d_{1}\rangle \langle d_{1} \vert = \begin{pmatrix} 0 & 0 \\ 0& 1 \end{pmatrix} , \end{aligned}$$
$$\begin{aligned}& \rho (1,\pi /2)= \vert u_{2}\rangle \langle u_{2} \vert =\frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1& 1 \end{pmatrix} , \end{aligned}$$
$$\begin{aligned}& \rho (1,3\pi /2)= \vert d_{2}\rangle \langle d_{2} \vert =\frac{1}{2} \begin{pmatrix} 1 & -1 \\ -1& 1 \end{pmatrix} . \end{aligned}$$
More generally,
$$ \rho (1,\theta )= \bigl\vert (1,\theta )\bigr\rangle \bigl\langle (1,\theta ) \bigr\vert , $$
where
$$ \bigl|(1,\theta )\bigr\rangle =\cos (\theta /2) \vert u_{1}\rangle + \sin (\theta /2) \vert d_{1} \rangle . $$
This means that we can identify the pure state density matrices \(\rho (1,\theta )\) with the state vectors \(|(1,\theta )\rangle \) and also with the points of the unit disk boundary with coordinate θ. The state of maximal entropy, given by the state density matrix
$$ \rho _{0}=\frac{1}{2} \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} , $$
is the mixture
$$\begin{aligned} \rho _{0} =&\frac{1}{4} \vert u_{1}\rangle \langle u_{1} \vert +\frac{1}{4} \vert d_{1} \rangle \langle d_{1} \vert + \frac{1}{4} \vert u_{2}\rangle \langle u_{2} \vert +\frac{1 }{4} \vert d_{2} \rangle \langle d_{2} \vert \end{aligned}$$
$$\begin{aligned} =&\frac{1}{4}\rho (1,0)+\frac{1}{4}\rho (1,\pi )+ \frac{1}{4}\rho (1,\pi /2)+\frac{1 }{4}\rho (1,3\pi /2) , \end{aligned}$$
with equal probabilities. Using (25), we can write every state density matrix as follows:
$$ \rho (r,\theta )=\rho _{0}+\frac{r\cos \theta }{2} \bigl(\rho (1,0)- \rho (1,\pi ) \bigr)+\frac{r\sin \theta }{2} \bigl(\rho (1,\pi /2)- \rho (1,3\pi /2) \bigr) . $$
Such a state density matrix is given by the point of the unit disk with polar coordinates \((r,\theta )\). It is important to notice that the four state density matrices \(\rho (1,0)\), \(\rho (1,\pi )\), \(\rho (1,\pi /2)\), and \(\rho (1,3\pi /2)\) correspond to two pairs of state vectors \((|u_{1}\rangle , |d_{1}\rangle )\), \((|u_{2}\rangle , |d_{2}\rangle )\), the state vectors \(|u_{i}\rangle \) and \(|d_{i}\rangle \), for \(i=1,2\), being linked by the "up and down" Pauli-like matrix \(\sigma _{i}\).
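A short numerical sketch of these formulas (Python/NumPy; the function names rho and pure are ours and only illustrative) checks that ρ(1,θ) is the projector on the state vector |(1,θ)⟩ and that every ρ(r,θ) is the above combination of ρ0 and the two opponent pairs of pure states.

```python
import numpy as np

def rho(r, theta):
    """State density matrix of the rebit in polar Bloch coordinates (r, theta)."""
    return 0.5 * np.array([[1 + r * np.cos(theta), r * np.sin(theta)],
                           [r * np.sin(theta), 1 - r * np.cos(theta)]])

def pure(theta):
    """Pure state vector |(1,theta)> = cos(theta/2)|u1> + sin(theta/2)|d1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

rho0 = rho(0.0, 0.0)
for r, theta in [(0.3, 1.1), (0.8, 4.0), (1.0, 2.5)]:
    # pure state density matrices are rank-one projectors built from the state vectors
    assert np.allclose(rho(1.0, theta), np.outer(pure(theta), pure(theta)))
    # decomposition over rho0 and the two pairs (u1, d1) and (u2, d2)
    mix = (rho0
           + 0.5 * r * np.cos(theta) * (rho(1, 0) - rho(1, np.pi))
           + 0.5 * r * np.sin(theta) * (rho(1, np.pi / 2) - rho(1, 3 * np.pi / 2)))
    assert np.allclose(rho(r, theta), mix)
print("Bloch-disk decomposition of the rebit states verified")
```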
In the usual framework, that is, when \(\mathcal{A}\) is \(\mathcal{H}(2,\mathbb{C})\), the three Pauli matrices are associated with the three directions of rotations in \(\mathbb{R}^{3}\). In our case, there are only two Pauli-like matrices. The interpretation in terms of rotations ceases to be relevant since there is no space with a rotation group of dimension 2. This makes rebits somewhat strange. We explain in Sect. 7.1 that this real quantum system seems to be well adapted to provide a mathematical model of the opponency color mechanism of E. Hering.
The pure and mixed states play a crucial role in the measurements: "… that is, after the interaction with the apparatus, the system-plus-apparatus behaves like a mixture… It is in this sense, and in this sense alone, that a measurement is said to change a pure state into a mixture" [39] (see also the cited reference [40]). It seems actually that the problem of measurements in quantum mechanics was one of the main motivations of P. Jordan for the introduction of his new kind of algebras: "Observations not only disturb what has to be measured, they produce it… We ourselves produce the results of measurements" [39], p. 161, [41, 42].
The Riemannian geometry of \(\mathcal{C}\) and \({\mathcal{L}}^{+}\)
Now, we give further information on the underlying geometry of the Jordan algebra \(\mathcal{A}\) from both the points of view discussed above, that is, \(\mathcal{A}\) as the algebra \(\mathcal{H}(2,\mathbb{R})\) and \(\mathcal{A}\) as the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\). We first recall how to endow the positive cone \(\mathcal{C}\) of the algebra \(\mathcal{H}(2,\mathbb{R})\) with a metric that makes \(\mathcal{C}\) foliated by leaves isometric to the Poincaré disk. This is essentially the way followed by H.L. Resnikoff in [5] to obtain the geometric model \(\mathcal{P}_{2}\). It appears that this geometric structure is not well adapted to our quantum viewpoint since it does not take into account the specific role of the density matrices. Instead, we propose to equip the positive cone \(\mathcal{L}^{+}\) of the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\) with a metric that makes \(\mathcal{L}^{+}\) foliated by leaves isometric to the Klein disk. This geometric structure is more appropriate to our approach since the state space considered before is naturally embedded in \(\overline{\mathcal{L}^{+}}\) as a leaf, see (76).
The Poincaré geometry of \(\mathcal{C}\)
We consider the level set
$$ \mathcal{C}_{1}=\bigl\{ X\in \mathcal{H}^{+}(2, \mathbb{R}), \operatorname{Det}(X)=1\bigr\} . $$
Every X in \(\mathcal{C}_{1}\) can be written as follows:
$$ X= \begin{pmatrix} \alpha +v_{1} & v_{2} \\ v_{2} & \alpha -v_{1}\end{pmatrix} , $$
with \(\mathbf{v}=v_{1}e_{1}+v_{2}e_{2}\) a vector of \(\mathbb{R}^{2}\) satisfying \(\alpha ^{2}-\Vert {\mathbf{v}}\Vert ^{2}=1\) and \(\alpha >0\). Using the one-to-one correspondence
$$ X= \begin{pmatrix} \alpha +v_{1} & v_{2} \\ v_{2} & \alpha -v_{1}\end{pmatrix}\longmapsto (\alpha +\mathbf{v}) , $$
the level set \(\mathcal{C}_{1}\) is sent to the level set
$$ \mathcal{L}_{1}=\bigl\{ a=(\alpha +\mathbf{v})\in \mathcal{L}^{+}, a\cdot a=1\bigr\} $$
of the future lightcone \(\mathcal{L}^{+}\). It is well known that the projection
$$ \pi _{1}:\mathcal{L}_{1}\longrightarrow \{\alpha =0\} , $$
$$ \pi _{1}(\alpha +\mathbf{v})=(0+\mathbf{w}) , $$
with \(\mathbf{w}=w_{1}e_{1}+w_{2}e_{2}\) and
$$ w_{i}=\frac{v_{i}}{1+\alpha } , $$
for \(i=1,2\), is an isometry between the level set \(\mathcal{L}_{1}\) and the Poincaré disk \(\mathcal{D}\) [43]. Simple computations show that the matrix X can be written as follows:
$$ X= \begin{pmatrix} \frac{1+2w_{1}+(w_{1}^{2}+w_{2}^{2})}{1-(w_{1}^{2}+w_{2}^{2})} & \frac{2w_{2}}{1-(w_{1}^{2}+w_{2}^{2})} \\ \frac{2w_{2}}{1-(w_{1}^{2}+w_{2}^{2})} & \frac{1-2w_{1}+(w_{1}^{2}+w_{2}^{2}) }{1-(w_{1}^{2}+w_{2}^{2})}\end{pmatrix} $$
in the w-parametrization.
Let X be an element of \(\mathcal{C}_{1}\) written in the form (45). Then
$$ \frac{\operatorname{Trace} [(X^{-1}\,dX)^{2} ]}{2}=4 \biggl(\frac{(dw_{1})^{2}+(dw_{2})^{2} }{(1-(w_{1}^{2}+w_{2}^{2}))^{2}} \biggr)=ds^{2}_{\mathcal{D}} . $$
The Cayley–Hamilton theorem implies the following equality, where A denotes a 2 by 2 matrix:
$$ \bigl(\operatorname{Trace}(A)\bigr)^{2}=\operatorname{Trace}\bigl( A^{2}\bigr)+2\operatorname{Det}(A) . $$
We apply this equality to the matrix \(A=X^{-1}\,dX\). The matrix X can be written as follows:
$$ X= \biggl(\frac{1+ \vert z \vert ^{2}}{1- \vert z \vert ^{2}} \biggr)I_{2}+\frac{X_{1}}{1- \vert z \vert ^{2}} , $$
where
$$ X_{1}= \begin{pmatrix} 2w_{1} & 2w_{2} \\ 2w_{2} &-2w_{1} \end{pmatrix} $$
and \(z=w_{1}+iw_{2}\). We have
$$ X^{-1}= \biggl(\frac{1+ \vert z \vert ^{2}}{1- \vert z \vert ^{2}} \biggr)I_{2}- \frac{X_{1}}{1- \vert z \vert ^{2}} $$
and
$$ dX=d \biggl(\frac{1+ \vert z \vert ^{2}}{1- \vert z \vert ^{2}} \biggr)I_{2}+\frac{dX_{1}}{1- \vert z \vert ^{2}}+ \frac{d( \vert z \vert ^{2}) }{(1- \vert z \vert ^{2})^{2}}X_{1} . $$
Consequently,
$$ X^{-1}\,dX=\frac{2\, d( \vert z \vert ^{2})}{(1- \vert z \vert ^{2})^{2}}I_{2} -\frac{d( \vert z \vert ^{2})}{(1- \vert z \vert ^{2})^{2}}X_{1}+ \frac{(1+ \vert z \vert ^{2})\,dX_{1} }{(1- \vert z \vert ^{2})^{2}}-\frac{X_{1}\,dX_{1}}{(1- \vert z \vert ^{2})^{2}} . $$
Since \(\operatorname{Trace}(X_{1})=\operatorname{Trace}(dX_{1})=0\) and \(\operatorname{Trace}(X_{1}\,dX_{1})=4d(|z|^{2})\), then
$$ \bigl[\operatorname{Trace}\bigl(X^{-1}\,dX\bigr) \bigr]^{2}=0 $$
and therefore
$$ \operatorname{Trace} \bigl[\bigl(X^{-1}\,dX\bigr)^{2} \bigr]=-2\operatorname{Det}\bigl(X^{-1}\,dX\bigr)=-2 \operatorname{Det}(dX) . $$
We have also
$$\begin{aligned} dX =&\frac{d( \vert z \vert ^{2})}{(1- \vert z \vert ^{2})^{2}} \begin{pmatrix} 1+ \vert z \vert ^{2}+2w_{1} & 2w_{2} \\ 2w_{2} &1+ \vert z \vert ^{2}-2w_{1} \end{pmatrix} \end{aligned}$$
$$\begin{aligned} &{}+\frac{1}{(1- \vert z \vert ^{2})} \begin{pmatrix} d( \vert z \vert ^{2})+2\,dw_{1} & 2\,dw_{2} \\ 2\,dw_{2} &d( \vert z \vert ^{2})-2\,dw_{1} \end{pmatrix} . \end{aligned}$$
Simple computations lead to
$$ \operatorname{Det}(dX)=-4 \biggl(\frac{(dw_{1})^{2}+(dw_{2})^{2}}{(1- \vert z \vert ^{2})^{2}} \biggr) $$
and end the proof. □
This proposition means that \(\mathcal{C}_{1}\) equipped with the normalized Rao–Siegel metric, i.e., \(\operatorname{Trace} [(X^{-1}\,dX)^{2} ]/2\), is isometric to the Poincaré disk \(\mathcal{D}\) of constant negative curvature equal to −1. Actually, \(\mathcal{C}\) is foliated by the level sets of the determinant with leaves that are isometric to \(\mathcal{D}\). This description, which is analogous to the one considered by H.L. Resnikoff in [5], does not take into account the specific role of the state density matrices of the algebra \(\mathcal{H}(2,\mathbb{R})\).
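The identification of the normalized Rao–Siegel metric with the Poincaré metric can also be checked numerically. The sketch below (Python/NumPy, central finite differences; the function names are ours and purely illustrative) compares the two line elements at random points of the disk.

```python
import numpy as np

def X_of_w(w1, w2):
    """Unit-determinant matrix of C_1 in the w-parametrization of the text."""
    s = w1 * w1 + w2 * w2
    return np.array([[1 + 2 * w1 + s, 2 * w2],
                     [2 * w2, 1 - 2 * w1 + s]]) / (1 - s)

def rao_siegel(w, dw):
    """(1/2) Trace[(X^{-1} dX)^2], with dX approximated by a central finite difference."""
    X = X_of_w(*w)
    dX = 0.5 * (X_of_w(*(w + dw)) - X_of_w(*(w - dw)))
    M = np.linalg.inv(X) @ dX
    return 0.5 * np.trace(M @ M)

def poincare(w, dw):
    """Poincare line element 4 (dw1^2 + dw2^2) / (1 - |w|^2)^2."""
    return 4 * (dw @ dw) / (1 - w @ w) ** 2

rng = np.random.default_rng(2)
for _ in range(5):
    w = rng.uniform(-0.6, 0.6, size=2)
    dw = 1e-5 * rng.normal(size=2)
    assert np.isclose(rao_siegel(w, dw), poincare(w, dw), rtol=1e-3, atol=0.0)
print("the normalized Rao-Siegel metric agrees with the Poincare metric on C_1")
```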
The Klein geometry of \({\mathcal{L}}^{+}\)
Another classical result of hyperbolic geometry asserts that the projection
$$ \varpi _{1}:\mathcal{L}_{1}\longrightarrow \{\alpha =1 \} , $$
$$ \varpi _{1}(\alpha +\mathbf{v})=(1+x) , $$
with \(x=x_{1}e_{1}+x_{2}e_{2}\) and
$$ x_{i}=\frac{v_{i}}{\alpha } , $$
for \(i=1,2\), is an isometry between the level set \(\mathcal{L}_{1}\) and the Klein disk \(\mathcal{K}\), the Riemannian metric of which is given by
$$ ds^{2}_{\mathcal{K}}=\frac{(dx_{1})^{2}+(dx_{2})^{2}}{1-(x_{1}^{2}+x_{2}^{2})} +\frac{(x_{1}\,dx_{1}+x_{2}\,dx_{2})^{2} }{(1-(x_{1}^{2}+x_{2}^{2}))^{2}} , $$
see [43]. An isometry between \(\mathcal{K}\) and \(\mathcal{D}\) is defined by
$$\begin{aligned}& x_{i}=\frac{2w_{i}}{1+(w_{1}^{2}+w_{2}^{2})} , \end{aligned}$$
$$\begin{aligned}& w_{i}=\frac{x_{i}}{1+\sqrt{1-(x_{1}^{2}+x_{2}^{2})}} \end{aligned}$$
for \(i=1,2\). In other words, we have the following commutative diagram of isometries:
Let us recall that the state density matrices of the quantum system we consider can be identified with the elements
$$ a=(1+\mathbf{v})/2 $$
of the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\) with \(\Vert {\mathbf{v}}\Vert \leq 1\). Let us denote by
$$ \varpi _{1/2}:\mathcal{L}_{1/2}=\bigl\{ a=(\alpha + \mathbf{v})/2, \alpha >0, a \cdot a=1/4\bigr\} \longrightarrow \{\alpha =1/2\} $$
the projection given by
$$ \varpi _{1/2}\bigl((\alpha +\mathbf{v})/2\bigr)=(1+\mathbf{v}/\alpha )/2 . $$
Note that
$$ \varpi _{1/2}\bigl((\alpha +\mathbf{v})/2\bigr)=\varpi _{1}(\alpha +\mathbf{v})/2 . $$
This means that the map
$$ \varphi :\mathcal{K}\longrightarrow \mathcal{K}_{1/2} , $$
$$ \varphi (x_{1},x_{2})=(x_{1},x_{2})/2 , $$
is an isometry between \(\mathcal{K}\) and
$$ \mathcal{K}_{1/2}=\bigl\{ x/2\in \mathbb{R}^{2}, \Vert x \Vert ^{2}< 1\bigr\} , $$
the Riemannian metric on the latter being given by
$$ ds^{2}_{\mathcal{K}_{1/2}}=\bigl(\varphi ^{-1} \bigr)^{*}\,ds^{2}_{\mathcal{K}}=\frac{(dx_{1})^{2}+(dx_{2})^{2} }{1/4-(x_{1}^{2}+x_{2}^{2})}+ \frac{(x_{1}\,dx_{1}+x_{2}\,dx_{2})^{2}}{(1/4-(x_{1}^{2}+x_{2}^{2}))^{2}} . $$
One can verify that \(\mathcal{L}^{+}\) is foliated by the level sets \(\alpha =\mathrm{constant}\) with leaves that are isometric to the Klein disk \(\mathcal{K}\). This description is more appropriate than the above one to characterize perceived colors from real quantum states since the state space \(\mathcal{S}\) is naturally embedded in \(\overline{\mathcal{L}^{+}}\), see (76) below.
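The commutative triangle formed by \(\pi _{1}\), \(\varpi _{1}\), and the isometry between \(\mathcal{K}\) and \(\mathcal{D}\) can be checked directly. The following sketch (Python/NumPy, purely illustrative) verifies the two change-of-coordinates formulas on random points of \(\mathcal{L}_{1}\).

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(100):
    v = rng.normal(size=2) * rng.uniform(0.0, 2.0)
    alpha = np.sqrt(1 + v @ v)          # (alpha + v) lies on the level set L_1
    w = v / (1 + alpha)                  # pi_1 : L_1 -> Poincare disk D
    x = v / alpha                        # varpi_1 : L_1 -> Klein disk K
    # the two projections are intertwined by the Klein-Poincare isometry of the text
    assert np.allclose(x, 2 * w / (1 + w @ w))
    assert np.allclose(w, x / (1 + np.sqrt(1 - x @ x)))
print("pi_1, varpi_1 and the Klein-Poincare isometry commute")
```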
Klein and Hilbert metrics
As an introduction to the discussions of Sects. 7.2 and 7.3, let us recall some basic facts about the geometry of the Klein disk. Contrary to the Poincaré disk, the geodesics of \(\mathcal{K}\) are straight lines and more precisely the chords of the unit disk. An important feature of the Klein metric is that it coincides with the Hilbert metric defined as follows. Let p and q be two interior points of the disk, and let r and s be the two points of the boundary of the disk such that the segment \([r,s]\) contains the segment \([p,q]\). The Hilbert distance between p and q is defined by
$$ d_{H}(p,q)=\frac{1}{2}\log [r,p,q,s] , $$
where
$$ [r,p,q,s]=\frac{ \Vert q-r \Vert }{ \Vert p-r \Vert } \times \frac{ \Vert p-s \Vert }{ \Vert q-s \Vert } $$
is the cross-ratio of the four points r, p, q, and s [44] (in (74), \(\Vert \cdot \Vert \) is the Euclidean norm).
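The sketch below (Python/NumPy; the function hilbert_distance is ours and only illustrative) computes the Hilbert distance from this cross-ratio and checks, as a sanity test, that on the unit disk it agrees with the classical closed-form expression of the Klein distance.

```python
import numpy as np

def hilbert_distance(p, q, radius=1.0):
    """Hilbert distance between two distinct interior points of a disk, via the
    cross-ratio of the chord endpoints r (on the p side) and s (on the q side)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d = q - p
    # solve |p + t d|^2 = radius^2: the roots t_minus < 0 and t_plus > 1 give r and s
    a, b, c = d @ d, 2 * (p @ d), p @ p - radius ** 2
    disc = np.sqrt(b * b - 4 * a * c)
    r = p + ((-b - disc) / (2 * a)) * d
    s = p + ((-b + disc) / (2 * a)) * d
    cross = ((np.linalg.norm(q - r) / np.linalg.norm(p - r))
             * (np.linalg.norm(p - s) / np.linalg.norm(q - s)))
    return 0.5 * np.log(cross)

# along a diameter of the unit disk the Hilbert distance reduces to artanh
assert np.isclose(hilbert_distance([0.0, 0.0], [0.5, 0.0]), np.arctanh(0.5))
# and it agrees with the closed-form Klein-disk distance for generic points
p, q = np.array([0.2, -0.3]), np.array([-0.4, 0.1])
klein = np.arccosh((1 - p @ q) / np.sqrt((1 - p @ p) * (1 - q @ q)))
assert np.isclose(hilbert_distance(p, q), klein)
print("Hilbert distance = Klein distance on the unit disk")
```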
Perceived colors and chromatic states
We describe the space \(\mathcal{P}\) of perceived colors under the only hypothesis that \(\mathcal{P}\) can be described from the state space of a quantum system characterized by the Jordan algebra \(\mathcal{H}(2,\mathbb{R})\). As explained before, we exploit the fact that \(\mathcal{H}(2,\mathbb{R})\) is isomorphic to the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\). Let us recall that this description does not involve any reference to physical colors or to an observer.
Perceived colors as quantum measurements
The state space \(\mathcal{S}\) is the unit disk embedded in the space of state density matrices by
$$ s=(v_{1},v_{2})\longmapsto \rho (v_{1},v_{2})= \frac{1}{2} \begin{pmatrix} 1+v_{1} & v_{2} \\ v_{2} & 1-v_{1}\end{pmatrix} $$
and in the Klein disk \(\mathcal{K}_{1/2}\) of the closure \(\overline{\mathcal{L}^{+}}\) of the future lightcone \(\mathcal{L}^{+}\) by
$$ s=(v_{1},v_{2})\longmapsto \frac{1}{2}(1+ \mathbf{v})=1/2+(v_{1}/2,v_{2}/2) . $$
In order to describe perceived colors, i.e., measured colors, it is necessary to characterize all the possible measurements that can be performed on the states. We adopt here the viewpoint of the generalized probability theory [45]. The reader may refer for instance to [46, 47], and [48] for more information on related topics.
We denote by \(\mathcal{C}(\mathcal{S})\) the state cone defined by
$$ \mathcal{C}(\mathcal{S})=\left \{ \alpha \begin{pmatrix} v_{1} \\ v_{2} \\ 1\end{pmatrix}, \alpha \geq 0, s=(v_{1},v_{2}) \in \mathcal{S}\right \} . $$
This cone is self-dual, that is,
$$ \mathcal{C}(\mathcal{S})=\mathcal{C}^{*}(\mathcal{S})=\bigl\{ a\in \mathcal{A}, \forall b\in \mathcal{C}(\mathcal{S}), \langle a,b\rangle \geq 0\bigr\} , $$
where \(\langle \cdot ,\cdot \rangle \) denotes the inner product of \(\mathcal{A}\) given by (12). By definition, an effect is an element e of \(\mathcal{C}^{*}(\mathcal{S})\) such that \(e(s)\leq 1\) for all s in \(\mathcal{S}\). Such an effect e can be seen as an affine function \(e:\mathcal{S}\longrightarrow [0,1]\) with \(0\leq e(s)\leq 1\) for all s. It is the most general way of assigning a probability to all states. Effects correspond to positive operator-valued measures. They also correspond to nonnegative symmetric matrices. In fact, the cone \(\mathcal{C}(\mathcal{S})\) is the positive domain of the algebra \(\mathcal{A}\). Considering \(\mathcal{A}\) as the algebra \(\mathcal{H}(2,\mathbb{R})\), this means that an effect e is a symmetric matrix such that \(\langle e,f\rangle =\operatorname{Trace}(ef)\geq 0\) for every nonnegative symmetric matrix f. In order to verify that e is nonnegative, let us suppose that this is not the case and thus that one of the eigenvalues of e is negative, the corresponding eigenvector being denoted by w. One can then check that the trace of the product \(e\mathbf{w}{\mathbf{w}}^{t}\) is negative. This gives a contradiction since the matrix \(f=\mathbf{w}{\mathbf{w}}^{t}\) is symmetric and nonnegative.
In the present settings, every effect is given by a vector \(e=(a_{1},a_{2},a_{3})\) such that
$$ 0\leq e\cdot \begin{pmatrix} v_{1} \\ v_{2} \\ 1\end{pmatrix}\leq 1 $$
for all \(s=(v_{1},v_{2})\) in \(\mathcal{S}\). The measurement effect associated with e is the operator
$$ E=a_{3}\,\mathrm{Id}_{2}+a_{1}\sigma _{1}+a_{2}\sigma _{2} $$
that must satisfy \(0\leq E\leq \mathrm{Id}_{2}\). This last condition implies that \(0\leq a_{3}\leq 1\), with \(a_{1}^{2}+a_{2}^{2}\leq a_{3}^{2}\) and \(a_{1}^{2}+a_{2}^{2}\leq (1-a_{3})^{2}\). We denote by \(\mathcal{E}(\mathcal{S})\) the effect space of \(\mathcal{S}\), that is, the set of all effects on \(\mathcal{S}\). As explained in Sect. 7.1, this space appears to coincide with the "double cone" depicted in Fig. 4.11 of [17], p. 123. Note that the so-called unit effect, \(e_{1}=(0,0,1)\), satisfies \(e_{1}(s)=1\) for all s in \(\mathcal{S}\).
Colorimetric definitions
A perceived color \(c=(a_{1},a_{2},a_{3})\) is by definition an effect on \(\mathcal{S}\), that is, an element of the effect space \(\mathcal{E}(\mathcal{S})\). Since \(\mathcal{C}(\mathcal{S})=\mathcal{C}^{*}(\mathcal{S})\), a perceived color c is an element of the state cone of \(\mathcal{S}\), this one being the closure \(\overline{\mathcal{L}^{+}}\) of the future lightcone \(\mathcal{L}^{+}\). The element \(c/(2 a_{3})=(a_{1}/2a_{3},a_{2}/2a_{3},1/2)\), \(a_{3}\neq 0\), belongs to the Klein disk \(\mathcal{K}_{1/2}\) of \(\overline{\mathcal{L}^{+}}\). This suggests to define the colorimetric attributes of c as follows:
The real \(a_{3}\), with \(0\leq a_{3}\leq 1\), is the magnitude of the perceived color c.
The element \(s_{c}=(a_{1}/a_{3},a_{2}/a_{3})\in \mathcal{S}\) is the chromatic state of c.
A perceived color with a unit chromatic state is a pure perceived color.
The saturation of a perceived color c is given by the Von Neumann entropy of its chromatic state.
A perceived color whose chromatic state is the state of maximal entropy is achromatic.
Given a state \((v_{1},v_{2})\in \mathcal{S}\), the perceived colors which have this state as chromatic state form the intersection
$$ c_{s}=\mathcal{E}(\mathcal{S})\cap \left \{ \begin{pmatrix} a_{3}v_{1} \\ a_{3}v_{2} \\ a_{3}\end{pmatrix}, 0\leq a_{3}\leq 1\right \} . $$
The maximum value of a perceived color \(c=(a_{1},a_{2},a_{3})\) is
$$ \begin{pmatrix} a_{1} \\ a_{2} \\ a_{3}\end{pmatrix}\cdot \begin{pmatrix} a_{1}/a_{3} \\ a_{2}/a_{3} \\ 1\end{pmatrix}=\frac{a_{1}^{2}+a_{2}^{2}}{a_{3}}+a_{3}=a_{3} \bigl(1+r^{2}\bigr) , $$
with \(r^{2}=(a_{1}^{2}+a_{2}^{2})/a_{3}^{2}\). We must have
$$ 0\leq r^{2}\leq \frac{1-a_{3}}{a_{3}}\leq 1 . $$
If \(0< a_{3}<1/2\), the measure of a perceived color \(c=(a_{1},a_{2},a_{3})\) on its chromatic state \((a_{1}/a_{3},a_{2}/a_{3})\) gives the probability \(a_{3}(1+r^{2})\), this one being well defined for all \(0< r\leq 1\). In particular, pure perceived colors are measured with the maximum probability \(2a_{3}\). In this case, the magnitude is not high enough to allow measurements with probability 1, and the perceived colors are under-estimated.
If \(a_{3}=1/2\), the measure of a perceived color \(c=(a_{1},a_{2},1/2)\) on its chromatic state \((2a_{1},2a_{2})\) gives the probability \((1+r^{2})/2\). This probability is well defined for all \(0< r\leq 1\). It is maximal, equal to 1, if and only if c is a pure perceived color. In this case, the perceived colors are ideally-estimated.
If \(1/2< a_{3}<1\), the measure of a perceived color \(c=(a_{1},a_{2},a_{3})\) on its chromatic state \((a_{1}/a_{3},a_{2}/a_{3})\) gives the probability \(a_{3}(1+r^{2})\). This probability is well defined if and only if equation (83) is satisfied. In particular, pure perceived colors cannot be measured on their chromatic states. For instance, if \(a_{3}=2/3\), then r should be less than or equal to \(\sqrt{2}/2\) and perceived colors with chromatic states of norm equal to \(\sqrt{2}/2\) are measured with probability 1. In this case, the perceived colors are over-estimated.
An achromatic perceived color \(c=(0,0,a_{3})\) measured on a chromatic state gives the probability \(a_{3}\), independently of the chromatic state considered. Such a perceived color does not take into account chromaticity. The unit perceived color \(c=e_{1}\) is the saturated achromatic perceived color.
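The colorimetric attributes just defined are straightforward to compute. The following sketch (Python/NumPy; the helper attributes is ours, purely illustrative, and assumes \(a_{3}\neq 0\)) returns the magnitude, the chromatic state, the saturation as the Von Neumann entropy of that state, and the probability obtained by measuring the color on its own chromatic state.

```python
import numpy as np

def attributes(c):
    """Colorimetric attributes of a perceived color c = (a1, a2, a3)."""
    a1, a2, a3 = c
    chromatic_state = np.array([a1 / a3, a2 / a3])
    r = np.linalg.norm(chromatic_state)
    # Von Neumann entropy of the chromatic state: its eigenvalues are (1 +/- r)/2
    lam = np.array([(1 + r) / 2, (1 - r) / 2])
    entropy = -sum(l * np.log(l) for l in lam if l > 0)
    return {"magnitude": a3,
            "chromatic state": chromatic_state,
            "saturation (entropy)": entropy,
            "probability on own chromatic state": a3 * (1 + r ** 2)}

# a pure perceived color with magnitude 1/2 is ideally estimated (probability 1)
print(attributes((0.5 * np.cos(1.0), 0.5 * np.sin(1.0), 0.5)))
# an achromatic perceived color has a chromatic state of maximal entropy log 2
print(attributes((0.0, 0.0, 0.7)))
```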
Group actions and homogeneity
As already mentioned, Resnikoff's work is based on the fact that there should exist a linear group acting transitively on the space of perceived colors [5, 9]. The elements of this group are supposed to be background illuminant changes. Up to now, we have not taken into account such an action to obtain the description that we propose for the space of perceived colors. This section is mainly devoted to showing that one can characterize illumination changes by Lorentz boost maps starting from the quantum dynamics described above. We will see in Sect. 7.2 how to relate our results with those of H. Yilmaz [14, 15].
Recalls on the special Lorentz group
Let us first recall that the special Lorentz group \(\operatorname{SO}^{+}(1,2)\) is the identity component of the group \(O(1,2)\), this latter being the matrix Lie group that preserves the quadratic form
$$ \bigl\Vert (\alpha +\mathbf{v}) \bigr\Vert _{\mathcal{M}}=\alpha ^{2}- \Vert {\mathbf{v}} \Vert ^{2}, $$
where \((\alpha +\mathbf{v})\) belongs to the spin factor \(\mathbb{R}\oplus \mathbb{R}^{2}\). The fact that \(\operatorname{SO}^{+}(1,2)\) acts linearly on \(\mathcal{L}^{+}\) means that it acts projectively on the set of lines of \(\mathcal{L}^{+}\) and consequently on the points of the Klein disk \(\mathcal{K}_{1/2}\) [49]. Moreover, this projective action gives the isometries of \(\mathcal{K}_{1/2}\).
The subgroup of \(\operatorname{SO}^{+}(1,2)\) that fixes \((1+0)\) may be identified with the group of rotations \(\operatorname{SO}(2)\), and in fact every element g of \(\operatorname{SO}^{+}(1,2)\) can be decomposed in a unique way as follows [50]:
$$ g=b_{\zeta }r_{\xi } , $$
where \(b_{\zeta }\) is a boost map and \(r_{\xi }\) is a proper rotation. More precisely, if we consider the coordinates \((\alpha , v_{1}, v_{2})\) in \(\mathcal{L}^{+}\), the matrix associated with \(b_{\zeta }\) is given by
$$ M(b_{\zeta })= \begin{pmatrix} \cosh (\zeta _{0})&\zeta _{x}\sinh (\zeta _{0}) & \zeta _{y}\sinh ( \zeta _{0}) \\ \zeta _{x}\sinh (\zeta _{0}) & 1+\zeta _{x}^{2}(\cosh (\zeta _{0})-1) & \zeta _{x}\zeta _{y}(\cosh (\zeta _{0})-1) \\ \zeta _{y}\sinh (\zeta _{0}) & \zeta _{x}\zeta _{y}(\cosh (\zeta _{0})-1)& 1+\zeta _{y}^{2}(\cosh (\zeta _{0})-1) \end{pmatrix} , $$
where \((\zeta _{x},\zeta _{y})\) is a unit vector of \(\mathbb{R}^{2}\) and \(\zeta _{0}\) is the rapidity of the boost. It should be noted that the set of boosts is not a subgroup of the special Lorentz group. The matrix associated with \(r_{\xi }\) is given by
$$ M(r_{\xi })= \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos \xi & -\sin \xi \\ 0 & \sin \xi & \cos \xi \end{pmatrix} . $$
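The boost and rotation matrices above are easy to handle numerically. The sketch below (Python/NumPy, illustrative function names) builds \(M(b_{\zeta })\) and \(M(r_{\xi })\) and checks that they, and their products, preserve the Minkowski quadratic form, as elements of \(\operatorname{SO}^{+}(1,2)\) must.

```python
import numpy as np

def boost(zeta0, zx, zy):
    """Lorentz boost M(b_zeta) in the coordinates (alpha, v1, v2); (zx, zy) is a unit vector."""
    ch, sh = np.cosh(zeta0), np.sinh(zeta0)
    return np.array([
        [ch,       zx * sh,                 zy * sh],
        [zx * sh,  1 + zx * zx * (ch - 1),  zx * zy * (ch - 1)],
        [zy * sh,  zx * zy * (ch - 1),      1 + zy * zy * (ch - 1)],
    ])

def rotation(xi):
    """Proper rotation M(r_xi) fixing the alpha axis."""
    c, s = np.cos(xi), np.sin(xi)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

eta = np.diag([1.0, -1.0, -1.0])            # Minkowski form alpha^2 - ||v||^2
for M in (boost(0.7, 0.6, 0.8), rotation(1.2), boost(0.3, 1.0, 0.0) @ rotation(0.4)):
    assert np.allclose(M.T @ eta @ M, eta)   # the quadratic form is preserved
    assert np.linalg.det(M) > 0              # orientation preserving
print("boosts and rotations preserve the Minkowski quadratic form")
```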
In order to illustrate the projective action of the boost \(b_{\zeta }\) on the Klein disk \(\mathcal{K}_{1/2}\), let us consider a simple example (more involved computations give similar results in the general case). We choose \(\zeta _{x}=1\), \(\zeta _{y}=0\), and denote \(\overline{\zeta }=\tanh (\zeta _{0})\). The image \((\alpha ,w_{1},w_{2})\) of a vector \((1/2,\cos \theta /2,\sin \theta /2)\) is given by
$$ \textstyle\begin{cases} 2\alpha = \cosh (\zeta _{0})+\sinh (\zeta _{0})\cos \theta, \\ 2w_{1} = \sinh (\zeta _{0})+\cosh (\zeta _{0})\cos \theta, \\ 2w_{2}=\sin \theta . \end{cases} $$
This means that the image of the boundary point \((\cos \theta /2,\sin \theta /2)\) is the boundary point \((v_{1},v_{2})\) with
$$ \textstyle\begin{cases} 2v_{1}= \frac{\overline{\zeta }+\cos \theta }{1+ \overline{\zeta }\cos \theta }, \\ 2v_{2}= \frac{(1-\overline{\zeta }^{2})^{1/2}\sin \theta }{1+ \overline{\zeta }\cos \theta } . \end{cases} $$
One may notice that the map sending the point \((\cos \theta /2,\sin \theta /2)\) to the point \((v_{1},v_{2})\) is an element of the group \(\operatorname{PSL}(2,\mathbb{R})\). Transformation (89) will be used in Sect. 7.2 to interpret Yilmaz's third experiment as a colorimetric analog of the relativistic aberration effect.
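The following sketch (Python/NumPy, illustrative names of our own) reproduces this computation: it applies the boost to boundary points of \(\mathcal{K}_{1/2}\) and checks that the projected images satisfy formula (89).

```python
import numpy as np

def boost_x(zeta0):
    """Boost along the first chromatic direction, coordinates (alpha, v1, v2)."""
    ch, sh = np.cosh(zeta0), np.sinh(zeta0)
    return np.array([[ch, sh, 0.0], [sh, ch, 0.0], [0.0, 0.0, 1.0]])

def project_K12(a):
    """Projective action: send (alpha, v1, v2) to the Klein disk K_{1/2} (level alpha = 1/2)."""
    return np.array([a[1], a[2]]) / (2 * a[0])

zeta0 = 0.9
zbar = np.tanh(zeta0)
for theta in np.linspace(0.0, 2 * np.pi, 13):
    point = np.array([0.5, 0.5 * np.cos(theta), 0.5 * np.sin(theta)])
    image = project_K12(boost_x(zeta0) @ point)
    expected = np.array([(zbar + np.cos(theta)) / (1 + zbar * np.cos(theta)),
                         np.sqrt(1 - zbar ** 2) * np.sin(theta) / (1 + zbar * np.cos(theta))]) / 2
    assert np.allclose(image, expected)
print("the boost acts on the boundary of K_{1/2} by the aberration-like formula (89)")
```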
Pure states and one parameter subgroups of Lorentz boosts
The fact that boost maps act on pure states with \(\operatorname{PSL}(2,\mathbb{R})\) transformations is not surprising in view of the following result (see Footnote 2).
Every pure state generates a one-parameter subgroup of boosts.
As seen before, the state density matrix of a pure state is given by
$$ \rho (v_{1},v_{2})=\frac{1}{2}( \mathrm{Id}_{2}+\mathbf{v}\cdot \sigma ) , $$
where \(\mathbf{v}=(v_{1},v_{2})\) is a unit vector and \(\sigma =(\sigma _{1},\sigma _{2})\). The matrices \(\sigma _{1}\) and \(\sigma _{2}\) are symmetric traceless matrices that are usually chosen to be the first two generators of the Lie algebra \(\operatorname{sl}(2,\mathbb{R})\) of the group \(\operatorname{SL}(2,\mathbb{R})\). Note that they do not generate a sub-Lie algebra of \(\operatorname{sl}(2,\mathbb{R})\). The matrix
$$ A(\rho ,\zeta _{0})=\exp \biggl(\zeta _{0} \frac{\mathbf{v}\cdot \sigma }{2} \biggr) $$
is a symmetric element of \(\operatorname{PSL}(2,\mathbb{R})\), \(\zeta _{0}\) being a real parameter. Let us recall that the \(\operatorname{PSL}(2,\mathbb{R})\) action on \(\mathcal{H}(2,\mathbb{R})\) is defined by
$$ X\longmapsto AXA^{t} . $$
We have clearly \(\operatorname{Det}(AXA^{t})=\operatorname{Det}(X)\). Since \(\sigma _{1}\) and \(\sigma _{2}\) are elements of \(\mathcal{H}(2,\mathbb{R})\), we can consider the matrices given by
$$ \sigma _{i}\longmapsto A(\rho ,\zeta _{0})\sigma _{i}A(\rho ,\zeta _{0}) $$
for \(i=0,1,2\), with \(\sigma _{0}=\mathrm{Id}_{2}\). It can be shown that the \(3\times 3\) matrix with coefficients
$$ M(\rho ,\zeta _{0})_{ij}=\frac{1}{2}{ \operatorname{Trace}} \bigl(\sigma _{i}A( \rho ,\zeta _{0}) \sigma _{j}A(\rho ,\zeta _{0}) \bigr) $$
is a boost \(b_{\zeta }\) with \(\zeta =\tanh (\zeta _{0})(v_{1},v_{2})\). Let us verify it on a simple example where \(v_{1}=1\) and \(v_{2}=0\). In this case,
$$ A(\rho ,\zeta _{0})=\exp \biggl(\zeta _{0} \frac{v_{1}\sigma _{1}}{2} \biggr)=\exp \biggl(\zeta _{0} \frac{\sigma _{1}}{2} \biggr)= \begin{pmatrix} e^{\zeta _{0}/2} & 0 \\ 0 & e^{-\zeta _{0}/2}\end{pmatrix} . $$
We only need to compute the coefficients \(M(\rho ,\zeta _{0})_{ij}\) for \(i\leq j\), the matrix being symmetric. We have
$$ \textstyle\begin{cases} M(\rho ,\zeta _{0})_{00}=\frac{1}{2}\operatorname{Trace} (A^{2}(\rho , \zeta _{0}) )=\cosh (\zeta _{0}), \\ M(\rho ,\zeta _{0})_{01}=\frac{1}{2}\operatorname{Trace} (A(\rho ,\zeta _{0}) \sigma _{1}A(\rho ,\zeta _{0}) )=\sinh (\zeta _{0}), \\ M(\rho ,\zeta _{0})_{02}=\frac{1}{2}\operatorname{Trace} (A(\rho ,\zeta _{0}) \sigma _{2}A(\rho ,\zeta _{0}) )=0, \\ M(\rho ,\zeta _{0})_{11}=\frac{1}{2}\operatorname{Trace} (\sigma _{1} A( \rho ,\zeta _{0})\sigma _{1}A(\rho ,\zeta _{0}) )=\cosh ( \zeta _{0}), \\ M(\rho ,\zeta _{0})_{12}=\frac{1}{2}\operatorname{Trace} ( \sigma _{1}A( \rho ,\zeta _{0})\sigma _{2}A(\rho ,\zeta _{0}) )=0, \\ M(\rho ,\zeta _{0})_{22}=\frac{1}{2}\operatorname{Trace} ( \sigma _{2}A( \rho ,\zeta _{0})\sigma _{2}A(\rho ,\zeta _{0}) )=1 . \end{cases} $$
This means that \(M(\rho ,\zeta _{0})=M(b_{\zeta })\) with \(\zeta =\tanh (\zeta _{0})(1,0)\), see equation (86). □
One can easily verify that the image of the vector \((1/2,0,0)\) of \(\mathcal{L}^{+}\) by the boost \(b_{\zeta }\) with \(\zeta =\tanh (\zeta _{0})(1,0)\) is the vector \((\cosh (\zeta _{0})/2,\sinh (\zeta _{0})/2,0)\). In consequence, the state of maximal entropy \(\rho _{0}=(0,0)\) is sent to the state \((\tanh (\zeta _{0})/2,0)\). This extends to general boosts.
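The general statement can also be tested numerically. The sketch below (Python/NumPy, illustrative names; it uses the closed form of the exponential, which is valid because \((\mathbf{v}\cdot \sigma )^{2}=\mathrm{Id}_{2}\) for a unit vector v) checks that the matrix with entries \(\frac{1}{2}\operatorname{Trace}(\sigma _{i}A\sigma _{j}A)\) coincides with the boost \(M(b_{\zeta })\) for random pure states and rapidities.

```python
import numpy as np

sigma = [np.eye(2),
         np.array([[1.0, 0.0], [0.0, -1.0]]),
         np.array([[0.0, 1.0], [1.0, 0.0]])]           # sigma_0, sigma_1, sigma_2

def A_of(v, zeta0):
    """A(rho, zeta0) = exp(zeta0 v.sigma / 2) = cosh(zeta0/2) Id + sinh(zeta0/2) v.sigma."""
    vs = v[0] * sigma[1] + v[1] * sigma[2]
    return np.cosh(zeta0 / 2) * np.eye(2) + np.sinh(zeta0 / 2) * vs

def M_from_pure_state(v, zeta0):
    """3x3 matrix with entries (1/2) Trace(sigma_i A sigma_j A)."""
    A = A_of(v, zeta0)
    return np.array([[0.5 * np.trace(sigma[i] @ A @ sigma[j] @ A) for j in range(3)]
                     for i in range(3)])

def boost(zeta0, zx, zy):
    ch, sh = np.cosh(zeta0), np.sinh(zeta0)
    return np.array([[ch, zx * sh, zy * sh],
                     [zx * sh, 1 + zx * zx * (ch - 1), zx * zy * (ch - 1)],
                     [zy * sh, zx * zy * (ch - 1), 1 + zy * zy * (ch - 1)]])

rng = np.random.default_rng(5)
for _ in range(20):
    v = rng.normal(size=2); v /= np.linalg.norm(v)      # unit Bloch vector of a pure state
    zeta0 = rng.uniform(-2, 2)
    assert np.allclose(M_from_pure_state(v, zeta0), boost(zeta0, v[0], v[1]))
print("every pure state generates the one-parameter family of boosts along its direction")
```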
We can summarize these computations in the following way. As before, we consider the state space \(\mathcal{S}\) as the Klein disk \(\mathcal{K}_{1/2}\) of the closure \(\overline{\mathcal{L}^{+}}\) of the future lightcone \(\mathcal{L}^{+}\) by using the map \(s=(v_{1},v_{2})\longmapsto (1+\mathbf{v})/2\).
Every pure state ρ generates a one-parameter subgroup of boosts, the parameter \(\zeta _{0}\) being the rapidity. Actually, every boost can be obtained in this way. Boost maps act on the Klein disk \(\mathcal{K}_{1/2}\) by isometries. If we consider \(\mathcal{S}\) as embedded in the space of state density matrices by
$$ s=(v_{1},v_{2})\longmapsto \rho (v_{1},v_{2})= \frac{1}{2} \begin{pmatrix} 1+v_{1} & v_{2} \\ v_{2} & 1-v_{1}\end{pmatrix} , $$
the one-parameter subgroup of boosts is obtained by considering the action of \(\operatorname{PSL}(2,\mathbb{R})\) on \(\mathcal{H}(2,\mathbb{R})\). It is important to notice that we use only the matrices \(\sigma _{0}\), \(\sigma _{1}\), and \(\sigma _{2}\), i.e., only information from \(\mathcal{S}\). Since every state can be obtained from the state of maximal entropy, boosts, or equivalently pure states, act transitively on S. However, one has to pay attention to the fact that boost maps do not form a subgroup of the special Lorentz group, which is reflected by the fact that \(\sigma _{1}\) and \(\sigma _{2}\) do not form a sub-Lie algebra of the Lie algebra \(\operatorname{sl}(2,\mathbb{R})\).
About homogeneity
This point of view is quite different from the approach adopted in [5]. As mentioned in the introduction and explained in [9], one of the key arguments of H.L. Resnikoff is the existence of a transitive action of the group denoted \(\operatorname{GL}(\mathcal{P})\) on the space \(\mathcal{P}\) of perceived colors. This group is supposed to be composed of all linear changes of background illumination. In what precedes, we make use of the action of \(\operatorname{PSL}(2,\mathbb{R})\) on \(\mathcal{H}(2,\mathbb{R})\), see (92). But the matrices \(A(\rho ,\zeta _{0})\) of \(\operatorname{PSL}(2,\mathbb{R})\) that are used are also symmetric, due to the fact that \(\sigma _{1}\) and \(\sigma _{2}\) are symmetric. Actually, the action (92) can be also viewed as the action
$$ X\longmapsto AXA $$
of the Jordan algebra \(\mathcal{H}(2,\mathbb{R})\) on itself. This is precisely the action
$$ Q(A):X\longmapsto \bigl(2L(A)^{2}-L\bigl(A^{2}\bigr) \bigr)X $$
of the quadratic representation of A on X [10]. But once again, the matrices X that we consider are \(\sigma _{0}\), \(\sigma _{1}\), and \(\sigma _{2}\). The matrices \(\sigma _{1}\) and \(\sigma _{2}\) are not elements of the positive cone \(\mathcal{H}^{+}(2,\mathbb{R})\). It appears in consequence that the homogeneity of \(\mathcal{H}^{+}(2,\mathbb{R})\) is not so important in our approach. Instead of postulating the existence of a group of linear changes of background illumination, we have shown that the quantum description that we propose naturally leads to considering boost maps as illumination changes. These illumination changes are isometries of the Klein disk \(\mathcal{K}_{1/2}\).
Consequences and perspectives
We discuss now some consequences and perspectives of our results regarding color perception.
Neural coding of colors and Hering's rebit
D.H. Krantz describes in [51] Hering's color opponency mechanism [52] as follows: "E. Hering noted that colors can be classified as reddish or greenish or neither, but that redness and greenness are not simultaneously attributes of a color. If we add increasing amounts of a green light to a reddish light, the redness of the mixture decreases, disappears, and gives way to greenness. At the point where redness is gone and greenness is not yet present, the color may be yellowish, bluish, or achromatic. We speak of a partial chromatic equilibrium, with respect to red/green… Similarly, yellow and blue are identified as opponent hues…"
Let us rename \(|g\rangle =|u_{1}\rangle \), \(|r\rangle =|d_{1}\rangle \), \(|b\rangle =|u_{2}\rangle \), and \(|y\rangle =|d_{2}\rangle \) as the four state vectors characterizing the rebit. The opponency mechanism is given by the two matrices \(\sigma _{1}\) and \(\sigma _{2}\). More precisely, the state vector
$$ \bigl|(1,\theta )\bigr\rangle =\cos (\theta /2) \vert g\rangle +\sin (\theta /2) \vert r \rangle $$
satisfies
$$ \bigl\langle (1,\theta ) \bigr\vert \sigma _{1} \bigl\vert (1,\theta ) \bigr\rangle =\cos \theta , \qquad \bigl\langle (1,\theta ) \bigr\vert \sigma _{2} \bigl\vert (1,\theta )\bigr\rangle =\sin \theta . $$
This means that if \(\cos \theta > 0\), then the pure chromatic state \(s(\theta )\) of the Bloch disk with coordinate θ is greenish, and if \(\cos \theta < 0\), then \(s(\theta )\) is reddish. For \(\theta =\pi /2\), or \(\theta =3\pi /2\), \(s(\theta )\) is achromatic in the opposition green/red. In the same way, if sinθ is positive, then \(s(\theta )\) is bluish, and if sinθ is negative, then \(s(\theta )\) is yellowish. For \(\theta =0\), or \(\theta =\pi \), \(s(\theta )\) is achromatic in the blue/yellow opposition. The phenomenon "that redness and greenness are not simultaneously attributes of a color", for instance, is a trivial consequence of the fact that \(\langle (1,\theta )|\sigma _{1}|(1,\theta )\rangle \) cannot be simultaneously positive and negative.
The mathematical description of the opponency that we propose seems to be relevant regarding the physiological mechanisms of the neural coding of colors [16] and [17]. These mechanisms involve both three separate receptor types (the L, M, and S cones) and spectrally opponent and nonopponent interactions. These latter, which take place at a higher level in the processing pipeline, result essentially from the activity rates of ganglion and lateral geniculate nucleus cells [53, 54]. Roughly speaking, color information is obtained by detecting and magnifying the differences between the various receptor type outputs.
Ganglion cells take their inputs from the bipolar and amacrine cells and relay the information to the lateral geniculate nucleus through ganglion axons. Most of the ganglion cells are on-center and off-surround, which means that they are activated if light falls in the center of their receptive fields and inhibited if light falls in the surround of their receptive fields. There exist also off-center and on-surround ganglion cells. One distinguishes two types of spectrally opponent interactions. The first one is given by the activity rate of midget ganglion cells located in the fovea. These cells fire when the difference of the spectral sensitivities of the L and M cones is greatest. This mechanism produces the L-M and M-L spectral opposition [53]. The second type is given by the activity rate of bistratified ganglion cells [54]. These cells fire when the difference between the spectral sensitivity of the S cone and both the spectral sensitivities of the L and M cones is greatest. This second mechanism produces the S-(L+M) and (L+M)-S spectral opposition. Besides the spectrally opponent interactions, there exists one type of spectrally nonopponent interaction given by the activity rate of parasol ganglion cells. These cells carry essentially the L+M and -(L+M) information.
As summarized in [17], the two main types of neural interactions seen in the precortical visual system are thus due to four spectrally opponent cells (see Footnote 3), R-G, G-R, B-Y, and Y-B, and to two spectrally nonopponent cells Bl and Wh. The hue of a perceived color is determined by the activity rates among the four spectrally opponent cell types, the lightness by the two activity rates of the spectrally nonopponent cells, and the saturation by the relative rates of the opponent and nonopponent cells. This description is clearly coherent with our results. As already mentioned, the "double cone" depicted in [17], Fig. 4.11, p. 123, is nothing else than the effect space of the real quantum system of Sect. 3.2. This justifies the terminology Hering's rebit. This also shows that rebits, with only two opposition directions, can be relevant to model nonphysical phenomena related to perception.
It appears in consequence that the quantum model that we propose makes it possible to recover axiomatically, starting from the sole trichromacy axiom, that a chromatic pure state, that is, a hue, is given by a pair of splittings similar to the two spin up and down inversions of a rebit. Following L.E.J. Brouwer, "Newton's theory of color analyzed light rays in their medium, but Goethe and Schopenhauer, more sensitive to the truth, considered color to be the polar splitting by the human eye" [55] (see also [56] and [57]).
Yilmaz's relativity of color perception from the trichromacy axiom
Yilmaz's contributions [14] and [15] are devoted to deriving colorimetric analogs of the relativistic Lorentz transformations from three basic experiments. The first experiment is supposed to show that color perception is a relativistic phenomenon; the second one to show that there exists a limiting saturation invariant under illumination changes; and the third one to show that there exists a colorimetric analog of the relativistic aberration effect [58]. These experiments involve observers located in two different rooms who perform color matching according to illuminant changes. In particular, the interpretation of the third experiment is crucial for the derivation of the transformations since it avoids introducing a perceptually invariant quadratic form whose existence is very difficult to justify experimentally (see Footnote 4).
Our objective here is to explain how to recover the result and the interpretation of this third experiment with the only use of the trichromacy axiom. The reader will find a more complete and detailed exposition in the forthcoming paper [59]. We have shown in Sect. 6 how to obtain the expression of the illuminant changes as Lorentz boost maps from the trichromacy axiom. We have also described the projective action of these transformations on the Klein disk \(\mathcal{K}_{1/2}\) in the particular case that interests us here, see (89).
Under transformation (89), the image of the point \(\overline{R}=(1/2,0)\) is the point \(\overline{R}'=\overline{R}=(1/2,0)\) (we use the notations of [15]). So, this point remains unchanged. The image of the point \(\overline{Y}=(0,1/2)\) is the point \(\overline{Y}'=(\overline{\zeta }/2,(1-\overline{\zeta }^{2})^{1/2}/2)\) and the point Y̅ has moved on the boundary, the angle ϕ, the one reported in [15], p. 12, being given by \(\sin \phi =\overline{\zeta }\). When the rapidity \(\zeta _{0}\) increases, ζ̅ approaches 1 and the point \((\overline{\zeta },(1-\overline{\zeta }^{2})^{1/2})/2\) approaches the point \((1,0)/2\). At the limit \(\overline{\zeta }=1\), every point \((\cos \theta ,\sin \theta )/2\) is sent to the point \((1,0)/2\), except the point \((-1,0)/2\). This means that every pure chromatic state, except the green pure chromatic state, can be transformed to a pure chromatic state arbitrarily close to the red pure chromatic state under the Lorentz boost if the rapidity \(\zeta _{0}\) is sufficiently large. To explain the results of Yilmaz's third experiment, note that \(v_{1}\) in (89) is the cosine of the angle of the ray from the achromatic state to the image of the chromatic state \((\cos \theta ,\sin \theta )/2\) viewed under the initial illuminant I, whereas
$$ \overline{v}_{1}=\frac{-\overline{\zeta }+\cos \theta }{1-\overline{\zeta }\cos \theta } $$
is the cosine of the angle of the ray from the achromatic state to the image of the chromatic state \((\cos \theta ,\sin \theta )/2\) viewed under the illuminant \(I'\). In consequence, under the illuminant \(I'\), the expected yellow chromatic state given by \(\theta =\pi /2\) is in fact the greenish chromatic state given by \(\cos \theta =-\tanh (\zeta _{0})\).
We have already remarked in Sect. 4.3 that the hyperbolic Klein metric on \(\mathcal{K}_{1/2}\) is given by the Hilbert metric. The relativistic viewpoint makes it possible to better understand the relevance of the latter. One can first show that chromatic vectors satisfy a colorimetric analog of the Einstein–Poincaré addition law. More precisely, given two perceived colors c and d with chromatic vectors \(\mathbf{v}_{c}=(v_{c},0)\) and \(\mathbf{v}_{d}=(v_{d},0)\), the chromatic vector \(\mathbf{v}^{c}_{d}=(v^{c}_{d},0)\) that describes the perceived color c with respect to d satisfies [59]
$$ {\mathbf{v}}_{c}=\frac{\mathbf{v}^{c}_{d}+\mathbf{v}_{d}}{1+4{\mathbf{v}}^{c}_{d}{ \mathbf{v}}_{d}} . $$
Then, this addition law can be related to an invariance property of the Hilbert metric. It is proven in [59] that
$$ d_{H}\bigl(\mathbf{0},\mathbf{v}^{c}_{d} \bigr)=d_{H}(\mathbf{v}_{d},\mathbf{v}_{c}) \quad \iff\quad { \mathbf{v}}_{c}=\frac{\mathbf{v}^{c}_{d}+\mathbf{v}_{d}}{1+4{\mathbf{v}}^{c}_{d}{ \mathbf{v}}_{d}} . $$
This last equivalence expresses the constancy of the Hilbert metric under illumination changes.
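Restricted to a diameter of \(\mathcal{K}_{1/2}\), this invariance is easy to check numerically. The sketch below (Python/NumPy, illustrative names of our own) verifies it for random chromatic vectors.

```python
import numpy as np

def d_hilbert(x, y, radius=0.5):
    """Hilbert distance between two points of the interval (-radius, radius),
    i.e., along a diameter of the Klein disk K_{1/2}."""
    t = lambda u: 0.5 * np.log((radius + u) / (radius - u))
    return abs(t(y) - t(x))

rng = np.random.default_rng(6)
for _ in range(1000):
    vd, vcd = rng.uniform(-0.49, 0.49, size=2)        # chromatic vectors v_d and v^c_d
    vc = (vcd + vd) / (1 + 4 * vcd * vd)              # Einstein-Poincare addition law
    assert np.isclose(d_hilbert(0.0, vcd), d_hilbert(vd, vc))
print("the addition law leaves the Hilbert distance invariant, as in the equivalence above")
```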
On MacAdam ellipses and Hilbert metric
Hilbert's metric is in fact defined on every convex set Ω and is always a Finsler metric (see Footnote 5) [60, 61], whose asymmetric norm is given by
$$ \Vert {\mathbf{v}} \Vert _{p}=\frac{1}{2} \biggl( \frac{1}{ \Vert p-p^{-} \Vert }+\frac{1 }{ \Vert p-p^{+} \Vert } \biggr) \Vert {\mathbf{v}} \Vert , $$
where \(p\in \varOmega \) and \(p^{\pm }\) are the intersection points with the boundary ∂Ω of the oriented line in Ω defined by the vector v with Euclidean norm \(\Vert {\mathbf{v}}\Vert \) based at the point p. It is well known that this Finsler metric is Riemannian if and only if the boundary ∂Ω is an ellipse.
The perceived color space that we have described is an ideal space that does not involve specific characteristics of a human observer. It is natural to envisage characterizing every human observer's capability regarding color perception by a convex subset Ω of the state space \(\mathcal{S}\), or equivalently of the Klein disk \(\mathcal{K}_{1/2}\), endowed with the Finsler metric given by the Hilbert distance. This convex subset Ω is in some sense the restriction of the ideal chromatic state space due to the limitations of the observer's perception. Work in progress is devoted to identifying, for each observer, the convex subset Ω by comparing the balls of the Finsler metric with the MacAdam ellipses drawn by the observer [62], which appear to be very similar.
Let us also mention that the problem of discernibility of perceived colors is reminiscent of the problem of distinguishability of quantum states [63, 64].
Contexts and open quantum systems
It is important to notice that our study does not take into account so-called contextual effects, e.g., spatial context effects, that are involved in various well-known color perception phenomena such as the Helmholtz–Kohlrausch phenomenon [7, 65]. This corresponds to the fact that the quantum system of the rebit is closed, i.e., it has no interaction with its environment. By contrast, open quantum systems may interact with other quantum systems as parts of a larger system [66]. The resulting modification of the initial state space, i.e., of the space of density matrices, can be described by linear, trace-preserving, completely positive maps [67]. One may envisage explaining the phenomena mentioned above by such mechanisms.
An alternative approach to deal with context effects, based on the nonlocal theory of fiber bundles and connections, has been suggested by E. Provenzi in [68].
Finally, our work can be recast in a much broader emerging field of research whose goal is to model general perceptual and cognitive phenomena from quantum theory, see for instance [69] or [70].
Footnotes
Footnote 1: That is the mathematical structure of the space of perceived colors.
Footnote 2: It is tempting to draw parallels between this result and the more general point of view of [71].
Footnote 3: We adopt here the notations of [17], although the spectral opponencies correspond in practice to the oppositions pinkish-red/cyan and violet/greenish-yellow. One can recover the oppositions red/green and blue/yellow with the multi-stage color model of R.L. and K.K. de Valois [16].
Footnote 4: The usual way to derive the relativistic Lorentz transformations is to compute the linear transformations that leave invariant Minkowski's quadratic form.
Footnote 5: The boundary of Ω is supposed to be sufficiently regular.
References
1. Weyl H. Mind and nature. In: Mind and nature, selected writings on philosophy, mathematics, and physics. Princeton: Princeton University Press; 2009.
2. Grassmann HG. Zur theorie der farbenmischung. Ann Phys Chem. 1853;89:69–84.
3. Maxwell JC. On color vision. Proc R Inst GB. 1872;6:260–71.
4. Schrödinger E. Grundlinien einer theorie der farbenmetrik in tagessehen. Ann Phys. 1920;63(4):397–520.
5. Resnikoff HL. Differential geometry and color perception. J Math Biol. 1974;1:97–131.
6. Riemann B. Über die hypothesen, welche der geometrie zu grunde liegen. In: The collected works of Bernhard Riemann. New York: Dover Books on Mathematics; 2017.
7. Von Helmholtz H. Treatise on physiological optics. Rochester: Optical Society of America; 1924. English trans. of first German edition.
8. Bengtsson I, Zyczkowski K. Geometry of quantum states, an introduction to quantum entanglement. 2nd ed. Cambridge: Cambridge University Press; 2017.
9. Provenzi E. Geometry of color perception. Part 1: structures and metrics of a homogeneous color space. J Math Neurosci. 2020;10:7.
10. Faraut J, Koranyi A. Analysis on symmetric cones. Oxford: Clarendon; 1994.
11. Wyszecki G, Stiles WS. Color science, concepts and methods, quantitative data and formulae. New York: Wiley; 2000.
12. Calvo M, Oller JM. A distance between multivariate normal distributions based in an embedding into the Siegel group. J Multivar Anal. 1990;35:223–42.
13. Siegel CL. Symplectic geometry. Am J Math. 1943;65:1–86.
14. Yilmaz H. Color vision and a new approach to general perception. In: Bernard EE, Kare MR, editors. Biological prototypes and synthetic systems. Boston: Springer; 1962.
15. Yilmaz H. On color perception. Bull Math Biophys. 1962;24:5–29.
16. Reinhard E, Arif Khan E, Oguz Akyuz A, Johnson G. Color imaging, fundamentals and applications. Wellesley: AK Peters; 2008.
17. de Valois RL, de Valois KK. Neural coding of color. In: Byrne A, Hilbert DR, editors. Readings on color, the science of color. vol. 2. A Bradford book. Cambridge: MIT Press; 1997. p. 93–140.
18. Koecher M. Jordan algebras and differential geometry. In: Actes, congrès intern. math. vol. 1. 1970. p. 279–83.
19. Koecher M. The Minnesota notes on Jordan algebras and their applications. Lecture notes in mathematics. vol. 1710. Berlin: Springer; 1999.
20. McCrimmon K. Jordan algebras and their applications. Bull Am Math Soc. 1978;84:612–27.
21. McCrimmon K. A taste of Jordan algebras. New York: Springer; 2004.
22. Jordan P, Von Neumann J, Wigner E. On an algebraic generalization of the quantum mechanical formalism. Ann Math. 1934;35(1):29–64.
23. Carinena JF, Clemente-Gallardo J, Marmo G. Geometrization of quantum mechanics. Theor Math Phys. 2007;152(1):894–903.
24. Gunson J. On the algebraic structure of quantum mechanics. Commun Math Phys. 1967;6:262–85.
25. Jordan P. Ueber verallgemeinerungsmöglichkeiten des formalismus der quantenmechanik. Nachr Akad Wiss Gött Math-Phys Kl. 1933;41:209–17.
Chevalley C. The algebraic theory of spinors and Clifford algebras. In: Collected works of Claude Chevalley. vol. 2. Berlin: Springer; 1996.
Ashtekar A, Corichi A, Pierri M. Geometry in color perception. In: Iyer B, Bhawal B, editors. Black holes, gravitational radiation and the universe. Dordrecht: Springer; 1999. p. 535–50.
Patterson SS, Neitz M, Neitz J. Reconciling color vision models with midget ganglion cell receptive fields. Front Neurosci. 2019;13:865.
Dirac PAM. The principles of quantum mechanics. 4th ed. International series of monographs on physics. vol. 27. Oxford: Oxford University Press; 1982.
Wootters WK. Optimal information transfer and real-vector-space quantum theory. In: Chiribella G, Spekkens R, editors. Quantum theory: informational foundations and foils. Fundamental theories of physics. vol. 181. Dordrecht: Springer; 2016. p. 21–43.
Berthier M. Spin geometry and image processing. Adv Appl Clifford Algebras. 2014;24(2):293–312.
Baez JC. Division algebras and quantum theory. 2011. arXiv:1101.5690v3 [quant-ph].
Jordan P. Über eine klasse nichtassociativer hyperkomplexer algebren. Nachr Ges Wiss Göttingen. 1932; 569–575.
Baez JC. The octonions. Bull, New Ser, Am Math Soc. 2001;39(2):145–205.
Topping DM. Jordan algebras of self-adjoint operators. Mem Am Math Soc. 1965;53:1–48.
Minkowski H. The principle of relativity. Calcutta: University Press; 1920. p. 70–88. English translation of Raum und Zeit by Saha, M.
Fano U. Description of states in quantum mechanics by density matrix and operator techniques. Rev Mod Phys. 1957;29(1):74–93.
Appleby DM. Symmetric informationally complete measurements of arbitrary rank. 2006. arXiv:quant-ph/0611260v1.
Jammer M. The philosophy of quantum mechanics. The interpretations of QM in historical perspectives. New York: Wiley; 1974.
Landau LD. Das dämpfungsproblem in der wellenmechanik. Z Phys. 1927;45:430–41.
Jordan P. Quantenphysikalische bemerkungen über biologie und psychologie. Erkenntnis. 1934;4:215–52.
Mermin ND. Is the moon there when nobody looks? Reality and the quantum theory. Phys Today. 1985;38:38–47.
Cannon JW, Floyd WJ, Kenyon R, Parry WR. Hyperbolic geometry. Flavors of geometry. MSRI Publ. 1997;31:59–115.
Beardon AF. The Klein, Hilbert and Poincaré metrics of a domain. J Comput Appl Math. 1999;105:155–62.
Janotta P, Hinrichsen H. Generalized probability theories: what determines the structure of quantum theory? J Phys A, Math Theor. 2014;47:323001.
Holevo AS. Probabilistic and statistical aspects of quantum theory. North-Holland series in statistics and probability. vol. 1. Amsterdam: North-Holland; 1982.
Kraus K. States, effects, and operations. Fundamental notions of quantum theory. Lecture notes in physics. vol. 190. Berlin: Springer; 1983.
Gudder SP. Quantum probability. San Diego: Academic Press; 1988.
Ghys E. Groups acting on the circle. Enseign Math. 2001;47:329–407.
Johns O. Analytical mechanics for relativity and quantum mechanics. New York: Oxford University Press; 2005.
Krantz DH. Color measurement and color theory: II. Opponent-colors theory. J Math Psychol. 1975;12:304–27.
Hering E. Zur lehre vom lichtsinne. Vienna: C. Gerold's Sohn; 1878.
Dacey DM, Petersen MR. Functional architecture of cone signal pathways in the primate retina. In: Gegenfurtner KR, Sharpe L, editors. Color vision: from genes to perception. Cambridge: Cambridge University Press; 1999. p. 181–202.
Dacey DM, Lee BB. The 'blue-on' opponent pathways in primate retina originates from a distinct bistratified ganglion cell. Nature. 1994;367(6465):731–5.
Brouwer LEJ. Life, art, and mysticism. Notre Dame J Form Log. 1996;37:389–429.
Goethe JW. Theory of colours. Cambridge: MIT Press; 1970.
Schopenhauer A. On vision and colors. Oxford: Berg Publishers; 1994.
Landau LD, Lifschitz EM. The classical theory of fields. Course of theoretical physics. vol. 2. Oxford: Pergamon; 1971.
Berthier M, Garcin V, Prencipe N, Provenzi E. The relativity of color perception. Preprint. 2020.
Shen Z. Lectures on Finsler geometry. Singapore: World Scientific; 2001.
Cartan E. Les espaces de Finsler. Paris: Hermann; 1934.
MacAdam DL. Visual sensitivities to color differences in daylight. J Opt Soc Am. 1942;32(5):247–74.
Wootters W. A measure of the distinguishability of quantum states. In: Meystre P, Scully MO, editors. Quantum optics, experimental gravity, and measurement theory. NATO advanced science institutes series. vol. 94. Boston: Springer; 1983.
Wootters W. Statistical distance and Hilbert space. Phys Rev D. 1981;23(2):357–62.
Fairchild MD. Color appearance models. 1st ed. Reading: Addison-Wesley; 1998.
Breuer HP, Petruccione F. The theory of open quantum systems. London: Oxford University Press; 2002.
Ruskai MB, Szarek S, Werner E. An analysis of completely positive trace-preserving maps on \(\mathcal {M}_{2}\). Linear Algebra Appl. 2002;347:159–87.
Provenzi E. Color space axioms and fiber bundles. Sens Transducers J. 2017;25(8):43–6.
Yearsley JM, Busemeyer JR. Quantum cognition and decision theories: a tutorial. J Math Psychol. 2016;74:99–116.
Conte E. On the possibility that we think in a quantum probabilistic manner. Neuroquantology. 2010;8(4):S3–S47.
Connes A, Rovelli C. Von Neumann algebra automorphisms and time-thermodynamics relation in generally covariant quantum theories. Class Quantum Gravity. 1994;11(12):2899–917.
The author would like to thank C. Choquet, E. Provenzi, and the anonymous reviewers for helpful comments on earlier versions of this article.
This work was partially supported by the French CNRS through the project GOALVISION and by the French region Nouvelle Aquitaine through the project RECOGER.
Laboratoire MIA, La Rochelle Université, Avenue Albert Einstein, BP 33060, 17031, La Rochelle, France
M. Berthier
All authors read and approved the final manuscript.
Correspondence to M. Berthier.
The author declares that he has no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Berthier, M. Geometry of color perception. Part 2: perceived colors from real quantum states and Hering's rebit. J. Math. Neurosc. 10, 14 (2020). https://doi.org/10.1186/s13408-020-00092-x
Accepted: 28 August 2020
Quantum states
Jordan algebras
Quantum rebit
Color representation and cortical-inspired image processing
I Confused by nonlocal models and relativity
Thread starter SteveF
bohmian mechanics nonlocality relativity
Mentz114
DrChinese said:
How about this quote: "Of course there's no change of photons 1&4 due to manipulations on photons 2&3." See anything like that?
I have represented my position (which is standard, generally accepted physics) as best I can at this point. In fact, I have probably repeated myself too much already. I will leave it to the readers to make their own judgments and assessments of these fabulous and groundbreaking experiments. Consequently, I will bow out of this thread and thank everyone for their time and comments - especially for helping me sharpen my LaTeX from terrible to poor.
Well, thank you. You have stated your position coherently; it is just the superluminal influences that are hard to digest.
For me it is very instructive to see the QED approach.
There the solution is to expand the state that exists after the two pairs 1,2 and 3,4 are created (creation operations on the vacuum) using Bell state bases. The result shows that the 'final' state is already in there by preparation. When the entangled pairs are created, the result is inevitable and does not require any further interaction. The symmetries ensure that.
I used to find that unsatisfactory but now it seems harmonious.
But I am easily led by equations.
vanhees71
Indeed, that's what I'm saying. There's no change of photons 1&4 due to manipulations on photons 2&3. All you do is to select a partial ensemble based on measurements on photons 2&3, and due to the preparation in the initial state, describing the full ensemble the so selected partial ensemble is described by the state ket $$|\psi_{14}^- \rangle \otimes |\psi_{23}^- \rangle.$$
That's indeed standard, generally accepted physics.
Mentz114 said:
Well, there's no other way to understand what's going on in the physical world than math and equations (most importantly group theory, which is the guiding principle; all the rest is just calculational technique).
Lord Jestocost
vanhees71 said:
The only thing she does is to select a subensemble....
One doesn't select a subensemble; a measurement "creates" - so to speak - a subensemble. Conclusions about the post-measurement situation cannot simply be carried over to the pre-measurement situation.
Tendex
PeterDonis said:
The states @vanhees71 has been writing down also include all four photons.
Yes, and he has also been arguing about the selected subsystem of photons 1&4 that DrChinese keeps saying is affected in a "spooky" way by what happens to 2&3, so they are both singling out 1&4 but the relevant thing is doing it with the right math.
Now the only way to mathematically speak coherently about the subsystem 1&4 other than in the Hilbert space of the 4 photons that everyone agrees about is using a reduced density matrix for 1&4, and this reduced density matrix is not affected by 2&3, as vanhees says and DrChinese rejects.
My hunch is that DrChinese is clinging to the 4 photon entangled pure state when he says the subsystem 1&4 is affected by 2&3 but by definition as mentioned before the entangled 4 photon state is inseparable from preparation to measurement and it doesn't matter that the particular photons 1&4 never interacted before, they are prepared as entangled in the 4 photon entangled state.
DrChinese himself said in a previous post:
"Weinberg goes on to say as follows: "Of course, according to present ideas a measurement in one subsystem does change the state vector for a distant isolated subsystem - it just doesn't change the density matrix." Which is what I assert: A measurement on Alice's particle changes the physical state of Bob's remote entangled particle (what is observed)."
apparently mixing the mathematical sense of state vector, which refers to the entangled total system (whose superposition makes it mathematically ill-defined what a subsystem's state vector would be, so Weinberg is clearly taking an ordinary-language license there), with some "physical state" that can't be attributed to 1&4 alone except by using the right math of the density matrix, which Weinberg, of course correctly, says in the same quote doesn't change.
Lord Jestocost said:
See my last post. It can be conveyed as long as we are talking about the prepared entangled state in the system of 4 photons, which is inseparable by definition. Knowledge of the measurement effectively subselects information by applying quantum probabilities.
Of course, one is selecting a subensemble. That's the whole purpose of the experiment. As the authors (Jennewein, Zeilinger et al) have demonstrated, it doesn't matter who measures their photon pair first. They can even measure it at space-like separation of the measurement events. It also doesn't matter, when Victor chooses the subensemble from the measurement protocol.
The facts are the following:
The total ensemble (which Victor of course can also reconstruct from the measurement protocol) is described by the initial state and is unaffected by the measurements of A on photons 2&3 and B on photons 1&4. Due to the entanglement between photons 1&2 as well as 3&4 (i.e., the maximal Bell-state entanglement, beyond the minimal entanglement of bosons that are not in the same single-particle state), the subensemble chosen based on the measurement of A leads to entanglement between B's photons 1&4 in the subensemble. There's nowhere any causal effect of A's measurement on B's photons. This becomes particularly clear from the experiment in the Jennewein et al paper, where A's measurement is done clearly after B's measurement. All one does is to choose a subensemble.
Another very nice paper I stumbled over is
https://link.springer.com/article/10.1007/s10701-019-00278-8
However one has to read this paper such that it is about two spin-entangled distinguishable particles (say an electron and positron from ##\pi^0 \rightarrow \mathrm{e}^+ + \mathrm{e}^-## decay).
The paper is available on arXiv
[1905.03137] The 'Delayed Choice Quantum Eraser' Neither Erases Nor Delays
PeterDonis
Tendex said:
this reduced density matrix is not affected by 2&3, as vanhees says and DrChinese rejects.
I don't think @DrChinese rejects that the reduced density matrix of 1&4 is not changed by the measurement that is made on 2&3. He agreed with me that the measurement on 2&3 doesn't affect the probabilities for measurements on 1&4, which is the same thing.
I think what @DrChinese is rejecting is the claim that the measurement on 2&3 does not change photons 1&4, i.e., he does not agree that "no change to the reduced density matrix" means "no change at all". The measurement on 2&3 certainly changes the wave function of the system as a whole, and 1&4 are part of the system as a whole. It also changes the entanglement relationships--before the measurement, 1&2 are entangled and 3&4 are entangled; after the measurement, 1&4 are entangled and 2&3 are entangled. That change shows up in the correlations between the appropriate pairs of photons violating the Bell inequalities.
At least some of the disagreement might be just interpretation. When @vanhees71 talks about post-selection of a particular sub-ensemble, he is taking an ensemble interpretation viewpoint, in which QM does not describe individual photons or individual experimental runs, but only ensembles of them. When @DrChinese talks about the entanglement relations changing in an individual run of the experiment, he is taking the opposite viewpoint, that QM describes individual quantum systems and measurements.
I'll also repeat once more my suggestion to not use vague ordinary language but instead look at the math. The math is unambiguous, but there are many different ways of describing in ordinary language what the math is telling us, and those ways often seem like they contradict each other, even though they're all describing the same math and the same predictions for experimental results. To me that just means we should stop arguing about the ordinary language since it's superfluous anyway. But not everyone takes that view.
Bell's inequalities and their violations are all about probabilities and ensembles so I don't know what else he might be arguing mathematically by the "spooky" change to 1&4.
I think what @DrChinese is rejecting is the claim that the measurement on 2&3 does not change photons 1&4, i.e., he does not agree that "no change to the reduced density matrix" means "no change at all".
I haven't seen anyone saying " no change at all". There is consensus about change in the 1&2&3&4 entangled state.
The measurement on 2&3 certainly changes the wave function of the system as a whole, and 1&4 are part of the system as a whole. It also changes the entanglement relationships--before the measurement, 1&2 are entangled and 3&4 are entangled; after the measurement, 1&4 are entangled and 2&3 are entangled. That change shows up in the correlations between the appropriate pairs of photons violating the Bell inequalities.
As you stress this is all change in the system as a whole, the product Hilbert space ##\mathcal H = \mathcal H_1 \otimes \mathcal H_2 \otimes \mathcal H_3 \otimes \mathcal H_4##, the system where we can have 4 entangled photons and different subsystems selected depending on the measurement setup.
Again, this is a fine viewpoint, but Bell's violations are about ensembles; an individual run is not relevant here except for selling pop-sci books about QM mysteries.
Bell's inequalities and their violations are all about probabilities and ensembles
Bell's particular inequalities are, yes. But a lot of work in this area has been done since Bell, including finding cases where QM predicts results that are impossible according to local hidden variable models, so that the latter models can be ruled out with 100% certainty by observing such an "impossible" result, with no probabilities or statistics or ensembles required. For example, the GHZ experiment:
GHZ experiment - Wikipedia
I haven't seen anyone saying " no change at all".
More precisely, this:
There's no change of photons 1&4 due to manipulations on photons 2&3.
Which, mathematically, refers to the fact that the reduced density matrix of 1&4 does not change. But the wave function does.
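To make this statement concrete, here is a small numerical sketch, added here as an editorial illustration and not taken from the thread, in which the photons' polarisations are modelled as qubits: the unconditional reduced density matrix of photons 1&4 stays maximally mixed, while post-selecting on a ##|\psi^-\rangle## outcome for 2&3 yields an entangled subensemble state for 1&4.

```python
import numpy as np

# Basis states and the singlet (psi-minus) Bell state for two qubits.
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)

# Initial four-photon state: |psi->_{12} (x) |psi->_{34}, qubit order 1,2,3,4.
psi = np.kron(singlet, singlet)

def rho_14(state):
    """Reduced density matrix of qubits 1 and 4 (trace out qubits 2 and 3)."""
    t = state.reshape(2, 2, 2, 2)                     # axes: q1, q2, q3, q4
    return np.einsum('abcd,ebcf->adef', t, t.conj()).reshape(4, 4)

# Unconditional ensemble: photons 1&4 are maximally mixed and uncorrelated.
print(np.round(rho_14(psi), 3))                       # -> identity / 4

# Bell-state measurement on photons 2&3 with outcome |psi->_{23}:
# the projector acts on the middle two qubits of the ordering 1,2,3,4.
proj_23 = np.kron(np.kron(np.eye(2), np.outer(singlet, singlet)), np.eye(2))
post = proj_23 @ psi
post = post / np.linalg.norm(post)                    # renormalised subensemble

# The post-selected subensemble of photons 1&4 is the singlet itself.
print(np.round(rho_14(post), 3))                      # -> |psi-><psi-| on 1&4
```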
I thought they agreed on the wave function.
They do. But @DrChinese thinks that the change in the wave function (and the entanglement relations) is sufficient to say that something changes about photons 1&4 when photons 2&3 get measured, while @vanhees71 does not. At least, that's my understanding of the viewpoints they have been expressing.
The 4-photon space is NOT this product space but it is the subspace spanned by all totally symmetrized (bosonic) product-basis states.
Despite the fact that there are no wave functions for photons, of course nothing done locally on photons 2&3 can change photons 1&4, at least not instantly in a "spooky action at a distance".
Concerning my interpretation, it's indeed the minimal statistical interpretation, including the frequentist interpretation of probabilities: All that's done is to select (or even post-select) a sub-ensemble. The observed correlations (entanglement) between photons 1&4 for this subensemble are implied by the initially prepared four-photon state.
In my opinion there's no other way to interpret QT without violating the one or the other fundamental property of physical theories, particularly Einstein causality in special-relativistic spacetime.
There's no wave function for photons!
What changes is the description of the sub-ensemble based on the selection depending on Alice's specific measurement of the pair 2&3.
Also in classical applications of statistics and probability theory a subensemble usually has some different probability distribution for some of its properties than the full ensemble. There's nothing mysterious in this.
I don't see this, why can't the tensor product be used for the 4-photon system?
Because bosons are bosons. It's a well-established fact of nature.
I fail to see how the boson-fermion distinction is relevant to this discussion, we could be talking about electrons and spin instead of photons and polarization and the situation would be essentially the same.
Sure, but it's important to treat the photons as bosons (or electrons as fermions) particularly when it comes to entanglement.
Ok, I agree it is important to always be precise in math descriptions, but in this particular experiment, does the Fock space instead of the tensor product description of the system add anything to the conceptual issue of swapping and entanglement we've been discussing with DrChinese? I just want to make sure I'm not missing something important.
In this case it doesn't make much of a difference.
Thanks. I have seen such simplified descriptions in notes describing Zeilinger's experiment like in https://www.physik.hu-berlin.de/de/nano/lehre/copy_of_quantenoptik09/Chapter6 in 6.4.3
.......he is taking an ensemble interpretation viewpoint, in which QM does not describe individual photons or individual experimental runs, but only ensembles of them.
...........I'll also repeat once more my suggestion to not use vague ordinary language but instead look at the math. The math is unambiguous, but there are many different ways of describing in ordinary language what the math is telling us......
Regarding the "Ensemble Interpretation", one can clearly speak out how "murky" this interpretation is. See comment https://www.physicsforums.com/threads/confused-by-nonlocal-models-and-relativity.973876/post-6202479 or listen to Maximilian Schlosshauer in "Decoherence, the measurement problem, and interpretations of quantum Mechanics", Section B. 1. Superpositions and ensembles (https://arxiv.org/abs/quant-ph/0312059):
"Put differently, if an ensemble interpretation could be attached to a superposition, the latter would simply represent an ensemble of more fundamentally determined states, and based on the additional knowledge brought about by the results of measurements, we could simply choose a subensemble consisting of the definite pointer state obtained in the measurement. But then, since the time evolution has been strictly deterministic according to the Schrödinger equation, we could backtrack this subensemble in time and thus also specify the initial state more completely ("postselection"), and therefore this state necessarily could not be physically identical to the initially prepared state on the left-hand side of Eq. (2.1)."
I don't understand this criticism of the ensemble interpretation. Of course QED is T symmetric and thus all interactions of photons with charged matter are in principle reversible, but what has this to do with any specific interpretation?
"Confused by nonlocal models and relativity" You must log in or register to reply here.
Facebook Twitter Reddit Pinterest WhatsApp Email Link
Related Threads for: Confused by nonlocal models and relativity
What is nonlocality in relativity ?
I Do nonlocal QM interpretations violate relativity?
Nickyv2423
"Experimental nonlocal and surreal Bohmian trajectories"
StevieTNZ
I Quantum nonlocality in relation to FTL Communication
I Causality and nonlocality
Derek P
Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
A Copenhagen: Restriction on knowledge or restriction on ontology?
Started by Demystifier
Started by SteveF
A Jürg Fröhlich on the deeper meaning of Quantum Mechanics
Started by A. Neumaier
A Realism from locality?
A Is the wavefunction subjective? How?
Started by fluidistic
Search Results: 1 - 10 of 226308 matches for " R Nathan "
Application of ANN and MLR Models on Groundwater Quality Using CWQI at Lawspet, Puducherry in India [PDF]
N. Suresh Nathan, R. Saravanane, T. Sundararajan
Journal of Geoscience and Environment Protection (GEP) , 2017, DOI: 10.4236/gep.2017.53008
Abstract: With respect to groundwater deterioration from human activities, a unique situation of co-disposal of non-engineered Municipal Solid Waste (MSW) dumping and Secondary Wastewater (SWW) disposal on land prevails simultaneously within the same campus at Puducherry in India. Broadly, the objective of the study is to apply and compare Artificial Neural Network (ANN) and Multi Linear Regression (MLR) models on groundwater quality applying the Canadian Water Quality Index (CWQI). In total, 1065 water samples from 68 bore wells were collected for two years on a monthly basis and tested for 17 physio-chemical and bacteriological parameters. However, the study was restricted to the pollution aspects of 10 physio-chemical parameters, such as EC, TDS, TH, HCO3−, Cl−, SO42−, Na+, Ca2+, Mg2+ and K+. As there is wide spatial variation (2 to 3 km radius) with ground elevation (more than 45 m) among the bore wells, it is appropriate to study the groundwater quality using Multivariate Statistical Analysis and ANN. The selected ten parameters were subjected to Hierarchical Cluster Analysis (HCA) and the clustering procedure generated three well-defined clusters. Cluster-wise, the important physio-chemical attributes which were altered by MSW and SWW operations are statistically assessed. The CWQI was evolved with the objective of delivering a mechanism for interpreting the water quality data for all three clusters. The ANOVA test results, viz. F-statistic (F = 134.55) and p-value (p = 0.000 < 0.05), showed that there are significant changes in the average values of CWQI among the three clusters, thereby confirming the formation of clusters due to anthropogenic activities. The CWQI simulation was performed using MLR and ANN models for all three clusters. In total, 1 MLR and 9 ANN models were considered for simulation. Further, the performances of the ten models were compared using R2, RMSE and MAE (quantitative indicators). The analyses of the results revealed that both MLR and ANN models were fairly good in predicting the CWQI in Clusters 1 and 2, with high R2 and low RMSE and MAE values, but in Cluster 3 only the ANN model fared well. Thus this study will be very useful to decision makers in solving water quality problems.
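As a purely illustrative aside, not taken from the paper, the three quantitative indicators mentioned in this abstract can be computed from observed and predicted CWQI values along the following lines; the array values and names are placeholders chosen for this sketch.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

cwqi_observed = np.array([72.4, 65.1, 80.3, 58.9])   # placeholder values
cwqi_predicted = np.array([70.9, 66.7, 78.8, 60.2])  # placeholder values

r2 = r2_score(cwqi_observed, cwqi_predicted)
rmse = np.sqrt(mean_squared_error(cwqi_observed, cwqi_predicted))
mae = mean_absolute_error(cwqi_observed, cwqi_predicted)
print(f"R2={r2:.3f}  RMSE={rmse:.3f}  MAE={mae:.3f}")
```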
Spatial Variability of Ground Water Quality Using HCA, PCA and MANOVA at Lawspet, Puducherry in India [PDF]
Computational Water, Energy, and Environmental Engineering (CWEEE) , 2017, DOI: 10.4236/cweee.2017.63017
Abstract: In ground water quality studies multivariate statistical techniques like Hierarchical Cluster Analysis (HCA), Principal Component Analysis (PCA), Factor Analysis (FA) and Multivariate Analysis of Variance (MANOVA) were employed to evaluate the principal factors and mechanisms governing the spatial variations and to assess source apportionment at Lawspet area in Puducherry, India. PCA/FA has made the first known factor which showed the anthropogenic impact on ground water quality and this dominant factor explained 82.79% of the total variance. The other four factors identified geogenic and hardness components. The distribution of first factor scores portray high loading for EC, TDS, Na+ and Cl− (anthropogenic) in south east and south west parts of the study area, whereas other factor scores depict high loading for HCO3−, Mg2+, Ca2+ and TH (hardness and geogenic) in the north west and south west parts of the study area. K+ and SO42− (geogenic) are dominant in south eastern direction. Further MANOVA showed that there are significant differences between ground water quality parameters. The spatial distribution maps of water quality parameters have rendered a powerful and practical visual tool for defining, interpreting, and distinguishing the anthropogenic, hardness and geogenic factors in the study area. Further the study indicated that multivariate statistical methods have successfully assessed the ground water qualitatively and spatially with a more effective step towards ground water quality management.
Risks identified in the implementation of district clinical specialist teams
R Nathan, P Rautenbach
South African Medical Journal , 2013,
Abstract: The District Clinical Specialist Team (DCST) is a strategy implemented by the South African National Department of Health to strengthen district health systems. An amount of R396 million per annum will be required to fund posts in all 52 districts. During implementation, numerous risks were identified, the major one being the most expensive category of DCST personnel, i.e. Head of Clinical Unit. Similar risks will probably apply to other categories of personnel within the DCST. To achieve the objectives of the DCST strategy, risk reduction strategies need to be promptly applied.
Denmark at the heart of the Euro-Scandinavian axis
Nathan R. Grison
M@ppemonde , 2011,
Abstract: En donnant, en janvier 2011, son accord final en vue de la construction du tunnel du Fehmarn Belt, le gouvernement danois a réaffirmé son souhait de faire du royaume nordique le centre névralgique des échanges entre le continent européen et la Scandinavie…
Visualizing the Past: The Design of a Temporally Enabled Map for Presentation (TEMPO)
Nathan Prestopnik, Alan R. Foley
International Journal of Designs for Learning , 2012,
Abstract: We present a design case for a prototype visualization tool called the Temporally Enabled Map for Presentation (TEMPO). Designed for use in the lecture classroom, TEMPO is an interactive animated map that addressed a common problem in military history: the shortcomings of traditional static (non-interactive, non-animated) maps. Static maps show spatial elements well, but cannot do more than approximate temporal events using multiple views, movement arrows, and the like. TEMPO provides a more complete view of past historical events by showing them from start to finish. In our design case we describe our development process, which included consultation with military history domain experts, classroom observations, application of techniques derived from visualization and Human-Computer Interaction (HCI) literature and theory. Our design case shows how the design of an educational tool can motivate scholarly evaluation, and we describe how some theories were first embraced and then rejected as design circumstances required. Finally, we explore a future direction for TEMPO, tools to support creative interactions with visualizations where students or instructors can learn by visualizing historical events for themselves. A working version of the finished TEMPO artifact is included as an interactive element in this document.
Technology that Touches Lives: Teleconsultation to Benefit Persons with Upper Limb Loss
Lynsay R. Whelan, Nathan Wagner
International Journal of Telerehabilitation , 2011, DOI: 10.5195/ijt.2011.6080
Abstract: While over 1.5 million individuals are living with limb loss in the United States (Ziegler-Graham et al., 2008), only 10% of these individuals have a loss that affects an upper limb. Coincident with the relatively low incidence of upper limb loss, is a shortage of the community-based prosthetic rehabilitation experts that can help prosthetic users to more fully integrate their devices into their daily routines. This article describes how expert prosthetists and occupational therapists at Touch Bionics, a manufacturer of advanced upper limb prosthetic devices, employ Voice over the Internet Protocol (VoIP) videoconferencing software telehealth technologies to engage in remote consultation with users of prosthetic devices and/or their local practitioners. The Touch Bionics staff provide follow-up expertise to local prosthetists, occupational therapists, and other health professionals. Contrasted with prior telephone-based consultations, the video-enabled approach provides enhanced capabilities to benefit persons with upper limb loss. Currently, the opportunities for Touch Bionics occupational therapists to fully engage in patient-based services delivered through telehealth technologies are significantly reduced by their need to obtain and maintain professional licenses in multiple states.
Detection and tracing of the medical radioisotope 131I in the Canberra environment
Gilfillan Nathan R., Timmers Heiko
EPJ Web of Conferences , 2012, DOI: 10.1051/epjconf/20123504002
Abstract: The transport and radioecology of the therapeutical radioisotope 131I has been studied in Canberra, Australia. The isotope has been detected in water samples and its activity quantified via characteristic γ-ray photo peaks. A comparison of measurements on samples from upstream and downstream of the Canberra waste water treatment plant shows that 131I is discharged from the plant outflow into the local Molonglo river. This is consistent with observations in other urban environments. A time-correlation between the measured activities in the outflow and the therapeutical treatment cycle at the hospital identifies the medical treatment as the source of the isotope. Enhanced activity levels of 131I have been measured for fish samples. This may permit conclusions on 131I uptake by the biosphere. Due to the well-defined and intermittent input of 131I into the sewage, the Canberra situation is ideally suited for radioecological studies. Furthermore, the 131I activity may be applied in tracer studies of sewage transport to and through the treatment plant and as an indicator of outflow dilution following discharge to the environment.
Contextual and Perceptual Brain Processes Underlying Moral Cognition: A Quantitative Meta-Analysis of Moral Reasoning and Moral Emotions
Gunes Sevinc, R. Nathan Spreng
PLOS ONE , 2014, DOI: 10.1371/journal.pone.0087427
Abstract: Background and Objectives Human morality has been investigated using a variety of tasks ranging from judgments of hypothetical dilemmas to viewing morally salient stimuli. These experiments have provided insight into neural correlates of moral judgments and emotions, yet these approaches reveal important differences in moral cognition. Moral reasoning tasks require active deliberation while moral emotion tasks involve the perception of stimuli with moral implications. We examined convergent and divergent brain activity associated with these experimental paradigms taking a quantitative meta-analytic approach. Data Source A systematic search of the literature yielded 40 studies. Studies involving explicit decisions in a moral situation were categorized as active (n = 22); studies evoking moral emotions were categorized as passive (n = 18). We conducted a coordinate-based meta-analysis using the Activation Likelihood Estimation to determine reliable patterns of brain activity. Results & Conclusions Results revealed a convergent pattern of reliable brain activity for both task categories in regions of the default network, consistent with the social and contextual information processes supported by this brain network. Active tasks revealed more reliable activity in the temporoparietal junction, angular gyrus and temporal pole. Active tasks demand deliberative reasoning and may disproportionately involve the retrieval of social knowledge from memory, mental state attribution, and construction of the context through associative processes. In contrast, passive tasks reliably engaged regions associated with visual and emotional information processing, including lingual gyrus and the amygdala. A laterality effect was observed in dorsomedial prefrontal cortex, with active tasks engaging the left, and passive tasks engaging the right. While overlapping activity patterns suggest a shared neural network for both tasks, differential activity suggests that processing of moral input is affected by task demands. The results provide novel insight into distinct features of moral cognition, including the generation of moral context through associative processes and the perceptual detection of moral salience.
Extremal Transitions and Five-Dimensional Supersymmetric Field Theories
David R. Morrison, Nathan Seiberg
Physics , 1996, DOI: 10.1016/S0550-3213(96)00592-5
Abstract: We study five-dimensional supersymmetric field theories with one-dimensional Coulomb branch. We extend a previous analysis which led to non-trivial fixed points with $E_n$ symmetry ($E_8$, $E_7$, $E_6$, $E_5=Spin(10)$, $E_4=SU(5)$, $E_3=SU(3)\times SU(2)$, $E_2=SU(2)\times U(1)$ and $E_1=SU(2)$) by finding two new theories: $\tilde E_1$ with $U(1)$ symmetry and $E_0$ with no symmetry. The latter is a non-trivial theory with no relevant operators preserving the super-Poincar\'e symmetry. In terms of string theory these new field theories enable us to describe compactifications of the type I' theory on $S^1/Z_2$ with 16, 17 or 18 background D8-branes. These theories also play a crucial role in compactifications of M-theory on Calabi--Yau spaces, providing physical models for the contractions of del Pezzo surfaces to points (thereby completing the classification of singularities which can occur at codimension one in K\"ahler moduli). The structure of the Higgs branch yields a prediction which unifies the known mathematical facts about del Pezzo transitions in a quite remarkable way.
A model for African trypanosome cell motility and quantitative description of flagellar dynamics
Nathan R. Hutchings, Andrei Ludu
Physics , 2004,
Abstract: A quantitative description of the flagellar dynamics in the procyclic T. brucei is presented in terms of stationary oscillations and traveling waves, using digital video microscopy to quantify the kinematics of trypanosome flagellar waveforms. A theoretical model is built starting from a Bernoulli-Euler flexural-torsional model of an elastic string with internal distribution of force and torque. The dynamics is internally driven by the action of the molecular motors along the string, which is proportional to the local shift and consequently to the local curvature. The model equation is a nonlinear partial differential wave equation of order four, containing nonlinear terms specific to the Korteweg-de Vries (KdV) equation and the modified-KdV equation. For different ranges of parameters we obtained kink-like solitons, breather solitons, and a new class of solutions constructed by smoothly piece-wise connected conic function arcs (e.g. ellipse). The predicted amplitudes and wavelengths are in good agreement with experiments. We also present the hypotheses for a step-wise kinematical model of swimming of the procyclic African trypanosome.
Eigenvalues and Eigenvectors relating to orthogonal basis and diagonal matrices
Find the eigenvalues and eigenvectors of the matrix.
$$A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & -1\\ 0 & -1 & 1 \end{bmatrix}$$
As we have seen in the lectures, these eigenvectors form an orthogonal basis with respect to the standard inner product $\mathbb{C}^3$ . By considering a basis transformation to an orthonormal basis of eigenvectors find a diagonalizing matrix $P$, and hence $B = P^{-1}AP$ where $B$ is diagonal. (Hint: $P^{-1} = P^{T}$ for an orthonormal basis to another.)
I've only got one eigenvalue of $\lambda = 1, -1, 2$ with its corresponding eigenvector. I am not sure where to go from here. Any help would be appreciated! Thank you.
eigenvalues-eigenvectors diagonalization orthogonality orthonormal change-of-basis
gnbosma
To find the eigenvalues you can use the characteristic polynomial: $$det \left( \begin{bmatrix} 1-X & 1 & 0 \\ 1 & 0-X & -1\\ 0 & -1 & 1-X \end{bmatrix}\right)=det \left( \begin{bmatrix} 1-X & 0 & 1-X \\ 1 & -X & -1\\ 0 & -1 & 1-X \end{bmatrix}\right)=(1-X)det \left( \begin{bmatrix} 1 & 0 & 1 \\ 1 & -X & -1\\ 0 & -1 & 1-X \end{bmatrix}\right)=(1-X)det \left( \begin{bmatrix} 1 & 0 & 0 \\ 1 & -X & -2\\ 0 & -1 & 1-X \end{bmatrix}\right)=(1-X)(-X(1-X)-2)=(1-X)(X^2-X-2)=-(X-1)(X+1)(X-2)$$
So the eigenvalues are $-1,1,2$.
Can you find the eigenvectors from there?
For $-1$: let $x \in \mathbb{C}^3$ be such that $$Ax=-x\iff A\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=\begin{bmatrix} -x_1 \\ -x_2 \\ -x_3 \end{bmatrix} \iff \begin{bmatrix} x_1+x_2 \\ x_1-x_3 \\ -x_2+x_3 \end{bmatrix}=\begin{bmatrix} -x_1 \\ -x_2 \\ -x_3 \end{bmatrix}$$
So $x_2=-2x_1$ and $x_3=-x_1$. So the eigenvector associated to $-1$ is $\begin{bmatrix} 1 \\ -2 \\ -1 \end{bmatrix}$.
Do the same for the two other eigenvalues ($1$ then $2$), concatenate the vectors you obtain in the order they were found (first column is the vector associated with $-1$), and you obtain $P$ such that $A=PDP^{-1}$, where $D=\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 0 & 2 \end{bmatrix}$.
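As a quick numerical double-check, added here for illustration rather than as part of the original answer, one can let numpy do the diagonalisation; since $A$ is symmetric, `np.linalg.eigh` already returns an orthonormal set of eigenvectors, which is the orthonormal $P$ with $P^{-1}=P^{T}$ that the question asks for (by hand, one would simply normalise each eigenvector, e.g. $(1,-2,-1)/\sqrt{6}$ for $\lambda=-1$).

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, -1.0],
              [0.0, -1.0, 1.0]])

# eigh returns the eigenvalues in ascending order (-1, 1, 2) together with
# an orthonormal set of eigenvectors as the columns of P (A is symmetric).
eigenvalues, P = np.linalg.eigh(A)
B = P.T @ A @ P                      # P^{-1} = P^T for an orthonormal basis

print(np.round(eigenvalues, 10))     # [-1.  1.  2.]
print(np.round(B, 10))               # diag(-1, 1, 2)
print(np.round(P.T @ P, 10))         # identity, confirming orthonormality
```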
Jennifer
Yes. I found an error in my work for the eigenvalues so I have those values as well. I'm not sure how to find the diagonalizing matrix P. – gnbosma Apr 21 '16 at 20:10
You need to find all the eigenvectors and by concatenating them you obtain $P$. I will do it in a minute :). – Jennifer Apr 21 '16 at 20:13
By Adam Hayes
Reviewed by Eric Estevez
Fact checked by Skylar Clarine
What Is the Debt Ratio?
The term debt ratio refers to a financial ratio that measures the extent of a company's leverage. The debt ratio is defined as the ratio of total debt to total assets, expressed as a decimal or percentage. It can be interpreted as the proportion of a company's assets that is financed by debt. A ratio greater than 1 shows that the company has more debt than assets, i.e., more liabilities than assets. A high ratio indicates that a company may be at risk of defaulting on its loans if interest rates suddenly rise. A ratio below 1 means that a greater portion of a company's assets is funded by equity.
A debt ratio measures the amount of leverage used by a company in terms of total debt to total assets.
This ratio varies widely across industries, such that capital-intensive businesses tend to have much higher debt ratios than others.
A company's debt ratio can be calculated by dividing total debt by total assets.
A debt ratio of greater than 1.0 or 100% means a company has more debt than assets while a debt ratio of less than 100% indicates that a company has more assets than debt.
Some sources consider the debt ratio to be total liabilities divided by total assets.
Understanding Debt Ratios
As noted above, a company's debt ratio is a measure of the extent of its financial leverage. This ratio varies widely across industries. Capital-intensive businesses, such as utilities and pipelines tend to have much higher debt ratios than others like the technology sector.
The formula for calculating a company's debt ratio is:
$$\text{Debt ratio} = \frac{\text{Total debt}}{\text{Total assets}}$$
So if a company has total assets of $100 million and total debt of $30 million, its debt ratio is 0.3 or 30%. Is this company in a better financial situation than one with a debt ratio of 40%? The answer depends on the industry.
A debt ratio of 30% may be too high for an industry with volatile cash flows, in which most businesses take on little debt. A company with a high debt ratio relative to its peers would probably find it expensive to borrow and could find itself in a crunch if circumstances change. Conversely, a debt level of 40% may be easily manageable for a company in a sector such as utilities, where cash flows are stable and higher debt ratios are the norm.
A debt ratio greater than 1.0 (100%) tells you that a company has more debt than assets. Meanwhile, a debt ratio of less than 100% indicates that a company has more assets than debt. Used in conjunction with other measures of financial health, the debt ratio can help investors determine a company's risk level.
The fracking industry experienced tough times beginning in the summer of 2014 due to high levels of debt and plummeting energy prices.
Some sources consider the debt ratio to be total liabilities divided by total assets. This reflects a certain ambiguity between the terms debt and liabilities that depends on the circumstance. The debt-to-equity ratio, for example, is closely related to and more common than the debt ratio, instead, using total liabilities as the numerator.
Financial data providers calculate it using only long-term and short-term debt (including current portions of long-term debt), excluding liabilities such as accounts payable, negative goodwill, and others.
In the consumer lending and mortgages business, two common debt ratios used to assess a borrower's ability to repay a loan or mortgage are the gross debt service ratio and the total debt service ratio.
The gross debt ratio is defined as the ratio of monthly housing costs (including mortgage payments, home insurance, and property costs) to monthly income, while the total debt service ratio is the ratio of monthly housing costs plus other debt such as car payments and credit card borrowings to monthly income. Acceptable levels of the total debt service ratio range from the mid-30s to the low-40s in percentage terms.
The higher the debt ratio, the more leveraged a company is, implying greater financial risk. At the same time, leverage is an important tool that companies use to grow, and many businesses find sustainable uses for debt.
Debt Ratio vs. Long-Term Debt to Asset Ratio
While the total debt to total assets ratio includes all debts, the long-term debt to assets ratio only takes into account long-term debts. The debt ratio (total debt to assets) measure takes into account both long-term debts, such as mortgages and securities, and current or short-term debts such as rent, utilities, and loans maturing in less than 12 months.
Both ratios, however, encompass all of a business's assets, including tangible assets such as equipment and inventory and intangible assets such as accounts receivables. Because the total debt to assets ratio includes more of a company's liabilities, this number is almost always higher than a company's long-term debt to assets ratio.
Examples of the Debt Ratio
Let's look at a few examples from different industries to contextualize the debt ratio.
Starbucks (SBUX) listed $0 in short-term and current portion of long-term debt on its balance sheet for the fiscal year ended Oct. 1, 2017, and $3.93 billion in long-term debt. The company's total assets were $14.37 billion. This gives us a debt ratio of $3.93 billion ÷ $14.37 billion = 0.2734, or 27.34%.
To assess whether this is high, we should consider the capital expenditures that go into opening a Starbucks, including leasing commercial space, renovating it to fit a certain layout, and purchasing expensive specialty equipment, much of which is used infrequently. The company must also hire and train employees in an industry with exceptionally high employee turnover, and adhere to food safety regulations for its more than 27,000 locations in 75 countries (as of 2017).
Perhaps 27% isn't so bad after all when you consider that the industry average was about 65% in 2017. The result is that Starbucks has an easy time borrowing money—creditors trust that it is in a solid financial position and can be expected to pay them back in full.
What about a technology company? For the fiscal year ended Dec. 31, 2016, Meta (FB), formerly Facebook, reported:
Short-term and current portion of long-term debt as $280 million
Long-term debt as $5.77 billion
Total assets as $64.96 billion
Using these figures, Meta's debt ratio can be calculated as ($280 million + $5.77 billion) ÷ $64.96 billion = 0.093, or 9.3%. The company does not borrow from the corporate bond market. It has an easy enough time raising capital through stock.
Arch Coal
Now let's look at a basic materials company. For the fiscal year ended Dec. 31, 2016, St. Louis-based miner Arch Coal (ARCH) posted short-term and current portions of long-term debt of $11 million, long-term debt of $351.84 million, and total assets of $2.14 billion.
Coal mining is extremely capital-intensive, so the industry is forgiving of leverage: The average debt ratio was 61% in 2016. Even in this cohort, Arch Coal's debt ratio of ($11 million + $351.84 million) ÷ $2.14 billion = 16.95% is well below average.
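The arithmetic in these three examples is easy to reproduce in a few lines of code; the following is an illustrative sketch using the figures reported above (the function name and data layout are choices made here).

```python
def debt_ratio(short_term_debt, long_term_debt, total_assets):
    """Total debt divided by total assets, expressed as a fraction."""
    return (short_term_debt + long_term_debt) / total_assets

# Figures in billions of dollars, as reported in the examples above.
companies = {
    "Starbucks (FY2017)": (0.0, 3.93, 14.37),
    "Meta (FY2016)": (0.28, 5.77, 64.96),
    "Arch Coal (FY2016)": (0.011, 0.35184, 2.14),
}

for name, figures in companies.items():
    print(f"{name}: {debt_ratio(*figures):.1%}")
# Starbucks ~27.3%, Meta ~9.3%, Arch Coal ~17.0%
```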
What Are Some Common Debt Ratios?
All debt ratios analyze a company's relative debt position. Common debt ratios include debt-to-equity, debt-to-assets, long-term debt-to-assets, and leverage and gearing ratios.
What Is a Good Debt Ratio?
What counts as a good debt ratio will depend on the nature of the business and its industry. Generally speaking, a debt-to-equity or debt-to-assets ratio below 1.0 would be seen as relatively safe, whereas ratios of 2.0 or higher would be considered risky. Some industries, such as banking, are known for having much higher debt-to-equity ratios than others.
What Does a Debt-to-Equity Ratio of 1.5 Indicate?
A debt-to-equity ratio of 1.5 would indicate that the company in question has $1.50 of debt for every $1 of equity. To illustrate, suppose the company had assets of $2 million and liabilities of $1.2 million. Since equity is equal to assets minus liabilities, the company's equity would be $800,000. Its debt-to-equity ratio would therefore be $1.2 million divided by $800,000, or 1.5.
Can a Debt Ratio Be Negative?
If a company has a negative debt ratio, this would mean that the company has negative shareholder equity. In other words, the company's liabilities outnumber its assets. In most cases, this is considered a very risky sign, indicating that the company may be at risk of bankruptcy.
Accounting Tools. "Debt ratios." Accessed Nov. 2, 2021.
CFI. "Debt to Asset Ratio." Accessed Nov. 2, 2021.
Debitoor. "Debt ratio." Accessed Nov. 2, 2021.
Federal Reserve Bank of St. Louis. "Crude Oil Prices: West Texas Intermediate (WTI) - Cushing, Oklahoma." Accessed Nov. 2, 2021.
FRED Economic Data. "Household Debt Service Payments as a Percent of Disposable Personal Income." Accessed Nov. 2, 2021.
Consumer Financial Protection Bureau. "What is a debt-to-income ratio? Why is the 43% debt-to-income ratio important?" Accessed Nov. 2, 2021.
Starbucks. "Fiscal 2017 Annual Report," Page 24. Accessed Nov. 2, 2021.
Starbucks. "Fiscal 2017 Annual Report," Pages 6–7. Accessed Nov. 2, 2021.
ReadyRatios. "Eating and Drinking Places: Average Industry Financial Ratios for U.S. Listed Companies." Accessed Nov. 2, 2021.
Facebook. "Annual Report 2016," Page 58. Accessed Nov. 2, 2021.
Facebook. "2019 Annual Report," Page 93. Accessed Nov. 2, 2021.
Arch Coal. "2016 Annual Report," Page 53. Accessed Nov. 2, 2021.
ReadyRatios. "Coal Mining: Average Industry Financial Ratios for U.S. Listed Companies." Accessed Nov. 2, 2021.
Cambridge Insitu Limited
The relationship between simple strain and true strain
Techref Number: CITR1002
Robert Whittle
Simple (or unit) strain is the change in length over the original length, so that for a pressuremeter measuring radius it can be expressed as
$$\xi _s=\frac{r_i - r_o}{r_o}\tag{1}$$
where $\xi _s$ is simple strain
$r_i$ is the current radius of the cavity
$r_o$ is the original radius of the cavity
From equation [1] it follows that
$$\frac{r_i}{r_o}=1+\xi _s\tag{2}$$
True (natural, or logarithmic) strain is defined as the sum of each incremental increase in radius divided by the current radius, so
$$\begin{matrix} \xi _t &=& \int _{r_o}^{r_i}(1/r)\mathit{dr}\\ \\ &=& \left[\ln (r)\right]_{r_o}^{r_i}\\ \\ &=& \ln (r_i)-\ln (r_o)\\ \\ &=& \ln (r_i/r_o) \end{matrix}\tag{3}$$
where $\xi _t$ is true strain
Substituting equation [2] into [3] gives
$$\xi _t=\ln (1+\xi _s)\tag{4}$$
It is well known that for instruments which measure the radius of the cavity the following expression can be used to derive estimates for shear modulus from the test curve whenever the response from the ground is elastic:
$$G=\frac 1 2\left(\frac{r_i}{r_o}\right)\left(\frac{\mathit{dP}}{d\xi _c}\right)\tag{5}$$
where $G$ is the shear modulus
$P$ is the change in pressure
$\xi _c$ is the cavity strain, which for a radius-measuring instrument is the simple strain $\xi _s$
This is sometimes expressed in a simplified form as
$$G=\frac 1 2\left(\frac{\mathit{dP}}{d\xi _c}\right)\tag{6}$$
but this approximation can only be justified for very small strains.
The multiplier $\frac{r_i}{r_o}$ has the effect of converting an expression in simple strain to one in terms of true strain, as the following argument shows:
Differentiate equation [4] with respect to $\xi _s$
$$\begin{matrix}\frac{d\xi _t}{d\xi _s} &=& \frac 1{1+\xi _s}\\ \\ \therefore d\xi _s &=& d\xi _t(1+\xi _s)\end{matrix}\tag{7}$$
Substitute equations [2] and [7] into [5] to give
$$G=\frac 1 2(1+\xi _s)\left(\frac{\mathit{dP}}{d\xi _t(1+\xi _s)}\right)$$
which simplifies to
$$G=\frac 1 2\left(\frac{\mathit{dP}}{d\xi _t}\right)$$
Hence the simplified version of the shear modulus expression shown in equation [6] is good for all strains as long as true strain is being used. Plotting true strain rather than simple strain makes it easier to compare modulus parameters taken from rebound cycles at different cavity strains, and makes it easier to compare rebound cycles between instruments which strain the soil to different magnitudes.
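As a rough numerical illustration of this point (a minimal sketch, not part of the original note; the pressure and strain values are invented example data for an elastic rebound):

```python
import numpy as np

# Invented example data for an elastic rebound: cavity pressure (kPa)
# recorded at a few values of simple (cavity) strain.
simple_strain = np.array([0.010, 0.012, 0.014, 0.016, 0.018])
pressure_kpa = np.array([400.0, 520.0, 640.0, 760.0, 880.0])

true_strain = np.log(1.0 + simple_strain)          # equation [4]

# Shear modulus from the slope of the rebound, using true strain:
G_true = 0.5 * np.polyfit(true_strain, pressure_kpa, 1)[0]

# Small-strain approximation using simple strain (equation [6] with xi_c = xi_s):
G_simple = 0.5 * np.polyfit(simple_strain, pressure_kpa, 1)[0]

print(f"G (true strain):   {G_true:.0f} kPa")
print(f"G (simple strain): {G_simple:.0f} kPa")
```

At these small strains the two estimates are close; the gap widens as the cavity strain grows, which is why the full multiplier in equation [5] (or, equivalently, plotting against true strain) is preferred.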
Website Content - © Copyright 1972 - 2022 Cambridge Insitu Limited
|
CommonCrawl
|
Explainable Machine Learning in Credit Risk Management
Niklas Bussmann1,
Paolo Giudici ORCID: orcid.org/0000-0002-4198-01271,
Dimitri Marinelli2 &
Jochen Papenbrock3
Computational Economics volume 57, pages 203–216 (2021)
The paper proposes an explainable Artificial Intelligence model that can be used in credit risk management and, in particular, in measuring the risks that arise when credit is borrowed employing peer to peer lending platforms. The model applies correlation networks to Shapley values so that Artificial Intelligence predictions are grouped according to the similarity in the underlying explanations. The empirical analysis of 15,000 small and medium companies asking for credit reveals that both risky and not risky borrowers can be grouped according to a set of similar financial characteristics, which can be employed to explain their credit score and, therefore, to predict their future behaviour.
Black box Artificial Intelligence (AI) is not suitable in regulated financial services. To overcome this problem, Explainable AI models, which provide details or reasons to make the functioning of AI clear or easy to understand, are necessary.
To develop such models, we first need to understand what "Explainable" means. Recently, some important institutional definitions have been provided. For example, Bracke et al. (2019) state that "explanations can answer different kinds of questions about a model's operation depending on the stakeholder they are addressed to", while Croxson et al. (2019) take 'interpretability' to mean that an interested stakeholder can comprehend the main drivers of a model-driven decision.
FSB (2017) suggests that "lack of interpretability and auditability of AI and Machine Learning (ML) methods could become a macro-level risk"; Croxson et al. (2019) also establish that "in some cases, the law itself may dictate a degree of explainability."
The European GDPR EU (2016) regulation states that "the existence of automated decision-making should carry meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject." Under the GDPR regulation, the data subject is therefore, under certain circumstances, entitled to receive meaningful information about the logic of automated decision-making.
Finally, the European Commission High-Level Expert Group on AI presented the Ethics Guidelines for Trustworthy Artificial Intelligence in April 2019. Such guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. Among them, three relate to the concept of "eXplainable Artificial Intelligence" (XAI) and are the following.
Human agency and oversight: decisions must be informed, and there must be a human-in-the-loop oversight.
Transparency: AI systems and their decisions should be explained in a manner adapted to the concerned stakeholder. Humans need to be aware that they are interacting with an AI system.
Accountability: AI systems should develop mechanisms for responsibility and accountability, auditability, assessment of algorithms, data and design processes.
Following the need to explain AI models, stated by legislators and regulators of different countries, many established and startup companies have started to embrace Explainable AI models. In addition, more and more people are searching information about what "Explainable Artificial Intelligence" means.
In this respect, Fig. 1 represents the evolution of Google searches for explainable AI related terms.
From a mathematical viewpoint, it is well known that "simple" statistical learning models, such as linear and logistic regression models, provide a high interpretability but, possibly, a limited predictive accuracy. On the other hand, "complex" machine learning models, such as neural networks and tree models, provide a high predictive accuracy at the expense of a limited interpretability.
To solve this trade-off, we propose to boost machine learning models, that are highly accurate, with a novel methodology, that can explain their predictive output. Our proposed methodology acts in the post processing phase of the analysis, rather than in the preprocessing part. It is agnostic (technologically neutral) as it is applied to the predictive output, regardless of which model generated it: a linear regression, a classification tree or a neural network model.
The machine learning procedure proposed in the paper processes the outcomes of any other arbitrary machine learning model. It provides more insight, control and transparency to a trained, potentially black box machine learning model. It utilises a model-agnostic method aiming at identifying the decision-making criteria of an AI system in the form of variable importance (individual input variable contributions).
A key concept of our model is the Shapley value decomposition of a model, a pay-off concept from cooperative game theory. To the best of our knowledge this is the only explainable AI approach rooted in an economic foundation. It offers a breakdown of variable contributions so that every data point (e.g. a credit or loan customer in a portfolio) is not only represented by input features (the input of the machine learning model) but also by variable contributions to the prediction of the trained machine learning model.
More precisely, our proposed methodology is based on the combination of network analysis with Shapley values [see Lundberg and Lee (2017), Joseph (2019), and references therein]. Shapley values were originally introduced by Shapley (1953) as a solution concept in cooperative game theory. They correspond to the average of the marginal contributions of the players associated with all their possible orders. The advantage of Shapley values, over alternative XAI models, is that they can be exploited to measure the contribution of each explanatory variable for each point prediction of a machine learning model, regardless of the underlying model itself [see, e.g. Lundberg and Lee (2017)]. In other words, Shapley based XAI models combine generality of application (they are model agnostic) with the personalisation of their results (they can explain any single point prediction).
Our original contribution is to improve Shapley values, improving the interpretation of the predictive output of a machine learning model by means of correlation network models. To exemplify our proposal, we consider one area of the financial industry in which Artificial Intelligence methods are increasingly being applied: credit risk management [see for instance the review by Giudici (2018)].
Correlation networks, also known as similarity networks, have been introduced by Mantegna and Stanley (1999) to show how time series of asset prices can be clustered in groups on the basis of their correlation matrix. Correlation patterns between companies can similarly be extracted from cross-sectional features, based on balance sheet data, and they can be used in credit risk modelling. To account for such similarities we can rely on centrality measures, following Giudici et al. (2019), who have shown that the inclusion of centrality measures in credit scoring models does improve their predictive utility. Here we propose a different use of similarity networks. Instead of applying network models in a pre-processing phase, as in Giudici et al. (2019), who extract from them additional features to be included in a statistical learning model, we use them in a post-processing phase, to interpret the predictive output from a highly performing machine learning model. In this way we achieve both predictive accuracy and explainability.
We apply our proposed method to predict the credit risk of a large sample of small and medium enterprises. The obtained empirical evidence shows that, while improving the predictive accuracy with respect to a standard logistic regression model, we also improve the interpretability (explainability) of the results.
The rest of the paper is organized as follows: Sect. 2 introduces the proposed methodology. Section 3 shows the results of the analysis in the credit risk context. Section 4 concludes and presents possible future research developments.
Statistical Learning of Credit Risk
Credit risk models are usually employed to estimate the expected financial loss that a credit institution (such as a bank or a peer-to-peer lender) suffers, if a borrower defaults to pay back a loan. The most important component of a credit risk model is the probability of default, which is usually estimated statistically employing credit scoring models.
Borrowers could be individuals, companies, or other credit institutions. Here we focus, without loss of generality, on small and medium enterprises, whose financial data are publicly available in the form of yearly balance sheets.
For each company, n, define a response variable \(Y_{n}\) to indicate whether it has defaulted on its loans or not, i.e. \(Y_{n}=1\) if company defaults, \(Y_{n}=0\) otherwise. And let \(X_{n}\) indicate a vector of explanatory variables. Credit scoring models assume that the response variable \(Y_{n}\) may be affected ("caused") by the explanatory variables \(X_{n}\).
The most commonly employed model of credit scoring is the logistic regression model. It assumes that
$$\begin{aligned} ln\left( \frac{p_{n}}{1-p_{n}}\right) =\alpha +\sum _{j=1}^{J}\beta _{j}x_{nj} \end{aligned}$$
where \(p_{n}\) is the probability of default for company n; \({\mathbf {x}}_{n}=(x_{n,1},\ldots ,x_{n,J})\) is a J-dimensional vector containing the values that the J explanatory variables assume for company n; the parameter \(\alpha \) represents an intercept; \(\beta _{j}\) is the jth regression coefficient.
Once the parameters \(\alpha \) and \(\beta _{j}\) are estimated from the available data, the probability of default can be obtained by inverting the logistic regression model:
$$\begin{aligned} p_{n}=\left( 1+exp\left( -\alpha -\sum _{j=1}^{J}\beta _{j}x_{nj}\right) \right) ^{-1} \end{aligned}$$
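A minimal sketch of this scoring step (not the authors' implementation), assuming scikit-learn and invented stand-in data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch: fit a logistic credit scoring model and recover the
# estimated default probabilities p_n = 1 / (1 + exp(-(alpha + beta'x_n))).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # stand-in for balance-sheet ratios
y = rng.binomial(1, 0.11, size=1000)      # stand-in default indicator (~11% defaults)

model = LogisticRegression(max_iter=1000).fit(X, y)
alpha, beta = model.intercept_[0], model.coef_[0]

eta = alpha + X @ beta                    # linear predictor
p_default = 1.0 / (1.0 + np.exp(-eta))    # inverted logit
assert np.allclose(p_default, model.predict_proba(X)[:, 1])
```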
Machine Learning of Credit Risk
Alternatively, credit risk can be measured with Machine Learning (ML) models, able to extract non-linear relations among the financial information contained in the balance sheets. In a standard data science life cycle, models are chosen to optimise the predictive accuracy. In highly regulated sectors, like finance or medicine, models should be chosen balancing accuracy with explainability (Murdoch et al. 2019). We improve this choice by selecting models based on their predictive accuracy and then employing, a posteriori, an algorithm that achieves explainability. This does not limit the choice of the best performing models.
To exemplify our approach we consider, without loss of generality, the Extreme Gradient Boost model, one of the most popular and fast machine learning algorithms [see e.g. Chen and Guestrin (2016)].
Extreme Gradient Boosting (XGBoost) is a supervised model based on the combination of tree models with Gradient Boosting. Gradient Boosting is an optimisation technique able to support different learning tasks, such as classification, ranking and prediction. A tree model is a supervised classification model that searches for the partition of the explanatory variables that best classify a response (supervisor) variable. Extreme Gradient Boosting improves tree models strengthening their classification performance, as shown by Chen and Guestrin (2016). The same authors also show that XGBoost is faster than tree model algorithms.
In practice, a tree classification algorithm is applied successively to "training" samples of the data set. In each iteration, a sample of observations is drawn from the available data, using sampling weights which change over time, weighting more the observations with the worst fit. Once a sequence of trees is fit, and classifications made, a weighted majority vote is taken. For a more detailed description of the algorithm see, for instance (Friedman et al. 2000).
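A minimal sketch of such a model (the hyperparameters are illustrative, not the authors'), assuming the xgboost Python package, the stand-in arrays X and y from the previous sketch, and the 80%/20% train/test split described in the next sections:

```python
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Illustrative sketch: train an XGBoost classifier for default prediction.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)       # 80% train / 20% test

clf = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)

p_hat = clf.predict_proba(X_test)[:, 1]         # predicted default probabilities
```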
Learning Model Comparison
Once a default probability estimation model is chosen, it should be measured in terms of predictive accuracy, and compared with other models, so to select the best one. The most common approach to measure predictive accuracy of credit scoring models is to randomly split the available data in two parts: a "train" and a "test" set; build the model using data in the train set, and compare the predictions the model obtains on the test set, \(\hat{Y_n}\), with the actual values of \(Y_n\).
To obtain \(\hat{Y_n}\) the estimated default probability is rounded into a "default" or "non default", depending on whether a threshold is passed or not. For a given threshold T, one can then count the frequency of the four possible outputs, namely: False Positives (FP): companies predicted to default, that do not; True Positives (TP): companies predicted to default, which do; False Negatives (FN): companies predicted not to default, which do; True Negatives (TN): companies predicted not to default, which do not.
The misclassification rate of a model can be computed as:
$$\begin{aligned} \frac{FP+FN}{TP+TN+FP+FN} \end{aligned}$$
and it characterizes the proportion of wrong predictions among the total number of cases.
The misclassification rate depends on the chosen threshold and it is not, therefore, a generally agreed measure of predictive accuracy. A common practice is to use the Receiver Operating Characteristic (ROC) curve, which plots the true positive rate (TPR) on the Y axis against the false positive rate (FPR) on the X axis, for a range of threshold values (usually percentile values). FPR and TPR are then calculated as follows:
$$\begin{aligned} FPR&= \frac{FP}{FP+TN} \end{aligned}$$
$$\begin{aligned} TPR&= \frac{TP}{TP+FN} \end{aligned}$$
The ideal ROC curve coincides with the Y axis, a situation which cannot be realistically achieved. The best model will be the one closest to it. The ROC curve is usually summarised with the Area Under the ROC curve value (AUROC), a number between 0 and 1. The higher the AUROC, the better the model.
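A sketch of these accuracy measures (assuming scikit-learn and continuing from the test-set probabilities p_hat of the previous sketch):

```python
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve

# Threshold-based confusion matrix for one chosen threshold T.
threshold = 0.5
y_pred = (p_hat >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

misclassification_rate = (fp + fn) / (tp + tn + fp + fn)
fpr_at_T = fp / (fp + tn)
tpr_at_T = tp / (tp + fn)

# Full ROC curve and its summary, the Area Under the ROC curve (AUROC).
fpr, tpr, _ = roc_curve(y_test, p_hat)
auroc = roc_auc_score(y_test, p_hat)
print(f"misclassification: {misclassification_rate:.3f}, AUROC: {auroc:.3f}")
```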
Explaining Model Predictions
We now explain how to exploit the information contained in the explanatory variables to localise and cluster the position of each individual (company) in the sample. This information, coupled with the predicted default probabilities, allows a very insightful explanation of the determinant of each individual's creditworthiness. In our specific context, information on the explanatory variables is derived from the financial statements of borrowing companies, collected in a vector \({\mathbf {x}}_{n}\), representing the financial composition of the balance sheet of institution n.
We propose to calculate the Shapley value associated with each company. In this way we provide an agnostic tool that can interpret in a technologically neutral way the output from a highly accurate machine learning model. As suggested in Joseph (2019), the Shapley values of a model can be used as a tool to transfer predictive inferences into a linear space, opening a wide possibility of applying to them a variety of multivariate statistical methods.
We develop our Shapley approach using the SHAP computational framework of Lundberg and Lee (2017), which allows Shapley values to be estimated by expressing predictions as linear combinations of binary variables that describe whether each single variable is included in the model or not.
More formally, the explanation model \(g(z')\) for the prediction f(x) is constructed by an additive feature attribution method, which decomposes the prediction into a linear function of the binary variables \(z' \in \{0,1\}^M\) and the quantities \(\phi _i \in {\mathbb {R}}\):
$$\begin{aligned} g(z') = \phi _0 + \sum _{i = 1}^M \phi _i z_i'. \end{aligned}$$
In other terms, \(g(z')\approx f(h_x (z'))\) is a local approximation of the prediction, where the local function \(h_x\) maps the simplified variables back into the original input space, with \(h_x (x')=x\) and \(z'\approx x'\), and M is the number of the selected input variables.
Indeed, Lundberg and Lee (2017) prove that the only additive feature attribution method that satisfies the properties of local accuracy, missingness and consistency is obtained by attributing to each feature \(x'_i\) an effect \(\phi _i\), called the Shapley value, defined as
$$\begin{aligned} \phi _i(f,x) = \sum _{z' \subseteq x'} \frac{|z'|!(M - |z'| -1)!}{M!} \left[ f_x(z') - f_x(z' {\setminus } i) \right] \end{aligned}$$
where f is the trained model, x the vector of inputs (features), \(x'\) the vector of the M selected input features. The quantity \(f_x(z') - f_x(z' {\setminus } i) \) is the contribution of a variable i and expresses, for each single prediction, the deviation of Shapley values from their mean.
In other words, a Shapley value is the unique quantity with which one can construct an explanatory model that locally and linearly approximates the original model for a specific input x (local accuracy), with the properties that, whenever a feature is locally zero, its Shapley value is zero (missingness), and that if in a second model the contribution of a feature is higher, so is its Shapley value (consistency).
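A minimal sketch of this step, assuming the Python shap package as a stand-in for the implementation used by the authors and continuing from the fitted XGBoost model of the earlier sketch:

```python
import shap

# TreeSHAP explanations for the trained XGBoost model: each row of
# `shap_values` decomposes one predicted score into per-feature contributions
# phi_i around a common baseline phi_0 (the expected value).
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)   # shape: (n_test, n_features)
baseline = explainer.expected_value           # (may be wrapped in a list/array
                                              #  depending on the shap version)
```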
Once Shapley values are calculated, we propose to employ similarity networks, defining a metric that provides the relative distance between companies by applying the Euclidean distance between each pair \(({\mathbf {x}}_{i},{\mathbf {x}}_{j})\) of company predicted vectors, as in Giudici et al. (2019).
We then derive the Minimal Spanning Tree (MST) representation of the companies, employing the correlation network method suggested by Mantegna and Stanley (1999). The MST is a tree without cycles of a complex network, that joins pairs of vertices with the minimum total "distance".
The choice is motivated by the consideration that, to represent all pairwise correlations between N companies in a graph, we need \(N*(N-1)/2\) edges, a number that quickly grows, making the corresponding graph not understandable. The Minimal Spanning Tree simplifies the graph into a tree of \(N-1\) edges, which takes \(N-1\) steps to be completed. At each step, it joins the two companies that are closest, in terms of the Euclidean distance between the corresponding explanatory variables.
In our Shapley value context, the similarity of variable contributions is expressed as a symmetric matrix of dimension n × n, where n is the number of data points in the (train) data set. Each entry of the matrix measures how similar or distant a pair of data points is in terms of variable contributions. The MST representation associates to each point its closest neighbour. To generate the MST we have used the EMST Dual-Tree Boruvka algorithm, and its implementation in the R package "emstreeR".
The same matrix can also be used, in a second step, for a further merging of the nodes, through cluster analysis. This extra step can reveal segmentations of data points with very similar variable contributions, corresponding to similar credit scoring decision making.
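A sketch of the distance, MST, and clustering steps, using SciPy as a stand-in for the EMST Dual-Tree Boruvka algorithm and the R packages mentioned in the text, and continuing from the Shapley values of the previous sketch:

```python
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import fcluster, linkage

# Pairwise Euclidean distances between the companies' Shapley explanations.
D = squareform(pdist(shap_values, metric="euclidean"))

# Minimum Spanning Tree: a sparse matrix with n-1 edges joining closest pairs.
mst = minimum_spanning_tree(D)

# Single-linkage clustering on the same distances (the MST is its backbone),
# cut here into an illustrative number of groups.
Z = linkage(pdist(shap_values, metric="euclidean"), method="single")
clusters = fcluster(Z, t=4, criterion="maxclust")
```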
We test our proposed model on data supplied by a European External Credit Assessment Institution (ECAI) that specializes in credit scoring for P2P platforms focused on SME commercial lending. The data is described by Giudici et al. (2019), to which we refer for further details. In summary, the analysis relies on a dataset composed of official financial information (balance-sheet variables) on 15,045 SMEs, mostly based in Southern Europe, for the year 2015. The information about the status (0 = active, 1 = defaulted) of each company one year later (2016) is also provided. The proportion of defaulted companies within this dataset is 10.9%.
Using this data, Giudici et al. (2019) have constructed logistic regression scoring models that aim at estimating the probability of default of each company, using the available financial data from the balance sheets and, in addition, network centrality measures that are obtained from similarity networks.
Here we aim to improve the predictive performance of the model and, for this purpose, we run an XGBoost tree algorithm [see e.g. Chen and Guestrin (2016)]. To explain the results from the model, typically highly predictive, we employ similarity network models, in a post-processing step. In particular, we employ the cluster dendrogram representation that corresponds to the application of the Minimum Spanning Tree algorithm.
We first split the data in a training set (80%) and a test set (20%), using random sampling without replacement.
We then estimate the XGBoost model on the training set, apply the obtained model to the test set and compare it with the best logistic regression model. The ROC curves of the two models are contained in Fig. 1 below.
Receiver Operating Characteristic (ROC) curves for the logistic credit risk model and for the XGBoost model. In blue, we show the results related to the logistic models while in red we show the results related to the XGBoost model
From Fig. 1 note that the XGBoost clearly improves predictive accuracy. Indeed the comparison of the Area Under the ROC curve (AUROC) for the two models indicate an increase from 0.81 (best logistic regression model) to 0.93 (best XGBoost model).
We then calculate the Shapley value explanations of the companies in the test set, using the values of their explanatory variables. In particular, we use the TreeSHAP method (Lundberg et al. 2020) [see e.g. Murdoch et al. (2019); Molnar (2019)] in combination with XGBoost. The Minimal Spanning Tree (a single linkage cluster) is used to simplify and interpret the structure present among Shapley values. We can also "colour" the MST graph in terms of the associated response variable values: default, not default.
Figures 2 and 3 present the MST representation. While in Fig. 2 company nodes are colored according to the cluster to which they belong, in Fig. 3 they are colored according to their status: not defaulted (grey); defaulted (red).
Minimal Spanning Tree representation of the borrowing companies. Companies are colored according to their cluster of belonging
In Fig. 2, nodes are colored according to the cluster in which they are classified. The figure shows that clusters are quite scattered along the correlation network.
To construct the colored communities in Fig. 2, we used the algorithm implemented in the R package "igraph" that directly optimizes a modularity score. The algorithm is very efficient and easily scales to very large networks (Clauset et al. 2004).
In Fig. 3, nodes are colored in a simpler binary way: whether the corresponding company has defaulted or not.
Minimal Spanning Tree representation of the borrowing companies. Clustering has been performed using the standardized Euclidean distance between institutions. Companies are colored according to their default status: red = defaulted; grey = not defaulted
From Fig. 3 note that defaulted nodes appear grouped together in the MST representation, particularly along the bottom left branch. In general, defaulted institutions occupy precise portions of the network, usually towards the leaves of the tree, and form clusters. This suggests that those companies form communities, characterised by similar predictor variable importances. It also suggests that non-defaulted companies that are close to defaulted ones have a high risk of becoming defaulted as well, since the importance of their predictor variables is very similar to that of the defaulted companies.
To better illustrate the explainability of our results, in Fig. 4 we provide the interpretation of the estimated credit scoring of four companies: two that actually defaulted and two that did not.
Contribution of each explanatory variable to the Shapley's decomposition of four predicted default probabilities, for two defaulted and two non defaulted companies. The more red the color the higher the negative importance, and the more blue the color the higher the positive importance
Figure 4 clearly shows the advantage of our explainable model. It can indicate which variables contribute more to the prediction of default. Not only in general, as is typically done by statistical and machine learning models, but differently and specifically for each company in the test set. Indeed, Fig. 4 clearly shows how the explanations are different ("personalised") for each of the four considered companies.
The most important variables, for the two non defaulted companies (left boxes) regard: profits before taxes plus interests paid, and earnings before income tax and depreciation (EBITDA), which are common to both; trade receivables, for company 1; total assets, for company 2.
Economically, high profitability decreases the probability of default, for both companies; likewise, a high stock of outstanding invoices, not yet paid, or a large stock of assets, helps reduce the same probability.
On the other hand, Fig. 4 shows that the most important variables, for the two defaulted companies (right boxes) concern: total assets, for both companies; shareholders funds plus non current liabilities, for company 3; profits before taxes plus interests paid, for company 4.
In other words, lower total assets coupled, in one case, with limited shareholder funds and, in the other, with low profitability, increase the probability of default of these two companies.
The above results are consistent with previous analyses of the same data: Giudici et al. (2019) select, as most important variables in several models, the return on equity, related to both EBITDA and profit before taxes plus interests paid; the leverage, related to total assets and shareholders' funds; and the solvency ratio, related to trade payables.
We remark that Fig. 4 contains a "local" explanation of the predictive power of the explanatory variables, and this is the most important contribution of Shapley value theory. If we average Shapley values across all observations we get an "overall" or "global" explanation, similar to what is already available in the statistical and machine learning literature. Figure 5 below provides the global explanation in our context: the ten most important explanatory variables, over the whole sample.
Mean contribution of each explanatory variable to the Shapley's decomposition. The more red the color the higher the negative importance, and the more blue the color the higher the positive importance
From Fig. 5 note that total assets to total liabilities (the leverage) is the most important variable, followed by the EBITDA, along with profit before taxes plus interest paid, measures of operational efficiency; and by trade receivables, related to solvency, in line with the previous comments.
Conclusions and Future Research
The need to leverage the high predictive accuracy brought by sophisticated machine learning models, making them interpretable, has motivated us to introduce an agnostic, post-processing methodology, based on correlation network models. The model can explain, from a substantial viewpoint, any single prediction in terms of the Shapley value contribution of each explanatory variables.
For the implementation of our model, we have used TreeSHAP, a consistent and accurate method, available in open-source packages. TreeSHAP is a fast algorithm that can compute SHapley Additive exPlanation for trees in polynomial time instead of the classical exponential runtime. For the xgboost part of our model we have used NVIDIA GPUs to considerably speed up the computations. In this way, the TreeSHAP method can quickly extract the information from the xgboost model.
Our research has important policy implications for policy makers and regulators who aim to protect the consumers of artificial intelligence services. While artificial intelligence effectively improves the convenience and accessibility of financial services, it also triggers new risks. Our research suggests that network-based explainable AI models can effectively advance the understanding of the determinants of financial risks and, specifically, of credit risks. The same models can be applied to forecast the probability of default, which is critical for risk monitoring and prevention.
Future research should extend the proposed methodology to other datasets and, in particular, to imbalanced ones, for which the occurrence of defaults tends to be rare, even more than what is observed for the analysed data. The presence of rare events may inflate the predictive accuracy of such events [as shown in Bracke et al. (2019)]. Indeed, Thomas and Crook (1997) suggest dealing with this problem via oversampling, and it would be interesting to see what this implies in the proposed correlation network Shapley value context.
Bracke, P., Datta, A., Jung, C., & Sen, S. (2019). Machine learning explainability in finance: an application to default risk analysis. Bank of England staff working paper no. 816.
Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 785–794). ACM.
Clauset, A., Newman, M. E., & Moore, C. (2004). Finding community structure in very large networks. Physical review E, 70(6), 066111.
Croxson, K., Bracke, P., & Jung, C. (2019). Explaining why the computer says 'no'. FCA-Insight.
EU. (2016). Regulation (EU) 2016/679—general data protection regulation (GDPR). Official Journal of the European Union.
Friedman, J., Hastie, T., & Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting (with discussion and a rejoinder by the authors). Annals of Statistics, 28(2), 337–407.
FSB. (2017). Artificial intelligence and machine learning in financial services—market developments and financial stability implication. Technical report, Financial Stability Board.
Giudici, P. (2018). Financial data science. Statistics and Probability Letters, 136, 160–164.
Giudici, P., Hadji-Misheva, B., & Spelta, A. (2019). Network based credit risk models. Quality Engineering, 32(2), 1–13.
Joseph, A. (2019). Shapley regressions: a framework for statistical inference on machine learning models. Research report 784, Bank of England.
Lundberg, S., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems 30 (pp. 4765–4774). Curran Associates, Inc.
Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., & Nair, B., et al. (2020). From local explanations to global understanding with explainable ai for trees. Nature machine intelligence, 2(1), 2522–5839.
Mantegna, R. N., & Stanley, H. E. (1999). Introduction to econophysics: Correlations and complexity in finance. Cambridge: Cambridge University Press.
Molnar, C. (2019). Interpretable machine learning: A guide for making black box models explainable.
Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071–22080.
Shapley, L. (1953). A value for n-person games. Contributions to the Theory of Games, 28(2), 307–317.
Thomas, L., & Crook, J. (1997). Credit scoring and its applications. SIAM Monographs.
This research has received funding from the European Union's Horizon 2020 research and innovation program "FIN-TECH: A Financial supervision and Technology compliance training programme" under the Grant Agreement No 825215 (Topic: ICT-35-2018, Type of action: CSA), and from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No.750961. Firamis acknowledges the NVIDIA Inception DACH program for the computational GPU resources. In addition, the Authors thank ModeFinance, a European ECAI, for the data; the partners of the FIN-TECH European project, for useful comments and discussions. The authors also thank the Guest Editor, and two anonymous referees, for the useful comments and suggestions.
Open access funding provided by Università degli Studi di Pavia within the CRUI-CARE Agreement.
University of Pavia, Pavia, Italy
Niklas Bussmann & Paolo Giudici
FinNet-Project, Frankfurt, Germany
Dimitri Marinelli
FIRAMIS, Frankfurt, Germany
Jochen Papenbrock
Correspondence to Paolo Giudici.
Niklas Bussmann, Dimitri Marinelli and Jochen Papenbrock have been, or are, employed by the company FIRAMIS. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The paper is the result of a close collaboration between all four authors. However, JP is the main reference for use case identification, method and process ideation and conception as well as fast and controllable implementation, whereas PG is the main reference for statistical modelling, literature benchmarking and paper writing.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Bussmann, N., Giudici, P., Marinelli, D. et al. Explainable Machine Learning in Credit Risk Management. Comput Econ 57, 203–216 (2021). https://doi.org/10.1007/s10614-020-10042-0
Issue Date: January 2021
Financial technologies
Similarity networks
Effectiveness of mindfulness-based stress reduction on depression, anxiety, and stress of women with the early loss of pregnancy in southeast Iran: a randomized control trial
Masumeh Nasrollahi1,
Masumeh Ghazanfar Pour2,
Atefeh Ahmadi3,
Mogaddameh Mirzaee4 &
Katayoun Alidousti ORCID: orcid.org/0000-0002-1206-49125
Reproductive Health volume 19, Article number: 233 (2022)
The loss of the fetus may cause mental health problems in women. The present study aimed to determine the effect of mindfulness-based stress reduction (MBSR) on anxiety, depression, and stress in women with early pregnancy loss.
This study was performed on 106 women with early pregnancy loss in Shiraz, Iran. The intervention group underwent eight counselling sessions. Pre-test and post-test were performed in both groups with the Depression, Anxiety, and Stress Scale (DASS) 21 questionnaire. Data were analyzed by SPSS 23.
There was a statistically significant difference between the mean scores in the intervention group vs. the control group in terms of anxiety (7.9 ± 1.07 vs. 13.79 ± 5.36, respectively), stress (9.26 ± 1.25 vs. 18.13 ± 7.66, respectively), and depression (7.83 ± 1.05 vs. 16.26 ± 11.06, respectively) (P < 0.0001).
MBSR can be suggested to promote women's mental health.
Women who lose their pregnancies are more at risk than others for mental disorders such as anxiety, depression, and grief. Assessing stress, anxiety, and depression is critical to maintain women's overall health so that timely supportive interventions can be pursued if necessary. Eight MBSR intervention sessions were performed for 53 women with early pregnancy loss. The total intervention for all participants took 4 months. Results showed that the anxiety, depression, and stress of most participants decreased, which suggests that MBSR can be an effective non-pharmacological method to improve mental health.
Vaginal bleeding is a relatively common event that occurs in one-third of cases in the first trimester of pregnancy. It can increase the risk of premature rupture of the membranes and the onset of miscarriage, and it can be a sign of a pathological condition such as ectopic pregnancy, moles, or gestational trophoblastic disease [1]. Spontaneous abortion, moles, and ectopic pregnancy are adverse events that lead to depression and anxiety by affecting the quality of social life [2, 3]. Psychological complications in women with early pregnancy loss include PTSD, anxiety, and depression at rates of 28%, 32%, 16% one month and 38%, 20% and 5% three months later, respectively [4].
Kulathilaka et al. reported a prevalence of 26.6% for depression and 18.6% for depression in spontaneous abortion [5]. About 40% of miscarrying women have been found to be suffering from symptoms of grief shortly after miscarriage, and major depressive disorder has been reported in 10–50% of women after miscarriage [6]. Previous research has shown that 92% of women who experience pregnancy loss require high levels of psychological after-care, but only one-third of them receive this care as more attention is given to their physical needs [7]. Bilardi reported that in Australia, more than half of women (59%) were not offered any information about miscarriage or pregnancy loss support. Although almost all reported they would have liked some form of support, more than half (57%) did not receive follow-up care or emotional support [8].
Immediately after the early termination of pregnancy women experience disturbing dreams and sleep problems [9]. When bleeding causes a miscarriage, it usually causes significant emotional disturbance in marital satisfaction [1].
Assessing stress, anxiety, and depression is critical to maintaining women's overall health so that timely supportive interventions can be pursued if necessary. One type of non-pharmacological intervention is the MBSR program. The mindfulness approach, rooted in religious traditions such as those of Buddhism, is used as a behavioral intervention in clinical problems. Jon Kabat-Zinn first used it to treat chronic pain. Unlike other clinical approaches that emphasize changing unpleasant pressures, mindfulness increases a person's acceptance of undesirable phenomena and thus leads to resilience. MBSR is a counseling method that teaches a person to be present in the moment without worrying about the future or dwelling on past events [10]. A study by Jazayeri et al. showed that MBSR may be suitable for people who do not want conventional therapies. In their study, the effectiveness of MBSR was shown to immediately reduce anxiety, stress, and depression and increase mental well-being after the last intervention [11]. Khajooyee also revealed in her study that MBSR reduced anxiety, depression, and stress in women with unwanted pregnancies [12].
The high prevalence of first trimester bleeding, the need for intervention to reduce related psychological problems, and the newness of the mindfulness counseling method led us to design a study aimed to determine the effect of mindfulness-based stress reduction (MBSR) on anxiety, depression, and stress of women with early loss of pregnancy.
This randomized control trial aimed to determine the effect of MBSR on anxiety, depression, and stress in women who had inevitable first trimester miscarriage in Zeinabieh Hospital in Shiraz in 2020. Patients were recruited during the four months between August and November 2020.
Zeinabieh Hospital in Shiraz is a specialized women's hospital and is a referral center for women from other cities and even neighboring provinces.
The study's objectives were described to all women who came to the study site with bleeding leading to the termination of pregnancy in the first trimester of their pregnancy. If they met the inclusion criteria and were willing to participate in the study, written informed consent was obtained. According to a random allocation table, the participants were placed in the control or intervention group. Randomization was done by the block method. We considered the intervention group as "A" and the control group as "B", and then 27 blocks of four combinations of "A, B" were randomly selected using R statistical software version 3.2.1. A sequence of the letters A and B was produced, and each person who was referred to Zeinabieh Hospital was placed in either the intervention or control group based on the sequence produced.
Inclusion criteria included willingness to participate in the study, planned pregnancy, no history of known mental illness, no use of psychiatric drugs in the two months before the beginning of the study, and no event that would cause anxiety, depression, or stress.
Exclusion criteria included unwillingness to continue participation in the research, occurrence of an unfortunate incident that caused anxiety, stress, or depression during the study, and absence in more than one intervention session.
Based on a previous study [13], and 10% dropout:
$$\alpha =0.05, \quad 1-\beta =0.8$$
$${\sigma }_{1}=5, \quad {\sigma }_{2}=4.04, \quad d = 2.5$$
and the formula below:
$$n=\frac{{\left({z}_{1-\frac{\alpha }{2}}+{z}_{1-\beta }\right)}^{2}({\sigma }_{1}^{2}+{\sigma }_{2}^{2})}{{d}^{2}}$$
fifty-seven people were considered for each intervention and control group.
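Plugging in the stated values ($z_{1-\alpha /2}\approx 1.96$, $z_{1-\beta }\approx 0.84$) gives
$$n=\frac{(1.96+0.84)^{2}\,(5^{2}+4.04^{2})}{2.5^{2}}=\frac{7.84\times 41.32}{6.25}\approx 52,$$
which, increased by roughly 10% for expected dropout, yields the 57 participants per group stated above.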
Demographic characteristics questionnaire collecting information about age, sex, level of education, occupation, blood type, willingness or unwillingness to become pregnant, history of infertility and IVF, number of pregnancies, number of children, history and type of previous deliveries, history of abortion, and hemoglobin level.
The DASS21 questionnaire (Depression, Anxiety, and Stress Scales)
The DASS21 questionnaire is an international and valid questionnaire that includes 21 questions evaluating the three factors of anxiety, depression, and stress. Seven questions are assigned to each of the variables. The items are assessed on a 4-point Likert scale ranging from 0 to 3 (0 = "no" and 3 = "most of the time"). The intensity/frequency ratings represent the degree to which participants have experienced each state in the last week. The subject's total score in this questionnaire is within the range of zero to 63, and higher scores show higher stress, depression, and anxiety.
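A minimal scoring sketch in Python; the item-to-subscale key below is the standard published DASS-21 assignment and is an assumption here, not taken from this paper:

```python
# Standard DASS-21 item-to-subscale key (an assumption, not from this paper).
DEPRESSION_ITEMS = [3, 5, 10, 13, 16, 17, 21]
ANXIETY_ITEMS = [2, 4, 7, 9, 15, 19, 20]
STRESS_ITEMS = [1, 6, 8, 11, 12, 14, 18]

def score_dass21(responses):
    """responses: dict mapping item number (1-21) to a 0-3 rating."""
    scores = {
        "depression": sum(responses[i] for i in DEPRESSION_ITEMS),
        "anxiety": sum(responses[i] for i in ANXIETY_ITEMS),
        "stress": sum(responses[i] for i in STRESS_ITEMS),
    }
    scores["total"] = sum(scores.values())  # total ranges from 0 to 63
    return scores

# Hypothetical respondent rating every item 1:
print(score_dass21({i: 1 for i in range(1, 22)}))  # each subscale 7, total 21
```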
This study was approved by the Ethics Committee of Kerman University of Medical Sciences, Iran (ethics code No. Kmu.ac.ir.1398.217). Written informed consent was obtained as a requirement to enter the study and participants were able to withdraw from the study whenever they wanted. Unique codes were used for each of the participants to ensure information confidentiality.
After obtaining informed written consent, both groups performed a pre-test. Then intervention was performed in eight sessions of 2 h once a week for the intervention group. Interventions were performed in a counseling room that was quietly ventilated with suitable lighting and adequate space for practical exercises. Some daily exercises had to be done at home, and participants were trained to exercise 40 min every day. Participants were recommended to divide the 40 min into shorter sections and spread them throughout the day, integrating the exercises with their everyday activities to increase the awareness of their minds. Homework reminders were sent via text message. The intervention was done by a researcher who had taken a counseling course in midwifery and was also trained in mindfulness. Table 1 describes a summary of the counseling sessions.
Table 1 Summary of counseling sessions based on the MBSR approach for reduction of anxiety, depression, and stress in women with early pregnancy loss
During the consultation period, the control group received routine post-pregnancy care. The two groups completed the post-test questionnaire two weeks after the last session. After the post-test, a pamphlet containing a summary of the counseling sessions was provided to the control group for ethical considerations. During the study, four people in the intervention group and four people in the control group were excluded. In the intervention group, one person was excluded due to re-pregnancy and another person attended the meetings in her place, and three people refused to participate in more than two counseling sessions. In the control group, two people declined to answer the post-test, a third person was excluded because of participating in psychological counseling sessions, and another because of re-pregnancy. Therefore, the original 57 participants in each group were reduced to 53 in both groups.
Chi-square or Fisher tests were used to compare the intervention and control groups in terms of demographic variables and obstetric variables. The Mann–Whitney test was used to compare anxiety, depression, and stress between the intervention and control groups before the intervention. ANOVA test was used to compare these variables after the intervention, due to the significant difference in the "willingness to get pregnant again" variable between the intervention and control groups. Wilcoxon test was used to compare anxiety, depression, and stress before and after the intervention. Effect size was calculated using Cohen's effect size formula.
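A sketch of the main between-group and within-group tests named here, assuming SciPy and invented score vectors (not the study data):

```python
import numpy as np
from scipy import stats

# Invented post-test anxiety scores for the two groups and invented
# pre-test scores for the intervention group (53 women per group).
rng = np.random.default_rng(0)
post_intervention = rng.normal(8, 1, size=53)
post_control = rng.normal(14, 5, size=53)
pre_intervention = rng.normal(14, 5, size=53)

# Between-group comparison (non-parametric, Mann-Whitney U)
u_stat, p_between = stats.mannwhitneyu(post_intervention, post_control,
                                       alternative="two-sided")

# Within-group pre/post comparison (paired, non-parametric Wilcoxon)
w_stat, p_within = stats.wilcoxon(pre_intervention, post_intervention)

# Cohen's d for the between-group difference (pooled standard deviation)
pooled_sd = np.sqrt((post_intervention.var(ddof=1) + post_control.var(ddof=1)) / 2)
cohens_d = (post_control.mean() - post_intervention.mean()) / pooled_sd

print(f"Mann-Whitney p={p_between:.4f}, Wilcoxon p={p_within:.4f}, d={cohens_d:.2f}")
```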
The results showed that the mean age of women in the intervention and control groups was 28.93 ± 5.62 and 29.30 ± 6.32 respectively. Blood type O was seen in 37.7% of the intervention group and 47.3% of the control group. The hemoglobin level of most participants (53.75%) was 11–12 gr/dl. The majority of participants (96.25%) had no history of infertility. The cause of bleeding in most participants in both groups (60.4%) was abortion. The Chi-square and Fisher tests showed no statistically significant difference between the demographic variables and obstetric history of the two groups, and the two groups were homogeneous in the beginning of the study (Table 2).
Table 2 Comparison of distribution of demographic and midwifery variables of intervention and control group participants
The Mann–Whitney test showed that the mean scores of anxiety, depression, and stress in the two intervention and control groups before the MBSR intervention were not statistically significant, i.e., the two groups were homogeneous in these three variables (Table 3).
Table 3 Comparison of anxiety, depression, and stress in the two groups before and after the intervention
Findings indicated that after eight MBSR sessions, the differences in mean scores between the intervention and control groups were statistically significant for anxiety (7.9 ± 1.07 vs. 13.79 ± 5.36), stress (9.26 ± 1.25 vs. 18.13 ± 7.66), and depression (7.83 ± 1.05 vs. 16.26 ± 11.06), which indicates the effectiveness of counseling (P < 0.0001).
By comparing the pre- and post-test of each variable in each group, it was found that in the intervention group, the average score of anxiety decreased from 14.34 ± 5.35 before counseling to 7.9 ± 1.07 after counseling, i.e. counseling was able to reduce the level of anxiety (P < 0.0001). Also, the mean score of depression decreased from 15.11 ± 6.34 before counseling to 7.83 ± 1.05 after counseling (P < 0.0001). Regarding stress, we found that the average stress score decreased from 18.39 ± 5.47 before counseling to 9.26 ± 1.25 after counseling (P < 0.0001).
The total scores of the DASS questionnaire, which included stress, anxiety, and depression, were compared between the two groups. The comparison showed a significant difference between the two groups after intervention (P < 0.0001) (Table 3).
This study aimed to determine the effect of MBSR on anxiety, depression, and stress in women with bleeding leading to termination of pregnancy in the first trimester of pregnancy. The results showed a significant reduction in anxiety in the intervention group after counseling. Previous studies have shown that MBSR can reduce anxiety in pregnant mothers [15]. Also, an interventional study reported the effect of MBSR on reducing the symptoms of social anxiety in the dimensions of fear, avoidance, and physiological distress, which is in line with the findings of the present study. This may be due to the same consultation method used in both studies [16]. By using MBSR, people know themselves better by recognizing their strengths and weaknesses. They learn coping strategies, commitment, and acceptance, helping them move toward their desired goals, accept their mistakes and decisions without judgment, and deal with them promptly by identifying stressful stages and events, ultimately reducing stress and anxiety before they become depressive [14]. In a systematic review, Shi showed that mindfulness can reduce pregnancy-related stress and anxiety [17]. However, the reduction in anxiety in their control group was not consistent with the present study probably because their control group used books to increase information about pregnancy to help manage stress and some of them attended yoga classes.
In the present study, we see a significant reduction in stress after counseling in the intervention group. In one clinical trial, it was found that mindfulness-based stress reduction improves life orientation and reduces perceived stress in the experimental group. According to these results, mindfulness can create positive changes in happiness and well-being by combining vitality and the clear observation of experiences [18]. MBSR can also indirectly reduce stress by improving sleep conditions and sleep quality [19]. A study in Egypt showed the positive effects of mindfulness intervention and women's stress and mindfulness skills on managing the stressful stages of pregnancy [20]. First trimester bleeding and premature miscarriage are also stressful pregnancy events that could be reduced by MBSR. A study conducted in 2018 by Krusche et al. found that MBSR did not affect stress in pregnant women and that there was no significant difference between the intervention and control groups, probably due to the online teaching method [21].
The present study's findings showed that MBSR was able to reduce depression after the termination of pregnancy in the first trimester. A 2019 study in China showed that MBSR was able to reduce pregnancy anxiety and stress but had no effect on depression [22]. However, a happiness-based intervention was able to reduce depression in women with recurrent miscarriages. A happiness training program improves patients' cognitive and emotional states. It allows patients to adopt a more positive attitude towards life events and respond to challenges optimistically by adapting to changing circumstances [23]. People with a conscious mind think about the past, analyze events to let go of bad habits, look at things in a new way, and see events as they are, increasing acceptance and satisfaction in their lives [24].
Pregnancy can be a complex issue for many women and families. If it leads to pregnancy loss, it becomes an awful and irreparable experience that deeply affects their bodies and minds [25]. Sleep disorders and anxiety have been reported by 11.1% of women within the four weeks after the abortion. Psychologic healing after abortion takes five years, which is relatively long [26]. The feelings of loss remaining from previous pregnancies can last for years, and the healing process is complex and different for all women. The resulting anxiety may remain in couples who are planning for another pregnancy. It sometimes leads them to resort to destructive behaviors, which necessitates support and monitoring of couples after abortion. Abortion in women may be associated with increased risk of addiction to nicotine, alcohol, cannabis and other illegal drugs or suicidal thoughts and suicide attempts [27, 28]. Lack of counseling and follow-up leads to post-abortion anxiety disorder and increases and exacerbates mental disorders [26]. In mindfulness, a person learns how to deal with their ineffective and irrational thoughts and negative emotions. A mindful mind protects the person against stressors from psychological traumas by reducing irrational sensitivities. Self-control and continuous flow of awareness, being in the moment, and avoiding judgement increase long-lasting self-confidence and problem-solving skills. Mindfulness breaks the cycle of resistance to change. Self-reported body awareness, self-regulation, and well-being, positive changes in attitudes toward others, enlightened self-view, and development of critical consciousness are among the states and traits associated with mindfulness [29, 30].
Considering that one of the practical and influential factors in improving the mental condition of women during pregnancy and afterward is social support and mindfulness counseling, this new and more practical method can be a basis for further research to be compared with other psychological methods. Social and familial support are different in different societies and cultures, and women are considered to be the guilty party and the cause of abortion in some. In some cultures, spontaneous abortion is considered a punishment for ungratefulness to God. Therefore, studies in communities with different cultures and religious beliefs can provide a better perspective for the provision of supportive counseling services by health care workers [31, 32]. In addition, the history of infertility, repeated abortions, chronic illness of the mother, the type of occupation of the father and mother, and the amount of family income can increase anxiety and stress in subsequent pregnancies [33, 34]. Therefore, studies can be conducted in people with different obstetric histories and demographic characteristics.
One of the limitations of the study was that it was not possible to check the rate and manner of family support of the women under study, and there is a possibility that the intervention group had better social and familial support. Another limitation of the study was that we could not categorize women based on the causes of spontaneous abortion, and women's attitudes may be different depending on the cause of abortion. The study location, which was a public hospital, also caused limitations, because people of higher socio-economic status tend to go to private hospitals to receive care and treatment. Therefore, the study did not include all strata of society, and the results should be generalized with caution.
According to the theoretical foundations and findings of the present study, it is clear that sessions of mindfulness training programs based on stress reduction can reduce anxiety, depression, and stress in women with bleeding leading to termination of pregnancy in the first trimester. Since the loss of pregnancy is an adverse event that leads to depression and anxiety and decreases quality of life, improving these women's mental health should be one of the priorities in the health system.
Data are available from the authors upon reasonable request and with permission of Ethics Committee.
MBSR: Mindfulness-based stress reduction
PTSD: Post-traumatic stress disorder
Bhatu JJ, Prajapati DS. A study of feto-maternal outcome in bleeding per vaginum in first trimester of pregnancy. Int J Reprod Contracept Obstet Gynecol. 2020;9(3):1191–5. https://doi.org/10.18203/2320-1770.ijrcog20200898.
Jarahi L, Zavar A, Neamat SM. Evaluation of depression and related factors in pregnant women referred to urban and rural health centers of Sarakhs. J Midwifery Reprod Health. 2015;3(2):343–8.
Vameghi R, Amir Ali Akbari S, AlaviMajd H, Sajedi F, Sajjadi H. The comparison of socioeconomic status, perceived social support and mental status in women of reproductive age experiencing and not experiencing domestic violence in Iran. J Inj Violence Res. 2018;10(1):35–44. https://doi.org/10.5249/jivr.v10i1.983.
Farren J, Jalmbrant M, Ameye L, Joash K, Mitchell-Jones N, Tapp S, et al. Post-traumatic stress, anxiety and depression following miscarriage or ectopic pregnancy: a prospective cohort study. BMJ Open. 2016;6(11):2016–011864. https://doi.org/10.1136/bmjopen-2016-011864.
Kulathilaka S, Hanwella R, de Silva VA. Depressive disorder and grief following spontaneous abortion. BMC Psychiatry. 2016;16(1):100. https://doi.org/10.1186/s12888-016-0812-y.
Lok IH, Neugebauer R. Psychological morbidity following miscarriage. Best Pract Res Clin Obstet Gynaecol. 2007;21(2):229–47. https://doi.org/10.1016/j.bpobgyn.2006.11.007.
Sanaati F, Mohammad-Alizadeh Charandabi S, Farrokh Eslamlo H, Mirghafourvand M, Alizadeh SF. The effect of lifestyle-based education to women and their husbands on the anxiety and depression during pregnancy: a randomized controlled trial. J Matern Fetal Neonatal Med. 2017;30(7):870–6. https://doi.org/10.1080/14767058.2016.1190821.
Bilardi JE, Sharp G, Payne S, Temple-Smith MJ. The need for improved emotional support: A pilot online survey of Australian women's access to healthcare services and support at the time of miscarriage. Women Birth. 2021;34(4):362–9. https://doi.org/10.1016/j.wombi.2020.06.011.
Gholami A, Ahmadpoor S, Baghban B, Kheirtalab S, Foroozanfar Z. Prevalence of Depression Symptoms and its Effective Factors in Pregnant Women. J Holist Nurs Midwifery. 2016;26(3):65–73.
Salari-Moghaddam S, Ranjbar AR, Fathi-Ashtiani A. Validity and Reliability measurement of the Persian version of anxiety Control Questionnaire. J Clin Psychol. 2018;9(4):33–43. https://doi.org/10.22075/JCP.2018.11010.1073.
Jazaieri H, Goldin PR, Werner K, Ziv M, Gross JJ. A randomized trial of MBSR versus aerobic exercise for social anxiety disorder. J Clin Psychol. 2012;68(7):715–31. https://doi.org/10.1002/jclp.21863.
Nejad FK, Shahraki KA, Nejad PS, Moghaddam NK, Jahani Y, Divsalar P. The influence of mindfulness-based stress reduction (MBSR) on stress, anxiety and depression due to unwanted pregnancy: a randomized clinical trial. J Prev Med Hyg. 2021;62(1):E82–8. https://doi.org/10.15167/2421-4248/jpmh2021.62.1.1691.
Ahmadi L, Bagheri F. The effectiveness of educating mindfulness on anxiety, fear of delivery, pain catastrophizing and selecting caesarian section as the delivery method among nulliparous pregnant women. NPT. 2017;4(1):52–63.
Nasirnejhad F, Poyamanesh J, FathiAgdam G, Jafari A. The effectiveness of mindfulness-based therapy and short-term solution-focused therapy on the resilience and happiness of women with Multiple Sclerosis. Feyz J Kashan Univ Med Sci. 2020;24(5):536–44.
Zarenejad M, Yazdkhasti M, Rahimzadeh M, MehdizadehTourzani Z, Esmaelzadeh-Saeieh S. The effect of mindfulness-based stress reduction on maternal anxiety and self-efficacy: a randomized controlled trial. Brain Behav. 2020;10(4):11. https://doi.org/10.1002/brb3.1561.
Razian S, HeydariNasab L, Shairi MR, Zahrabi S. To investigate the effectiveness of indfulness-based stress reduction program (MBSR) on reducing the symptoms of patients with social anxiety. CPAP. 2015;13(12):37–50.
Shi Z, MacBeth A. The effectiveness of mindfulness-based interventions on maternal perinatal mental health outcomes: a systematic review. Mindfulness. 2017;8(4):823–47. https://doi.org/10.1007/s12671-016-0673-y.
Sanaei H, Mousavi SAM, Moradi A, Parhoon H, Sanaei S. The effectiveness of mindfulness-based stress reduction on self-efficacy, perceived stress and life orientation of women with breast cancer. Thoughts Behav Clin Psychol. 2017;12(44):57–66.
Cox RC, Olatunji BO. A systematic review of sleep disturbance in anxiety and related disorders. J Anxiety Disord. 2016;37:104–29. https://doi.org/10.1016/j.janxdis.2015.12.001.
Eltelt RMH, Mostafa MM. Mindfulness-based intervention program on stress reduction during pregnancy. Am J Nurs Res. 2019;7(3):375–86. https://doi.org/10.12691/ajnr-7-3-19.
Krusche A, Dymond M, Murphy SE, Crane C. Mindfulness for pregnancy: a randomised controlled study of online mindfulness during pregnancy. Midwifery. 2018;65:51–7. https://doi.org/10.1016/j.midw.2018.07.005.
Zhang JY, Cui YX, Zhou YQ, Li YL. Effects of mindfulness-based stress reduction on prenatal stress, anxiety and depression. Psychol Health Med. 2019;24(1):51–8. https://doi.org/10.1080/13548506.2018.1468028.
Elsharkawy NB, Mohamed SM, Awad MH, Ouda MMA. Effect of happiness counseling on depression, anxiety, and stress in women with recurrent miscarriage. Int J Women's Health. 2021;13:287–95. https://doi.org/10.2147/IJWH.S283946.
Farb N, Anderson A, Ravindran A, Hawley L, Irving J, Mancuso E, et al. Prevention of relapse/recurrence in major depressive disorder with either mindfulness-based cognitive therapy or cognitive therapy. J Consult Clin Psychol. 2018;86(2):200–4. https://doi.org/10.1037/ccp0000266.
Khodakarami B, Shobeiri F, Mefakheri B, Soltanian A, Mohagheghi H. The effect of counseling based on Fordyce's pattern of happiness on the anxiety of women with spontaneous abortion. AJNMC. 2019;26(6):377–88.
Hajnasiri H, Behbodimoghddam Z, Ghasemzadeh S, Ranjkesh F, Geranmayeh M. The study of the consultation effect on depression and anxiety after legal abortion. J Nurs Educ. 2016;4(1):64–72.
Pedersen W. Childbirth, abortion and subsequent substance use in young women: a population-based longitudinal study. Addiction. 2007;102(12):1971–8. https://doi.org/10.1111/j.1360-0443.2007.02040.x.
Mota NP, Burnett M, Sareen J. Associations between abortion, mental disorders, and suicidal behaviour in a nationally representative sample. Can J Psychiatry. 2010;55(4):239–47. https://doi.org/10.1177/070674371005500407.
Treves IN, Tello LY, Davidson RJ, et al. The relationship between mindfulness and objective measures of body awareness: a meta-analysis. Sci Rep. 2019;9:17386. https://doi.org/10.1038/s41598-019-53978-6.
Leggett W. Can Mindfulness really change the world? The political character of meditative practices. Crit Policy Stud. 2022;3:261–78. https://doi.org/10.1080/19460171.2021.1932541.
Omar N, Major S, Mohsen M, Al Tamimi H, El Taher F, Kilshaw S. Culpability, blame, and stigma after pregnancy loss in Qatar. BMC Pregnancy Childbirth. 2019;19(1):215. https://doi.org/10.1186/s12884-019-2354-z.
Kilshaw S, Miller D, Al Tamimi H, El-Taher F, Mohsen M, Omar N, Major S, Sole K. Calm vessels: cultural expectations of pregnant women in Qatar. Anthropol Middle East. 2016;2:39–59.
Moradinazar M, Najafi F, Nazar ZM, Hamzeh B, Pasdar Y, Shakiba E. Lifetime prevalence of abortion and risk factors in women: evidence from a cohort study. J Pregnancy. 2020;27:4871494. https://doi.org/10.1155/2020/4871494.
Ribeiro MR, Silva AA, Alves MT, et al. Effects of socioeconomic status and social support on violence against pregnant women: a structural equation modeling analysis. PLoS ONE. 2017;12(1): e0170469. https://doi.org/10.1371/journal.pone.0170469.
The authors appreciate all women who kindly participated in this study. We also appreciate the hospital staffs who cooperated in carrying out this study.
This study did not receive any specific funding from any community funding organization, commercial or non-profit sectors.
Student Research Committee, Department of Midwifery, Razi Faculty of Nursing and Midwifery, Kerman University of Medical Sciences, Kerman, Iran
Masumeh Nasrollahi
Department of Midwifery, Razi Faculty of Nursing and Midwifery, Kerman University of Medical Sciences, Kerman, Iran
Masumeh Ghazanfar Pour
Department of Guidance and Counselling/Medical Practitioner, Nursing Research Center, Razi Faculty of Nursing and Midwifery, Kerman University of Medical Sciences, Kerman, Iran
Atefeh Ahmadi
Department of Biostatistics Modeling in Health Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
Mogaddameh Mirzaee
Department of Midwifery, Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
Katayoun Alidousti
MN and KA designed the work and drafted the manuscript. AA had prepared counselling package. MGP had full access to all of the data and took responsibility for the integrity of the data. MM was responsible for accuracy of the data analysis. All authors read and approved the final manuscript.
Correspondence to Katayoun Alidousti.
This manuscript was derived from a master's thesis in counselling in midwifery (project code No. 97001015) and was approved by the Ethics Committee of Kerman University of Medical Sciences, Iran (ethics code No. Kmu.ac.ir.1398.217) and the Iranian Registry of Clinical Trials (IRCT20151103024866N17). Written informed consent was obtained as a requirement to enter the study and participants were able to withdraw from the study whenever they wanted. At the request of the ethics committee, the study was conducted in accordance with the Declaration of Helsinki and the Committee on Publication Ethics (COPE) guidelines. Unique codes were used for each of the participants to ensure information confidentiality.
The authors declare that there is no conflict of interest in this study.
Nasrollahi, M., Ghazanfar Pour, M., Ahmadi, A. et al. Effectiveness of mindfulness-based stress reduction on depression, anxiety, and stress of women with the early loss of pregnancy in southeast Iran: a randomized control trial. Reprod Health 19, 233 (2022). https://doi.org/10.1186/s12978-022-01543-2
Loss of pregnancy
How to distinguish between the spectrum of an atom in motion and that of a scaled atom?
Galaxies are moving, dragged by the expansion of space. When atoms are in motion, the Doppler effect will shift the spectra of the emitted photons.
The proton-to-electron mass ratio, $\frac{m_p}{m_e}$, has been measured to be constant throughout the history of the universe, but nothing can be said about the constancy of the electron's or proton's masses individually.
The photon energies obey the Sommerfeld relation, $E_{jn}=-m_e\cdot f(j,n,\alpha,c)$, as seen here, and it is evident that a shifted (1) spectrum is obtained with a larger $m_e$.
The spectral lines are not only due to the hydrogen atom; there are other spectral lines due to molecular interactions, electric/magnetic dipoles, etc., and so the electromagnetic interaction, Coulomb's law, $F=\frac{1}{4\pi\varepsilon}\cdot \frac{q_1 q_2}{d^2}$, must be analyzed.
If we scale all masses by the relation $\alpha(t)$ (not related to the above fine structure constant), where $t$ is time (past), and also scale the charges, the distances and time by the same factor, the force takes exactly the same value, $F=\frac{1}{4\pi\varepsilon}\cdot \frac{q_1 q_2\,\alpha^2(t)}{d^2\,\alpha^2(t)}$. Thus the system with and without the transformation behaves in the same manner. The same procedure shows that the universal gravitational law is also insensitive to the scaling of the atom (2). This should not be a complete surprise, because the scaling of masses, charges, time units and distances is routinely used in computer simulations that mimic the universe in a consistent way.
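This cancellation can also be checked numerically. The short sketch below is an added illustration (the charge and distance values are arbitrary, and the snippet is not taken from the linked paper); it evaluates the Coulomb force before and after scaling the charges and the distance by the same factor:

```python
# Illustrative sketch with arbitrary example values: scaling both charges and
# the distance by the same factor alpha leaves the Coulomb force unchanged,
# because the alpha^2 factors in the numerator and denominator cancel.

from math import pi

EPSILON_0 = 8.8541878128e-12   # vacuum permittivity in F/m

def coulomb_force(q1, q2, d):
    """Magnitude of the Coulomb force between charges q1 and q2 at distance d."""
    return q1 * q2 / (4 * pi * EPSILON_0 * d ** 2)

q1 = q2 = 1.602e-19            # two elementary charges
d = 1.0e-10                    # separated by one angstrom
alpha = 0.37                   # an arbitrary scale factor

print(coulomb_force(q1, q2, d))
print(coulomb_force(alpha * q1, alpha * q2, alpha * d))  # same value, up to rounding
```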
The conclusion is that there is no easy way to distinguish between the spectrum of an atom in motion and the one of a scaled atom.
The photons that were emitted by a larger atom in the past are received now without any change in its wavelength.
The mainstream viewpoint, not being aware that scaling the atom gave the same observational results, adopted the receding interpretation a long time ago. As a consequence, the models derived from that interpretation (BB, Inflation, DE, DM, ...) do not obey the general laws of the universe, namely the energy conservation principle.
My viewpoint offers a cause for the space expansion. Most physicists are comfortable with: 'space expands', period, without a known cause.
Physics is about causes and whys, backed by proper references.
I used the most basic laws to show that another viewpoint is inscribed in the laws of nature.
When I graduated as electronic engineer, long time ago, I accepted naively that the fields (electrostatic and gravitational) are sourced by the particles, and expand at $c$ speed, without being drained. But now, older but not senile, I assume without exception, that in the universe there are no 'free lunches' and thus the energy must be transferred from the particles (shrinking) to the fields (growing).
This new viewpoint is formalized and compared to the $\Lambda CDM$ model in a rigorous document, with the derivation of the scale relation $\alpha(t)$ that corresponds to the universe's evolution, at:
A self-similar model of the Universe unveils the nature of dark energy
preceded by older documents at arxiv:
Cosmological Principle and Relativity - Part I
A relativistic time variation of matter/space fits both local and cosmic data
Can someone provide a way to distinguish between the spectrum of an atom in motion and the one of a scaled atom ?
maybe by probing the atom's nucleus and finding the isotope abundance ratios, the D/H evolution and other isotopes, as Mr Webb did with Mg (1998 paper) when searching for the $\alpha$ variability.
PS: To simplify the argument this question skips some details and to avoid misrepresentations a short resumé is annexed.
(1) - after the details are done it is a redshifted spectrum at the reception
(2) - LMTQ units scale equally by $\alpha(t)$, $c,G,\varepsilon$ and the fine structure constant $\alpha$ are invariant.
A short introduction to the self-similar dilation model
The self-similar model arises from an analysis of a fundamental question: is it the space that expands or standard length unit that decreases? That analysis is not an alternative cosmological model derived from some new hypothesis; on the contrary, it does not depend on hypotheses, it has no parameters besides Hubble parameter; it is simply the identification of the phenomenon behind the data, obtained by deduction from consensual observational results. The phenomenon identified is the following: in invariant space, matter is transforming in field in a self-similar way, feeding field expansion. As a consequence of this phenomenon, matter and field evanesce while field expands since the moment when matter appeared. As we use units in which matter is invariant, i.e., units intrinsic to matter, we cannot detect locally the evanescence of matter; but, as a consequence of our decreasing units, we detect an expanding space. So, like the explanation for the apparent rotation of the cosmic bodies, also the explanation for another global cosmic phenomenon (the apparent receding of the cosmic bodies) lays in us.
In units where space is invariant, named Space or S units, matter and field evanesce: bodies decrease in size, the velocity of atomic phenomena increases because light speed is invariant but distances within bounded systems of particles decrease. In standard units, intrinsic to matter, here called Atomic or A units, matter and all its phenomena have invariant properties; however, the distance between non-bounded bodies increases, and the wavelength of distant radiations is red-shifted (they were emitted when atoms were greater). The ratios between Atomic and Space units, represented by M for mass, Q for charge, L for length and T for Time, are the following:
$$M=L=T=Q=\alpha(t)$$
The scaling function $\alpha(t)$ is exponential in S units, as is typical in self-similar phenomena: $$\alpha(t_S)=e^{-H_0 \cdot t_S}$$
Mass and charge decrease exponentially in S units, and the size of atoms decrease at the same ratio, implying that the phenomena runs faster in the inverse ratio; as A units are such that hold invariant the measures of mass, charge, length of bodies (Einstein concept of reference body) and light speed, they vary all with the same ratio in relation to S units. In A units, space appears to expand at the inverse ratio of the decrease of A length unit; the space scale factor in A units, a, is: $$a=1+H_0 \cdot t_A$$
Therefore, space expands linearly in A units.
In what concerns physical laws, those that do not depend on time or space (local laws), like Planck law, are not affected by the evolution of matter/field and hold the same in both systems of units. The laws for static field relate field and its source in the same moment; as both vary at the same ratio, their relation holds invariant and so the laws – the classic laws are valid both in A and S units. The electromagnetic induction laws can be treated as if they were local, therefore holding valid in both systems, and then consider that, in S, the energy of waves decreases with the square of field evanescence (the energy of electromagnetic waves is proportional to the square of the field). In A, due to the relationship between units, the energy decreases at the inverse ratio of space expansion while the wavelength increases proportionally. This decrease of the energy of the waves (or of the photons) is a mystery for an A observer because in A it is supposed to exist energy conservation, which is violated by electromagnetic waves. The phenomenon is observed in the cosmic microwave background (CMB) because the temperature shift of a Planck radiation implies a decrease of the density of the energy of the radiation with the fourth power of the wavelength increase and space expansion only accounts for the third power (this is perhaps the biggest problem of Big Bang models, so big that only seldom is mentioned). Note that induction laws can be treated as time-dependent laws and the evanescence of the radiation be directly obtained from the laws; that introduces an unnecessary formal complication. Finally, there are the conservation laws of mechanics, which require a little more attention.
Although it is not usually mentioned, the independence of physical laws in relation to the inertial motion implies the conservation of the weighted mass summation of velocities and square velocities of the particles of a system of particles. As the A measure of mass is proportional to the weighted mass, these two properties are understood as the conservation of linear momentum and of kinetic energy. Therefore, the correct physical formula of these conservation laws depends on the weighted mass and is valid in both systems of units; for simplicity, the A measure of mass can be used instead of the weighted mass; for instance, for the conservation of square velocity: $$\sum_i{\frac{m_i}{m_{\text{total}}}\cdot v_i^2}=\text{const}\Leftrightarrow \frac{1}{2}\sum_i{m_{\text{A},i}\cdot v_i^2}=\text{const}$$
In the first expression the system of units is not indicated because the equation is valid in both systems; in the second equation, velocity can be measured in A or S (has the same value in both) but the measure of mass has to be the A measure. The second expression is the conservation of kinetic energy in A.
The conservation of the angular momentum is the only law modified in A units because the relevant measure of curvature radius is the S measure; the angular momentum L can be written as
$$\textrm{L}=\mathbf{r}_s\times m_\text{A}\mathbf{v}$$
This is the quantity that holds invariant in an isolated system; the usual A angular momentum, a function of $r_A$,
$$\textrm{L}_\text{A}=\mathbf{r}_\text{A}\times m_\text{A}\mathbf{v},$$ of an isolated system increases with time:
$$\left ( \frac{d\textrm{L}_\text{A}}{dt_\text{A}} \right )_0=H_0\textrm{L}_0$$
This means that the rotation of an isolated rotating body increases with time. For an S observer, this is consequence of the decrease of the size of the body; an A observer can explain this considering that it is consequence of the local expansion of space, that tends to drag the matter.
Note that there is no conflict with the data that support standard physics because this effect is not directly measurable by now; yet, this insignificant alteration has an important consequence: the expansion of planetary orbits.
A note on the value of constants in both systems of units. To change from one system to the other is like any change between two systems of units, but there is a difference: units are time changing one another. As a consequence, a constant in A may not be constant in S. For instance, the Planck constant change in S with the square of the scaling law (a simple image of the physical reason is the following: orbits radii are decreasing in S and also their associated energy; so, the wavelength of emitted radiation is decreasing with orbit radii and also the energy, which implies a Planck constant decreasing with the square of the scaling). Field constants are the exception: they hold constant in both systems of units. The Hubble constant is different: it is constant in S but not in A (in A, the Hubble constant is just the present value of the Hubble parameter). In short, the Hubble constant is the only time constant and is a S constant, field constants are constant is A and S and the other constants, including Planck constant, are relative to atomic phenomena and are constant in A; naturally, the dimensionless relations, like the fine structure constant, are independent of units.
Only the classical fundamental physical laws have been considered because that is what is required, the ground on which the analysis of all the rest of physics can be made, special and general relativity included.
atomic-physics
asked Jun 10, 2014 in Theoretical Physics by HelderVelez (-10 points) [ revision history ]
edited Jun 12, 2014 by HelderVelez
Hi @HelderVelez,
it seems this post is intended to discuss and review a paper of yours by the PhysicsOverflow community, which would make it a perfect submission to our Reviews section, which is dedicated to exactly such applications. If you like, you can ask for such a submission to be created by a superadministrator here. First, the submission would contain not much more than a link to and the important data of the paper, but you can then claim authorship and edit the submission by adding a summary, such as for example the content of the above post. Other people will then write reviews for you in the answers and additional discussions can take place in the comments.
Specifically, concerning your ideas, I think you have to be a bit careful to properly distinguish between (theoretical) transformations of the system (the scaling transformation can be among them), and real dynamical motions of objects.
commented Jun 10, 2014 by Dilaton (5,440 points) [ no revision ]
reshown Jun 10, 2014 by Dilaton
Ok, as it is clear that you are the author, we could probably also simply convert this question into a submission and add the content as the summary of the paper, if you like ?
I'm a friend and a sort of collaborator of the author and I will ask his opinion. I have known this model since 1992, but the formal derivation of the model was completed by 2011. This post is only an application to show the potential of this viewpoint. The linked doc is a fully derived formal new theory without any ad-hoc hypotheses. The author is from outside of academia and it is very difficult to be accepted for review in such conditions. Imo, it would be interesting to submit it for review in this community.
commented Jun 10, 2014 by HelderVelez (-10 points) [ no revision ]
Ok, I realized later that the author was not you ... So I think we could create a submission for the paper of your friend pretty fast and he could claim authorship later, if you like.
Yes, you can proceed with the submission and I will be around and will tell him about this development. Thanks.
@Dilaton: I have a more recent version, extended with the implications for electromagnetism. Maybe I should upload this version, after the approval of my friend. (Coincidentally, its file name is Dilation3.pdf ;) )
The author needs some time, until Monday, to revise the last version and we will focus our attention on that. The first link in the post, the 2011 version, is correct and self-contained, and it has plenty of material to discuss and it is a good option but, if you can wait, we would like to upload the most recent version. I have the new Abstract with me and I can post it here as an 'answer' to anticipate some thoughts.
m_e needs to be smaller, not larger, to make a redshift with the scaling law you give.
commented Jun 11, 2014 by Ron Maimon (7,720 points) [ no revision ]
You spotted it correctly. The energy at emission is greater, but at the reception it is redshifted. I posted a short introduction to the theory to provide more context. The simple post was intended to show that physical laws included the possibility of the scaling of the atom, as I think you understood, and the reading of the linked document is beneficial.
Just scaling down m_e only works to rescale spectra for monatomic atoms, and then only if you ignore nuclear recoil (see V. Kalitvianski's answer). Rescaling m_e doesn't work for molecules, where the rotation and vibration spectra depend on the nuclear mass and moment of inertia. The vibration and rotation spectra of simple molecules are rigidly redshifted, not just the monatomic lines. Whenever you can distinguish observed rotation/vibration spectra for H2, DH, CH4, H2O and O2 in a distant cloud, this is enough to ensure that the proton mass, deuteron mass, deuteron binding force, H_2 binding length, C12 and O16 nucleus mass, and C-H, O-H, O-O bond lengths and angles are preserved relative to the purported new scale, so that if one mass is rescaled, all the masses are rescaled.
In order to rescale universally low-energy non-gravitational physics, you need to do a renormalization group step. It's a very convoluted thing, you need to shift the masses of the electron, the up and down quark, and the strong coupling constant, so that all the nuclei scale down in mass, relative to the gravitational scale. By dimensional analysis the effect is equivalent to making "G" vary with time, and this is proposed by Dirac in the large-numbers hypothesis: http://en.wikipedia.org/wiki/Dirac_large_numbers_hypothesis . The theory you are proposing is a large-number-hypothesis theory, with a particular specific time-dependence designed to reproduce the entire redshift of distant matter.
Doing this has an impact on star stability, since gravity is becoming weaker in the past uniformly, so that the rate of light-production is altered. In the linked Wikipedia article, this historical argument is made by Teller in 1948 to give a bound on large-number variations. This has an effect on the Earth--- the sun can't maintain 5 billion years of constant shining, the Earth's orbit and temperature can't be stable for so long, if the mass of everything keeps going up over cosmological time. The historical bounds on large-number theories should easily exclude your theory, at least if the same parameter changes happen on Earth.
From these constraints, you can conclude that a model of this sort can work only if it has us in a preferred center--- since we don't have variations of parameters over cosmological time, but the distant objects do. This is a violation of the Copernican principle.
The direct observational tests to exclude this idea are more difficult, you could theoretically look at broadening effects on spectral lines, since stationary and moving objects would have different broadening, but this is probably unobservable. But I think just to be sure the idea is not true, the bounds on large-numbers are sufficient.
answered Jun 11, 2014 by Ron Maimon (7,720 points) [ revision history ]
edited Jun 11, 2014 by Ron Maimon
I made the post more clear and included a short introduction to the theory that, I hope, address all of your concerns.
The $G,c,\varepsilon$ are kept constant, as seen in the dimensional equation, when all four base units are scaled. And the same holds for the dimensionless constant $\alpha$. But it is probably a surprise that the Planck constant is dependent on the scale of the atom, as a dimensional analysis makes evident.
This is the first scale theory where all constants are kept invariant, besides BB that has 6 parameters instead of one, and it provides no cause for the expansion (the metric is not a cause, it only describes).
About the centre of the universe: it is exactly the opposite; the presumption that the actual size is unique ('absolute') and that no other incarnation of the atom is viable is the limited view.
The equations can tell us if the atom can scale or not and the data tell us if the atom scaled or not. The study is done and available to be studied.
Dirac ? not at all.
Keeping G fixed and changing m_e and m_u m_d and g_s appropriately to rescale all the spectra is exactly the same thing as keeping m_e, m_u, g_s fixed and making G vary. It's exactly the same thing. There is no difference. The question is only whether you define the unit of mass to be the Planck mass, in which case G is obviously fixed, or the proton or electron mass, in which case you have a varying G theory.
Your response, that the "equations will tell us which is varying" is incorrect, because the equations depend on your choice of units. I have chosen to use units where the atomic spectra are fixed, because this is the easiest way to find a literature refutation of the idea in the theory. You have chosen another way of choosing units, where G is fixed, and then it looks like all the masses are rescaled, and the strong coupling fiddled with.
Do you agree, or not, that my simple procedure proved that "those 2 equations show that scaling of the atom, the way I did, gave a shifted spectra" ? (a)
Scaling the atom was chosen to be defined as: to keep $G,\varepsilon,c$ as constants and to vary the atomic properties of mass/length/charge/time units by the same factor. (b)
It is useless to invoke the actual system of units we use, SI, Planck, etc,.. because all the realizations of the unit's systems are based in the atomic properties, called 'atomic system' through all the paper (c), in contraposition to a Space (S) system. It is useless to invoke a varying $G$ or any other varying scenario besides the one defined in the paper, as above said. Those other variations (page 2- Dirac's LNH,Canuto,Hoyle and Narlikar,Wesson,Meader and Bouvier ),..., are not under scrutiny now.
I expected that you (based on a previous answer of yours) would try to anchor your position on the constancy of the Planck constant (length or mass). By dimensional analysis the Planck constant has dimensions $ML^2T^{-1}$, therefore it scales as $\alpha^2$; thus, although an atomic observer (A) will measure it as a constant, an invariant observer (S) will see its value changing as the atom's size changes. If we, as atomic observers, did not find any contradiction in the laws using $h$, then any other atomic observer inhabiting a different atom incarnation will use the same equations and arrive at the same conclusions.
Not relevant to our discussion, an aside: knowing what I know now, any attempt to bring the total mass of the universe (m_u in your comment?) into any unit definition is extremely abusive, because it depends on a chosen cosmological model and it cannot be directly measured; and the same for any black hole's characteristics, or whatever unrealizable measurement process.
(a) (b) by your previous comments I understood that you agreed, but I can be wrong in the interpretation.
(c) reading the paper is beneficial because it is the only way to know new ways of thinking, and the new conclusions expressed there that no one had expressed before. You cannot presume that it adds nothing to your preconceptions. In that way the time can be used to talk about why/where you disagree with the expressed position.
There is nothing to read. If you rescale all spectra to redshift, you rescale all energies down by a factor, and all atomic length-scales get blown up by the same factor. This is in units where G is fixed. This is equivalent to changing G and keeping the atomic scales fixed, the two are related by a unit transformation. In fact, this is the best way to say what "changing G" means in a way that is unit-invariant--- all atomic scales are dilated relative to a (fixed) Planck scale. I have said it three times, and you keep on saying it's not true. It is true, and it is not hard to see either. I didn't read anything, there's no point, I already understand this.
I'm writing for the benefit of those wanting to read.
What is the role of $G,c,\varepsilon_0$ in the physics laws? They represent how space allows the response of the battle - space versus matter; and space is not null, it has properties; the $\varepsilon_0$ ('... of vacuum') is there to remind us. Let X be a function of matter properties, X=f(M,L,T,Q). Then Ron is saying that the outcome F of the battle (which determines the dynamics), $F=G\cdot X$, is equivalent to 1- fiddling with $space-G$ or 2- fiddling with $matter-X$. I can't see how, either physically or mathematically. Change G and F has a different outcome. What I point out is that there exists another mathematical function of M,L,T,Q that has the same result as f() and, thus, the outcome is not changing.
If you want to know the limitations involved in the scaling of a geophysical model find 'Scaling Laws' inside it, and think: The perfect solution, if at hand, is to scale the atoms.
We measure everything, including the atom, with the atom. There is no external reference.
The mass unit, in all unit systems, is the mass of a determined collection of atoms; put the number.
The mass of the electron is a tiny fraction of a mass unit, put the number.
What is the mass of an atom? It is a tiny fraction of a mass unit; put the number.
This is a circular definition because we measure the atom with itself thus, the atom can be of any size.
Try with the length unit and find it is linked to the atom's radius.
Thus, the atom can be of any size. It is inscribed in our atomic model that the atom can scale.
Thus, while others are thinking of an 'absolute energy scale' and of the absolute size of the atom, I can't find references to that.
The best and the accepted way to say that I'm wrong is to present a reference document on the absolute atom's size. Until then I will keep saying that the atom scales, there is no Dark Energy, no Dark Matter, no BB, and I also have the best argument to maintain that there are no Black Holes (this argument is after Ron's hint, thanks).
Maybe by reading the above linked paper you can say what is wrong inside it, and deny my statements (any reasoning, math, physics). Go on.
answered Jun 15, 2014 by HelderVelez (-10 points) [ no revision ]
The Mathematical Games of Martin Gardner
The great contributions of the man who started popular mathematics
by Matthew Scroggs. Published on 13 March 2016.
It all began in December 1956, when an article about hexaflexagons was published in Scientific American. A hexaflexagon is a hexagonal paper toy which can be folded and then opened out to reveal hidden faces. If you have never made a hexaflexagon, then you should stop reading and make one right now. Once you've done so, you will understand why the article led to a craze in New York; you will probably even create your own mini-craze because you will just need to show it to everyone you know.
The author of the article was, of course, Martin Gardner.
A Christmas flexagon. Make them with our how to make a flexagon guide.
Martin Gardner was born in 1914 and grew up in Tulsa, Oklahoma. He earned a bachelor's degree in philosophy from the University of Chicago and after four years serving in the US Navy during the Second World War, he returned to Chicago and began writing. After a few years working on children's magazines and the occasional article for adults, Gardner was introduced to John Tukey, one of the students who had been involved in the creation of hexaflexagons.
Soon after the impact of the hexaflexagons article became clear, Gardner was asked if he had enough material to maintain a monthly column. This column, Mathematical Games, was written by Gardner every month from January 1957 for 25 years until December 1981. Throughout its run, the column introduced the world to a great number of mathematical ideas, including Penrose tiling, the Game of Life, public key encryption, the art of MC Escher, polyominoes and a matchbox machine learning robot called Menace.
Gardner regularly received topics for the column directly from their inventors. His collaborators included Roger Penrose, Raymond Smullyan, Douglas Hofstadter, John Conway and many, many others. His closeness to researchers allowed him to write about ideas that the general public were previously unaware of and share newly researched ideas with the world.
In 1970, for example, John Conway invented the Game of Life, often simply referred to as Life. A few weeks later, Conway showed the game to Gardner, allowing him to write the first ever article about the now-popular game.
In Life, cells on a square lattice are either alive (black) or dead (white). The status of the cells in the next generation of the game is given by the following three rules:
Any live cell with one or no live neighbours dies of loneliness;
Any live cell with four or more live neighbours dies of overcrowding;
Any dead cell with exactly three live neighbours becomes alive.
For example, here is a starting configuration and its next two generations:
The first three generations of a game of Life
The collection of blocks on the right of this game is called a glider, as it will glide to the right and upwards as the generations advance. If we start Life with a single glider, then the glider will glide across the board forever, always covering five squares: this starting position will not lead to the sad ending where everything is dead. It is not obvious, however, whether there is a starting configuration that will lead the number of occupied squares to increase without bound.
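The three rules translate almost directly into code. The short sketch below (a Python illustration with an arbitrary coordinate convention; it is not taken from the original column) advances a set of live cells by one generation and steps a glider through one full period:

```python
# A minimal implementation of one generation of Life, applying the three
# rules above to a set of live (row, column) cells on an unbounded grid.

from collections import Counter

def step(live):
    """Return the set of cells that are alive in the next generation."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth with exactly three live neighbours; survival with two or three.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider has period four, reappearing one cell further along its diagonal.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same five-cell shape, shifted diagonally by one cell
```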
Gosper's glider gun.
Originally, Conway and Gardner thought that this was impossible, but after the article was published, a reader and mathematician called Bill Gosper discovered the glider gun: a starting arrangement in Life that fires a glider every 30 generations. As each of these gliders will go on to live forever, this starting configuration results in the number of live cells perpetually increasing!
This discovery allowed Conway to prove that any Turing machine can be built within Life: starting arrangements exist that can calculate the digits of pi, solve equations, or do any other calculation a computer is capable of (although very slowly)!
Another concept that made it into Mathematical Games shortly after its discovery was public key cryptography. In mid-1977, mathematicians Ron Rivest, Adi Shamir and Leonard Adleman invented the method of encryption now known as RSA (the initials of their surnames). Here, messages are encoded using two publicly shared numbers, or keys. These numbers and the method used to encrypt messages can be publicly shared as knowing this information does not reveal how to decrypt the message. Rather, decryption of the message requires knowing the prime factors of one of the keys. If this key is the product of two very large prime numbers, then this is a very difficult task.
Encrypting with RSA
To encode the message 809, we will use the public key:
\[s=19 \quad \text{and} \quad r=1769\]
The encoded message is the remainder when the message to the power of $s$ is divided by $r$:
\[809^{19}\equiv \underline{388} \mod 1769.\]
Decrypting with RSA
To decode the message, we need the two prime factors of $r$, (29 and 61). We multiply one less than each of these together:
\[a=(29-1)\times(61-1)=1680.\]
We now need to find a number $t$ such that $st\equiv1\mod a$. Or in other words:
\[19t\equiv 1\mod 1680\]
One solution of this equation is $t=619$ (calculated via the extended Euclidean algorithm).
Then we calculate the remainder when the encoded message to the power of $t$ is divided by $r$:
\[388^{619}\equiv \underline{809} \mod 1769.\]
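The whole worked example can be verified in a few lines of code. The sketch below is an added illustration (Python; the modular-inverse call pow(s, -1, a) assumes Python 3.8 or later) that reproduces the numbers above:

```python
# Reproducing the worked RSA example with built-in modular arithmetic.

r = 1769           # public modulus (29 * 61)
s = 19             # public exponent
message = 809

encoded = pow(message, s, r)   # 809^19 mod 1769
print(encoded)                 # 388, matching the encryption box above

a = (29 - 1) * (61 - 1)        # 1680 -- computing this needs the factors of r
t = pow(s, -1, a)              # 619, because 19 * 619 = 11761 = 7 * 1680 + 1
print(pow(encoded, t, r))      # 809, the original message recovered
```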
Gardner had no education in maths beyond high school, and at times had difficulty understanding the material he was writing about. He believed, however, that this was a strength and not a weakness: his struggle to understand led him to write in a way that other non-mathematicians could follow. This goes a long way to explaining the popularity of his column.
After Gardner finished working on the column, it was continued by Douglas Hofstadter and then AK Dewdney before being passed down to Ian Stewart.
Gardner died in May 2010, leaving behind hundreds of books and articles. There could be no better way to end than with something for you to go away and think about. These of course all come from Martin Gardner's Mathematical Games:
Find a number base other than 10 in which 121 is a perfect square.
Why do mirrors reverse left and right, but not up and down?
Every square of a 5-by-5 chessboard is occupied by a knight. Is it possible for all 25 knights to move simultaneously in such a way that at the finish all cells are still occupied as before?
Banner photo courtesy of Alex Bellos
Matthew Scroggs
Matthew Scroggs is a postdoctoral researcher in the Department of Engineering at the University of Cambridge working on finite and boundary element methods. His website, mscroggs.co.uk, is full of maths.
@mscroggs mscroggs.co.uk + More articles by Matthew
Interpretation of kinetic energy and temperature in thermodynamics
Consider the question above.
Till now I have interpreted kinetic energy as a consequence of temperature in thermodynamics, i.e., the kinetic energy of a gas is directly proportional to its temperature.
Provided this, in the above question, I have three arguments:
If the gases were to stop, shouldn't the temperature of the gas become zero degrees Kelvin?
If they were to stop, should the energy go into the walls of the container?
In the ideal case, the energy is stated to be purely kinetic in books. But shouldn't the energy of the system include nuclear energy, bond energy, etc? In this case, can the kinetic energy appear in these ways?
Which argument is correct?
I misinterpreted 'suddenly stopped', which led to the formulation of argument 1 in my mind. I agree (as pointed out in the @Bob D answer) that it does not apply. Anyway, I am not deleting it.
homework-and-exercises thermodynamics energy temperature kinetic-theory
Deschele Schilder
Tony Stark
$\begingroup$ The question must have been framed in ideal conditions ignoring all those energies $\endgroup$ – Anusha Aug 14 '20 at 5:02
Argument 1
The macroscopic kinetic energy of the 0.03 kg mass of the gas moving as a whole at 100 m/s is 150 J ($1/2 mv^2$). This is not the same as the internal microscopic kinetic energy which is due to the random velocities of the molecules and which determines the temperature. So your argument 1 does not apply.
The author appears to be assuming the energy is absorbed by the gas as discussed below
Only the molecular kinetic energy is involved
As I said above, based on the author's answer the author appears to be making the (perhaps questionable) assumption that all of the macroscopic kinetic energy when the gas stops is absorbed by the gas increasing its internal energy and temperature, as follows:
For an ideal gas the change in its internal energy depends only on its change in temperature. For one mole
$$\Delta U =C_{v}\Delta T$$.
For a diatomic gas
$$C_{v}=\frac{5}{2}R$$
$$\Delta U =\frac {5}{2}R\Delta T$$.
Setting that equal to 150 J
$$\Delta U =\frac {5}{2}R\Delta T=150J$$
$$\Delta T=\frac{60}{R}$$
Bob D
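As a quick numerical check of the figures above (a sketch assuming, as the quoted answer does, one mole of a diatomic ideal gas absorbing all of the bulk kinetic energy):

```python
# Values taken from the problem statement; the "one mole" simplification
# follows the answer above.

R = 8.314                      # gas constant, J/(mol*K)
m, v = 0.03, 100.0             # mass in kg and speed in m/s

kinetic_energy = 0.5 * m * v ** 2   # 150 J of macroscopic kinetic energy
Cv = 2.5 * R                        # molar heat capacity of a diatomic ideal gas, 5R/2

delta_T = kinetic_energy / Cv       # Delta T = 60 / R, roughly 7.2 K
print(kinetic_energy, delta_T)
```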
First, let me state that if the box has stopped the particles in the box won't have stopped moving. This would be the case if they all had zero velocity in the non-moving box, meaning that the velocity of the box would be imparted to all of them (as is the case of particles in a box with non-zero temperature). You have to calculate the kinetic energy of the collection of particles in the box, due to the motion of the box, which seems not too difficult, I guess.
When the box stops moving this extra energy (on top of the kinetic energies of all particles) is gone. Of course, it's absorbed by the gas of diatomic particles (as stated in the cited question), and later on, depending on the temperature outside the box, it's absorbed or not (energy coming in). That's why, in this question, it's best to state that the box is a perfect insulator.
As you have calculated the kinetic energy of all particles due to the moving box (which is the same as in gas at zero Kelvin temperature contained in a moving box), you can calculate the temperature increase of the gas.
All the other forms of energy you mentioned don't contribute. These only contribute to the final mass of the particles and thus to the final kinetic energy. The final mass (and thus kinetic energy), after all the interactions you stated, is the one used in the molecular weight.
Deschele Schilder
Bateman-Horn conjecture
A conjecture on the asymptotic behaviour of a polynomial satisfying the Bunyakovskii condition (cf. also Bunyakovskii conjecture).
Let $ f _ {1} ( x ) \dots f _ {r} ( x ) $ be polynomials (cf. Polynomial) with integer coefficients, of degrees $ d _ {1} \dots d _ {r} \geq 1 $, irreducible (cf. Irreducible polynomial), and with positive leading coefficients. Let
$$ f = f _ {1} \dots f _ {r} $$
be their product.
V. Bunyakovskii considered the case $ r = 1 $ and asked whether $ f ( n ) $ could represent infinitely many prime numbers as $ n $ ranges over the positive integers. An obvious necessary condition is that all coefficients of $ f $ be relatively prime. However, that is not sufficient. He conjectured that, in addition, the following Bunyakovskii condition is sufficient: there is no prime number $ p $ dividing all the values $ f ( n ) $ for the positive integers $ n $ (cf. Bunyakovskii conjecture).
Assuming the Bunyakovskii condition, let
$$ C ( f ) = \prod _ {p \textrm{ a prime } } \left ( 1 - { \frac{1}{p} } \right ) ^ {- r } \left ( 1 - { \frac{N _ {f} ( p ) }{p} } \right ) , $$
where $ N _ {f} ( p ) $ is the number of solutions of the congruence equation $ f ( n ) \equiv 0 ( { \mathop{\rm mod} } p ) $ (for $ p $ prime). The Bateman–Horn conjecture asserts that
$$ \pi _ {f} ( x ) \sim { \frac{C ( f ) }{d _ {1} \dots d _ {r} } } \int\limits _ { 2 } ^ { x } { { \frac{1}{( { \mathop{\rm log} } t ) ^ {r} } } } {dt } , $$
where $ \pi _ {f} ( x ) $ is the number of positive integers $ n \leq x $ such that all $ f _ {1} ( n ) \dots f _ {r} ( n ) $ are prime.
This formula gives the density of primes in an arithmetic progression (cf. Dirichlet theorem), using the polynomial $ f ( x ) = ax + b $. After some computations, it gives the asymptotic behaviour conjectured by G.H. Hardy and J.E. Littlewood for the number of primes representable by the polynomial $ x ^ {2} + 1 $. It also gives the Hardy–Littlewood conjecture for the behaviour of the number of twin primes, by applying the formula to the polynomials $ x $ and $ x + 2 $ (cf. also Twins). Similarly, it implies many other conjectures of Hardy and Littlewood stated in [a2].
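As an illustration, the twin-prime case $ f _ {1} ( x ) = x $, $ f _ {2} ( x ) = x + 2 $ can be examined numerically. The sketch below is illustrative only (the truncation point of the Euler product and the integration step size are arbitrary choices, not part of the original article); it compares the count of twin primes below $ 10 ^ {6} $ with the conjectured main term:

```python
# Numerical check of the Bateman-Horn prediction for f1(x) = x, f2(x) = x + 2.
# For f = x(x + 2), N_f(2) = 1 and N_f(p) = 2 for every odd prime p.

from math import log

def sieve(limit):
    """Boolean list is_prime[0..limit] via the sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return is_prime

def constant_C(prime_cutoff=10 ** 6):
    """Truncated Euler product C(f) for f = x(x + 2); equals 2*C_2, about 1.32."""
    flags = sieve(prime_cutoff)
    c = (1 - 0.5) ** (-2) * (1 - 1 / 2)             # p = 2 term, N_f(2) = 1
    for p in range(3, prime_cutoff + 1):
        if flags[p]:
            c *= (1 - 1 / p) ** (-2) * (1 - 2 / p)  # odd primes, N_f(p) = 2
    return c

def main_term(x, c, steps=10 ** 5):
    """c * integral from 2 to x of dt / (log t)^2, midpoint rule (d1 = d2 = 1)."""
    h = (x - 2) / steps
    return c * h * sum(1 / log(2 + (i + 0.5) * h) ** 2 for i in range(steps))

x = 10 ** 6
flags = sieve(x + 2)
actual = sum(1 for n in range(2, x + 1) if flags[n] and flags[n + 2])
print(actual)                              # 8169 twin-prime pairs below 10^6
print(round(main_term(x, constant_C())))   # close to the count above
```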
See also Distribution of prime numbers.
[a1] P.T. Bateman, R. Horn, "A heuristic formula concerning the distribution of prime numbers" Math. Comp. , 16 (1962) pp. 363–367
[a2] G.H. Hardy, J.E. Littlewood, "Some problems of Partitio Numerorum III" Acta Math. , 44 (1922) pp. 1–70
[a3] H. Halberstam, H.-E. Richert, "Sieve methods" , Acad. Press (1974)
Bateman-Horn conjecture. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bateman-Horn_conjecture&oldid=45994
This article was adapted from an original article by S. Lang (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
TAnnotator: Towards Annotating Programming E-textbooks with Facts and Examples
Akhila Sri Manasa Venigalla ORCID: orcid.org/0000-0003-4356-03341 &
Sridhar Chimalakonda1
Smart Learning Environments volume 10, Article number: 9 (2023)
E-textbooks are one of the commonly used sources to learn programming, in the domain of computer science and engineering. Programming related textbooks provide examples related to syntax, but the number of examples are often limited. Thus, beginners who use e-textbooks often visit other sources on the internet for examples and other information. Adding dynamic information to programming related e-textbooks such as additional information about topics of discussion and real-world programming examples could enhance readers' experience, and improve their learning. Hence, towards enhancing user experience with programming-based e-textbooks, we present TAnnotator, a web-based portal that dynamically annotates computer-programming based e-textbook, The C++ Tutorial, with related programming examples and tooltips. The tooltips aim to provide further knowledge to the readers about various concepts being discussed in textbooks by providing related facts adjacent to the text of the topic in the e-textbook. TAnnotator has been evaluated to assess the usefulness, user experience and complexity using UTAUT2 model through a user survey with 15 volunteers. The results of the survey indicated that TAnnotator was useful in providing additional knowledge on top of the e-textbook.
Use of online resources has increased exponentially with widely available and accessible internet. Novice programmers often rely on various online knowledge sources such as MOOC courses, e-textbooks,Footnote 1 interactive and intelligent online tutors, blogs, crowd sourced platforms and so on to learn programming (Keuning et al., 2021; Venigalla & Chimalakonda, 2020; Vinaja, 2014). Many emerging technologies are being used in the classrooms to improve both teaching and learning (Almiyad et al., 2017; Dutta et al., 2022; Kim et al., 2016; Weber & Brusilovsky, 2016). Augmented reality, virtual reality, collaborative environments are being integrated into teaching to improve information retention by learners (Holstein et al., 2018; Kao & Ruan, 2022; Mystakidis et al., 2021).
Among the available online sources for learning, textbooks are observed to comprise of authentic information (Oates, 2014). E-textbooks are observed to be used on a large scale in the field of computer science and engineering (Fischer et al., 2015; Tang, 2021). This extensive use of textbooks also resulted in various innovative ways to be incorporated into the basic e-textbooks (Hori et al., 2015; Rockinson-Szapkiw et al., 2013; Weng et al., 2018). E-textbooks are now being developed to be interactive, including animation features, simulations and emotions (Chang & Chen, 2022; Lee et al., 2013; Sun et al., 2012; Weng et al., 2018). Dictionaries are also being added to the current e-textbooks to reduce the user efforts of browsing through separate platforms for clear understanding of some words in the textbooks (McGowan et al., 2009; Rockinson-Szapkiw et al., 2013). Several other features including quick search, note making and low printing costs have resulted in e-textbooks having an edge over physical textbooks (Davis & Song, 2020; Sun et al., 2012; Weng et al., 2018). Textbooks are also integrated with content from community Q&A forums to provide information that is deficient in the e-textbook (Ghosh, 2022; Venigalla & Chimalakonda, 2020). Other features to support easy navigation across the e-textbook based on various frequent jump-back behaviours of students using e-textbooks are being explored (Ma et al., 2022).
Programming related e-textbooks have also been integrated with video and audio tutorials, visualizations, code executions and so on (Ericson et al., 2015; Miller & Ranum, 2012; Solcova, 2016; Venigalla & Chimalakonda, 2020; Weber & Brusilovsky, 2016). Code execution platforms that show the run time execution of code snippets presented in the textbook are observed to contribute to better and clear understanding (Solcova, 2016; Weber & Brusilovsky, 2016). Researchers have also attempted to enhance existing information platforms by integrating information available on multiple external sources (Venigalla & Chimalakonda, 2020).
Crowd sourced Question and Answer platforms such as StackOverflow are being integrated with code snippets from crowd sourced code sharing platforms such as GitHub and other code hosting platforms such as JExamples (Reinhardt et al., 2018; Venigalla et al., 2019). GitHub is also being integrated with information from StackOverflow platform (Pletea et al., 2014). Several developers post information and observations about various programming concepts on multiple view-sharing platforms such as blogsFootnote 2 and developer forums.Footnote 3 Integrating e-textbooks with information available on multiple sources could help in learning through different integrated resources, at one place, rather than spending time to explore different resources in finding the desired related information (Almansoori et al., 2021; Seeling, 2016). For example, Almansoori et al. have observed lack of security related discussions in the introductory programming textbooks and suggested augmenting security related information to the contents in the textbook (Almansoori et al., 2021). Patrick Seeling has also observed that integrating online programming textbooks with self-check exercises and feedback has improved the learners' performance (Seeling, 2016). These integrations help users to gain a wider range of knowledge and exposure. The current e-books, are an authentic source of information, however, they contain only static data. Though e-textbooks are not the only learning materials in a course, they could be used as a reference. Moreover, e-textbooks could support enthusiast learners, though not enrolled in a course, in learning concepts of programming.
The existing research on improving e-textbooks mostly focuses on integration with augmented reality, virtual reality, audio and video tutorials and so on (Rockinson-Szapkiw et al., 2013; Sun et al., 2012; Weber & Brusilovsky, 2016). However, most of these enhancements are static in nature, indicating that they are predefined for a given textbook. Also, the code execution platform augmented with e-textbooks implements only the code presented in the textbook. Only interaction and intelligent tutor-based e-textbooks exhibit a scope to present information apart from that available in the textbook. It has also been observed that existing programming language textbooks do not suffice in well explaining the concepts of programming languages, motivating the need to improve these textbooks (Mazumder et al., 2020). However, to the best of our knowledge, except for the intelligent tutor-based textbooks, we are not aware of any research that aims to dynamically integrate an e-textbook with further conceptual information from other sources.
Though DynamiQue (Venigalla & Chimalakonda, 2020) aims to integrate an e-textbook with information from StackOverflow, this information is only in the form of questions and answers, and no programming examples or conceptual information are presented. Researchers have also observed that additional examples help readers construct better mental models, which could further improve their understanding of a topic (Gerjets et al., 2006).
Readers of programming-oriented e-textbooks spend time and effort visiting external sources to explore examples and other additional information about a concept, owing to the limited content in the e-textbook. Hence, we propose TAnnotator to reduce these efforts by dynamically augmenting the e-textbook with examples and tooltips containing additional information extracted from external sources.
We are primarily interested in exploring the following two research questions in this work:
RQ1: What is the possibility of integrating external knowledge into e-textbooks?
RQ2: Is the annotation of additional content to e-textbooks useful from the users' perspective?
Hence, towards integrating external knowledge into e-textbooks, TAnnotator, a web-based portal, aims to annotate programming-based textbooks with various facts about the content being discussed in the textbook. It also augments the e-textbook with code-snippet examples as an add-on to the examples already present in the textbook. Figure 1 presents an example of a fact (Fig. 1B) and a code snippet example (Fig. 1A) displayed on TAnnotator for the topic #include. To analyse the usefulness of this annotation from the users' perspective, we evaluated TAnnotator through a qualitative user survey based on the UTAUT2 model (Rondan-Cataluña et al., 2015) with 15 participants. The results of the survey indicated that TAnnotator was found to be useful and easy to use. The questionnaire and results of the user survey are presented here (Footnote 4).
Sample tooltip with code snippet example in (A) and factual information in (B)
Several technologies have been introduced to improve teaching and learning (Bailey & Zilles, 2019; Benotti et al., 2018; Zavala & Mendoza, 2018). Mumuki has been proposed as a web-based tool to reduce the workload of teachers and to avoid biased evaluations by assessing code snippets submitted by students and providing feedback to teachers about the performance of students (Benotti et al., 2018).
Emotions have been integrated into e-textbooks to reduce the cognitive load on students and support better learning achievements. E-textbooks with emotional design have been observed to help students with learning and retention when compared to e-textbooks without emotional design and to paper textbooks (Chang & Chen, 2022).
Content in e-textbooks is analysed to identify deficient areas that have scope for the inclusion of more information (Ghosh, 2022). The identified deficient content types are explored in community Q&A forums, and relevant information is extracted and integrated into the e-textbook at the deficient area. Such integration was observed to enhance learning interest among students (Ghosh, 2022).
Zavala et al. have proposed automatically generating programming assignments based on a specific template provided by teachers. This idea has been demonstrated by integrating it with an existing programming practice tool (Zavala & Mendoza, 2018). uAssign has been proposed as software that can be embedded into any Learning Management System (Bailey & Zilles, 2019). It aims to generate assignments based on the Unix terminal and also helps teachers understand students' skills with the Unix terminal.
Krusche et al. have proposed ArTemis to support automatic assessment of programming solutions and to instantly provide feedback to students based on their performance, which could constructively improve their programming skills (Krusche & Seitz, 2018).
Improving conceptual learning has become a primary goal of present-day education. A study on different types of collaborative learning indicated better conceptual skills in students who followed feedback- and discussion-based collaborative learning than in those who followed collaborative learning with minimal discussion (Harsley et al., 2017).
Integrating information across various sources also helps learners gain both knowledge and the insights of other developers. StackOverflow has been integrated with definitions of APIs extracted from the JDK and examples from a code hosting platform, JExamples (Venigalla et al., 2019). This integration of information from multiple sources improves users' knowledge of various APIs and also makes users aware of the multiple ways in which a specific API can be used (Venigalla et al., 2019). Reinhardt et al. have integrated information from multiple platforms to provide users with insights on API usage patterns (Reinhardt et al., 2018). They integrated source code available in public repositories on GitHub with code snippets present in questions and answers on StackOverflow. Code snippets that misuse APIs are detected on StackOverflow and are presented with multiple examples from GitHub that show the correct usage of the APIs (Reinhardt et al., 2018).
Also, linking information present on multiple platforms might provide valuable insights into issues present in code snippets. DynamiQue has been developed to integrate e-textbooks with questions and answers present on StackOverflow (Venigalla & Chimalakonda, 2020). Relevant questions were identified based on the topics on the page, extracted using the LDA topic modelling technique. However, no information other than questions and answers from StackOverflow was augmented with the e-textbooks (Venigalla & Chimalakonda, 2020). E-textbooks have also been integrated with programming practice sessions and automatic grading to ease and improve learning in a programming course (Ellis et al., 2019).
Several approaches and tools have aimed to improve the learning of programming concepts by making learning interactive, interesting and simple (Berns et al., 2019; Harsley et al., 2017; Krusche & Seitz, 2018; Weber & Brusilovsky, 2016). Various tools have also been augmented to e-textbooks to ease students' learning (Rockinson-Szapkiw et al., 2013; Sun et al., 2012; Venigalla & Chimalakonda, 2020). Considering the advantages of multiple-source integration, linking e-textbooks with information from multiple sources could be of great advantage to learners (Venigalla & Chimalakonda, 2020). Though there is adequate research on integrating the various forms of information present across multiple programming-based platforms, there is hardly any research that attempts to integrate multi-source conceptual information with textbooks, other than the integration of questions and answers.
Design and development of TAnnotator
TAnnotator augments an e-textbook in the domain of programming with facts and examples from various discussion forums. The current version of TAnnotator is a web portal and augments only one specific e-textbook, The C++ Tutorial, with facts related to some keywords in the textbook content. Each page of the e-book is rendered onto the web portal and facts are displayed adjacent to the text in the e-book, page by page, as shown in Fig. 3. Currently, we demonstrate the idea of augmenting the e-book with facts and examples on six pages of the e-book, which could easily be extended to all pages of the e-textbook.
We used the Flask (Footnote 5) framework to design and implement the web portal of TAnnotator. Implementation of TAnnotator requires identification of useful keywords and rendering of facts and examples for these keywords. Useful keywords can be obtained by identifying frequent words and through topic modelling. In a textbook, the words that occur most frequently and those that occur least frequently may not be useful. For example, the word good might have the highest frequency in a text that describes the usefulness of a data structure, but this does not point to any useful topic. Hence, words with medium frequencies could be identified as useful keywords, because words with high frequencies might sometimes be insignificant (Choi & Park, 2019). Plenty of approaches, such as Latent Semantic Analysis (LSA), Latent Dirichlet Allocation (LDA), and so on, can be used to identify topics in a text through topic modelling. Among these approaches, LDA facilitates the processing of the long texts that occur in textbooks, and hence is used in many studies that involve topic modelling (Onan et al., 2016; Venigalla & Chimalakonda, 2020). For the implementation of TAnnotator, we chose an approach that identifies keywords both through LDA and based on keyword frequency, to ensure identification of the most useful keywords.
The approach followed in development of TAnnotator is displayed in Fig. 2 and presented below.
Step 1: Extract Textbook Content- The content of the textbook is extracted page by page from the '.pdf' format and stored in '.html' format for further processing and rendering. The content in '.pdf' format is converted to '.html' format using the pyPdf (Footnote 6) package.
Step 2: Process Content- This content is then processed for lemmatization, stemming and stop-word removal. We created a set of stopwords to be removed during the stopword removal process, based on the English stopwords listed in the 'wordcloud' (Footnote 7) and 'nltk' (Footnote 8) libraries. Stemming involves removing prefixes and suffixes of words and eliminating punctuation from the text. The words are then morphologically analysed and converted into their dictionary base forms, which is termed lemmatization (Plisson et al., 2004; Porter et al., 1980). A count vectorizer was used for further cleaning and conversion of the words into a machine-readable format. After preprocessing the text, we employ two methods to identify the keywords that are required to extract facts.
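The paper describes this preprocessing only in prose. As a minimal sketch (not the authors' code), the step could look as follows in Python, assuming NLTK for stop-word removal, stemming and lemmatization and scikit-learn's CountVectorizer for the machine-readable representation; the function names and the exact stop-word set are illustrative.

```python
# Sketch of Step 2 (assumed implementation, not the authors' code):
# stop-word removal, stemming, lemmatization and count vectorization.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer

for pkg in ("punkt", "stopwords", "wordnet"):
    nltk.download(pkg, quiet=True)

STOPWORDS = set(stopwords.words("english"))   # stand-in for the nltk/wordcloud stop-word set
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

def preprocess(page_text: str) -> list[str]:
    """Tokenize a page, drop stop words and punctuation, then stem and lemmatize."""
    tokens = nltk.word_tokenize(page_text.lower())
    tokens = [t for t in tokens if t.isalpha() and t not in STOPWORDS]
    return [lemmatizer.lemmatize(stemmer.stem(t)) for t in tokens]

def vectorize(pages: list[str]):
    """Convert preprocessed pages into a document-term count matrix."""
    cleaned = [" ".join(preprocess(p)) for p in pages]
    vectorizer = CountVectorizer()
    matrix = vectorizer.fit_transform(cleaned)
    return matrix, vectorizer.get_feature_names_out()
```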
Step 3: Method 1- This method includes identification of keywords based on their frequency scores
Identify frequent words- The frequency of all the processed words is calculated and the words with the highest, medium and lowest frequencies are identified. The frequency counts are listed in descending order; the top 15% of this list is considered highest frequency, the next 20% medium, and the remaining 65% lowest. This decision on the thresholds is based on observation of the keyword lists for 10 excerpts of the textbook. The thresholds can be modified according to the content under discussion and based on inputs from experts in the domain of the text.
Assign weights- To ensure that the least frequent words are not considered useful, and to view the relative importance of words, we used the tf-idf method to up-weight the more frequent terms and down-weight the least frequent terms.
Identify useful words- A sparse matrix is generated based on these assigned scores, and the words with the highest and medium-value scores are selected and stored (a minimal sketch of this scoring follows the list).
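The sketch below illustrates Method 1 under the assumption that the pages have already been preprocessed into space-separated tokens and that scikit-learn's TfidfVectorizer stands in for the tf-idf weighting; the percentile cut-offs mirror the 15%/20% thresholds described above, while the function name and return format are illustrative.

```python
# Sketch of Method 1 (assumed implementation): band words by frequency
# (top 15% = highest, next 20% = medium) and weight candidates with tf-idf.
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer

def method1_keywords(pages: list[str], page_index: int) -> dict[str, float]:
    tokens = pages[page_index].split()            # pages assumed preprocessed
    ranked = [w for w, _ in Counter(tokens).most_common()]
    med_cut = int(0.35 * len(ranked))             # top 15% + next 20% of the ranking
    candidates = set(ranked[:med_cut])

    tfidf = TfidfVectorizer()
    matrix = tfidf.fit_transform(pages)           # all pages form the corpus
    vocab = tfidf.get_feature_names_out()
    scores = matrix[page_index].toarray().ravel()
    # keep tf-idf scores only for the high/medium-frequency candidate words
    return {w: s for w, s in zip(vocab, scores) if w in candidates and s > 0}
```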
Step 4: Method 2- This method uses Latent Dirichlet Allocation approach of topic modelling to identify useful keywords in the text.
Apply Latent Dirichlet Allocation- Topics being discussed in the text are identified using LDA. We configured LDA to generate five topics for the text on which it is applied. LDA is then applied to the text obtained after preprocessing, which results in the top five topics being discussed in the text.
Identify important keywords- We consider the top three keywords in each of the five topics, along with their scores. This results in a set of fifteen keywords, which are stored for further processing in the subsequent steps (a minimal sketch follows this list).
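As a sketch of Method 2 (an assumed implementation, since the paper does not name the LDA library or its parameters beyond five topics and three keywords per topic), scikit-learn's LatentDirichletAllocation could be used as follows; the flat score of 1 matches the assignment described later in Step 5.

```python
# Sketch of Method 2 (assumed implementation): LDA with five topics,
# keeping the top three words per topic (at most fifteen keywords).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def method2_keywords(pages: list[str], n_topics: int = 5, top_n: int = 3) -> dict[str, float]:
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(pages)      # pages assumed preprocessed
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vectorizer.get_feature_names_out()

    keywords: dict[str, float] = {}
    for topic_weights in lda.components_:         # one weight vector per topic
        for i in topic_weights.argsort()[::-1][:top_n]:
            keywords[vocab[i]] = 1.0              # flat score of 1, as used in Step 5
    return keywords
```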
Step 5: Calculate Average Scores for Keywords- We fetched the keywords obtained through Method 1 and Method 2, and assigned a score of 1 to all 15 keywords obtained through Method 2, as they did not have any specific scores. Method 1 identifies important keywords based on their frequencies; however, considering the most frequent words alone might miss important but infrequently occurring words, which is why we apply LDA in Method 2. Conversely, as we apply LDA with only five topic classes, the keywords obtained from it might miss frequently occurring but useful keywords, because LDA assigns lower scores to words that occur across multiple classes. Hence, to retain both useful-and-frequent and useful-but-infrequent keywords, we employ the two methods described in Step 3 and Step 4. We then calculated the average scores of all the keywords obtained from Method 1 and Method 2. As we aimed to display 10 facts on each page of the e-book, we selected the top 10 keywords based on the averaged scores. Using keywords only from Method 2 also yields useful topics, but compromises on frequency. The decision to display 10 facts is only to demonstrate the idea of annotating an e-textbook; based on users' requirements, the number of facts displayed could be altered with minimal changes to the source code of TAnnotator.
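A short sketch of this merging step is given below, assuming the two methods return keyword-to-score dictionaries as in the earlier sketches; the helper name is illustrative.

```python
# Sketch of Step 5 (assumed implementation): merge both keyword sets,
# average the scores of words found by both methods, and keep the top 10.
def merge_keywords(m1: dict[str, float], m2: dict[str, float], top_k: int = 10) -> list[str]:
    merged = {}
    for word in set(m1) | set(m2):
        scores = [d[word] for d in (m1, m2) if word in d]
        merged[word] = sum(scores) / len(scores)  # average of the available scores
    return sorted(merged, key=merged.get, reverse=True)[:top_k]
```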
Step 6: Store keywords in database- The obtained keywords are stored in the database to ease the search for facts in the subsequent phases.
Step 7: Scrape facts and examples from sources- The factual data is scraped (fetched) dynamically from three discussion sources identified prior to the implementation of TAnnotator. These sources, Tutorialspoint (Footnote 9), CPPReference (Footnote 10) and Codementor (Footnote 11), were sufficient to obtain facts for the 10 keywords identified on each of the six pages. More sources can be added if the number of pages or the number of e-textbooks is increased. The facts are scraped by sending HTTP requests to the sources with the identified keywords appended; these requests return factual data from the sources based on the keywords.
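The scraping step is described only at a high level; the hedged sketch below uses the requests and BeautifulSoup libraries, and the URL templates are placeholders rather than the actual endpoints used by TAnnotator.

```python
# Sketch of Step 7 (assumed implementation): query each pre-identified source
# with the keyword appended to the request and collect short textual "facts".
# The URL templates are illustrative placeholders, not TAnnotator's endpoints.
import requests
from bs4 import BeautifulSoup

SOURCE_TEMPLATES = [
    "https://www.tutorialspoint.com/cplusplus/{kw}.htm",        # placeholder pattern
    "https://en.cppreference.com/mwiki/index.php?search={kw}",  # placeholder pattern
    "https://www.codementor.io/search?q={kw}",                  # placeholder pattern
]

def scrape_facts(keyword: str, per_source: int = 2) -> list[str]:
    facts = []
    for template in SOURCE_TEMPLATES:
        try:
            resp = requests.get(template.format(kw=keyword), timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue                                   # skip unreachable sources
        soup = BeautifulSoup(resp.text, "html.parser")
        for p in soup.find_all("p")[:per_source]:      # first paragraphs as candidate facts
            text = p.get_text(strip=True)
            if text:
                facts.append(text)
    return facts
```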
Step 8: Store extracted facts and examples in database- The extracted facts and examples are stored in the database in a .json file, along with their corresponding keywords.
Step 9: Align extracted data on e-book- The scraped data stored in the database is retrieved along with its keywords and aligned adjacent to the text discussing each keyword. The location of empty space in the margin adjacent to the text discussing the keyword is identified, and a textbox containing the extracted data is placed in that space.
Step 10: Display extracted data to the user- The facts are displayed to the user in the form of tooltips. All the keywords are highlighted in the text of the e-book with the help of the hilitor.js library. When users hover over a tooltip, they are shown examples related to the facts annotated to the e-book.
Approach followed in designing TAnnotator
This process of development answers RQ1: while e-textbooks can be annotated with additional knowledge, such annotation is limited by the availability of information, the ease of extraction, the associated license guidelines and so on. However, if the information is available and permitted to be extracted, and the content in the e-textbook is also permitted to be extracted, then the e-textbook can be annotated with additional information.
User scenario
Suppose Risha is a computer science student who is interested in reading the textbook The C++ Language Tutorial. She visits the e-textbook, reads through the book, and wishes to know more about the topics being discussed in the e-book. She learns about TAnnotator and considers that it would aid her learning. Hence, she visits the TAnnotator portal and decides to read the e-textbook integrated with tooltips that include facts and examples.
She is presented with the text of The C++ Language Tutorial e-textbook, page by page, as shown in Fig. 3. The title of the textbook is displayed as shown in Fig. 3A, and the text of each page of the textbook is displayed as shown in Fig. 3B. The text of the displayed page is processed using the Latent Dirichlet Allocation (LDA) topic modelling technique, and topics such as programming, #include, cout and so on are extracted, as shown in the right panel of Fig. 3. Risha is shown facts as tooltips, dynamically extracted from pre-identified sources based on the keywords of the extracted topics. These tooltips are displayed on the right of the web page, as shown in Fig. 3C. A 'Read More/Less' icon is presented on every tooltip to enable further reading (as highlighted in Fig. 3D). When Risha clicks on the Read More/Less icon, more elaborate text is displayed in the tooltip. When Risha hovers on the image highlighted in Fig. 3E, she is shown an example code snippet corresponding to the #include keyword, as shown in Fig. 3F. Further, Risha can navigate back and forth through the pages using the Previous and Next tabs represented in Fig. 3G.
A Snapshot of TAnnotator representing A name of the e-textbook, B content of the textbook, C facts being displayed, D option to expand or collapse the fact, E option to view example code snippet, F the example code snippet being displayed and G option to navigate across pages in the e-textbook
TAnnotator has been developed with the main aim of supporting readers and enhancing the user experience with e-textbooks, specifically in the area of programming, keeping in mind the ever-evolving nature of technologies in the domain of computer science and engineering. Hence, our evaluation was based on identifying the extent to which TAnnotator is useful, the complexity associated with using TAnnotator and, broadly, the user experience with TAnnotator. Research tools and approaches with similar goals of improving user experience in the existing literature, such as DynamiQue (Venigalla & Chimalakonda, 2020), have been evaluated through user surveys based on multiple models such as TAM (Technology Acceptance Model), UTAUT (Unified Theory of Acceptance and Use of Technology) (Venkatesh et al., 2003, 2012), IDT (Innovation Diffusion Theory), an integration of TAM and IDT, and other integrated models (Venigalla & Chimalakonda, 2020).
Considering the attributes against which TAnnotator has to be evaluated, such as ease of use and usefulness, and considering the advantages and appropriateness of the UTAUT2 model (Venkatesh et al., 2012) over other technology acceptance models (Rondan-Cataluña et al., 2015), we evaluate TAnnotator through a user survey based on the UTAUT2 model, adapted to the functionalities of TAnnotator. Certain factors of UTAUT2, such as social influence and facilitating conditions, that could strongly influence book readers have made UTAUT2 more appropriate for evaluating TAnnotator than the other existing TAMs. UTAUT refers to the Unified Theory of Acceptance and Use of Technology, which aims to evaluate a novel technological contribution based on multiple factors, with behavioral intention to use and use behavior as the prominent attributes for evaluation.
The UTAUT2 model (Venkatesh et al., 2012) evaluates technology based on seven attributes: Performance Expectancy (PE), Effort Expectancy (EE), Social Influence (SI), Facilitating Conditions (FC), Hedonic Motivations (HM), Price Value (PV) and Habit (H). These attributes were identified as influencing users' willingness to use the portal. PE describes the usefulness of the technology, as perceived by the users, with respect to providing them with the necessary information. EE refers to the effort users must spend in order to use the technology, which includes clarity and ease of use. SI indicates the opinion of the users' society towards using the technology, while FC indicates the availability of resources to use the technology and the information and skill set to be learned before using it. HM refers to the fun and pleasure obtained by using the technology. PV refers to the users' opinion on the value of the price associated with the technology. H deals with the users' interest in making the technology part of their daily routine, and Behavioral Intention to use (BI) deals with the tendency of users to use and continue to use the technology. TAnnotator can currently be accessed without any payment, and thus does not need to be evaluated against the PV attribute. Hence, an adapted UTAUT2 model with the following six attributes, together with behavioral intention to use, has been used to perform the user survey.
Performance expectancy (PE)
Effort expectancy (EE)
Social influence (SI)
Facilitating conditions (FC)
Hedonic motivations (HM)
Habit (H)
Behavioral intention to use (BI)
Invites were sent out to 20 undergraduate students and 10 industry developers requesting participation in the evaluation survey. The invites were sent randomly to 20 undergraduate students from a class of 40 at our academic institution, with the filter that the selected students have computer science as their major. The industry developers were randomly chosen from different organizations through their LinkedIn (Footnote 12) India profiles, where users mention their work profile and details.
We received acceptance responses from 11 students and 4 industry developers. Thus, we considered 15 volunteers in the age group of 19–44 years. The volunteers included 6 female students, 5 male students, 2 female industry developers and 2 male industry developers. Of all the participants, 12 stated that they were very familiar or familiar with using e-textbooks, while three marked their familiarity as neutral. A 5-point Likert-scale questionnaire of 30 questions, based on the six attributes of UTAUT2 presented in Table 1 and on required demographic information such as age and familiarity, was then circulated among the volunteers. While the first 6 questions of the questionnaire covered demographic information, the remaining 24 questions dealt with the attributes of the adapted UTAUT2 model, similar to the following questions.
Using TAnnotator would increase my productivity in learning. (Rate from Strongly Disagree to Strongly Agree)
TAnnotator has displayed facts and examples relevant to content in the e- textbook. (Rate from Strongly Disagree to Strongly Agree)
My interaction with the TAnnotator is clear and understandable. (Rate from Strongly Disagree to Strongly Agree)
Table 1 Adapted UTAUT2 Questionnaire and the corresponding mean and standard deviation values obtained after the user survey
TAnnotator was explained to the volunteers, who were requested to use the portal through the provided link. They were asked to read through the textbook provided and to view the tips presented on the portal, and then to answer the circulated questionnaire.
The questionnaire and corresponding results are presented here (Footnote 13).
All questions in the questionnaire sent for the user survey were classified into the corresponding UTAUT2 factors and labelled accordingly. The questions were answered on a 5-point Likert scale from Strongly Disagree to Strongly Agree. During analysis of the results, the highest score, 5, was assigned to Strongly Agree and the lowest score, 1, to Strongly Disagree. The mean and standard deviation of the results were calculated for each question, and the resulting scores are plotted in the graph presented in Fig. 4. The horizontal axis of the graph shows the question codes grouped by the factors they correspond to, while the vertical axis shows the mean and standard deviation scores. The mean value of a question indicates the extent of acceptance, while the standard deviation indicates the level of mutual agreement among the participants.
Mean and standard deviation plots for factors of UTAUT2 in the questionnaire
As reported in Fig. 4, the mean of all the questions is greater than 3 and close to 4 in the majority of cases, indicating good acceptance of TAnnotator among the participants of the user survey. Questions corresponding to the Effort Expectancy (EE) and Facilitating Conditions (FC) aspects have the highest means, close to 4, and reasonably low standard deviations. This indicates that the majority of participants collectively found that learning to use and becoming skillful at TAnnotator involves minimal effort. The mean value of 4.1 for the question on correctness indicates that TAnnotator rendered correct and relevant information on the respective pages of the e-textbook. However, the mean values of the questions related to Habit (H), though close to 3, are observed to be lower than those of the other factors, indicating the need for improvements in TAnnotator towards being used regularly by users and consequently being put into practice.
This analysis of usefulness through the survey-based evaluation answers RQ2, by indicating that the majority of users consider annotating an e-textbook with additional information to be useful. However, the analysis also indicates that TAnnotator could be further improved to support users in making its use a habit.
Participants have also suggested improving TAnnotator to support multiple e-textbooks. Some suggestions from users include:
"A code editor could be integrated for better usage of code examples"
"More facts along with source from which they are extracted could be useful"
The correlations among the UTAUT2 variables considered for the evaluation of TAnnotator are presented in Table 2. We calculated the correlation of the UTAUT2 variables using the following formula, where r is the correlation coefficient, \(x_i\) and \(y_i\) are the values of the x- and y-variables in a sample, \(\bar{x}\) and \(\bar{y}\) are the means of the x- and y-variable values, and x and y correspond to UTAUT2 variables.
$$r = \frac{\sum \left( x_{i} - \bar{x} \right)\left( y_{i} - \bar{y} \right)}{\sqrt{\sum \left( x_{i} - \bar{x} \right)^{2} \sum \left( y_{i} - \bar{y} \right)^{2}}}$$
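The formula above is the standard Pearson correlation coefficient. A small sketch of how it can be computed for two UTAUT2 attribute score vectors is shown below; the vectors are hypothetical illustration values, not the survey data.

```python
# Minimal sketch of the Pearson correlation used for the UTAUT2 variables;
# the two vectors below are hypothetical per-participant attribute means.
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

si = np.array([4.0, 3.5, 4.5, 3.0, 4.0])   # hypothetical Social Influence scores
fc = np.array([4.5, 3.0, 4.0, 3.5, 4.0])   # hypothetical Facilitating Conditions scores
print(round(pearson_r(si, fc), 3))
```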
Table 2 Correlation analysis results of UTAUT2 variables
The correlation values among the variables displayed in Table 2 indicate positive correlation among all the factors. The strongest correlation (0.638) is observed between Social Influence (SI) and Facilitating Conditions (FC), indicating that the users' facilitating conditions to use TAnnotator are strongly affected by the social influence on the user. Also, users' behavioral intention (BI) to use TAnnotator is observed to be influenced more strongly by the extent to which TAnnotator contributes to users' habit (H) than by any other factor.
Discussion and limitations
TAnnotator currently demonstrates the idea of annotating e-textbooks with facts and examples through only one e-textbook, The C++ Language Tutorial. However, it could be extended with minimal technical effort to any other e-textbook in the programming domain, subject to the e-textbook's policies. Extending TAnnotator to e-textbooks in other domains requires identification of active discussion forums that could be used as sources for fetching facts about the concepts discussed in the textbook.
Currently, facts and examples are extracted from only three sources. Increasing the number of sources could further increase the number of facts presented to readers and could consequently contribute to a larger knowledge base for them. Moreover, the topics for which facts and examples are fetched are based on the keywords obtained through LDA and frequent-word identification when specific paragraphs are processed. Hence, the LDA model used influences the accuracy of the topics obtained as keywords to represent the topics of discussion in the textbook; since this accuracy depends on the efficiency of the LDA model, the identified topics might sometimes be irrelevant. As the current version is a prototype, we included only 6 pages of the textbook. More pages can be added to the current portal before making it available to a wider audience. The facts and examples augmented to the e-textbook using TAnnotator could also be aligned more attractively to further improve the readers' experience.
TAnnotator displays additional information only in the extra margin adjacent to the text, and does not disturb the inherent flow of content in the original e-textbook. However, there is a need for qualitative assessment towards determining the position of annotations. An expert in the field corresponding to the topics of discussion in the e-textbook could be consulted to validate the annotated information, and information that could disturb the didactic concepts could be omitted. This exercise could be conducted for multiple annotations and the resulting actions recorded, which could help in training a machine learning model to validate the annotations. This could reduce the need for human interference while still facilitating validation of the annotations.
We demonstrated the annotation of one programming-based e-textbook using TAnnotator in this work. This could be applied and adapted to other similar e-textbooks, but is limited by the associated permissions. The e-textbook under consideration should permit extraction of the data present in it, and the relevant sources corresponding to this e-textbook should permit their data to be extracted and reused. The sources and the e-textbook considered for the demonstration in this work do permit their data to be extracted and reused.
Also, the keywords on a specific page are identified only once, to ensure faster response of the portal. Approaches to dynamically identify keywords every time a page loads, without compromising the response time of the portal, could be explored. This could further improve users' knowledge base by displaying newer facts frequently. The evaluation performed largely focuses on the user experience and usability of TAnnotator, but does not focus on the learning outcome of using TAnnotator. However, as programming courses differ in their goals across institutions, programmer age groups, and educational and social backgrounds, it is difficult to arrive at common learning goals, and hence difficult to assess the learning outcome.
Conclusion and future work
In this paper we presented the idea of annotating online textbooks, specifically in the area of computer programming, with facts and examples. We demonstrated this idea through a prototype version of TAnnotator, a web portal that displays tooltips and corresponding example code snippets adjacent to the text in the e-textbook. These tooltips are displayed based on the context of the topics discussed in the textbook, and the tooltips and examples are extracted from developer forums and blogs. We evaluated the user experience and usefulness of TAnnotator through a user survey based on the UTAUT2 technology acceptance model. The results of the survey indicate that the majority of users appreciated the idea of annotating e-textbooks in the programming domain and were willing to use TAnnotator. TAnnotator can be used to learn about the latest trends and to obtain specific tooltips for the topics discussed in e-textbooks. It could be useful to software developers in industry for staying up to date with facts in their domain, and it could also help students get a better idea of the topics discussed in the textbooks.
We plan to increase the number of sources used for tooltip and example retrieval based on the content of the textbook. The responses to the survey indicate a lower inclination towards using TAnnotator on a day-to-day basis, and the users' suggestions indicate a need to improve the number of facts annotated and to add other features. We plan to explore multiple sources by considering the participants' expertise and to further explore ways to incorporate their suggestions. We plan to extend TAnnotator to support more freely available e-textbooks in the programming domain and to identify appropriate, rich sources of information for each of the supported textbooks. We also plan to integrate an automated question-and-answer mechanism, capable of answering users' questions, into the current version. User activity on the portal could be tracked, with user permission, and tooltips based on that activity could be provided in future versions, thus moving towards a personalised textbook annotator.
The tool and the results of evaluation are available from the corresponding author on reasonable request.
e-books and e-textbooks are used interchangeably.
https://en.cppreference.com/w/cpp/language.
https://moodle.org/mod/forum/.
http://bit.ly/35PT9lC.
https://flask.palletsprojects.com/en/1.1.x/.
https://pypi.org/project/pyPdf/.
https://amueller.github.io/word_cloud/generated/wordcloud.WordCloud.html.
https://www.nltk.org/.
https://www.javatpoint.com/cpp-tutorial.
https://www.codementor.io/.
https://www.linkedin.com/.
Almansoori, M., Lam, J., Fang, E., Soosai Raj, A. G., & Chatterjee, R. (2021). Textbook underflow: Insufficient security discussions in textbooks used for computer systems courses. In Proceedings of the 52nd ACM technical symposium on computer science education (pp. 1212–1218).
Almiyad, M. A., Oakden-Rayner, L., Weerasinghe, A., & Billinghurst, M. (2017). Intelligent augmented reality tutoring for physical tasks with medical professionals. In International conference on artificial intelligence in education (pp. 450–454).
Bailey, J., & Zilles, C. (2019). uassign: Scalable interactive activities for teaching the unix terminal. In Proceedings of the 50th ACM technical symposium on computer science education (pp. 70–76).
Benotti, L., Aloi, F., Bulgarelli, F., & Gomez, M. J. (2018). The effect of a web-based coding tool with automatic feedback on students' performance and perceptions. In Proceedings of the 49th ACM technical symposium on computer science education (pp. 2–7).
Berns, C., Chin, G., Savitz, J., Kiesling, J., & Martin, F. (2019). Myr: A web-based platform for teaching coding using VR. In Proceedings of the 50th ACM technical symposium on computer science education (pp. 77–83).
Chang, C.-C., & Chen, T.-C. (2022). Emotion, cognitive load and learning achievement of students using e-textbooks with/without emotional design and paper textbooks. Interactive Learning Environments, 1–19.
Choi, H.-J., & Park, C. H. (2019). Emerging topic detection in twitter stream based on high utility pattern mining. Expert Systems with Applications, 115, 27–36.
Davis, R. C., & Song, X. (2020). Uncovering the mystery of how users find and use ebooks through guerilla usability testing. Serials Review, 46, 1–8.
Dutta, R., Mantri, A., & Singh, G. (2022). Evaluating system usability of mobile augmented reality application for teaching Karnaugh-maps. Smart Learning Environments, 9(1), 1–27.
Ellis, M., Shaffer, C. A., & Edwards, S. H. (2019). Approaches for coordinating etextbooks, online programming practice, automated grading, and more into one course. In Proceedings of the 50th ACM technical symposium on computer science education (pp. 126–132).
Ericson, B. J., Guzdial, M. J., & Morrison, B. B. (2015). Analysis of interactive features designed to enhance learning in an ebook. In Proceedings of the eleventh annual international conference on international computing education research (pp. 169–178).
Fischer, L., Hilton, J., Robinson, T. J., & Wiley, D. A. (2015). A multi-institutional study of the impact of open textbook adoption on the learning outcomes of post- secondary students. Journal of Computing in Higher Education, 27(3), 159–172.
Gerjets, P., Scheiter, K., & Catrambone, R. (2006). Can learning from molar and modular worked examples be enhanced by providing instructional explanations and prompting self-explanations? Learning and Instruction, 16(2), 104–121.
Ghosh, K. (2022). Remediating textbook deficiencies by leveraging community question answers. Education and Information Technologies, 1–41.
Harsley, R., Di Eugenio, B., Green, N., & Fossati, D. (2017). Enhancing an intelligent tutoring system to support student collaboration: Effects on learning and behavior. In International conference on artificial intelligence in education (pp. 519–522).
Holstein, K., McLaren, B. M., & Aleven, V. (2018). Student learning benefits of a mixed-reality teacher awareness tool in AI-enhanced classrooms. In International conference on artificial intelligence in education (pp. 154–168).
Hori, M., Ono, S., Kobayashi, S., Yamaji, K., Kita, T., & Yamada, T. (2015). Learner autonomy through the adoption of open educational resources using social network services and multi-media e-textbooks. AAOU Journal, 10(1), 23.
Kao, G.Y.-M., & Ruan, C.-A. (2022). Designing and evaluating a high interactive augmented reality system for programming learning. Computers in Human Behavior, 132, 107245.
Keuning, H., Heeren, B., & Jeuring, J. (2021). A tutoring system to learn code refactoring. In Proceedings of the 52nd ACM technical symposium on computer science education (pp. 562–568). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3408877.3432526.
Kim, H. J., Park, J. H., Yoo, S., & Kim, H. (2016). Fostering creativity in tablet-based interactive classrooms. Journal of Educational Technology & Society, 19(3), 207–220.
Krusche, S., & Seitz, A. (2018). Artemis: An automatic assessment management system for interactive learning. In Proceedings of the 49th ACM technical symposium on computer science education (pp. 284–289).
Lee, H. J., Messom, C., & Yau, K.-L.A. (2013). Can an electronic textbooks be part of k-12 education?: Challenges, technological solutions and open issues. Turkish Online Journal of Educational Technology-TOJET, 12(1), 32–44.
Ma, B., Lu, M., Taniguchi, Y., & Konomi, S. (2022). Exploring jump back behavior patterns and reasons in e-book system. Smart Learning Environments, 9(1), 1–23.
Mazumder, S. F., Latulipe, C., & Pérez-Quiñones, M. A. (2020). Are variable, array and object diagrams in Java textbooks explanative? In Proceedings of the 2020 ACM conference on innovation and technology in computer science education (pp. 425–431).
McGowan, M. K., Stephens, P. R., & West, C. (2009). Student perceptions of electronic textbooks. Issues in Information Systems, 10(2), 459–465.
Miller, B. N., & Ranum, D. L. (2012). Beyond pdf and epub: Toward an interactive textbook. In Proceedings of the 17th ACM annual conference on innovation and technology in computer science education (pp. 150–155).
Mystakidis, S., Christopoulos, A., & Pellas, N. (2021). A systematic mapping review of augmented reality applications to support stem learning in higher education. Education and Information Technologies, 27, 1–45.
Oates, T. (2014). Why textbooks count. Cambridge: Cambridge Assessment.
Onan, A., Korukoglu, S., & Bulut, H. (2016). LDA-based topic modelling in text sentiment classification: An empirical analysis. International Journal of Linguistic Applications, 7(1), 101–119.
Pletea, D., Vasilescu, B., & Serebrenik, A. (2014). Security and emotion: Sentiment analysis of security discussions on github. In Proceedings of the 11th working conference on mining software repositories (pp. 348–351).
Plisson, J., Lavrac, N., Mladenic, D., et al. (2004). A rule based approach to word lemmatization. In Proceedings of is (Vol. 3, pp. 83–86).
Porter, M. F., et al. (1980). An algorithm for suffix stripping. Program, 14(3), 130–137.
Reinhardt, A., Zhang, T., Mathur, M., & Kim, M. (2018). Augmenting stack overflow with API usage patterns mined from github. In Proceedings of the 2018 26th ACM joint meeting on European software engineering conference and symposium on the foundations of software engineering (pp. 880–883).
Rockinson-Szapkiw, A. J., Courduff, J., Carter, K., & Bennett, D. (2013). Electronic versus traditional print textbooks: A comparison study on the influence of university students' learning. Computers & Education, 63, 259–266.
Rondan-Cataluña, F. J., Arenas-Gaitán, J., & Ramírez-Correa, P. E. (2015). A comparison of the different versions of popular technology acceptance models. Kybernetes, 44(5), 788–805.
Seeling, P. (2016). Switching to blended: Effects of replacing the textbook with the browser in an introductory computer programming course. In 2016 IEEE frontiers in education conference (fie) (pp. 1–5).
Solcova, L. (2016). Interactive textbook–a new tool in off-line and on-line education. Turkish Online Journal of Educational Technology-TOJET, 15(3), 111–125.
Sun, J., Flores, J., & Tanguma, J. (2012). E-textbooks and students' learning experiences. Decision Sciences Journal of Innovative Education, 10(1), 63–77.
Tang, K.-Y. (2021). Paradigm shifts in e-book-supported learning: Evidence from the web of science using a co-citation network analysis with an education focus (2010–2019). Computers & Education, 175, 104323.
Venigalla, A. S. M., & Chimalakonda, S. (2020). Dynamique–a technical intervention to augment static textbook with dynamic q&a. Interactive Learning Environments, 30, 1–15.
Venigalla, A. S. M., Lakkundi, C. S., Agrahari, V., & Chimalakonda, S. (2019). Stackdoc-a stack overflow plug-in for novice programmers that integrates q&a with API examples. In 2019 IEEE 19th international conference on advanced learning technologies (ICALT) (Vol. 2161, pp. 247–251).
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27, 425–478.
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36, 157–178.
Vinaja, R. (2014). The use of lecture videos, ebooks, and clickers in computer courses. Journal of Computing Sciences in Colleges, 30(2), 23–32.
Weber, G., & Brusilovsky, P. (2016). Elm-art–an interactive and intelligent web-based electronic textbook. International Journal of Artificial Intelligence in Education, 26(1), 72–81.
Weng, C., Otanga, S., Weng, A., & Cox, J. (2018). Effects of interactivity in e- textbooks on 7th graders science learning and cognitive load. Computers & Education, 120, 172–184.
Zavala, L., & Mendoza, B. (2018). On the use of semantic-based AIG to automatically generate programming exercises. In Proceedings of the 49th ACM technical symposium on computer science education (pp. 14–19).
We thank all the participants for their valuable time and honest feedback that helped us in evaluating TAnnotator. We would also like to thank Shruti Priya and Shubhankar for helping us with the development of TAnnotator.
Research in Intelligent Software & Human Analytics (RISHA) Lab, Department of Computer Science & Engineering, Indian Institute of Technology Tirupati, Tirupati, India
Akhila Sri Manasa Venigalla & Sridhar Chimalakonda
Akhila Sri Manasa Venigalla
Sridhar Chimalakonda
AV has contributed more in terms of implementation of the idea, and SC has contributed more in terms of the idea. All authors read and approved the final manuscript.
Correspondence to Akhila Sri Manasa Venigalla.
There are no competing interests.
Venigalla, A.S.M., Chimalakonda, S. TAnnotator: Towards Annotating Programming E-textbooks with Facts and Examples. Smart Learn. Environ. 10, 9 (2023). https://doi.org/10.1186/s40561-023-00228-y
International Journal of Industrial Chemistry
June 2019, Volume 10, Issue 2, pp 159–173
Synthesis and investigations of heterocyclic compounds as corrosion inhibitors for mild steel in hydrochloric acid
Salima K. Ahmed
Wassan B. Ali
Anees A. Khadom
The corrosion inhibition of mild steel in 0.5 M hydrochloric acid by six synthesized heterocyclic compounds was studied using weight loss measurements. The inhibition efficiency exceeded 95%. The excellent inhibitor performance was attributed to the formation of protective adsorption films on the steel surface. The structures of the compounds were confirmed by Fourier transform infrared and nuclear magnetic resonance analysis. The adsorption of the inhibitor on the steel surface followed the Langmuir adsorption isotherm. Quantum chemical calculations were also adopted to clarify the inhibition mechanism.
Steel Weight loss FT-IR Corrosion Inhibition Synthesis
Iron and its alloys are widely used as constructional materials in several industrial applications, such as petroleum, power plants and chemical industries, due to their high mechanical strength, easy fabrication and low cost [1, 2]. This variety of applications brings steel into contact with various corrosive environments, such as the acidic solutions used during etching, acid pickling, acid descaling, acid cleaning and oil well acidification [3]. In acidic media, steel alloys react easily and are converted from the metallic to the ionic state, causing huge economic losses. Therefore, there is a pressing need to develop effective corrosion control methods, one of which is the use of corrosion inhibitors [4, 5, 6]. Corrosion inhibitors can be classified according to chemical structure, mode of action, etc. One of the common classes is organic corrosion inhibitors, which have gained the highest importance due to their ease of synthesis at relatively low cost and their high protection ability. Their mode of prevention can be ascribed to adsorption on the steel surface, which blocks the active corrosion sites. The formation of a protective layer between the aggressive solution and the metal surface hinders dissolution of the metal and reduces corrosion damage [7, 8]. Organic inhibitors containing heteroatoms, such as N, O, S and P, have proven both practically and theoretically to act as efficient corrosion inhibitors in a wide range of acidic solutions [9, 10]. The efficiency of these inhibitors can be attributed to their high polarizability and lower electronegativity, so that these atoms and the functional groups can cover large metallic surface areas and easily transfer electrons to the empty orbitals of the metal atoms [11]. In addition, nitrogen-containing organic inhibitors are good anticorrosion materials for metals in hydrochloric acid, while compounds having sulfur atoms act as good inhibitors in sulfuric acid; compounds holding both nitrogen and sulfur behave as excellent corrosion inhibitors in both media [12]. The action of an inhibitor for a specific metallic alloy in severe acidic environments depends on the nature of the characteristic inhibitor film accumulated on the metal surface and on the number and nature of the adsorption centers contributing to the adsorption process. In general, the inhibition performance of inhibitors having different heteroatoms follows the reverse order of their electronegativities, so that for S, N, O and P the inhibition performance follows the order O < N < S < P [13]. The presence of organic materials in acidic solutions commonly alters the electrochemical behavior of the acidic environment; in other words, it decreases the aggressiveness of the solution. The most regularly used heterocyclic compounds have sulfur (S), phosphorus (P), nitrogen (N) or oxygen (O) heteroatoms, and these effectively take part as adsorption centers. Therefore, in this study, six heterocyclic compounds were synthesized and selected to act as corrosion inhibitors for steel in hydrochloric acid solution.
Experimental work
Materials and test conditions
All the reagents and starting materials as well as solvents were purchased commercially and used without any further purification. The test sample was carbon steel with the following chemical composition (wt%): 0.1 C, 0.335 Mn, 0.033 Si, 0.0067 S, 0.0056 P, 0.057 Al, 0.0476 Cu, 0.0201 Cr, 0.001 Co, 0.0007 Ti and the balance Fe. Prior to each measurement, the steel electrode was abraded with emery papers of grade 800–1500, washed ultrasonically with distilled water, acetone and alcohol, and dried under dry air. The testing electrolyte, a 0.5 M HCl aqueous solution, was prepared by diluting AnalaR grade 37% hydrochloric acid with ultra-pure water. All measurements were performed three times to obtain satisfactory reproducibility.
Inhibitors diagnosis and measurements
The melting points of the compounds were determined with a Gallenkamp (MFB-600) melting point apparatus. FT-IR spectra of the compounds were recorded on a PerkinElmer Spectrum-65 spectrometer within the range 4000–400 cm−1 using KBr disks. The 1H-NMR spectra were recorded on a Bruker 400 MHz spectrometer with TMS as the internal standard, and deuterated DMSO was used as the solvent. The compounds were checked for purity on silica gel TLC plates, and visualization of the spots was performed using UV light.
Synthesis of inhibitors
General method for the synthesis of 4-amino-5-(substituted-phenyl)-4H-1,2,4-triazole-3-thiols (ATT1, ATT2, ATT4, ATT5, ATT6)
Figure 1 shows the scheme of the synthesis procedure of the inhibitors. The compounds were synthesized by the fusion of a substituted benzoic acid (0.01 mol) and thiocarbohydrazide (0.015 mol), which were placed in a round-bottom flask and heated with a heating mantle until the contents of the flask melted [14, 15]. After cooling, the product was treated with sodium bicarbonate solution to neutralize any unreacted carboxylic acid. It was then washed with water and collected by filtration. The completion of the reaction and the purity of the compound were checked by TLC (mobile phase hexane:ethyl acetate 1:2). The product was recrystallized from an appropriate solvent to afford the title compounds.
Synthesis scheme of inhibitors
Specifications of 4-(4-amino-5-mercapto-4H-1,2,4-triazole-3-yl) phenol (ATT1)
White crystals, m.p: 216–218 °C, FT-IR (KBr, cm−1): OHstr (3524), NH2 (3356), N–Hstr (3144), aromatic C–Hstr (3090), C=Nstr (1654), aromatic C=Cstr (1505, 1409), C=S (1244), Yield: 60%. The 1H-NMR spectrum (400 MHz, d6-DMSO, ppm) of compound ATT1 shows the following data: δH = 12.4 (s, 1H, OH), 8.4 (s, 1H, SH), 5.6–6.02 (4H, aromatic H), 5.2 (s, 2H, NH2).
Specifications of 4-amino-5-(4-aminophenyl)-4H-1,2,4-triazole-3-thiol (ATT2)
Light gray crystals, m.p: 147–149 °C, FT-IR (KBr, cm−1): NH2 (3356, 3195), N–Hstr (3137), aromatic C–Hstr (2954), C=Nstr (1647), aromatic C=Cstr (1508, 1464), C=S (1248), Yield: 63%.
Specifications of 4-amino-5-(4-((4-nitrobenzylidene)amino)phenyl)-4H-1,2,4-triazole-3-thiol (ATT4)
Light yellow crystals, m.p: 208–210 °C, FT-IR (KBr, cm−1): NH2 (3381–3287), N–Hstr (3141), aromatic C–Hstr (2981), aliphatic C–Hstr (2843), C=Nstr (1607), aromatic C=Cstr (1392), NO2 (1341–1516), Yield: 79%. The 1H-NMR spectrum (400 MHz, d6-DMSO, ppm) shows the following data: δH = 10.3 (s, 1H, SH), 8.6 (s, 1H, N=CH), 8–8.2 (8H, aromatic H), 4.9 (s, 2H, NH2). 13C-NMR (400 MHz, d6-DMSO, ppm): δC = 175 (CH=N), 140 (N–C–N), 148, 147, 140, 135, 133, 129, 127, 123, 120 (Ar–CH).
Specifications of 4-amino-5-(4-((4-chlorobenzylidene)amino)phenyl)-4H-1,2,4-triazol-3-thiol (ATT5)
Dark gray crystals, m.p: 149–151 °C, FT-IR (KBr, cm−1): NH2 (3422–3276), N–Hstr (3115), aromatic C–Hstr (2942), aliphatic C–Hstr (2859), C=Nstr (1610), C=Cstr (1504), C–Clstr (630), Yield: 80%. The 1H-NMR spectrum (400 MHz, d6-DMSO, ppm) of compound ATT5 shows the following data: δH = 8.7 (s, 1H, SH), 8.2 (s, 2H, N=CH), 7.2–8 (4H, aromatic H), 5.4 (s, 2H, NH2). 13C-NMR (400 MHz, d6-DMSO, ppm): δC = 163 (CH=N), 133 (N–C–N), 141, 132, 129, 128, 127, 126, 123 (Ar–CH).
Specifications of 4-amino-5-(3,4-diaminophenyl)-4H-1,2,4-triazole-3-thiole (ATT6)
Deep brown crystals, m.p: < 300 °C, FT-IR (KBr, cm−1): NH2 (3363), N–Hstr (3232), aromatic C–Hstr (3151), C=Nstr (1628), aromatic C=Cstr (1526, 1486), C=S (1274), yield: 80%.
General method for the synthesis of 4-((4-nitrobenzylidene)amino)-5-(4-(((Z)-4-nitrobenzylidene)amino)phenyl)-4H-1,2,4-triazole-3-thiol (ATT3)
A mixture of compound ATT2 (0.005 mol, 1.03 g) in 15 ml of absolute ethanol and a solution of 4-nitrobenzaldehyde (0.01 mol, 1.51 g) in 10 ml of ethanol, with five drops of glacial acetic acid as a catalyst, was refluxed for 13 h [16]. The completion of the reaction and the purity of the compound were checked by TLC (mobile phase hexane:ethyl acetate 1:2), and the resultant solution was then cooled to room temperature. The resulting yellow solid crystals of 4-((4-nitrobenzylidene)amino)-5-(4-(((Z)-4-nitrobenzylidene)amino)phenyl)-4H-1,2,4-triazole-3-thiol were filtered, washed and recrystallized from an appropriate solvent. The specifications of ATT3 were: yellow crystals, m.p: 233–235 °C, FT-IR (KBr, cm−1): NH2 (3297), N–Hstr (3122), aromatic C–Hstr (2990), C=Nstr (1599), C=Cstr (1442), NO2 str (1343–1515), Yield: 79%. The 1H-NMR spectrum (400 MHz, d6-DMSO, ppm) of ATT3 shows the following data: δH = 11.9 (s, 1H, SH), 8.6 (s, 2H, N=CH), 7.6–8.2 (12H, aromatic H). 13C-NMR (400 MHz, d6-DMSO, ppm): δC = 175 (CH=N), 130 (N–C–N), 147, 128, 123, 112 (Ar–CH). Figures 2, 3, 4 and 5 show selected FT-IR and NMR curves of some of the synthesized inhibitors, while Table 1 collects the physical properties of the compounds.
FT-IR curve of ATT1
1H-NMR spectra of compound ATT1
Table 1 Physical properties of the synthesized compounds (column headers: m.p. (°C), Res. solvent, % Yield; recovered entries, in the order extracted: ATT1, C8H8N4OS, Ethanol/water, Deep brown, C8H10N6S, C22H15N7O4S, C15H12N5SCl, C8H9N5S)
Weight loss measurements
Rectangular test specimens with dimensions of 3 cm × 1 cm × 0.1 cm were made from low carbon steel, whose chemical composition is listed above. The samples were washed with running tap water followed by distilled water, dried with clean tissue, immersed in acetone and alcohol, dried again with clean tissue, and then kept in a desiccator over a silica gel bed until use. The dimensions of each sample were measured with a vernier caliper to the second decimal of a millimeter, and the samples were accurately weighed to the fourth decimal of a gram. The metal samples were completely immersed, each in 500 ml of uninhibited or inhibited 0.5 M HCl solution contained in a conical flask, and were exposed for a period of 3 h at the desired temperature and inhibitor concentration. The metal samples were then cleaned, washed with running tap water followed by distilled water, dried with clean tissue, immersed in acetone and alcohol, and dried again. Weight losses in g m−2 day−1 (gmd) were determined in the presence and absence of the inhibitors. At the beginning, all inhibitors were tested at an inhibitor concentration of 0.001 M and 30 °C to select the best one. Then, the inhibitor with the highest efficiency was evaluated at different temperatures (30–60 °C) and inhibitor concentrations of 1 × 10−3, 2 × 10−3, 3 × 10−3, and 4 × 10−3 M.
The weight (mass) loss technique is a very common and conventional method for corrosion rate evaluation and has been used in many studies as a powerful tool for estimating metal loss [17, 18, 19]. Table 2 summarizes the results of the weight loss measurements for the corrosion of the low carbon steel alloy in 0.5 M hydrochloric acid solution at 30 °C and 0.001 M inhibitor concentration.
Table 2 Corrosion rate (g m−2 day−1) of the low carbon steel alloy and inhibitor efficiency (%) of the synthesized compounds in 0.5 M hydrochloric acid solution at 30 °C and 0.001 M inhibitor concentration
The values of the corrosion rate were evaluated using the following equation [20]:
$$\text{CR} = \frac{\text{weight loss}\;(\mathrm{g})}{\text{area}\;(\mathrm{m}^{2}) \times \text{time}\;(\mathrm{day})}$$
From the corrosion rate, the percentage inhibition efficiency (IE) of the weight loss experiments was calculated using the following equation [21]:
$$\text{IE} = \frac{\text{CR}_{\text{uninhibit}} - \text{CR}_{\text{inhibit}}}{\text{CR}_{\text{uninhibit}}} \times 100$$
where CRuninhibit and CRinhibit are the corrosion rates in the absence and presence of inhibitors, respectively. Table 2 shows that the inhibitor efficiency ranged from 50.43 to 81.16%, with ATT5 showing the highest performance. In order to obtain a clearer picture of the behavior of ATT5, the effects of inhibitor concentration and temperature were studied; the results are shown in Table 3. The corrosion rate increased with increasing temperature and decreasing inhibitor concentration, while the inhibitor efficiency increased with both increasing inhibitor concentration and increasing temperature.
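As a small numerical sketch of the two equations above (not taken from the paper), the corrosion rate and inhibition efficiency can be computed as follows; the coupon dimensions follow the experimental section, while the weight-loss values are hypothetical.

```python
# Sketch of the weight-loss calculations: corrosion rate (g m^-2 day^-1)
# and inhibition efficiency (%). The weight-loss inputs below are hypothetical.
def corrosion_rate(weight_loss_g: float, area_m2: float, time_day: float) -> float:
    return weight_loss_g / (area_m2 * time_day)

def inhibition_efficiency(cr_uninhibited: float, cr_inhibited: float) -> float:
    return (cr_uninhibited - cr_inhibited) / cr_uninhibited * 100.0

# 3 cm x 1 cm x 0.1 cm coupon: total surface area ~ 6.8 cm^2, 3 h exposure
area_m2 = 2 * (3 * 1 + 3 * 0.1 + 1 * 0.1) / 1e4
time_day = 3 / 24
cr_blank = corrosion_rate(0.0150, area_m2, time_day)   # hypothetical blank weight loss
cr_inhib = corrosion_rate(0.0030, area_m2, time_day)   # hypothetical inhibited weight loss
print(round(inhibition_efficiency(cr_blank, cr_inhib), 1))   # -> 80.0
```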
Table 3 Corrosion rate of the low carbon steel alloy and inhibitor efficiency of ATT5 in 0.5 M hydrochloric acid solution at different conditions (column headers: test number, inhibitor concentration (M, from 1 × 10−3), corrosion rate (g m−2 day−1))
Effect of inhibitor concentration and adsorption isotherm
As shown in Table 3, at a given experimental temperature the corrosion rate of steel decreases with an increase in ATT5 concentration. Values of inhibitor efficiency increase with increasing ATT5 concentration and approach a maximum of 95.8% at the highest temperature and inhibitor concentration. This increase in inhibitor performance with temperature is apparently due to an increase in chemisorption of the inhibitor. The crucial step in the action of an inhibitor in acidic media is commonly agreed to be its adsorption on the metal surface. This includes the assumption that corrosion reactions are prevented from occurring over the area, or active sites, of the metal surface covered by adsorbed inhibitor molecules, whereas corrosion occurs on the inhibitor-free active sites [22]. The surface coverage data (θ = IE/100) are very valuable in discussing the adsorption features. At constant temperature, the surface coverage is related to the inhibitor concentration by well-known adsorption isotherm relationships evaluated at equilibrium. The dependence of θ on ATT5 concentration was tested graphically by fitting it to the Langmuir adsorption isotherm, which assumes that the metal surface contains a fixed number of adsorption sites and that each site holds one adsorbed molecule. Figure 6 shows linear plots of C/θ versus C with an average correlation coefficient R2 = 0.999, suggesting that the adsorption follows the Langmuir adsorption isotherm [23]:
$$\frac{C}{\theta } = \frac{1}{K} + C$$
where C is the inhibitor concentration and K is the adsorption equilibrium constant, which represents the degree of adsorption; in other words, a higher value of K indicates that the ATT5 molecules are more strongly adsorbed on the metal surface. The slopes of the Langmuir adsorption lines are near unity, meaning that each inhibitor molecule occupies one active site on the metal surface.
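As an illustration of how the Langmuir parameters behind Fig. 6 can be extracted, the following sketch performs the C/θ versus C linear fit on hypothetical concentration and coverage values (not the measured data).

```python
# Sketch: Langmuir isotherm fit C/theta = 1/K + C  ->  linear regression of C/theta on C.
# Concentrations (M) and surface coverages theta = IE/100 below are hypothetical.
import numpy as np

C = np.array([1e-3, 2e-3, 3e-3, 4e-3])       # inhibitor concentrations (M)
theta = np.array([0.80, 0.86, 0.90, 0.93])   # surface coverage, hypothetical

y = C / theta
slope, intercept = np.polyfit(C, y, 1)       # y = slope*C + intercept
K = 1.0 / intercept                          # adsorption equilibrium constant (M^-1)

r = np.corrcoef(C, y)[0, 1]                  # correlation coefficient of the linear fit
print(f"slope = {slope:.3f} (expected ~1), K = {K:.0f} M^-1, R^2 = {r**2:.4f}")
```

A slope close to unity and an intercept equal to 1/K mirror the interpretation given above.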
Langmuir adsorption isotherms of ATT5 on the steel surface in 0.5 M HCl solution at different temperatures
The standard adsorption free energy (ΔGads) was calculated using the following equation [23]:
$$K = \frac{1}{55.5}\exp \left( { - \frac{{\Delta G_{\text{ads}} }}{RT}} \right)$$
where 55.5 is the molar concentration of water in solution, R is the gas constant and T is the absolute temperature. Table 4 shows the adsorption parameters. The average value of the standard adsorption free energy was −33.9 kJ mol−1. The negative value of ΔG°ads confirms the spontaneity of the adsorption process and the stability of the adsorbed layer on the metal surface. Commonly, values of ΔG°ads up to −20 kJ mol−1 are consistent with electrostatic interaction between the charged molecules and the charged metal (physical adsorption), while those around −40 kJ mol−1 or higher are associated with chemical adsorption as a result of sharing or transfer of electrons from the molecules to the metal surface to form a coordinate type of bond [24]. Other researchers have suggested that the standard adsorption free energy of chemical adsorption processes for inhibitors in aqueous media lies between −21 and −42 kJ mol−1 [25]. Therefore, for the present work, the adsorption free energy values are considered to fall within the range of chemical adsorption. Table 4 also shows a limited increase in the absolute value of ΔGads with increasing temperature, indicating that the adsorption becomes somewhat more favorable at higher experimental temperatures and that ATT5 adsorbs according to a chemical mechanism. The value of ΔHads was obtained from the Van't Hoff equation (Eq. 11) [26], plotted in Fig. 7; the figure shows a good linear fit.
Table 4. Adsorption parameters of ATT5 at different temperatures: T (°C), K (M−1), slope, ΔGads (kJ mol−1), ΔHads (kJ mol−1), ΔSads (kJ mol−1 K−1). Remaining entries: 2.5 × 103, −28.84, 5 × 103, 17.2 × 103.
Van't Hoff plot for ATT5 on the steel surface in 0.5 M HCl solution at different temperatures
The values of the adsorption thermodynamic parameters for an inhibitor can offer valuable information about the mechanism of corrosion inhibition. An endothermic adsorption process (ΔHads > 0) is ascribed unequivocally to chemisorption, while an exothermic adsorption process (ΔHads < 0) may involve either physisorption or chemisorption or a mixture of both. In the present work, the positive sign of the heat of adsorption (ΔHads) indicates that the adsorption of the inhibitor is an endothermic process and that the adsorption is chemisorption. This result agrees with the discussion above.
The entropy of adsorption (ΔSads) was obtained from Eq. 12 using the average value of ΔGads and the average temperature.
$$\ln \;K = - \frac{{\Delta H_{\text{ads}} }}{\text{RT}} + {\text{constant}}$$
$$\Delta S_{\text{ads}} = \frac{{\Delta H_{\text{ads}} - \Delta G_{\text{ads}} }}{T}$$
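To make the use of these relations concrete, the short sketch below applies the ΔGads relation above, the Van't Hoff equation (Eq. 11) and Eq. 12 in sequence. The equilibrium constants are hypothetical placeholders of the same order of magnitude as Table 4, not the measured values.

```python
# Sketch: adsorption thermodynamics from equilibrium constants at several temperatures.
# K values below are hypothetical placeholders, not the reported data.
import numpy as np

R = 8.314                                    # J mol^-1 K^-1
T = np.array([303.0, 313.0, 323.0, 333.0])   # K (30-60 C)
K = np.array([2.5e3, 5.0e3, 9.0e3, 17.2e3])  # M^-1, hypothetical

# Delta G_ads from K = (1/55.5) exp(-dG/RT)  ->  dG = -RT ln(55.5 K)
dG = -R * T * np.log(55.5 * K) / 1000.0      # kJ mol^-1

# Van't Hoff (Eq. 11): ln K = -dH/(R T) + const  ->  slope of ln K vs 1/T gives -dH/R
slope, _ = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R / 1000.0                     # kJ mol^-1

# Eq. 12 at the average dG and average T
dS = (dH - dG.mean()) / T.mean()             # kJ mol^-1 K^-1
print(f"dG_ads = {dG.round(1)} kJ/mol, dH_ads = {dH:.1f} kJ/mol, dS_ads = {dS:.3f} kJ/(mol K)")
```

With these placeholder inputs the fit gives a negative ΔGads, a positive ΔHads and a positive ΔSads, reproducing the qualitative pattern discussed below.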
These results, shown in Table 4, appear to contrast with what is normally accepted for adsorption phenomena. It is well known that adsorption is exothermic, with a negative heat of adsorption accompanied by a reduction in the entropy of adsorption [27]. In aqueous solution, however, the adsorption of organic molecules is commonly accompanied by the desorption of water molecules; the adsorption of organic molecules at the metal–solution interface is a substitution adsorption process [28]. This means that each adsorbed ATT5 molecule displaces water molecules from the metal surface. The thermodynamic values of ΔSads are therefore the algebraic sum of the entropy changes for the adsorption of ATT5 molecules and the desorption of water molecules, and the increase in entropy is attributed to the increase in solvent entropy [29, 30]. Chaitra et al. [31] studied the effect of newly synthesized thiazole hydrazones on the corrosion of mild steel in 0.5 M hydrochloric acid; adsorption of the inhibitors followed the Langmuir isotherm, and the addition of the inhibitors decreased the corrosion rate.
Tezcan et al. [32] investigated a newly synthesized sulfur-containing Schiff base, 4-((thiophene-2-ylmethylene)amino)benzamide, and studied its inhibition performance on mild steel in 1.0 M HCl solution. The results showed a highest inhibitor efficiency of 96.8%.
Messali et al. [33] studied the inhibition effect and adsorption behavior of 4-((2,3-dichlorobenzylidene)amino)-3-methyl-1H-1,2,4-triazole-5(4H)-thione on mild steel in 1 M HCl solution. The inhibitor can be adsorbed onto the surface by both physical and chemical means and obeys the Langmuir adsorption isotherm.
Effect of temperature and activation parameters
As shown in Table 5, at a given experimental temperature the corrosion rate of steel decreases with an increase in ATT5 concentration. The kinetics of the ATT5 action can be understood by comparing the activation parameters in the presence and absence of the inhibitor. The activation energy (Ea), enthalpy of activation (ΔHa) and entropy of activation (ΔSa) for steel corrosion in both uninhibited and inhibited 0.5 M hydrochloric acid at different temperatures and inhibitor concentrations were evaluated from an Arrhenius-type plot (Eq. 13) and transition state theory (Eq. 14) [34]:
$${\text{CR}} = A\;\exp \left( { - \frac{{E_{\text{a}} }}{\text{RT}}} \right)$$
$${\text{CR}} = \frac{\text{RT}}{Nh}\;\exp \left( {\frac{{\Delta S_{\text{a}} }}{R}} \right)\exp \left( { - \frac{{\Delta H_{\text{a}} }}{\text{RT}}} \right)$$
where CR is the corrosion rate, A is the Arrhenius constant (frequency factor), R is the universal gas constant, h is Planck's constant and N is Avogadro's number. As shown in Fig. 8, plots of ln(CR) versus 1/T give straight lines with slopes of −Ea/R, and the intercepts can be used to evaluate A. Similarly, Fig. 9 shows linear plots of ln(CR/T) versus 1/T with slopes of −ΔHa/R, whose intercepts can be used to evaluate ΔSa. Table 5 lists the activation parameters for the steel corrosion reaction in acidic solution under different conditions. It is clearly seen that the activation energy and enthalpy vary in a similar way. The activation energy and activation enthalpy for the uninhibited acid were higher than for the inhibited one. This decrease in activation energy and enthalpy may appear anomalous; however, it can be attributed to an increase in metal surface coverage by the inhibitor molecules at higher temperatures, and it also suggests that the formation rate of the chemisorbed layer may be greater than its rate of dissolution at higher temperatures [35]. Other researchers [36] found that some anticorrosion materials in acidic solutions alter the kinetics of the corrosion reaction by providing alternative reaction paths with lower activation energies. Table 5 also shows that all values of the frequency factor in the presence of the inhibitor are lower than that of the uninhibited solution, which is beneficial for inhibiting the corrosion of steel; it is well known that an increase in A raises the corrosion rate of steel [37]. Furthermore, in all cases the values of Ea are higher than ΔHa by a value approximately equal to RT, which confirms the thermodynamic relation characterizing these reactions, given by the following equation [38]:
Table 5. Activation parameters for the steel corrosion reaction in uninhibited and inhibited 0.5 M HCl: C (M), A (gmd), Ea (kJ mol−1), ΔHa (kJ mol−1), ΔSa (J mol−1 K−1). Remaining entries: 7.3 × 1012, −7.98, 1 × 10−3, −155.22.
Arrhenius plots of steel in uninhibited and inhibited 0.5 M HCl
Transition-state plots of steel in uninhibited and inhibited 0.5 M HCl
$$E_{\text{a}} - \Delta H_{\text{a}} = {\text{RT}}$$
The negative value of ΔSa in both the absence and presence of the inhibitor indicates that the activated complex in the rate-determining step represents an association rather than a dissociation, meaning that a decrease in disorder takes place during the transition from the reactants to the activated complex [39].
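For illustration, the sketch below reproduces the Arrhenius (Eq. 13) and transition-state (Eq. 14) fits on hypothetical corrosion rates (not the Table 5 data) and checks the Ea − ΔHa ≈ RT relation of Eq. 15.

```python
# Sketch: activation parameters from corrosion rates at several temperatures.
# Rates (gmd) are hypothetical placeholders, not the measured values of Table 5.
import numpy as np

R, N_A, h = 8.314, 6.022e23, 6.626e-34        # J/(mol K), 1/mol, J s
T = np.array([303.0, 313.0, 323.0, 333.0])    # K
CR = np.array([200.0, 420.0, 830.0, 1550.0])  # gmd, hypothetical

# Arrhenius (Eq. 13): ln CR = ln A - Ea/(R T)
slope_a, intercept_a = np.polyfit(1.0 / T, np.log(CR), 1)
Ea = -slope_a * R / 1000.0                    # kJ mol^-1
A = np.exp(intercept_a)                       # frequency factor (gmd)

# Transition state (Eq. 14): ln(CR/T) = ln(R/(N h)) + dS/R - dH/(R T)
slope_t, intercept_t = np.polyfit(1.0 / T, np.log(CR / T), 1)
dHa = -slope_t * R / 1000.0                   # kJ mol^-1
dSa = R * (intercept_t - np.log(R / (N_A * h)))  # J mol^-1 K^-1

print(f"Ea = {Ea:.1f} kJ/mol, A = {A:.2e} gmd, dHa = {dHa:.1f} kJ/mol, dSa = {dSa:.1f} J/(mol K)")
print(f"Ea - dHa = {Ea - dHa:.2f} kJ/mol vs R*T_avg = {R * T.mean() / 1000.0:.2f} kJ/mol")
```

The printed Ea − ΔHa difference should come out close to RT at the average temperature, consistent with Eq. 15.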
Khan et al. [40] studied the inhibitory effect of two Schiff bases, 3-(5-methoxy-2-hydroxybenzylideneamino)-2-(5-methoxy-2-hydroxyphenyl)-2,3-dihydroquinazoline-4(1H)-one (MMDQ) and 3-(5-nitro-2-hydroxybenzylideneamino)-2-(5-nitro-2-hydroxyphenyl)-2,3-dihydroquinazoline-4(1H)-one (NNDQ), on the corrosion of mild steel in 1 M hydrochloric acid. The effect of temperature on the inhibition process in 1 M HCl with the addition of the inhibitors was investigated over a temperature range of 30–60 °C. The corrosion rate increased with a rise in temperature, and the efficiencies of the investigated inhibitors were strongly temperature dependent. The activation energy, enthalpy and entropy of activation were calculated. The results showed that the enthalpy of activation for solutions containing the inhibitors was lower than that in the inhibitor-free acid solution, which can be attributed to chemisorption on the mild steel surface. Similar results were obtained by Obaid et al. [41]. The lower values of activation energy in the presence of the inhibitors and the general increase in their inhibitor efficiencies with increasing temperature are indicative of chemisorption (interaction of unshared electron pairs in the adsorbed molecule with the metal) of these compounds on the steel surface.
Quantum chemical and theoretical calculations
Quantum chemical calculations have been widely used to investigate the reaction mechanism of the inhibition process [42]. They have also proved to be a very important tool for studying corrosion control mechanisms and for gaining insight into the inhibition mechanism of the ATT5 inhibitor. Using quantum chemical calculations, structural parameters such as the HOMO (highest occupied molecular orbital) and LUMO (lowest unoccupied molecular orbital) energies, the dipole moment (µ) and the fraction of electrons transferred (ΔN) were calculated. The structures of the inhibitors were optimized with ChemOffice version 14 software. Figure 10 shows the optimized structures and the HOMO and LUMO distributions of all synthesized inhibitors. The calculated quantum chemical properties are summarized in Table 6. As shown in Fig. 10, both the HOMO and LUMO distributions of the synthesized inhibitors were concentrated mainly over the sulfur and nitrogen atoms. ELUMO and EHOMO characterize the electron-accepting and electron-donating capabilities of the synthesized inhibitors. In general, a low ELUMO implies that an inhibitor tends to accept electrons, while a high EHOMO indicates strong electron-donating ability [43]. The energy gap (ΔE) specifies the chemical stability of the inhibitors, and a lower energy gap value typically leads to higher adsorption on the metal surface, resulting in greater inhibition efficiency [44]. The order of inhibition efficiency was ATT5 > ATT1 > ATT2 > ATT3 > ATT6 > ATT4, while the order of energy gap was ATT5 > ATT4 > ATT3 > ATT2 > ATT6 > ATT1. The differences between the two orders may be attributed to the closeness of the inhibition efficiencies: as seen in Table 2, the inhibitor efficiencies were 60.43, 65.02, 67.97 and 69.17% for ATT6, ATT3, ATT2 and ATT1, respectively, which is a very narrow range. Nevertheless, ATT5 has the lowest energy gap, which confirms the experimental work. The number of transferred electrons (ΔN) was also calculated according to Eq. 16 [45].
Optimized chemical structures of six inhibitors and HOMO–LUMO distribution
Table 6. Quantum chemical parameters for the inhibitors: EHOMO (eV), ELUMO (eV), ΔE (eV), ΔN, dipole moment (debye). Remaining entry: −7.654.
$$\Delta N = \frac{{X_{\text{Fe}} - X_{\text{inh}} }}{{ 2 (\eta_{\text{Fe}} + \eta_{\text{inh}} )}}$$
where XFe and Xinh denote the absolute electronegativity of iron and of the inhibitor molecule, respectively, and ηFe and ηinh denote the absolute hardness of iron and of the inhibitor molecule, respectively. These quantities are related to the electron affinity (A) and ionization potential (I), which are in turn related to EHOMO and ELUMO:
$$X = \frac{I + A}{2} = \frac{ - \left( {E_{\text{HOMO}} + E_{\text{LUMO}} } \right)}{2},\qquad \eta = \frac{I - A}{2} = \frac{ - \left( {E_{\text{HOMO}} - E_{\text{LUMO}} } \right)}{2}$$
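As a worked illustration of Eq. 16 and the two relations above, the following sketch computes X, η, ΔE and ΔN from assumed frontier orbital energies, taking XFe = 7 and ηFe = 0 as quoted below from Pearson's scale; the EHOMO/ELUMO values used are placeholders, not the Table 6 results.

```python
# Sketch: global reactivity descriptors from frontier orbital energies (eV).
# E_HOMO / E_LUMO values are hypothetical placeholders, not the reported results.
def descriptors(e_homo, e_lumo, x_fe=7.0, eta_fe=0.0):
    """Return electronegativity X, hardness eta, gap dE and transferred-electron fraction dN."""
    X = -(e_homo + e_lumo) / 2.0          # absolute electronegativity
    eta = -(e_homo - e_lumo) / 2.0        # absolute hardness (= (E_LUMO - E_HOMO)/2)
    dE = e_lumo - e_homo                  # energy gap
    dN = (x_fe - X) / (2.0 * (eta_fe + eta))
    return X, eta, dE, dN

X, eta, dE, dN = descriptors(e_homo=-7.654, e_lumo=-1.20)   # E_LUMO is an assumed value
print(f"X = {X:.2f} eV, eta = {eta:.2f} eV, dE = {dE:.2f} eV, dN = {dN:.3f}")
```

With these placeholder energies ΔN comes out well below 3.6, which is the regime discussed next.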
Values of X and η were calculated using the values of I and A obtained from the quantum chemical calculations. The theoretical value of XFe is 7 according to Pearson's electronegativity scale, and ηFe is 0 eV mol−1 [46]. The fraction of electrons transferred from the inhibitor to the steel surface (ΔN) was calculated and is listed in Table 6. According to Lukovits [47], if ΔN < 3.6, the inhibition efficiency increases with increasing electron-donating ability at the steel surface. In this study, the synthesized inhibitors were the electron donors and the metal surface was the acceptor. This result supports the assertion that the adsorption of the inhibitors on the steel surface can occur on the basis of donor–acceptor interactions between the π electrons of the compound and the vacant d-orbitals of the metal surface. The dipole moment (µ) is also a significant factor, although there is a lack of agreement on the relation between µ and inhibitive performance. Some researchers found that a low µ value favors accumulation of the inhibitor on the metal surface and increases the inhibitor performance [48, 49], while other researchers suggested that a high value of the dipole moment, associated with dipole–dipole interaction between the inhibitor and the metal surface, can enhance adsorption on the metal surface and increase efficiency [50, 51]. In the present work, the value of µ for ATT5 was the lowest among all tested inhibitors, which agrees with the first opinion. The anodic oxidation behavior of steel in HCl acid can be explained by the following reactions [52]:
$${\text{Fe}} + {\text{Cl}}^{ - } \leftrightarrow \left( {{\text{FeCl}}^{ - } } \right)_{\text{ads}}$$
$$\left( {{\text{FeCl}}^{ - } } \right)_{\text{ads}} \leftrightarrow \left( {\text{FeCl}} \right)_{\text{ads}} + {\text{e}}^{ - }$$
$$\left( {\text{FeCl}} \right)_{\text{ads}} \to \left( {{\text{FeCl}}^{ + } } \right)_{\text{ads}} + {\text{e}}^{ - }$$
$$\left( {{\text{FeCl}}^{ + } } \right)_{\text{ads}} \to {\text{Fe}}^{ 2+ } + {\text{Cl}}^{ - }$$
The cathodic hydrogen evolution reaction can be written as:
$${\text{Fe}} + {\text{H}}^{ + } \leftrightarrow \left( {{\text{FeH}}^{ + } } \right)_{\text{ads}}$$
$$\left( {{\text{FeH}}^{ + } } \right)_{\text{ads}} + {\text{e}}^{ - } \to \left( {\text{FeH}} \right)_{\text{ads}}$$
$$\left( {\text{FeH}} \right)_{\text{ads}} + {\text{H}}^{ + } + {\text{e}}^{ - } \to {\text{Fe}} + {\text{H}}_{ 2}$$
According to the structures of the synthesized inhibitors, there are free electron pairs on N and S that are able to form σ-bonds with iron [53]. In addition, in acidic solution, electrostatic interaction is possible between the negatively charged iron surface, which may be brought about by specific adsorption of Cl− anions, and the positively charged inhibitor. The essential effect of the inhibitors is due to the presence of free electron pairs on the N and S atoms, π-electrons on the aromatic rings, the type of interaction with the steel surface, and the formation of metallic complexes. It is well known that steel has a coordination affinity toward N- and S-bearing ligands. Therefore, adsorption on the metal surface can be ascribed to coordination through heteroatoms and π-electrons of the aromatic rings [54]. In the present case of the synthesized inhibitors, there are unshared electron pairs on N and S able to form σ-bonds with steel.
The following points can be concluded from the present work:
The six inhibitors were synthesized and tested successfully as corrosion inhibitors for steel in acidic solution.
Experimental results show that the order of inhibition efficiency was ATT5 > ATT1 > ATT2 > ATT3 > ATT6 > ATT4.
The addition of ATT5 to the 0.5 M HCl solution at different temperatures and inhibitor concentrations reduces the corrosion of mild steel, with an inhibitor efficiency exceeding 95%.
The inhibitor efficiency of ATT5 increased with an increase in the inhibitor concentration. The high inhibition efficiency was attributed to the formation of a layer on the steel surface.
Adsorption follows the Langmuir adsorption isotherm with a high negative value of the adsorption free energy, which indicates the formation of a chemical layer on the metal surface.
The authors would like to thank the Department of Chemistry, College of Science, University of Diyala for continuous support and facilities.
References
1. Verma C, Olasunkanmi LO, Obot IB, Ebenso EE, Quraishi MA (2016) 2,4-Diamino-5-(phenylthio)-5H-chromeno[2,3-b]pyridine-3-carbonitriles as green and effective corrosion inhibitors: gravimetric, electrochemical, surface morphology and theoretical studies. RSC Adv 6:53933–53948
2. Sasikumar Y, Adekunle AS, Olasunkanmi LO, Bahadur Baskar IR, Kabanda MM, Obot IB, Ebenso EE (2015) Experimental, quantum chemical and Monte Carlo simulation studies on the corrosion inhibition of some alkyl imidazolium ionic liquids containing tetrafluoroborate anion on mild steel in acidic medium. J Mol Liq 211:105–118
3. Khadiri R, Bekkouche K, Aouniti A, Hammouti B, Benchat N, Bouachrine M, Solmaz R (2016) Gravimetric, electrochemical and quantum chemical studies of some pyridazine derivatives as corrosion inhibitors for mild steel in 1 M HCl solution. J Taiwan Inst Chem Eng 58:552–564
4. Khadom A, Yaro A, Altaie A, Kaduim A (2009) Electrochemical, activations and adsorption studies for the corrosion of low carbon steel in acidic media. Port Electrochim Acta 27:699–712
5. Musa A, Kaduim A, Abu Bakar M, Takriff M, Daud Abdul Razak, Kamarudin Siti Kartom (2010) On the inhibition of mild steel corrosion by 4-amino-5-phenyl-4H-1,2,4-triazole-3-thiol. Corros Sci 52:526–533
6. Khadom A, Musa A, Kaduim A, Abu Bakar M, Takriff M (2010) Adsorption kinetics of 4-amino-5-phenyl-4H-1,2,4-triazole-3-thiol on mild steel surface inhibitor. Port Electrochim Acta 28:221–230
7. Alaneme K, Daramola Y, Olusegun S, Afolabi A (2015) Corrosion inhibition and adsorption characteristics of rice husk extracts on mild steel immersed in 1 M H2SO4 and HCl solutions. Int J Electrochem Sci 10:3553–3567
8. Noor E, Al-Moubaraki A (2008) Thermodynamic study of metal corrosion and inhibitor adsorption processes in mild steel/1-methyl-4[4′(-X)-styryl pyridinium iodides/hydrochloric acid systems. Mater Chem Phys 110:145–154
9. Bentiss F, Lebrini M, Lagrenee M (2005) Thermodynamic characterization of metal dissolution and inhibitor adsorption processes in mild steel/2,5-bis(n-thienyl)-1,3,4-thiadiazoles/hydrochloric acid system. Corros Sci 47:2915–2931
10. Popova A, Christov M, Zwetanova A (2007) Effect of the molecular structure on the inhibitor properties of azoles on mild steel corrosion in 1 M hydrochloric acid. Corros Sci 49:2131–2143
11. Bhrara K, Kim H, Singh G (2008) Inhibiting effects of butyl triphenyl phosphonium bromide on corrosion of mild steel in 0.5 M sulphuric acid solution and its adsorption characteristics. Corros Sci 50:2747–2754
12. Sudheer M, Quraishi A (2014) 2-Amino-3,5-dicarbonitrile-6-thio-pyridines: new and effective corrosion inhibitors for mild steel in 1 M HCl. Ind Eng Chem Res 53:2851–2859
13. Musa A, Kadhum A, Mohamad A, Takriff M (2010) Experimental and theoretical study on the inhibition performance of triazole compounds for mild steel corrosion. Corros Sci 52:3331–3340
14. Gupta A, Prachand S, Patel A, Jain S (2012) Synthesis of some 4-amino-5-(substituted-phenyl)-4H-[1,2,4]triazole-3-thiol derivatives and antifungal activity. Int J Pharm Life Sci 3:1848–1857
15. Aly A, Brown A, El-Emary T, Ewas A, Ramadan M (2009) Hydrazinecarbothioamide group in the synthesis of heterocycles. Arkivoc I:150–197
16. Hadi Kadhim S, Qahtan Abd-Alla I, Jawad Hashim T (2017) Synthesis and characteristic study of Co(II), Ni(II) and Cu(II) complexes of new Schiff base derived from 4-amino antipyrine. Int J Chem Sci 15:107–115
17. Khadom A, Abd A, Ahmed N (2018) Xanthium strumarium leaves extracts as a friendly corrosion inhibitor of low carbon steel in hydrochloric acid: kinetics and mathematical studies. S Afr J Chem Eng 25:13–21
18. Khadom A, Abd A, Ahmed N (2018) Potassium iodide as a corrosion inhibitor of mild steel in hydrochloric acid: kinetics and mathematical studies. J Bio Tribo Corros 4(17):2–10
19. Hassan K, Khadom A, Kurshed N (2016) Citrus aurantium leaves extracts as a sustainable corrosion inhibitor of mild steel in sulfuric acid. S Afr J Chem Eng 22:1–5
20. Khadom A (2015) Kinetics and synergistic effect of halide ion and naphthylamin for inhibition of corrosion reaction of mild steel in hydrochloric acid. React Kinet Mech Catal 115:463–481
21. Khadom AA, Abod BM, Mahood HB, Kadhum AAH (2018) Galvanic corrosion of steel–brass couple in petroleum waste water in presence of a green corrosion inhibitor: electrochemical, kinetics, and mathematical view. J Fail Anal Preven 18:1300–1310
22. Yaro A, Khadom A, Ibraheem H (2011) Peach juice as an anti-corrosion inhibitor of mild steel. Anti-Corrosion Methods Mater 58(3):116–124
23. Khadom A, Yaro A, AlTaie A, Kadum A (2009) Electrochemical, activations and adsorption studies for the corrosion inhibition of low carbon steel in acidic media. Port Electrochim Acta 27:699–712
24. Noor E (2009) Evaluation of inhibitive action of some quaternary N-heterocyclic compounds on the corrosion of Al–Cu alloy in hydrochloric acid. Mater Chem Phys 114:533–541
25. Umoren S, Ebenso E (2007) The synergistic effect of polyacrylamide and iodide ions on the corrosion inhibition of mild steel in H2SO4. Mater Chem Phys 106:387–393
26. Obot I, Obi-Egbedi N (2010) An interesting and efficient green corrosion inhibitor for aluminium from extracts of Chlomolaena odorata L. in acidic solution. J Appl Electrochem 40:1977–1984
27. Amar H, Benzakour J, Derja A, Villemin D, Moreau B, Braisaz T (2006) Piperidin-1-yl-phosphonic acid and (4-phosphono-piperazin-1-yl) phosphonic acid: a new class of iron corrosion inhibitors in sodium chloride 3% media. Appl Surf Sci 252:6162–6166
28. Umoren S, Ebenso E (2007) The synergistic effect of polyacrylamide and iodide ions on the corrosion inhibition of mild steel in H2SO4. Mater Chem Phys 106:387–393
29. Li X, Deng SD, Fu H, Mu GN (2010) Synergistic inhibition effect of rare earth cerium(IV) ion and sodium oleate on the corrosion of cold rolled steel in phosphoric acid solution. Corros Sci 52:1167–1178
30. Fadhil Ahmed A, Khadom AA, Liu Hongfang, Chaoyang Fu, Wang Junlei, Fadhil Noor A, Mahood Hameed B (2019) (S)-6-Phenyl-2,3,5,6-tetrahydroimidazo[2,1-b]thiazole hydrochloride as corrosion inhibitor of steel in acidic solution: gravimetrical, electrochemical, surface morphology and theoretical simulation. J Mol Liq 276:503–518
31. Chaitra TK, Mohana KN, Gurudatt DM, Tandon HC (2016) Inhibition activity of new thiazole hydrazones towards mild steel corrosion in acid media by thermodynamic, electrochemical and quantum chemical methods. J Taiwan Inst Chem Eng 67:521–531
32. Tezcan F, Yerlikaya G, Mahmood A, Kardaş G (2018) A novel thiophene Schiff base as an efficient corrosion inhibitor for mild steel in 1.0 M HCl: electrochemical and quantum chemical studies. J Mol Liq 269:398–406
33. Messali M, Larouj M, Lgaz H, Rezki N, Al-Blewi FF, Aouad MR, Chaouiki A, Salghi R, Chung I (2018) A new Schiff base derivative as an effective corrosion inhibitor for mild steel in acidic media: experimental and computer simulations studies. J Mol Struct 1168:39–48
34. Musa A, Kadhum A, Mohamad A, Takriff M, Daud A, Kamarudin S (2010) On the inhibition of mild steel corrosion by 4-amino-5-phenyl-4H-1,2,4-triazole-3-thiol. Corros Sci 52:526–533
35. Ahmed SK, Ali WB, Khadom AA (2019) Synthesis and characterization of new triazole derivatives as corrosion inhibitors of carbon steel in acidic medium. J Bio Tribo-Corros 5:15
36. Khadom AA, Yaro AS (2011) Protection of low carbon steel in phosphoric acid by potassium iodide. Prot Metals Phys Chem Surf 47:662–669
37. Benabdellah M, Touzani R, Dafali A, Hammouti B, El Kadiri S (2007) Ruthenium–ligand complex, an efficient inhibitor of steel corrosion in H3PO4 media. Mater Lett 61:1197–1204
38. Noor EA, Al-Moubaraki AH (2008) Thermodynamic study of metal corrosion and inhibitor adsorption processes in mild steel/1-methyl-4[4′(-X)-styryl pyridinium iodides/hydrochloric acid systems. Mater Chem Phys 110:145–154
39. Ramesh Saliyan V, Adhikari V, Airody A (2007) Inhibition of corrosion of mild steel in acid media by N′-benzylidene-3-(quinolin-4-ylthio)propanohydrazide. Bull Mater Sci 31:699–711
40. Khan G, Basirun WJ, Kazi SN, Ahmed P, Magaji L, Ahmed SM, Khan GM, Rehman MA, Mohamad Badry AB (2017) Electrochemical investigation on the corrosion inhibition of mild steel by quinazoline Schiff base compounds in hydrochloric acid solution. J Colloid Interface Sci 502:134–145
41. Obaid AY, Ganash AA, Qusti Shabaan AH, Elroby A, Hermas AA (2017) Corrosion inhibition of type 430 stainless steel in an acidic solution using a synthesized tetra-pyridinium ring-containing compound. Arab J Chem 10(Suppl 1):S1276–S1283
42. Guo L, Zhu S, Zhang S, He Q, Li W (2014) Theoretical studies of three triazole derivatives as corrosion inhibitors for mild steel in acidic medium. Corros Sci 87:366–375
43. Zhang D, Tang Y, Qi S, Dong D, Cang H, Lu G (2016) The inhibition performance of long-chain alkyl-substituted benzimidazole derivatives for corrosion of mild steel in HCl. Corros Sci 102:517–522
44. Hu K, Zhuang J, Ding J, Ma Z, Wang F, Zeng X (2017) Influence of biomacromolecule DNA corrosion inhibitor on carbon steel. Corros Sci 125:68–76
45. Mourya P, Singh P, Tewari A, Rastogi R, Singh M (2015) Relationship between structure and inhibition behavior of quinolinium salts for mild steel corrosion: experimental and theoretical approach. Corros Sci 95:71–87
46. Pearson RG (1988) Absolute electronegativity and hardness: application to inorganic chemistry. Inorg Chem 27:734–740
47. Lukovits I, Lalman E, Zucchi F (2001) Corrosion inhibitors: correlation between electronic structure and efficiency. Corrosion 57:3–8
48. Qiang Y, Guo L, Zhang S, Li W, Yu S, Tan J (2016) Synergistic effect of tartaric acid with 2,6-diaminopyridine on the corrosion inhibition of mild steel in 0.5 M HCl. Sci Rep 6:33305
49. Li LJ, Zhang XP, Lei JL, He JX, Zhang ST, Pan FS (2012) Adsorption and corrosion inhibition of Osmanthus fragran leaves extract on carbon steel. Corros Sci 63:82–90
50. Yüce A, Doğru Mert B, Kardaş G, Yazıcı B (2014) Electrochemical and quantum chemical studies of 2-amino-4-methyl-thiazole as corrosion inhibitor for mild steel in HCl solution. Corros Sci 83:310–316
51. Zheng X, Zhang S, Li W, Gong M, Yin L (2015) Experimental and theoretical studies of two imidazolium-based ionic liquids as inhibitors for mild steel in sulfuric acid solution. Corros Sci 95:168–179
52. Kumari P, Shetty P, Rao S (2017) Electrochemical measurements for the corrosion inhibition of mild steel in 1 M hydrochloric acid by using an aromatic hydrazide derivative. Arab J Chem 10:653–663
53. Rashid K, Khadom A (2018) Adsorption and kinetics behavior of kiwi juice as a friendly corrosion inhibitor of steel in acidic media. World J Eng 15(3):388–401
54. Ahmad I, Prasad R, Quraishi M (2010) Inhibition of mild steel corrosion in acid solution by pheniramine drug: experimental and theoretical study. Corros Sci 52:3033–3341
1. Department of Chemistry, College of Science, University of Diyala, Baquba City, Iraq
2. Department of Chemical Engineering, College of Engineering, University of Diyala, Baquba City, Iraq
Ahmed, S.K., Ali, W.B. & Khadom, A.A. Int J Ind Chem (2019) 10: 159. https://doi.org/10.1007/s40090-019-0181-8
Received 30 July 2018
Detection of aberrant DNA methylation patterns in sperm of male recurrent spontaneous abortion patients
Rong-Hua Ma, Zhen-Gang Zhang, Yong-Tian Zhang, Sheng-Yan Jian, Bin-Ye Li
Journal: Zygote , FirstView
Published online by Cambridge University Press: 09 January 2023, pp. 1-10
Aberrant DNA methylation patterns in sperm are a cause of embryonic failure and infertility, and could be a critical factor contributing to male recurrent spontaneous abortion (RSA). The purpose of this study was to reveal the potential effects of sperm DNA methylation levels in patients with male RSA. We compared sperm samples collected from fertile men and oligoasthenospermia patients. Differentially methylated sequences were identified by reduced representation bisulfite sequencing (RRBS) methods. The DNA methylation levels of the two groups were compared and qRT-PCR was used to validate the expression of genes showing differential methylation. The results indicated that no difference in base distribution was observed between the normal group and the patient group. However, the chromosome methylation in these two groups was markedly different. One site was located on chromosome 8 and measured 150 bp, while the other sites were on chromosomes 9, 10, and X and measured 135 bp, 68 bp, and 136 bp, respectively. In particular, two genes were found to be hypermethylated in these patients, one gene was DYDC2 (placed in the differential methylation region of chromosome 10), and the other gene was NXF3 (located on chromosome X). Expression levels of DYDC2 and NXF3 in the RSA group were significantly lower than those in the normal group (P < 0.05). Collectively, these results demonstrated that changes in DNA methylation might be related to male RSA. Our findings provide important information regarding the potential role of sperm DNA methylation in human development.
Continuous theta burst stimulation over the bilateral supplementary motor area in obsessive-compulsive disorder treatment: A clinical randomized single-blind sham-controlled trial
Qihui Guo, Kaifeng Wang, Huiqin Han, Puyu Li, Jiayue Cheng, Junjuan Zhu, Zhen Wang, Qing Fan
Journal: European Psychiatry / Volume 65 / Issue 1 / 2022
Published online by Cambridge University Press: 07 October 2022, e64
Obsessive-compulsive disorder (OCD) can cause substantial damage to quality of life. Continuous theta burst stimulation (cTBS) is a promising treatment for OCD patients with the advantages of safety and noninvasiveness.
The present study aimed to evaluate the treatment efficacy of cTBS over the bilateral supplementary motor area (SMA) for OCD patients with a single-blind, sham-controlled design.
Fifty-four OCD patients were randomized to receive active or sham cTBS treatment over the bilateral SMA for 4 weeks (five sessions per week, 20 sessions in total). Patients were assessed at baseline (week 0), the end of treatment (week 4), and follow-up (week 8). Clinical scales included the YBOCS, HAMD24, HAMA14, and OBQ44. Three behavioral tests were also conducted to explore the effect of cTBS on response inhibition and decision-making in OCD patients.
The treatment response rates were not significantly different between the two groups at week 4 (active: 23.1% vs. sham: 16.7%, p = 0.571) and week 8 (active: 26.9% vs. sham: 16.7%, p = 0.382). Depression and anxiety improvements were significantly different between the two groups at week 4 (HAMD24: F = 4.644, p = 0.037; HAMA14: F = 5.219, p = 0.028). There was no significant difference between the two groups in the performance of three behavioral tests. The treatment satisfaction and dropout rates were not significantly different between the two groups.
The treatment of cTBS over the bilateral SMA was safe and tolerable, and it could significantly improve the depression and anxiety of OCD patients but was not enough to improve OCD symptoms in this study.
Telling Our Own Story: A Bibliometrics Analysis of Mainland China's Influence on Chinese Politics Research, 2001–2020
Hui-Zhen Fu, Li Shao
Journal: PS: Political Science & Politics / Volume 56 / Issue 1 / January 2023
Print publication: January 2023
This study conducted a bibliometric analysis of Chinese politics research from 2001 to 2020 (N = 11,285) using Social Sciences Citation Index data. The number of publications in the field by scholars from Mainland China surged in the past 20 years; however, their influence on academia remained limited. Chinese institutions serve as the major hubs of collaborative networks. Using structural topic models, we identified 25 research topics that can be categorized in three clusters. In the past 20 years, scholars from Mainland China steered the focus of Chinese politics by causing a reduction in the proportion of international relation topics and an increase in the proportion of political economy topics. Domestic politics topics had the most citations. Scholars from Mainland China have made contributions to better research methods in the field. This article is a comprehensive view of Chinese politics research using a tool that is rarely used by political scientists. It depicts how studies of Chinese politics influence academia from a bibliometrics perspective.
A flexible wearable e-skin sensing system for robotic teleoperation
Chuanyu Zhong, Shumi Zhao, Yang Liu, Zhijun Li, Zhen Kan, Ying Feng
Journal: Robotica , First View
Published online by Cambridge University Press: 16 September 2022, pp. 1-14
Electronic skin (e-skin) is playing an increasingly important role in health detection, robotic teleoperation, and human-machine interaction, but most e-skins currently lack the integration of on-site signal acquisition and transmission modules. In this paper, we develop a novel flexible wearable e-skin sensing system with 11 sensing channels for robotic teleoperation. The designed sensing system is mainly composed of three components: e-skin sensor, customized flexible printed circuit (FPC), and human-machine interface. The e-skin sensor has 10 stretchable resistors distributed at the proximal and metacarpal joints of each finger respectively and 1 stretchable resistor distributed at the purlicue. The e-skin sensor can be attached to the opisthenar, and thanks to its stretchability, the sensor can detect the bent angle of the finger. The customized FPC, with WiFi module, wirelessly transmits the signal to the terminal device with human-machine interface, and we design a graphical user interface based on the Qt framework for real-time signal acquisition, storage, and display. Based on this developed e-skin system and self-developed robotic multi-fingered hand, we conduct gesture recognition and robotic multi-fingered teleoperation experiments using deep learning techniques and obtain a recognition accuracy of 91.22%. The results demonstrate that the developed e-skin sensing system has great potential in human-machine interaction.
Polar Nano-Domains in Barium Hexaferrite Revealed with Multislice Electron Ptychography
Harikrishnan K. P., Yilin Evan Li, Yu-Tsun Shao, Zhen Chen, Jiaqiang Yan, Christo Guguschev, Darrell G. Schlom, David A. Muller
Journal: Microscopy and Microanalysis / Volume 28 / Issue S1 / August 2022
Print publication: August 2022
Detrital zircon geochronology of the Permian Lower Shihezi Formation, northern Ordos Basin, China: time constraints for closing of the Palaeo-Asian Ocean
Rong Chen, Feng Wang, Zhen Li, Noreen J Evans, Hongde Chen
Journal: Geological Magazine / Volume 159 / Issue 9 / September 2022
Published online by Cambridge University Press: 11 July 2022, pp. 1601-1620
Temporal constraints on the closure of the eastern segment of the Palaeo-Asian Ocean along the northern margin of the North China Craton (NCC) remain unclear. As a part of the NCC, the sedimentation and tectonic evolution of the Late Palaeozoic Ordos Basin were closely related to the opening and closing of the Palaeo-Asian Ocean. We use petrology, quantitative mineralogical analysis, U–Pb geochronology and trace element signatures of detrital zircons of the Lower Shihezi Formation from two sections in the eastern north Ordos Basin and two sections in the western north Ordos Basin to reconstruct the sedimentary provenance and tectonic background of the northern Ordos Basin. The results show that the sediments of the western sections were mainly derived from the Yinshan orogenic belt and Alxa block, and that those in the eastern sections only came from the Yinshan orogenic belt. The trace element ratios in detrital zircons from the Late Palaeozoic sandstones indicate that the source areas were mainly subduction-related continental arcs, closely related to the continued subduction of the Palaeo-Asian Ocean in the Late Palaeozoic. Since the main Late Palaeozoic magmatic periods vary on the east and west sides of the northern margin of the Ordos Basin, two main collisions related to Palaeo-Asian Ocean closure are recorded. The collision on the west side occurred significantly earlier than that in the east. This study implies that the Palaeo-Asian Ocean began to subduct beneath the NCC in the Carboniferous and gradually closed from west to east thereafter.
Dual-channel LIDAR searching, positioning, tracking and landing system for rotorcraft from ships at sea
Tao Zeng, Hua Wang, Xiucong Sun, Hui Li, Zhen Lu, Feifei Tong, Hao Cheng, Canlun Zheng, Mengying Zhang
Journal: The Journal of Navigation / Volume 75 / Issue 4 / July 2022
Print publication: July 2022
To address the shortcomings of existing methods for rotorcraft searching, positioning, tracking and landing on a ship at sea, a dual-channel LIDAR searching, positioning, tracking and landing system (DCLSPTLS) is proposed in this paper, which utilises the multi-pulse laser echoes accumulation method and the physical phenomenon that the laser reflectivity of the ship deck in the near-infrared band is four orders of magnitude higher than that of the sea surface. The DCLSPTLS searching and positioning model, tracking model and landing model are established, respectively. The searching and positioning model can provide estimates of the azimuth angle, the distance of the ship relative to the rotorcraft and the ship's course. With the above parameters as inputs, the total tracking time and the direction of the rotorcraft tracking speed can be obtained by using the tracking model. The landing model can calculate the pitch and the roll angles of the ship's deck relative to the rotorcraft by using the least squares method and the laser irradiation coordinates. The simulation shows that the DCLSPTLS can realise the functions of rotorcraft searching, positioning, tracking and landing by using the above parameters. To verify the effectiveness of the DCLSPTLS, a functional test is performed using a rotorcraft and a model ship on a lake. The test results are consistent with the results of the simulation.
An Audit on the Monitoring and Management of Hyperprolactinemia in Inpatient Adults on Regular Antipsychotics in 8 Acute Wards in an NHS Mental Health Trust
Peter Zhang, Zhen Dong Li
Journal: BJPsych Open / Volume 8 / Issue S1 / June 2022
Published online by Cambridge University Press: 20 June 2022, p. S13
The aim of the audit is to ascertain how well hyperprolactinemia is being monitored and managed across all acute adult inpatient wards in a mental health NHS Trust, for patients on regular antipsychotic mediation. The objectives of the audit are to: 1) Assess whether prolactin is being monitored according to local guidelines for patients on regular antipsychotic medication, 2) Determine whether hyperprolactinemia is being identified and managed according to appropriate guidelines, 3) Assess the standard of documentation around the decisions made.
Data were collected retrospectively from the electronic notes and records for 78 patients, who were discharged from 8 acute wards in February 2021. For checking prolactin test results, the relevant reporting systems were accessed. Two data collection forms were used, which separated patients between those already taking antipsychotics prior to their admission and those who were newly initiated on an antipsychotic.
Patients who were prescribed at least one regular antipsychotic were included. Patients with pre-existing medical conditions that cause hyperprolactinaemia, ongoing pregnancy or breastfeeding were excluded. The monitoring for and management of hyperprolactinaemia was assessed against NICE and local guidance.
From the reviewed data, 41 patients were prescribed at least one regular antipsychotic drug during their admission. 32 patients were already established on an antipsychotic prior to their admission and 9 individuals were started on their first antipsychotic. Hyperprolactinaemia was identified in 9 patients. 19 patients had no prolactin assay performed during their whole admission.
44.4% of antipsychotic naïve patients had a baseline prolactin level taken prior to starting an antipsychotic. 9.1% of patients with hyperprolactinaemia had their symptoms assessed by a clinician. 27.3% of patients with hyperprolactinaemia had actions discussed and undertaken to address this.
This audit identified that patients are at risk of suffering from hyperprolactinaemia and are being insufficiently monitored. The symptoms of hyperprolactinaemia are not adequately screened or assessed for. This may increase the side effect burden and decrease medication adherence among patients. There is a need to increase the awareness among clinicians about the importance of regular prolactin monitoring to improve patient outcomes.
Theory and simulation of electrokinetic fluctuations in electrolyte solutions at the mesoscale
Mingge Deng, Faisal Tushar, Luis Bravo, Anindya Ghoshal, George Karniadakis, Zhen Li
Journal: Journal of Fluid Mechanics / Volume 942 / 10 July 2022
Published online by Cambridge University Press: 24 May 2022, A29
Electrolyte solutions play an important role in energy storage devices, whose performance relies heavily on the electrokinetic processes at sub-micron scales. Although fluctuations and stochastic features become more critical at small scales, the long-range Coulomb interactions pose a particular challenge for both theoretical analysis and simulation of fluid systems with fluctuating hydrodynamic and electrostatic interactions. Here, we present a theoretical framework based on the Landau–Lifshitz theory to derive closed-form expressions for fluctuation correlations in electrolyte solutions, indicating significantly different decorrelation processes of ionic concentration fluctuations from hydrodynamic fluctuations, which provides insights for understanding transport phenomena of coupled fluctuating hydrodynamics and electrokinetics. Furthermore, we simulate fluctuating electrokinetic systems using both molecular dynamics (MD) with explicit ions and mesoscopic charged dissipative particle dynamics (cDPD) with semi-implicit ions, from which we identify that the spatial probability density functions of local charge density follow a gamma distribution at sub-nanometre scale (i.e. $0.3\,{\rm nm}$) and converge to a Gaussian distribution above nanometre scales (i.e. $1.55\,{\rm nm}$), indicating the existence of a lower limit of length scale for mesoscale models using Gaussian fluctuations. The temporal correlation functions of both hydrodynamic and electrokinetic fluctuations are computed from all-atom MD and mesoscale cDPD simulations, showing good agreement with the theoretical predictions based on the linearized fluctuating hydrodynamics theory.
Global, regional and national burden of autism spectrum disorder from 1990 to 2019: results from the Global Burden of Disease Study 2019
Zhen Li, Lejin Yang, Hui Chen, Yuan Fang, Tongchao Zhang, Xiaolin Yin, Jinyu Man, Xiaorong Yang, Ming Lu
Journal: Epidemiology and Psychiatric Sciences / Volume 31 / 2022
Published online by Cambridge University Press: 10 May 2022, e33
Autism spectrum disorder (ASD) is a neurodevelopmental condition, with symptoms appearing in the early developmental period. Little is known about its current burden at the global, regional and national levels. This systematic analysis aims to summarise the latest magnitudes and temporal trends of ASD burden, which is essential to facilitate more detailed development of prevention and intervention strategies.
The data on ASD incidence, prevalence, disability-adjusted life years (DALYs) in 204 countries and territories between 1990 and 2019 came from the Global Burden of Disease Study 2019. The average annual percentage change was calculated to quantify the secular trends in age-standardised rates (ASRs) of ASD burden by region, sex and age.
In 2019, there were an estimated 60.38 × 10⁴ [95% uncertainty interval (UI) 50.17–72.01] incident cases of ASD, 283.25 × 10⁵ (95% UI 235.01–338.11) prevalent cases and 43.07 × 10⁵ (95% UI 28.22–62.32) DALYs globally. The ASR of incidence slightly increased by around 0.06% annually over the past three decades, while the ASRs of prevalence and DALYs both remained stable over the same period. In 2019, the highest burden of ASD was observed in high-income regions, especially in high-income North America, high-income Asia Pacific and Western Europe, where a significant growth in ASRs was also observed. The ASR of ASD burden in males was around three times that of females, but the gender difference has narrowed with the pronounced increase among females. Of note, among the population aged over 65 years, the burden of ASD presented increasing trends globally.
The global burden of ASD continues to increase and remains a major mental health concern. These substantial heterogeneities in ASD burden worldwide highlight the need for making suitable mental-related policies and providing special social and health services.
Dissecting the association between psychiatric disorders and neurological proteins: a genetic correlation and two-sample bidirectional Mendelian randomization study
Huimei Huang, Shiqiang Cheng, Chun'e Li, Bolun Cheng, Li Liu, Xuena Yang, Peilin Meng, Yao Yao, Chuyu Pan, Jingxi Zhang, Huijie Zhang, Yujing Chen, Zhen Zhang, Yan Wen, Yumeng Jia, Feng Zhang
Journal: Acta Neuropsychiatrica / Volume 34 / Issue 6 / December 2022
The role of neurological proteins in the development of bipolar disorder (BD) and schizophrenia (SCZ) remains elusive now. The current study aims to explore the potential genetic correlations of plasma neurological proteins with BD and SCZ.
By using the latest genome-wide association study (GWAS) summary data of BD and SCZ (including 41,917 BD cases, 11,260 SCZ cases, and 396,091 controls) derived from the Psychiatric GWAS Consortium website (PGC) and a recently released GWAS of neurological proteins (including 750 individuals), we performed a linkage disequilibrium score regression (LDSC) analysis to detect the potential genetic correlations between the two common psychiatric disorders and each of the 92 neurological proteins. Two-sample Mendelian randomisation (MR) analysis was then applied to assess the bidirectional causal relationship between the neurological proteins identified by LDSC, BD and SCZ.
LDSC analysis identified one neurological protein, NEP, which shows suggestive genetic correlation signals for both BD (coefficient = −0.165, p value = 0.035) and SCZ (coefficient = −0.235, p value = 0.020). However, those associations did not remain significant after strict Bonferroni correction. Two-sample MR analysis found an association between the genetically predicted level of NEP protein and both BD (odds ratio [OR] = 0.87, p value = 1.61 × 10−6) and SCZ (OR = 0.90, p value = 4.04 × 10−6). However, in the opposite direction, there was no genetically predicted association between BD, SCZ and NEP protein level.
This study provided novel clues for understanding the genetic effects of neurological proteins on BD and SCZ.
Dose–response efficacy of mulberry fruit extract for reducing post-prandial blood glucose and insulin responses: randomised trial evidence in healthy adults
David J Mela, Xiu-Zhen Cao, Santhosh Govindaiah, Harry Hiemstra, Ramitha Kalathil, Li Lin, Joshi Manoj, Tingyan Mi, Carole Verhoeven
Journal: British Journal of Nutrition , First View
Published online by Cambridge University Press: 11 March 2022, pp. 1-8
Extracts of mulberry have been shown to reduce post-prandial glucose (PPG) and insulin (PPI) responses, but reliability of these effects and required doses and specifications are unclear. We previously found that 1·5 g of a specified mulberry fruit extract (MFE) significantly reduced PPG and PPI responses to 50 g carbohydrate as rice porridge, with no indications of intolerance. The trials reported here aimed to replicate that work and assess the efficacy of lower MFE doses, using boiled rice as the carbohydrate source. Two separate randomised controlled intervention studies were carried out with healthy Indian males and females aged 20–50 years (n 84 per trial), with PPG area under the curve over 2 h as the primary outcome. Trial 1 used doses of 0, 0·37, 0·75, 1·12 and 1·5 g MFE in boiled rice and 0 or 1·5 g MFE in rice porridge. Trial 2 used doses of 0, 0·04, 0·12, 0·37 g MFE in boiled rice. In trial 1, relative to control, all MFE doses significantly decreased PPG (–27·2 to −22·9 %; all P ≤ 0·02) and PPI (–34·6 to −14·0 %, all P < 0·01). Breath hydrogen was significantly increased only at 1·5 g MFE (in rice porridge), and self-reported gastrointestinal symptoms were uniformly low. In trial 2, only 0·37 g MFE significantly affected PPG (–20·4 %, P = 0·002) and PPI (–17·0 %, P < 0·001). Together, these trials show that MFE in doses as low as 0·37 g can reliably reduce PPG and PPI responses to a carbohydrate-rich meal, with no apparent adverse effects.
Research on navigation risk of the Arctic Northeast Passage based on POLARIS
Lei An, Long Ma, Hui Wang, Heng-Yu Zhang, Zhen-Hua Li
Journal: The Journal of Navigation / Volume 75 / Issue 2 / March 2022
Published online by Cambridge University Press: 10 February 2022, pp. 455-475
Print publication: March 2022
The complex sea ice conditions in Arctic waters have different impacts on the legs of the Arctic passage, and ships of specific ice classes face different navigation risks. Therefore, quantitative analysis of the navigation risks faced in different legs has important practical significance. Based on the POLARIS methodology introduced by IMO, sea ice condition data from 2011 to 2020 were used to quantify the navigation risk of the Arctic Northeast Passage. The risk index outcome (RIO) values of the Arctic Northeast Passage were calculated. The navigable windows of the route for ice class 1A ships sailing independently under different sea ice conditions in the last decade were determined, with a navigable period of 91 days under normal sea ice conditions, approximately 175 days under light sea ice conditions and only week 40 being close to navigable under severe sea ice conditions. Three critical waters affecting the safety of ships were identified. Combined with the navigable windows and critical waters, recommendations on ship navigation and handling and recommendations for stakeholders were given. The method and results provide reference and support for the assessment of the navigation risk of ships in the Northeast Passage and for safe navigation and operations, and they satisfy the needs of relevant countries and enterprises to rationally arrange shipment dates and sailing plans based on the ice classes of their ships.
Counter-flow orbiting of the vortex centre in turbulent thermal convection
Yi-Zhen Li, Xin Chen, Ao Xu, Heng-Dong Xi
Journal: Journal of Fluid Mechanics / Volume 935 / 25 March 2022
Published online by Cambridge University Press: 26 January 2022, A19
We present an experimental study of the large-scale vortex (or large-scale circulation, LSC) in turbulent Rayleigh–Bénard convection in a $\varGamma =\text {diameter}/\text {height}=2$ cylindrical cell. The working fluid is deionized water with Prandtl number ( $Pr$) around 5.7, and the Rayleigh number ( $Ra$) ranges from $7.64\times 10^7$ to $6.06\times 10^8$. We measured the velocity field in various vertical cross-sectional planes by using the planar particle image velocimetry technique. The velocity measurement in the LSC central plane shows that the flow is in the single-roll form, and the centre of the single-roll (vortex) does not always stay at the centre of the cell; instead, it orbits periodically in the direction opposite to the flow direction of the LSC, with its trajectory in the shape of an ellipse. The velocity measurements in the three vertical planes in parallel to the LSC central plane indicate that the flow is in the vortex tube form horizontally filling almost the whole cell, and the centre line of the vortex tube is consistent with the so-called 'jump rope' form proposed by a previous study that combined numerical simulation and local velocity measurements in the low $Pr$ case (Vogt et al., Proc. Natl Acad. Sci. USA, vol. 115, 2018, pp. 12674–12679). In addition, we found that the oscillation of the local velocity in $\varGamma =2$ cells originates from the periodical orbiting of the vortex centre. Our velocity measurements further indicate that the vortex centre orbiting is absent in $\varGamma =1$ cells, at least in the $Ra$ range of our experiments.
The potential impact of rising sea levels on China's coastal cultural heritage: a GIS risk assessment
Yuqi Li, Xin Jia, Zhen Liu, Luo Zhao, Pengfei Sheng, Michael J. Storozum
Journal: Antiquity / Volume 96 / Issue 386 / April 2022
Print publication: April 2022
Without rapid international action to curb greenhouse gas emissions, climate scientists have predicted catastrophic sea-level rise by 2100. Globally, archaeologists are documenting the effects of sea-level rise on coastal cultural heritage. Here, the authors model the impact of 1m, 2m and 5m sea-level rise on China's coastal archaeological sites using data from the Atlas of Chinese Cultural Relics and Shanghai City's Third National Survey of Cultural Relics. Although the resulting number of endangered sites is large, the authors argue that these represent only a fraction of those actually at risk, and they issue a call to mitigate the direct and indirect effects of rising sea levels.
GSDMD-mediated pyroptosis: a critical mechanism of diabetic nephropathy
Yi Zuo, Li Chen, Huiping Gu, Xiaoyun He, Zhen Ye, Zhao Wang, Qixiang Shao, Caiping Xue
Journal: Expert Reviews in Molecular Medicine / Volume 23 / 2021
Published online by Cambridge University Press: 27 December 2021, e23
Pyroptosis is a recently identified mechanism of programmed cell death related to Caspase-1 that triggers a series of inflammatory reactions by releasing several proinflammatory factors such as IL-1β and IL-18. The process is characterised by the rupture of cell membranes and the release of cell contents through the mediation of gasdermin (GSDM) proteins. GSDMD is an important member of the GSDM family and plays a critical role in the two pathways of pyroptosis. Diabetic nephropathy (DN) is a microvascular complication of diabetes and a major cause of end-stage renal disease. Recently, it was revealed that GSDMD-mediated pyroptosis plays an important role in the occurrence and development of DN. In this review, we focus on two types of kidney cells, tubular epithelial cells and renal podocytes, to illustrate the mechanism of pyroptosis in DN and provide new ideas for the prevention, early diagnosis and molecular therapy of DN.
Students' perceptions of school sugar-free, food and exercise environments enhance healthy eating and physical activity
Chieh-Hsing Liu, Fong-Ching Chang, Yu-Zhen Niu, Li-Ling Liao, Yen-Jung Chang, Yung Liao, Shu-Fang Shih
Journal: Public Health Nutrition / Volume 25 / Issue 7 / July 2022
Published online by Cambridge University Press: 22 December 2021, pp. 1762-1770
The objective of this study was to examine the relationships between students' perceptions of their school policies and environments (i.e. sugar-sweetened beverages (SSB) free policy, plain water drinking, vegetables and fruit eating campaign, outdoor physical activity initiative, and the SH150 programme (exercise 150 min/week at school)) and their dietary behaviours and physical activity.
Design: Cross-sectional study.
Setting: Primary, middle and high schools in Taiwan.
Participants: A nationally representative sample of 2433 primary school (5th–6th grade) students, 3212 middle school students and 2829 high school students completed the online survey in 2018.
Results: Multivariate analysis results showed that after controlling for school level, gender and age, the students' perceptions of school sugar-free policies were negatively associated with the consumption of SSB and positively associated with consumption of plain water. Schools' campaigns promoting the eating of vegetables and fruit were positively associated with students' consumption of vegetables. In addition, schools' initiatives promoting outdoor physical activity and the SH150 programme were positively associated with students' engagement in outdoor physical activities and daily moderate-to-vigorous physical activity.
Conclusions: Students' perceptions of healthy school policies and environments promote healthy eating and an increase in physical activity for students.
Damped shape oscillations of a viscous compound droplet suspended in a viscous host fluid
Fang Li, Xie-Yuan Yin, Xie-Zhen Yin
Journal: Journal of Fluid Mechanics / Volume 931 / 25 January 2022
Published online by Cambridge University Press: 01 December 2021, A33
A study of small-amplitude shape oscillations of a viscous compound droplet suspended in a viscous host fluid is performed. A generalized eigenvalue problem is formulated and is solved by using the spectral method. The effects of the relevant non-dimensional parameters are examined for three cases, i.e. a liquid shell in a vacuum and a compound droplet in a vacuum or in a host fluid. The fundamental mode $l=2$ is found to be dominant. There exist two oscillatory modes: the in phase and the out of phase. In most situations, the interfaces oscillate in phase rather than out of phase. For the in-phase mode, in the absence of the host, as the viscosity of the core or the shell increases, the damping rate increases whereas the oscillation frequency decreases; when the viscosity exceeds a critical value, the mode becomes aperiodic with the damping rate bifurcating into two branches. In addition, when the tension of the inner interface becomes smaller than some value, the in-phase mode turns aperiodic. In the presence of the unbounded host fluid, there exists a continuous spectrum. The viscosity of the host may decrease or increase the damping rate of the in-phase mode. The mechanism behind it is discussed. The density contrasts between fluids affect oscillations of the droplet in a complicated way. Particularly, sufficiently large densities of the core or the host lead to the disappearance of the out-of-phase mode. The thin shell approximation predicts well the oscillation of the compound droplet when the shell is thin.
Geochronology, geochemistry and tectonic implications of early Carboniferous plutons in the southwestern Alxa Block
Zeng-Zhen Wang, Xuan-Hua Chen, Zhao-Gang Shao, Bing Li, Hong-Xu Chen, Wei-Cui Ding, Yao-Yao Zhang, Yong-Chao Wang
Journal: Geological Magazine / Volume 159 / Issue 3 / March 2022
Published online by Cambridge University Press: 12 November 2021, pp. 372-388
The southeastern Central Asian Orogenic Belt (CAOB) records the assembly process between several micro-continental blocks and the North China Craton (NCC), with the consumption of the Paleo-Asian Ocean (PAO), but whether the S-wards subduction of the PAO beneath the northern NCC was ongoing during Carboniferous–Permian time is still being debated. A key issue to resolve this controversy is whether the Carboniferous magmatism in the northern NCC was continental arc magmatism. The Alxa Block is the western segment of the northern NCC and contiguous to the southeastern CAOB, and their Carboniferous–Permian magmatism could have occurred in similar tectonic settings. In this contribution, new zircon U–Pb ages, elemental geochemistry and Sr–Nd isotopic analyses are presented for three early Carboniferous granitic plutons in the southwestern Alxa Block. Two newly identified aluminous A-type granites, an alkali-feldspar granite (331.6 ± 1.6 Ma) and a monzogranite (331.8 ± 1.7 Ma), exhibit juvenile and radiogenic Sr–Nd isotopic features, respectively. Although a granodiorite (326.2 ± 6.6 Ma) is characterized by high Sr/Y ratios (97.4–139.9), which is generally treated as an adakitic feature, this sample has highly radiogenic Sr–Nd isotopes and displays significantly higher K2O/Na2O ratios than typical adakites. These three granites were probably derived from the partial melting of Precambrian continental crustal sources heated by upwelling asthenosphere in a lithospheric extensional setting. Regionally, both the Alxa Block and the southeastern CAOB are characterized by the formation of early Carboniferous extension-related magmatic rocks but lack coeval sedimentary deposits, suggesting a uniform lithospheric extensional setting rather than a simple continental arc.
A seamless multiscale operator neural network for inferring bubble dynamics
Chensen Lin, Martin Maxey, Zhen Li, George Em Karniadakis
Journal: Journal of Fluid Mechanics / Volume 929 / 25 December 2021
Published online by Cambridge University Press: 21 October 2021, A18
Modelling multiscale systems from nanoscale to macroscale requires the use of atomistic and continuum methods and, correspondingly, different computer codes. Here, we develop a seamless method based on DeepONet, which is a composite deep neural network (a branch and a trunk network) for regressing operators. In particular, we consider bubble growth dynamics, and we model tiny bubbles of initial size from 100 nm to 10 $\mathrm {\mu }\textrm {m}$, modelled by the Rayleigh–Plesset equation in the continuum regime above 1 $\mathrm {\mu }\textrm {m}$ and the dissipative particle dynamics method for bubbles below 1 $\mathrm {\mu }\textrm {m}$ in the atomistic regime. After an offline training based on data from both regimes, DeepONet can make accurate predictions of bubble growth on-the-fly (within a fraction of a second) across four orders of magnitude difference in spatial scales and two orders of magnitude in temporal scales. The framework of DeepONet is general and can be used for unifying physical models of different scales in diverse multiscale applications.
Do Maxwell's equations imply that still charges produce electrostatic fields and no magnetic fields?
Suppose we have a charge distribution $\rho$, whose current density is J$=\vec{0}$ everywhere; the continuity equation implies $\frac{\partial \rho}{\partial t}=0$, i.e., the charges don't move and the density is always the same. We'd expect such a distribution to produce a static electric field and no magnetic field. If we plug our variables into Maxwell's equations we get $$\nabla\cdot \textbf{E}=4\pi \rho $$ $$ \nabla \cdot \textbf{B}=0$$ $$\nabla \times \textbf E= -\frac{1}{c}\frac{\partial \textbf{B}}{\partial t}$$ $$\nabla \times \textbf B= \frac{1}{c}\frac{\partial \textbf{E}}{\partial t}$$
But how does one go from this to $\bf B=0$ and $\frac{\partial \bf E}{\partial t}=0$?
electromagnetism electrostatics
psmears
Nicol
A system with no moving charges is consistent with there being only a static electric field and no magnetic field. However, it does not require there to be no time dependent phenomena. The general solution in this case consists of an electrostatic field, plus freely propagating electromagnetic waves.
You can see the consistency of the static fields by setting the time derivatives in your equations to zero. Then there is only a static divergence source for $\vec{E}$, meaning just an electrostatic field. However, it is possible to add the time-dependent fields of one or more propagating waves on top of that.
In general, any system of differential equations is going to have its solutions determined by the equations themselves, along with the boundary conditions. It's the (space and time) boundary conditions in this case that determine whether there are also freely propagating electromagnetic waves present.
$\begingroup$ That seems weird to me. If, given $\rho$ and J Maxwell's equations have multiple solutions, and thus multiple fields, then which physical parameter dictates which field the charges actually produce? $\endgroup$ – Nicol Dec 16 '17 at 18:38
$\begingroup$ Yes this is important. In many differential equations there are more degrees of freedom than just one solution. $\endgroup$ – mathreadler Dec 17 '17 at 9:19
The OP is asking whether Maxwell's equations together with the continuity equation and the condition that $\textbf J =0$ is enough to obtain $\textbf B=0$ and $\frac{\partial \textbf E}{\partial t}=0$.
The answer for the first part is negative. It's easy to see that Maxwell's equations alone do not precisely determine the electric and magnetic field. Adding a constant field to a solution will again give a solution: $$\textbf E \to \textbf E + \textbf E_0\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\textbf B \to \textbf B + \textbf B_0$$ This corresponds to the physical possibility of a "background field" which cannot be determined solely from the equations themselves.
The answer to the second question is also negative. Take $\textbf E = (xt,-yt+3y,0)$. Then
$$\nabla \cdot \textbf E = 3 \qquad \text{but} \qquad \frac{\partial \textbf E}{\partial t}=(x,-y,0)\neq 0 $$ This solves Maxwell's equations, as a corresponding magnetic field would be $\textbf B = (0,0,xy/c)$. This example may not be particularly significant physically, but it shows that it is not possible to obtain the conditions you're stating purely from the equations. To obtain what you're stating you need more information, mostly in the form of boundary conditions, which are often the most physically relevant input.
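For readers who want to verify this counterexample, here is a small SymPy sketch (an editorial addition, not part of the original answer) that checks all four equations in the Gaussian units used above, with $\rho = 3/(4\pi)$ constant and $\mathbf J = 0$:

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c')
E = sp.Matrix([x*t, -y*t + 3*y, 0])
B = sp.Matrix([0, 0, x*y/c])

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

print(div(E))                                # 3, i.e. 4*pi*rho with rho constant
print(div(B))                                # 0
print(sp.simplify(curl(E) + B.diff(t) / c))  # zero vector: Faraday's law holds
print(sp.simplify(curl(B) - E.diff(t) / c))  # zero vector: Ampere's law with J = 0
```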
John Donne
Further to the existing answers: yes, the solution isn't unique. In fact, even if we strengthen the assumptions to $\rho=0$ so that there are no charges (as occurs in a vacuum), you can still get an electromagnetic wave. In other words, light can travel through a vacuum!
If this feature of differential equations seems odd, it's no odder than the fact that, when there are no forces at all on a body (i.e. $\ddot{x}=0$), it can still be moving. Mathematically, the reason is the same. For the OP's problem we can show that $$c^{-2}\partial_t^2 \mathbf{E}-\nabla^2\mathbf{E}=-4\pi\boldsymbol{\nabla}\rho,\,c^{-2}\partial_t^2 \mathbf{B}-\nabla^2\mathbf{B}=0.$$(If you want to try proving these, use $\boldsymbol{\nabla}\times\boldsymbol{\nabla}\times\mathbf{X}=\boldsymbol{\nabla}(\boldsymbol{\nabla}\cdot\mathbf{X})-\nabla^2\mathbf{X}$.) We can always add to the solution something proportional to the plane wave $\exp i(\mathbf{k}\cdot\mathbf{x}-kct)$ for a constant wavevector $\mathbf{k}$, so you can't rule them out.
When we go back to the original first-order equations, we find such plane-wave solutions can be added but not arbitrarily. For example, if I add $\mathbf{E}_0,\,\mathbf{B}_0$ times the plane wave, we require $\mathbf{E}_0\cdot\mathbf{k}=\mathbf{B}_0\cdot\mathbf{k}=0$.
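To make the transversality constraint concrete, the following SymPy sketch (again an editorial addition, not from the original answer) computes $\nabla\cdot\mathbf E$ for a general plane wave and shows that Gauss's law with $\rho = 0$ holds exactly when $\mathbf{E}_0\cdot\mathbf{k}=0$:

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
c = sp.symbols('c', positive=True)
kx, ky, kz, Ex, Ey, Ez = sp.symbols('k_x k_y k_z E_x E_y E_z')

kvec = sp.Matrix([kx, ky, kz])
E0 = sp.Matrix([Ex, Ey, Ez])
knorm = sp.sqrt(kvec.dot(kvec))
phase = sp.cos(kvec.dot(sp.Matrix([x, y, z])) - knorm * c * t)

E = E0 * phase
divE = sum(sp.diff(E[i], v) for i, v in enumerate((x, y, z)))
# Simplifies to -(E0 . k) * sin(k.x - k c t), which vanishes identically
# precisely when E0 is perpendicular to k.
print(sp.simplify(divE))
```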
J.G.
A short, but I think important, comment to add to toBuzz's Answer and J. G.'s answer, which show that the no magnetic field solution is sound, but not unique, and in general one must consider boundary conditions.
A key word and concept used here to fix solutions that any full answer to the present and similar questions should, IMO, talk about is the Sommerfeld Radiation Condition. The deployment of this condition does make the zero magnetic field solution to the OP's problem unique. In particular, it rules out infinite plane waves that can be added to the solution as in J. G.s answer.
Not only does the Sommerfeld condition remove ambiguity and is a widely assumed "boundary condition at infinity" for electromagnetic problems, it (or something like it) is also needed to deploy Helmholtz's Theorem, which is essentially about the Hodge decomposition of a one form when we rule out the harmonic component.
Selene Routley
Battiston, F. et al. E. MEYER, M. GUGGISBERG, CH. LOPPACHER. Impact of Electron and Scanning Probe Microscopy on Materials Research 339 (1999).
Gimzewski, J. K., Brewer, R. J., Veprek, S. & Stuessi, H. The effect of a hydrogen plasma on the hydriding of titanium: kinetics and equilibrium concentration. (Submitted).
Brewer, R. J., Gimzewski, J. K., Veprek, S. & Stuessi, H. Effect of surface contamination and pretreatment on the hydrogen diffusion into and out of titanium under plasma conditions. Journal of Nuclear Materials 103, 465–469 (1981).
Berndt, R., Gimzewski, J. K. & Johansson, P. Electromagnetic interactions of metallic objects in nanometer proximity. Physical review letters 71, 3493 (1993).
Joachim, C. & Gimzewski, J. K. An electromechanical amplifier using a single molecule. Chemical Physics Letters 265, 353–357 (1997).
Gimzewski, J. K., Schlittler, R. R. & Welland, M. E. Electromechanical transducer. (2000).
Gimzewski, J. K., Schlittler, R. R. & Welland, M. E. Electromechanical transducer. (1998).
Aitken, E. J. et al. Electron spectroscopic investigations of the influence of initial-and final-state effects on electronegativity. Journal of the American Chemical Society 102, 4873–4879 (1980).
Lang, H. P. et al. Micro Total Analysis Systems' 9 57–60 (Springer Netherlands, 1998).
Fornaro, P. et al. AN ELECTRONIC NOSE BASED ON A MICROMECHANICAL CANTILEVER ARRAY. Micro Total Analysis Systems' 98: Proceedings of the Utas' 98 Workshop, Held in Banff, Canada, 13-16 October 1998 57 (1998).
Joachim, C., Gimzewski, J. K., Schlittler, R. R. & Chavy, C. Electronic transparence of a single C 60 molecule. Physical review letters 74, 2102 (1995).
Joachim, C., Gimzewski, J. K. & Aviram, A. Electronics using hybrid-molecular and mono-molecular devices. Nature 408, 541–548 (2000).
Martin-Olmos, C., Stieg, A. Z. & Gimzewski, J. K. Electrostatic force microscopy as a broadly applicable method for characterizing pyroelectric materials. Nanotechnology 23, 235701 (2012).
Himpsel, F. J., Jung, T. A., Schlittler, R. R. & Gimzewski, J. K. Element-Specific Contrast in STM via Resonant Tunneling. APS March Meeting Abstracts 1, 1908 (1996).
Stieg, A. Z. et al. Emergent Criticality in Complex Turing B-Type Atomic Switch Networks. Advanced Materials 24, 286–293 (2012).
Ascott, R. Engineering nature: art & consciousness in the post-biological era. (Intellect Ltd, 2006).
Berndt, R., Gimzewski, J. K. & Schlittler, R. R. Enhanced photon emission from the STM: a general property of metal surfaces. Ultramicroscopy 42, 355–359 (1992).
Gimzewski, J. K., Sass, J. K., Schlitter, R. R. & Schott, J. Enhanced photon emission in scanning tunnelling microscopy. EPL (Europhysics Letters) 8, 435 (1989).
David, T., Gimzewski, J. K., Purdie, D., Reihl, B. & Schlittler, R. R. Epitaxial growth of C 60 on Ag (110) studied by scanning tunneling microscopy and tunneling spectroscopy. Physical Review B 50, 5810 (1994).
Gimzewski, J. K., Jung, T. A. & Schlittler, R. R. Epitaxially layered structure. (1999).
, et al. Erratum: A femtojoule calorimeter using micromechanical sensors [Rev. Sci. Instrum. 65, 3793 (1994)]. Review of Scientific Instruments 66, 3083–3083 (1995).
Han, T. H. & Liao, J. C. Erythrocyte nitric oxide transport reduced by a submembrane cytoskeletal barrier. Biochimica et Biophysica Acta (BBA)-General Subjects 1723, 135–142 (2005).
Cross, S. E. et al. Evaluation of bacteria-induced enamel demineralization using optical profilometry. dental materials 25, 1517–1526 (2009).
Fabian, D. J., Gimzewski, J. K., Barrie, A. & Dev, B. Excitation of Fe 1s core-level photoelectrons with synchrotron radiation. Journal of Physics F: Metal Physics 7, L345 (1977).
Dürig, U., Gimzewski, J. K. & Pohl, D. W. Experimental observation of forces acting during scanning tunneling microscopy. Physical review letters 57, 2403 (1986).
Bomben, K. D., Bahl, M. K., Gimzewski, J. K., Chambers, S. A. & Thomas, T. D. Extended-x-ray-absorption fine-structure amplitude attenuation in Br 2: Relationship to satellites in the x-ray photoelectron spectrum. Physical Review A 20, 2405 (1979).
Bomben, K. D., Gimzewski, J. K. & Thomas, T. D. Extra-atomic relaxation in HCl, ClF, and Cl2 from x-ray photoelectron spectroscopy. The Journal of Chemical Physics 78, 5437–5442 (1983).
, et al. A femtojoule calorimeter using micromechanical sensors. Review of Scientific Instruments 65, 3793–3798 (1994).
Reihl, B. & Gimzewski, J. K. Field emission scanning Auger microscope (FESAM). Surface Science 189, 36–43 (1987).
Bednorz, J. G., Gimzewski, J. K. & Reihl, B. Field-emission scanning auger electron microscope. (1987).
Coombs, J. H. & Gimzewski, J. K. Fine structure in field emission resonances at surfaces. Journal of Microscopy 152, 841–851 (1988).
Stieg, A. Z., Rasool, H. I. & Gimzewski, J. K. A flexible, highly stable electrochemical scanning probe microscope for nanoscale studies at the solid-liquid interface. Review of Scientific Instruments 79, 103701 (2008).
Zhang, W. et al. Folding of a donor–acceptor polyrotaxane by using noncovalent bonding interactions. Proceedings of the National Academy of Sciences 105, 6514–6519 (2008).
Steiner, W. et al. The following patents were recently issued by the countries in which the inventions were made. For US patents, titles and names supplied to us by the US Patent Office are reproduced exactly as they appear on the original published patent. (Submitted).
Dürig, U., Gimzewski, J. K., Pohl, D. W. & Schlittler, R. Force Sensing in Scanning Tunneling Microscopy. IBM, Rüschlikon 1 (1986).
Loppacher, C. et al. Forces with submolecular resolution between the probing tip and Cu-TBPP molecules on Cu (100) observed with a combined AFM/STM. Applied Physics A 72, S105–S108 (2001).
R Wali, P. et al. Fourier transform mechanical spectroscopy of micro-fabricated electromechanical resonators: A novel, information-rich pulse method for sensor applications. Sensors and Actuators B: Chemical 147, 508–516 (2010).
Zhu, L. et al. Functional characterization of cell-wall-associated protein WapA in Streptococcus mutans. Microbiology 152, 2395–2404 (2006).
Stoll, E. P. & Gimzewski, J. K. Fundamental and practical aspects of differential scanning tunneling microscopy. Journal of Vacuum Science & Technology B 9, 643–647 (1991).
Tang, H., Cuberes, M. T., Joachim, C. & Gimzewski, J. K. Fundamental considerations in the manipulation of a single C60 molecule on a surface with an STM. Surface Science 386, 115–123 (1997).
Martin-Olmos, C., Rasool, H. Imad, Weiller, B. H. & Gimzewski, J. K. Graphene MEMS: AFM probe performance improvement. ACS nano 7, 4164–4170 (2013).
Cross, S. E., Jin, Y. - S., Lu, Q. - Y., Rao, J. & Gimzewski, J. K. Green tea extract selectively targets nanomechanics of live metastatic cancer cells. Nanotechnology 22, 215101 (2011).
Putterman, S. E. T. H., Gimzewski, J. K. & Naranjo, B. B. High energy crystal generators and their applications. (2010).
Reed, J. et al. High throughput cell nanomechanics with mechanical imaging interferometry. Nanotechnology 19, 235101 (2008).
Yamashita, K., Gimzewski, J. K. & Veprek, S. Hydrogen trapping in zirconium under plasma conditions. Journal of Nuclear Materials 128, 705–707 (1984).
Reed, J. et al. Identifying individual DNA species in a complex mixture by precisely measuring the spacing between nicking restriction enzymes with atomic force microscope. Journal of The Royal Society Interface 9, 2341–2350 (2012).
Why is the domain of $x^2$ the set of all real numbers?
I try to understand, why domain of $x^2$ is the set of all real numbers.
My doubts: The square root function is not defined for negative numbers. The reason for that (if I am not wrong) is that a function is supposed to have only one input leading to each output. Therefore 9 cannot be square-rooted to both -3 and 3, only to the positive number (in this case 3). Is the situation with $x^2$ not the same? Should its domain not be limited to non-negative numbers? Otherwise, we have an ambiguity of 3 and -3 both leading to 9...
functions polynomials square-numbers
trthhrtz
$\begingroup$ What is the definition of the domain of a function? Once you have that, the answer is immediate. $\endgroup$ – quasi Sep 30 '17 at 17:10
$\begingroup$ the domain of $x^2$ is the set of all real numbers, this is right $\endgroup$ – Dr. Sonnhard Graubner Sep 30 '17 at 17:11
$\begingroup$ The set of all possible inputs. Is there any number $x$ that you can't plug in to get a result? The domain is not concerned with the nature of the results. As long as there is a result, the input is legal. $\endgroup$ – quasi Sep 30 '17 at 17:11
$\begingroup$ Show me a real number that you can't square. $\endgroup$ – quasi Sep 30 '17 at 17:18
$\begingroup$ Yes, a function needs one output for every legal input. What you said: "for each output, only one input" is not the correct concept of a function. In other words, using arrows, for each legal input, an arrow has to go from that input to some output. It can't split and go to two different outputs. However, two or more inputs can share the same output. As an extreme example, consider a constant function, say $f(x) = 4$. Then every input goes to the output $4$. $\endgroup$ – quasi Sep 30 '17 at 17:25
You're right about one thing: the square root function is the inverse function to the squaring function, after the squaring function is restricted to a domain on which it's one-to-one. That domain is nonnegative numbers.
However, I think you're overthinking this problem. When a problem asks for the the domain of a function defined by an algebraic expression, the task is to calculate the entire subset of real numbers which can be substituted into the expression. For instance, if the expression is $\frac{1}{1-x^2}$, you're supposed to notice that the denominator cannot be zero for this to make sense. So the domain must carve out any numbers which do that, namely $\pm 1$. Therefore the domain is $\mathbb{R}\setminus\{-1,1\}$.
But the expression you're given is just $x^2$. This is defined for all real numbers. So the domain is $\mathbb{R}$.
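As an aside (an editorial addition, not part of the answer), a computer algebra system can report this "natural domain" directly; the sketch below uses SymPy's continuous_domain helper, assuming it is available in the installed version:

```python
import sympy as sp
from sympy.calculus.util import continuous_domain

x = sp.Symbol('x', real=True)
print(continuous_domain(x**2, x, sp.S.Reals))          # all real numbers
print(continuous_domain(1/(1 - x**2), x, sp.S.Reals))  # reals with -1 and 1 removed
print(continuous_domain(sp.sqrt(x), x, sp.S.Reals))    # [0, oo)
```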
Matthew Leingang
The domain is $\Bbb R$, the set of all real numbers. The two inputs $\pm3$ both have output $9$, yes, but this doesn't stop either from being an input: $x^2$ is still a function.
A function $f$ on a set $X$ is a subset of the cartesian product of $X$ with itself such that if $(x,y) \in f$ and $(x,z) \in f$ then $y=z$. The domain of $f$ is the set $d$ of left coordinates of ordered pairs of $f$. In short, the domain is something you can have the distinct privilege of specifying.
I.e. if you have a function $f$ and a domain $d$ with more than one element (for example, $10$ real numbers), then you can create a new function $g$ by forming a new domain $d'$ that removes one element from $d$ and setting $g$ equal to the restriction of $f$ to $d'$.
In short, one domain for the function $f(x)=x^{2}$ is $\mathbb{R}$ because for each real number $x$ in the domain you get a unique real number, (namely $x^2$.)
You might opine that this is a "natural" domain for the function. But then again, we can specify the same equation defining the function $f$ and just restrict our domain set $d$ to be a new domain, $d = \{274848638463926284\} $ instead of all of $\mathbb{R}$ and the result is still a domain for $f(x)=x^{2}$. Although, the graph of this function is a single point in the plane-- which is rather boring compared to a parabola.
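A toy illustration of this set-of-ordered-pairs view (an editorial sketch, not part of the answer): a finite function can be stored literally as pairs, and restricting the domain just drops pairs.

```python
# f(x) = x**2 stored literally as a set of (input, output) pairs on a chosen domain
d = {-3, -1, 0, 2, 274848638463926284}
f = {(x, x**2) for x in d}

# Restrict to a smaller domain d' by dropping one element; g is the restriction of f
d_prime = d - {-3}
g = {(x, y) for (x, y) in f if x in d_prime}

print(sorted(x for x, _ in f))  # the domain: the set of left coordinates of f
print(sorted(x for x, _ in g))
```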
Rustyn
The key idea is to distinguish these two things:
the image of the function, i.e. the "output" of the function, and here you're right: it's not $\mathbb{R}$, but $\mathbb{R}^+$
the domain of definition, i.e. the set of all authorized numbers as "input" of the function: here all real numbers can be squared. Can $2.317$ be squared? Yes. Can $-3$ be squared? Yes. Indeed: $(-3)^2 = -3 \times (-3) = 9$. Thus the domain of definition of the function $x \mapsto x^2$ is $\mathbb{R}$.
Basj
Representation Theory of Cohen-Macaulay Modules
http://www.math.snu.ac.kr/board/index.php?mid=seminars&document_srl=978054
노경민
We introduce a representation-theoretic study of Cohen-Macaulay modules over degenerate cusp surface singularities, which was developed by Burban and Drozd. We then give its geometric interpretation in terms of the Fukaya category of Riemann surfaces using homological mirror symmetry, based on joint work with Cho, Jeong and Kim.
(Lunch: 12:00 - 14:00, Break: 15:30 - 15:45)
Nov 01, 2022 16:40-17:10 Surgery techniques in 4-manifolds 최학호 129-101
Nov 01, 2022 15:00-16:00 Fredholm Property of the Linearized Boltzmann Operator for a Mixture of Polyatomic Gases Marwa Shahine 27-325
Nov 01, 2022 16:00-18:00 A stationary set method for estimating oscillators integrals 오세욱 27-116
Nov 01, 2022 16:00-17:00 Approaches to the alleviation of the burden of learning for Weakly Supervised Object Localization 구본경 129-301
Oct 26, 2022 16:00-18:00 Free Probability, Regularity, and Free Stein Dimension Ian Charlesworth 129-406
Oct 25, 2022 10:00-11:00 Rational liftings and gRSK Travis Scrimshaw 선택
Oct 25, 2022 11:00-12:00 Crystal Invariant Theory Travis Scrimshaw 선택
Oct 25, 2022 15:00-18:00 Ergodic theory of complex continued fraction maps Hitoshi Nakada 129-406
Oct 21, 2022 14:00-15:00 Regularity properties of Brjuno functions associated with classical continued fractions 이슬비 선택
Oct 19, 2022 15:30-16:20 On the development of ODEs in Japan 신정선 27-325
Oct 18, 2022 16:00-16:30 Genera of manifolds 김승원 129-101
Oct 18, 2022 16:40-17:10 Counting surfaces on Calabi-Yau 4-folds 박현준 129-101
Oct 18, 2022 10:00-12:00 Remarks on the long-time dynamics of 2D Euler Theodore Drivas 선택
Oct 17, 2022 13:00-14:30 The rank of new regular quadratic forms 김민규 27-325
Oct 12, 2022 11:00-12:00 Derivation of the Vlasov equation from quantum many-body Fermionic systems with singular interaction Jacky Chong 선택
Oct 12, 2022 16:00-18:00 Haagerup inequalities on non-Kac free orthogonal quantum groups 윤상균 129-406
Oct 11, 2022 17:00-18:00 On limits of sequences of operators 유재현 27-116
Oct 05, 2022 16:00-18:00 A universal framework for entanglement detection of invariant quantum states 박상준 선택
Oct 04, 2022 16:00-17:00 On an extreme value law for the unipotent flow on $\mathrm{SL}_2(\mathbb{R})/\mathrm{SL}_2(\mathbb{Z})$. Keivan Mallahi Karai 선택
Oct 04, 2022 16:00-18:00 On the maximal Bochner-Riesz conjecture for p>2 유재현 27-116
Count Arrays
Problem code: COUNTIT
All submissions for this problem are available.
### Read problem statements in [Hindi](http://www.codechef.com/download/translated/JUNE19/hindi/COUNTIT.pdf), [Bengali](http://www.codechef.com/download/translated/JUNE19/bengali/COUNTIT.pdf), [Mandarin Chinese](http://www.codechef.com/download/translated/JUNE19/mandarin/COUNTIT.pdf), [Russian](http://www.codechef.com/download/translated/JUNE19/russian/COUNTIT.pdf), and [Vietnamese](http://www.codechef.com/download/translated/JUNE19/vietnamese/COUNTIT.pdf) as well.

Consider all matrices with $N$ rows (numbered $1$ through $N$) and $M$ columns (numbered $1$ through $M$) containing only integers between $0$ and $K-1$ (inclusive). For each such matrix $A$, let's form a sequence $L_1, L_2, \ldots, L_{N+M}$:
- For each $i$ ($1 \le i \le N$), $L_i$ is the maximum of all elements in the $i$-th row of $A$.
- For each $i$ ($1 \le i \le M$), $L_{N+i}$ is the maximum of all elements in the $i$-th column of $A$.

Find the number of different sequences formed this way. Since this number may be very large, compute it modulo $10^9 + 7$.

### Input
- The first line of the input contains a single integer $T$ denoting the number of test cases. The description of $T$ test cases follows.
- The first and only line of each test case contains three space-separated integers $N$, $M$ and $K$.

### Output
For each test case, print a single line containing one integer — the number of different sequences, modulo $10^9 + 7$.

### Constraints
- $1 \le T \le 1,000$
- $1 \le N \le 10^5$
- $1 \le M \le 10^5$
- $1 \le K \le 10^9$
- the sum of $N$ over all test cases does not exceed $10^5$
- the sum of $M$ over all test cases does not exceed $10^5$

### Subtasks
**Subtask #1 (10 points):**
- $1 \le K \le 50$
- the sum of $N$ over all test cases does not exceed $50$
- the sum of $M$ over all test cases does not exceed $50$

**Subtask #2 (20 points):**
- $1 \le K \le 10^5$
- the sum of $N$ over all test cases does not exceed $10^5$
- the sum of $M$ over all test cases does not exceed $10^5$

**Subtask #3 (20 points):**
- $1 \le K \le 10^9$
- the sum of $N$ over all test cases does not exceed $200$
- the sum of $M$ over all test cases does not exceed $200$

**Subtask #4 (20 points):**
- $K$ is the same for all test cases
- $1 \le K \le 10^9$
- the sum of $N$ over all test cases does not exceed $10^5$
- the sum of $M$ over all test cases does not exceed $10^5$

**Subtask #5 (30 points):**
- $1 \le K \le 10^9$
- the sum of $N$ over all test cases does not exceed $10^5$
- the sum of $M$ over all test cases does not exceed $10^5$

### Example Input
```
3
2 2 2
2 3 2
41 42 2
```

### Example Output
```
10
22
903408624
```

### Explanation
**Example case 1:** There are $16$ possible matrices, listed below along with the sequences they generate. There are $10$ different sequences among them: $(0, 0, 0, 0)$, $(0, 1, 0, 1)$, $(0, 1, 1, 0)$, $(1, 0, 0, 1)$, $(1, 0, 1, 0)$, $(1, 1, 1, 0)$, $(1, 0, 1, 1)$, $(0, 1, 1, 1)$, $(1, 1, 0, 1)$ and $(1, 1, 1, 1)$.
```
[0, 0] [0, 0] = (0, 0, 0, 0)
[0, 0] [0, 1] = (0, 1, 0, 1)
[0, 0] [1, 0] = (0, 1, 1, 0)
[0, 1] [0, 0] = (1, 0, 0, 1)
[1, 0] [0, 0] = (1, 0, 1, 0)
[1, 0] [1, 0] = (1, 1, 1, 0)
[1, 1] [0, 0] = (1, 0, 1, 1)
[0, 0] [1, 1] = (0, 1, 1, 1)
[0, 1] [0, 1] = (1, 1, 0, 1)
[1, 0] [0, 1] = (1, 1, 1, 1)
[0, 1] [1, 0] = (1, 1, 1, 1)
[1, 1] [1, 0] = (1, 1, 1, 1)
[1, 1] [0, 1] = (1, 1, 1, 1)
[0, 1] [1, 1] = (1, 1, 1, 1)
[1, 0] [1, 1] = (1, 1, 1, 1)
[1, 1] [1, 1] = (1, 1, 1, 1)
```
**Example case 2:** There are $22$ different sequences. One of them is $(1, 1, 0, 1, 1)$, generated e.g. by the matrix
```
[0, 1, 0]
[0, 0, 1]
```
**Example case 3:** Don't forget about modulo!
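The intended counting approach is covered in the linked editorial. Purely as a sanity check (not a solution that meets the constraints), the brute-force sketch below enumerates all $K^{NM}$ matrices and reproduces the first two example outputs.

```python
from itertools import product

def count_sequences_bruteforce(n, m, k):
    """Enumerate every n x m matrix over {0, ..., k-1} and count the distinct
    sequences of row maxima followed by column maxima."""
    seqs = set()
    for cells in product(range(k), repeat=n * m):
        rows = [cells[i * m:(i + 1) * m] for i in range(n)]
        row_max = tuple(max(r) for r in rows)
        col_max = tuple(max(r[j] for r in rows) for j in range(m))
        seqs.add(row_max + col_max)
    return len(seqs)

print(count_sequences_bruteforce(2, 2, 2))  # 10, matching example case 1
print(count_sequences_bruteforce(2, 3, 2))  # 22, matching example case 2
```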
Author: 5★filyan
Editorial https://discuss.codechef.com/problems/COUNTIT
Tags filyan, june19, junechallenge
Date Added: 24-04-2019
Time Limit: 5 sec
Source Limit: 50000 Bytes
Languages: C, CPP14, JAVA, PYTH, PYTH 3.6, PYPY, CS2, PAS fpc, PAS gpc, RUBY, PHP, GO, NODEJS, HASK, rust, SCALA, swift, D, PERL, FORT, WSPC, ADA, CAML, ICK, BF, ASM, CLPS, PRLG, ICON, SCM qobi, PIKE, ST, NICE, LUA, BASH, NEM, LISP sbcl, LISP clisp, SCM guile, JS, ERL, TCL, kotlin, PERL6, TEXT, SCM chicken, PYP3, CLOJ, R, COB, FS
Association of air pollution with outpatient visits for respiratory diseases of children in an ex-heavily polluted Northwestern city, China
Yueling Ma1 na1,
Li Yue2 na1,
Jiangtao Liu1,
Xiaotao He1,
Lanyu Li1,
Jingping Niu1 &
Bin Luo ORCID: orcid.org/0000-0001-9324-89421,3,4
A great number of studies have confirmed that children are a particularly vulnerable population to air pollution.
In the present study, 332,337 outpatient visits of 15 hospitals for respiratory diseases among children (0–13 years), as well as the simultaneous meteorological and air pollution data, were obtained from 2014 to 2016 in Lanzhou, China. The generalized additive model was used to examine the effects of air pollutants on children's respiratory outpatient visits, including the stratified analysis of age, gender and season.
We found that PM2.5, NO2 and SO2 were significantly associated with the increased total respiratory outpatient visits. The increments of total respiratory outpatient visits were the highest in lag 05 for NO2 and SO2, a 10 μg/m3 increase in NO2 and SO2 was associated with a 2.50% (95% CI: 1.54, 3.48%) and 3.50% (95% CI: 1.51, 5.53%) increase in total respiratory outpatient visits, respectively. Those associations remained stable in two-pollutant models. Through stratification analysis, all air pollutants other than PM10 were significantly positive associated with the outpatients of bronchitis and upper respiratory tract infection. Besides, both NO2 and SO2 were positively related to the pneumonia outpatient visits. PM2.5 and SO2 were significantly related to the outpatient visits of other respiratory diseases, while only NO2 was positively associated with the asthma outpatients. We found these associations were stronger in girls than in boys, particularly in younger (0–3 years) children. Interestingly, season stratification analysis indicated that these associations were stronger in the cold season than in the transition or the hot season for PM10, PM2.5 and SO2.
Our results indicate that the air pollution exposure may account for the increased risk of outpatient visits for respiratory diseases among children in Lanzhou, particularly for younger children and in the cold season.
Air pollution is one of the greatest environmental risks to public health. The World Health Organization (WHO) report showed that outdoor air pollution was responsible for 4.2 million deaths worldwide in 2016 [1]. A growing body of literature has investigated the association between air pollution and the respiratory tract, which is the main organ system affected by air pollution. For instance, a panel study from Korea suggested that air pollution may cause respiratory symptoms [2]. In addition, a considerable number of papers have focused on the associations between air pollution and respiratory diseases/mortality in Europe [3, 4], the United States [5, 6], and some Asian countries [7, 8]. In Taiwan, two main air pollutants (NO and NO2) were positively associated with respiratory diseases, followed by PM10, PM2.5, O3, CO and SO2 [9]. A study of an urban Chinese population found that each 10 μg/m3 increase in PM2.5 and PM10 concentrations on the current day of exposure was associated with a 0.36 and 0.33% increase in respiratory system disease, respectively [10]. In Hangzhou, outpatient visits of adults with respiratory disease increased by 0.67, 3.50 and 2.10% with each 10 μg/m3 increase in PM2.5, SO2 and NO2, respectively; however, children's outpatient visits increased by 1.47, 5.70 and 4.04%, respectively, indicating that children are more susceptible to air pollutants [11]. Besides, a study in Taiwan showed significant relationships between NO2, PM10 and asthma outpatient visits, especially for children [12]. Therefore, air pollution may affect respiratory outpatient visits.
Children have relatively immature lungs and immune systems, and inhale a larger volume of air per body weight [13], so they are more susceptible to the adverse respiratory effects of air pollution. Exposure to air pollution at an early stage may affect children's normal growth and lung development [14, 15]. The increased prevalence of young children's respiratory diseases was also related to air pollution exposure time and dose in Jinan [16]. In particular, air pollution was positively related to pneumonia among children [17, 18]. Besides, better air quality has been shown to reduce respiratory symptoms among children [19]. However, research comprehensively comparing respiratory health changes in children across different subgroups is still limited, especially in cities that suffer from heavy air pollution.
Air pollution is a global problem. About 91% of the world population was estimated to breathe polluted air that exceeded the WHO air quality guideline levels in 2016 [20]. Lanzhou, an industrial city located in a typical valley basin, is particularly well known as a dry city with scarce rainfall, high evaporation and low wind speeds [21]. Moreover, it is also frequently affected by dust storms due to its location close to the arid and semi-arid region of Northwest China [22]. These factors combine to make Lanzhou one of the most seriously air-polluted cities in China. Although a study with very limited data has reported the effect of PM2.5 on respiratory disease in Lanzhou, it did not focus on children [21]. Children are often divided into the young-child period (0–3 years), preschool period (4–6 years) and school period (7–13 years); these groups display increasing levels of immunity and may show different effects when exposed to air pollution [23]. Therefore, we aim to assess the effects of air pollutants on children's outpatient visits for respiratory diseases across different subgroups, using data from 15 hospitals in Lanzhou, a city in a poor area of China.
Study area and data collection
Being the capital city of Gansu province, Lanzhou is located in the north-west of China with a population of over 3.7 million in 2017 [24]. Lanzhou is one of the most air-polluted cities in China because it is heavily industrialized, has a valley-style terrain, and has a typical semi-arid continental climate with scarce precipitation [21, 25]. Even though the authorities have taken significant measures to improve the air quality in Lanzhou, the air pollutant concentrations (the average annual PM2.5, PM10, SO2 and NO2 concentrations during 2007–2016 in Lanzhou were 61.23 μg/m3, 136.14 μg/m3, 42.93 μg/m3 and 45.37 μg/m3, respectively) [21] exceeded the national level II standards (the average annual standards are 35 μg/m3 for PM2.5, 70 μg/m3 for PM10, 60 μg/m3 for SO2 and 40 μg/m3 for NO2).
The daily number of outpatients for respiratory diseases between 2014 and 2016 were obtained from the 15 hospitals of the four central urban districts of Lanzhou (Chengguan, Qilihe, Xigu and Anning) (Fig. 1), which was confirmed and permitted by the Lanzhou center for disease control and prevention. This study protocol was approved by the ethics committee of Lanzhou University (Project identification code: IRB190612–1). We screened the outpatient visit data using the 10th Revision of the International Classification of Diseases (ICD-10) Code of respiratory diseases (J00-J99). We excluded the patients who were not living in the four central urban districts of Lanzhou and those children aged ≥14 years. Finally, all outpatient data were classified into four specific diseases [pneumonia, J12-J18; asthma, J45-J46; bronchitis and upper respiratory tract infection (J00-J06, J20-J21, J30-J39, J40-J42); and other respiratory diseases (J22, J43-J44, J47, J60-J99)].
Spatial distribution of air quality monitoring stations, studied hospitals, and four central urban districts in Lanzhou, China. Source: The map was created by the authors with ArcGIS 10.2.2 software (ESRI, Redlands, California, USA). ArcGIS is the intellectual property of ESRI and is used under license here
The simultaneous daily meteorological variables and air pollutants data were obtained from open access website of Lanzhou Meteorological administration and Lanzhou air quality monitoring stations (including Institute of Biology, Railway design institute, Hospital of Staff and LanLian Hotel) (Fig. 1), respectively. The air quality monitoring stations were in four central urban districts of Lanzhou. Meteorological variables included daily average temperature and relative humidity, and air pollutants data included particulate matter with aerodynamic diameter ≤ 10 μm (PM10), particulate matter with aerodynamic diameter ≤ 2.5 μm (PM2.5), nitrogen dioxide (NO2) and sulfur dioxide (SO2).
The descriptive analysis was performed for all data. The Quasi-Poisson regression with generalized additive model (GAM) was used to examine the associations between air pollutants (PM10, PM2.5, NO2 and SO2) and the daily children's outpatient visits with respiratory diseases. The Quasi-Poisson distribution was applied to overcome the overdispersion of outpatient visits data. Generalized additive model allows for highly flexible fitting as the outcome is supposed to be dependent on a sum of the smoothed and linear functions of the predictor variables [26]. Based on the previous studies, the penalized smoothing spline function was used to adjust for long-term time trends, day-of-week, holiday and meteorological factors [27, 28]. The basis GAM equation is:
$$ \log E(Y_t) = \alpha + \beta X_t + s(\mathrm{Time}, k = df + 1) + s(\mathrm{Temperature}_l, k = df + 1) + s(\mathrm{Humidity}_l, k = df + 1) + \mathrm{DOW} + \mathrm{Holiday} $$
where t is the day of observation; E(Y_t) is the expected number of daily outpatient visits for respiratory diseases on day t; α is the intercept; β is the regression coefficient; X_t is the daily concentration of the air pollutant on day t; and s() denotes a smoother based on the penalized smoothing spline. Temperature and relative humidity were given the same lag structures as the pollutants, and Temperature_l and Humidity_l are the six-day moving averages (lag 05) of temperature and relative humidity, respectively [27, 29]. Based on Akaike's information criterion (AIC), 7 degrees of freedom (df) per year were used for the long-term time trend and 3 df for Temperature_l and Humidity_l. DOW is a categorical variable indicating the day of the week, and Holiday is a binary variable for national holidays in China.
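The model was fitted in R with the mgcv package (noted below). Purely as an illustrative sketch of the same structure, and not the authors' code, a quasi-Poisson regression with spline terms can be written in Python with statsmodels and patsy, with unpenalized B-spline bases standing in for mgcv's penalized smoothers. All data and column names here are synthetic stand-ins.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per day over three years (column names assumed).
rng = np.random.default_rng(0)
n = 3 * 365
df = pd.DataFrame({
    "visits": rng.poisson(300, n),            # daily respiratory outpatient count
    "pollutant": rng.gamma(4.0, 13.0, n),     # e.g. daily NO2 in ug/m3
    "time_index": np.arange(n),               # calendar time, for the trend spline
    "temp_lag05": rng.normal(13.0, 10.0, n),  # 6-day moving average of temperature
    "rh_lag05": rng.uniform(20.0, 80.0, n),   # 6-day moving average of humidity
    "dow": np.arange(n) % 7,                  # day of week
    "holiday": rng.integers(0, 2, n),         # national holiday indicator
})

formula = (
    "visits ~ pollutant"
    " + bs(time_index, df=21)"   # about 7 df per year for the long-term trend
    " + bs(temp_lag05, df=3)"
    " + bs(rh_lag05, df=3)"
    " + C(dow) + holiday"
)
# scale="X2" rescales standard errors by the Pearson dispersion (quasi-Poisson).
fit = smf.glm(formula, data=df, family=sm.families.Poisson()).fit(scale="X2")
print(fit.params["pollutant"], fit.bse["pollutant"])
```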
After constructing the basic model, single-pollutant models were used to examine the lagged effects, i.e., single-day lags (from lag 0 to lag 5) and multiple-day average lags (from lag 01 to lag 05). A spline function of the GAM was applied to plot the exposure-response curves between air pollution and outpatient visits for respiratory diseases. Moreover, two-pollutant models were fitted to evaluate the robustness of our results after adjusting for the other pollutants. In the stratified analysis, outpatients were classified by sex (boys and girls), age (0–3 years, 4–6 years and 7–13 years) and season [cold season (November to March), hot season (June to August) and transition season (April, May, September and October)] [23, 30]. According to the AIC and previous studies [23, 31], the df of time was 3, 2 and 3 per year for the cold, hot and transition season, respectively. We also conducted a sensitivity analysis by changing the df from 5 to 9 per year for calendar time and from 3 to 8 for temperature and relative humidity.
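For concreteness, the single-day and moving-average lag series described above can be constructed from a daily pollutant series as in the following sketch (an assumed illustration, not the authors' code; column names are hypothetical).

```python
import pandas as pd

# Hypothetical daily pollutant series; real data would span the full study period.
daily = pd.DataFrame(
    {"pm25": [60.0, 55.0, 70.0, 65.0, 50.0, 58.0, 62.0]},
    index=pd.date_range("2014-01-01", periods=7),
)

# Single-day lags: lag 0 (current day) ... lag 5
for k in range(6):
    daily[f"pm25_lag{k}"] = daily["pm25"].shift(k)

# Multiple-day averages: lag 01 ... lag 05 = mean of the current day and previous k days
for k in range(1, 6):
    daily[f"pm25_lag0{k}"] = daily["pm25"].rolling(window=k + 1).mean()

print(daily.round(1))
```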
All the statistical analyses were two-sided, and at a 5% level of significance. All analyses were conducted using R software (version 3.5.2) with the GAM fitted by the "mgcv" package (version 1.8–26). The effect estimates were denoted as the percentage changes and their 95% confidence intervals (CIs) in daily children's outpatient visits for respiratory diseases associated with per 10 μg/m3 increase in air pollutant concentrations. The ArcGIS 10.2.2 software (ESRI, Redlands, California, USA) and GraphPad Prism 7.00 software were used to plot the Figures.
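The percentage changes reported in the Results correspond to the standard conversion of a log-link coefficient into a percentage change per 10 μg/m3 with a Wald confidence interval; a minimal helper is sketched below (an illustrative formulation, not code from the paper).

```python
import math

def percent_change_per_10(beta, se, z=1.96):
    """Convert a log-link coefficient (per 1 ug/m3) into the percentage change
    in visits per 10 ug/m3 increase, with a Wald 95% confidence interval."""
    point = (math.exp(10 * beta) - 1) * 100
    low = (math.exp(10 * (beta - z * se)) - 1) * 100
    high = (math.exp(10 * (beta + z * se)) - 1) * 100
    return point, low, high

# Illustrative inputs only; they roughly reproduce the NO2 lag 05 estimate
# of 2.50% (95% CI: 1.54, 3.48%) reported below.
print(percent_change_per_10(0.00247, 0.00048))
```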
Descriptive of air pollutants, meteorological variables and respiratory diseases outpatient data
There were 332,337 respiratory diseases outpatient visits for children during January 1st, 2014 through December 31st, 2016 in 15 major hospitals of Lanzhou. The mean concentrations of PM2.5, PM10, SO2 and NO2 were 54.52 μg/m3, 123.35 μg/m3, 22.97 μg/m3 and 51.80 μg/m3 during 2014–2016, respectively. In addition, the median of temperature and relative humidity were 12.9 °C and 50%, respectively (Table 1). On average, there were approximately 303 respiratory diseases outpatient visits per day in our study areas, and the bronchitis and upper respiratory tract infection, boys, children aged 4–6 and 7–13 years, and cold season had higher visits than other groups (Table 2).
Table 1 Descriptive statistics on daily air pollutants and meteorological parameters
Table 2 Descriptive statistics on daily outpatient visits in Lanzhou, China, during 2014–2016
Figure 2 shows that daily air pollutant concentrations were higher in the cold season than in the hot season; for example, the interquartile ranges of PM10, PM2.5, NO2 and SO2 concentrations in the cold season were 70.20 μg/m3, 41.00 μg/m3, 32.20 μg/m3 and 20.00 μg/m3, respectively, while they were 37.10 μg/m3, 15.20 μg/m3, 22.90 μg/m3 and 9.50 μg/m3 in the hot season. Moreover, the seasonal pattern of total respiratory outpatient visits was similar to that of the daily air pollutant concentrations.
Box plots of air pollutants and total outpatients with respiratory diseases in the cold, transition and hot season. Boxes indicate the interquartile range (25th percentile-75th percentile); lines within boxes indicate medians; whiskers below boxes represent minimum values; whiskers and dots above boxes indicate maximum values. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Associations between air pollutants and outpatient visits for respiratory diseases
In Fig. 3, we observed significantly positive associations between respiratory diseases outpatient visits and the concentration of NO2 and SO2. In single-pollutant models, we found PM2.5, NO2 and SO2 were significantly associated with the increased respiratory outpatient visits (Fig. 4). Each 10 μg/m3 increase of PM2.5 was only significantly associated with total respiratory outpatient visits in lag 0, lag 01 and lag 02. The increments of respiratory outpatient visits were the highest in lag 05 for NO2 and SO2. The respiratory outpatient visits in lag 05 increased by 2.50% (95% CI: 1.54, 3.48%) and 3.50% (95% CI: 1.51, 5.53%) with per 10 μg/m3 increase in NO2 and SO2, respectively. In cause-specific analysis, PM2.5 showed significant effects on the increase of respiratory outpatient visits due to bronchitis and upper respiratory tract infection, and other respiratory diseases, but the significant effect of PM10 was not observed in any type of respiratory diseases (Fig. 5). To NO2, the significantly positive associations were attributed to pneumonia, asthma, and bronchitis and upper respiratory tract infection, with the greatest increase [1.73% (95% CI: 0.37, 3.11%) in lag 04, 3.28% (95% CI: 0.71, 5.91%) and 2.60% (95% CI: 1.59, 3.63%) in lag 05] in their outpatient visits, respectively. Moreover, for SO2, we found the significantly positive associations in pneumonia, bronchitis and upper respiratory tract infection, and other respiratory diseases in lag 05.
The exposure-response curves of air pollutants concentrations and total outpatients with respiratory diseases in Lanzhou, China, during 2014–2016. The X-axis is the concurrent day air pollutants concentrations (μg/m3), Y-axis is the predicted log relative risk (RR), is shown by the solid line, and the dotted lines represent the 95% confidence interval (CI). PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of children outpatient visits for total respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of children outpatient visits for cause-specific respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
After sex stratification, we found that the effects of PM10 on respiratory outpatient visits were not statistically significant for either boys or girls (Fig. 6). However, each 10 μg/m3 increase in PM2.5 was significantly associated with respiratory outpatient visits only in lag 0 for boys, but in lag 0, lag 01, lag 02 and lag 03 for girls. Each 10 μg/m3 increment of NO2 and SO2 was positively associated with respiratory outpatient visits for boys, with the greatest increases in lag 05 [2.46% (95% CI: 1.46, 3.46%) and 3.25% (95% CI: 1.20, 5.34%), respectively], and for girls, with the greatest increases in lag 05 [2.58% (95% CI: 1.50, 3.67%) and 3.89% (95% CI: 1.66, 6.16%), respectively]. Across age groups, NO2 and SO2 were positively related to respiratory outpatient visits in all ages, but PM2.5 only in children aged 0–3 and 7–13 years (Fig. 7). The effect of NO2 was the highest among children aged 0–3 years in lag 05 [3.45% (95% CI: 2.37, 4.54%)]. Meanwhile, the maximum increase in respiratory outpatient visits due to a 10 μg/m3 increase of SO2 occurred in lag 05 in children aged 0–3 [4.67% (95% CI: 1.22, 8.24%)]. In addition, the greatest increment of respiratory outpatient visits occurred in lag 05 with a 10 μg/m3 increase of PM10 [0.60% (95% CI: 0.21, 0.99%)], PM2.5 [2.52% (95% CI: 1.45, 3.60%)] and SO2 [7.95% (95% CI: 5.40, 10.55%)] in the cold season, but for NO2 [4.02% (95% CI: 2.08, 5.99%)] in the transition season (Fig. 8). Positive correlations were observed among the air pollutants, including PM2.5 with PM10 (r = 0.73), SO2 (r = 0.60) and NO2 (r = 0.57); PM10 with SO2 (r = 0.33) and NO2 (r = 0.39); and SO2 with NO2 (r = 0.53) (Table 3).
Percentage change (95% confidence interval) of daily children outpatient visits caused by respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by sex for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of daily children outpatient visits caused by respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by age for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Percentage change (95% confidence interval) of daily children outpatient visits caused by respiratory diseases per 10 μg/m3 increase in concentrations of air pollutants stratified by season for different lag days in the single-pollutant models in Lanzhou, China, during 2014–2016. PM2.5, particulate matter with aerodynamic diameter ≤ 2.5 μm; PM10, particulate matter with aerodynamic diameter ≤ 10 μm; NO2, nitrogen dioxide; SO2, sulfur dioxide
Table 3 Pearson correlation analysis of pollutants
After the optimum lag day for each pollutant was determined in the single-pollutant models, two-pollutant models were used to adjust for the other pollutants. Table 4 compares the results of the single-pollutant models with those of the two-pollutant models using exposure in lag 05 after adjusting for other pollutants. After adjusting for PM10 or PM2.5 concentration in the two-pollutant models, the percentage increases in total respiratory disease outpatient visits associated with NO2 and SO2 remained statistically significant, with slight increases. However, after controlling for NO2 and SO2, the percentage changes for PM2.5 and PM10 were not statistically significantly associated with total respiratory disease outpatient visits, similar to the results of the single-pollutant models.
Table 4 Percentage change (95% confidence interval) of total children respiratory outpatients per 10 μg/m3 increase in concentrations of pollutants in the single and two-pollutant models
Lanzhou has a population of over 3.7 million, with children accounting for 14% in 2016 [32]. In this study, we observed 332,337 children's outpatient visits for respiratory diseases within 3 years, suggesting that respiratory disease is a major health problem among children in Lanzhou. Many studies of air pollution and children's respiratory diseases have been conducted in Chinese cities with a moist climate, such as Shenzhen [33] and Hefei [34]. However, research comprehensively comparing the effects of air pollution on respiratory diseases across different groups (gender, age, season and cause-specific diseases) is still limited, especially in cities with an arid climate. Therefore, our results add to the limited scientific evidence that air pollution may also affect the incidence of respiratory diseases among children from different subgroups in an arid-climate city.
The results showed that PM2.5, NO2 and SO2 were significantly associated with increased total respiratory outpatient visits among children. A study in Shanghai during 2013–2015 found that an interquartile range (IQR) increase in PM2.5, SO2 and NO2 was associated with an 8.81, 17.26 and 17.02% increase in daily pediatric respiratory emergency visits in lag 03, respectively [35], which is higher than in our study. A possible explanation is that the air pollution level in Shanghai showed a rising trend during 2013–2015, whereas it has been persistently declining in Lanzhou since 2013 [36]. However, a study conducted in Yichang, China, during 2014–2015 observed that each IQR increase in PM2.5 and NO2 concentrations corresponded to a 1.91 and 1.88% increase in pediatric respiratory outpatient visits on the current day, respectively [37], which was higher than our estimate for PM2.5 but lower for NO2. This is likely because the daily average concentration of PM2.5 in Yichang was higher than in Lanzhou (84.9 μg/m3 vs. 54.52 μg/m3) while that of NO2 was lower (37.4 μg/m3 vs. 51.80 μg/m3) [37]. However, the associations between PM10 and total respiratory outpatient visits were not significant, which is inconsistent with findings from other studies [35, 37]. Shanghai is characterized by a higher degree of urbanization and industrialization than Lanzhou, so its PM10 mainly comes from traffic and industrial pollution sources, similar to that in Yichang [36, 38]. In contrast, PM10 in Lanzhou is mainly contributed by raised dust containing a higher level of crustal elements, which is not as toxic as the PM10 in Shanghai and Yichang [39]. Even so, our results indicate that air pollution is positively related to respiratory diseases among children in Lanzhou.
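Because these models are log-linear in the pollutant concentration, effect estimates reported per IQR increase and per 10 μg/m3 increase can be placed on a common scale before such comparisons: with coefficient \(\beta\), the percent change for an increment \(\Delta\) is \((e^{\beta\Delta}-1)\times 100\%\), so an estimate reported per 10 μg/m3 rescales to any other increment \(\Delta\) as
$$\%\Delta_{\text{per }\Delta}=\Bigl[\bigl(1+\tfrac{\%\Delta_{10}}{100}\bigr)^{\Delta/10}-1\Bigr]\times 100 .$$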
It is well known that air pollutants are risk factors for many respiratory diseases in children. An eight-year time-series study in Hanoi showed that all air pollutants (PM10, PM2.5, NO2 and SO2) were positively associated with pneumonia, bronchitis and asthma hospitalizations among children [18], similar to findings reported in Shijiazhuang [23] and Taiwan [9]. Consistent with these studies, we also found that all air pollutants except PM10 were positively related to outpatient visits for bronchitis and upper respiratory tract infection. Given that bronchitis and upper respiratory tract infection were the major types of respiratory diseases in Lanzhou (87.45% of all respiratory outpatient visits), the effect of air pollution may explain part of this burden. For asthma, gaseous pollutants such as NO2 are well-established major risk factors, which has also been confirmed in a study covering a broad range of exposures and diverse child populations published in the Lancet [40]. A similar result was found in our study under an arid climate. Therefore, although the Lanzhou government has worked actively and received international recognition for reducing air pollution [41], more efforts are needed to reduce air pollution from vehicle exhaust.
In the stratified analysis, the impact of air pollution was more pronounced in girls than in boys, which is consistent with a study of children's respiratory outpatient visits in Taiwan [9]. A review showed that girls have smaller lungs and shorter, wider airways, and exhibit higher forced expiratory flow rates than boys [42]; the airways of girls may therefore be less able to block air pollutants. However, results on sex differences in the health effects of various air pollutants are inconsistent. Similar studies conducted in Beijing on children with asthma [43], in Ningbo on children with respiratory infections [44], in Jinan on outpatient respiratory diseases [45] and in Hanoi on children with lower respiratory infections [18] found no obvious difference between boys and girls. Thus, additional studies are needed to clarify whether there are sex differences in the associations between air pollutants and respiratory diseases among children. Regarding age, we found that younger children (0–3 years) were more vulnerable to air pollution. A study of pneumonia in Ningbo observed stronger associations between air pollutants and children under 5 years [17]. The study in Hanoi also showed positive relationships between airborne particles and daily hospital admissions for respiratory diseases among children aged < 5 years [46]. It is generally recognized that this high vulnerability among younger children can be attributed to their immature lungs, higher breathing rate [47] and predominantly oral breathing [48], which increase their exposure and susceptibility to respiratory infections. These factors, combined with underdeveloped immune function, may together make infants and younger children more susceptible to air pollutants.
In the present study, the descriptive results showed that the concentrations of air pollutants in Lanzhou were higher in the cold season, which is consistent with a study in Shennongjia [49]. Previous studies have suggested that winter is the most polluted season [50]. In Northeastern and Northwestern China, owing to the cold winter climate and regional living habits, air pollution mainly comes from coal burning, motor vehicles and industrial production [51, 52]. Lanzhou is located in Northwestern China in a narrow, long valley basin with low winds and stable stratification, especially temperature inversions, which block air flow and make pollutants difficult to disperse [53]. In addition, coal use in winter further increases the level of air pollution [36]. These factors may make air pollution in Lanzhou most severe in the cold season, which may explain why we found the greatest effects of PM10, PM2.5 and SO2 on children's respiratory outpatient visits in the cold season. The result for NO2 agrees with a similar study in Shijiazhuang [23] but is inconsistent with the study in Yichang [37]. This may also be explained by the different sources of air pollutants among these cities: NO2 was a major air pollutant in Yichang but not in Lanzhou or Shijiazhuang.
Our study has several limitations. First, it covers only 3 years of data because of limited data accessibility and availability, which may not be sufficient to fully evaluate the effects of air pollution on children's respiratory outpatient visits; nevertheless, it provides evidence from a specific topography with an arid climate and a large sample. Second, the air pollution data were collected from only four monitoring stations, and their average values may not fully represent the real air quality across Lanzhou; data from more monitoring stations would be preferable. Third, unknown or unmeasured confounders, such as indoor air pollution and second-hand smoke exposure, may exist and affect the associations. These limitations should be addressed in future studies.
Our results indicate that air pollution exposure may account for an increased risk of outpatient visits for respiratory diseases among children in Lanzhou, particularly for younger children and in the cold season. To our knowledge, this is the first study to investigate the short-term effects of air pollution on children's respiratory morbidity based on a large population in Northwestern China. The estimated percentage changes may help quantify the disease burden attributable to air pollution among children in Lanzhou and strengthen the urgency of controlling air pollution there. Since children are particularly susceptible to air pollution, targeted strategies are needed to address the higher burden of respiratory diseases among children, such as promoting the use of personal protective equipment (e.g., respirators, air purifiers) and avoiding outdoor activities during heavily polluted weather in Lanzhou.
The datasets used and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
PM10: Particulate matter with aerodynamic diameter ≤ 10 μm
PM2.5: Particulate matter with aerodynamic diameter ≤ 2.5 μm
NO2: Nitrogen dioxide
SO2: Sulfur dioxide
GAM: Generalized additive model
RR: Relative risk
WHO. World Health Statistics 2018: Monitoring health for the SDGs, vol. 2020; 2018.
Nakao M, Ishihara Y, Kim C, Hyun I. The impact of air pollution, including Asian sand dust, on respiratory symptoms and health-related quality of life in outpatients with chronic respiratory disease in Korea: a panel study. J Prev Med Public Health. 2018;51(3):130–9.
Wanka ER, Bayerstadler A, Heumann C, Nowak D, Jörres RA, Fischer R. Weather and air pollutants have an impact on patients with respiratory diseases and breathing difficulties in Munich, Germany. Int J Biometeorol. 2014;58(2):249–62.
Slama A, Śliwczyński A, Woźnica J, Zdrolik M, Wiśnicki B, Kubajek J, Turżańska-Wieczorek O, Gozdowski D, Wierzba W, Franek E. Impact of air pollution on hospital admissions with a focus on respiratory diseases: a time-series multi-city analysis. Environ Sci Pollut R. 2019;26(17):16998–7009.
Kim S, Peel JL, Hannigan MP, Dutton SJ, Sheppard L, Clark ML, Vedal S. The temporal lag structure of short-term associations of fine particulate matter chemical constituents and cardiovascular and respiratory hospitalizations. Environ Health Persp. 2012;120(8):1094–9.
Sinclair AH, Edgerton ES, Wyzga R, Tolsma D. A two-time-period comparison of the effects of ambient air pollution on outpatient visits for acute respiratory illnesses. J Air Waste Manag Assoc. 2010;60(2):163–75.
Greenberg N, Carel R, Portnov BA. Air pollution and respiratory morbidity in Israel: a review of accumulated empiric evidence. Isr Med Assoc J. 2015;17(7):445–50.
Vodonos A, Friger M, Katra I, Avnon L, Krasnov H, Koutrakis P, Schwartz J, Lior O, Novack V. The impact of desert dust exposures on hospitalizations due to exacerbation of chronic obstructive pulmonary disease. Air Quality, Atmosphere & Health. 2014;7(4):433–9.
Wang K, Chau T. An association between air pollution and daily outpatient visits for respiratory disease in a heavy industry area. PLoS One. 2013;8(10):e75220.
Wang C, Feng L, Chen K. The impact of ambient particulate matter on hospital outpatient visits for respiratory and circulatory system disease in an urban Chinese population. Sci Total Environ. 2019;666:672–9.
Mo Z, Fu Q, Zhang L, Lyu D, Mao G, Wu L, Xu P, Wang Z, Pan X, Chen Z, et al. Acute effects of air pollution on respiratory disease mortalities and outpatients in southeastern China. Sci Rep-Uk. 2018;8(1):1–9.
Pan H, Chen C, Sun H, Ku M, Liao P, Lu K, Sheu J, Huang J, Pai J, Lue K. Comparison of the effects of air pollution on outpatient and inpatient visits for asthma: a population-based study in Taiwan. PLoS One. 2014;9(5):1–19.
Sunyer J. The neurological effects of air pollution in children. Eur Respir J. 2008;32(3):535–7.
Alderete TL, Habre R, Toledo-Corral CM, Berhane K, Chen Z, Lurmann FW, Weigensberg MJ, Goran MI, Gilliland FD. Longitudinal associations between ambient air pollution with insulin sensitivity, beta-cell function, and adiposity in Los Angeles Latino children. Diabetes. 2017;66(7):1789–96.
Chen C, Chan C, Chen B, Cheng T, Leon GY. Effects of particulate air pollution and ozone on lung function in non-asthmatic children. Environ Res. 2015;137:40–8.
Chen Z, Cui L, Cui X, Li X, Yu K, Yue K, Dai Z, Zhou J, Jia G, Zhang J. The association between high ambient air pollution exposure and respiratory health of young children: a cross sectional study in Jinan, China. Sci Total Environ. 2019;656:740–9.
Li D, Wang J, Zhang Z, Shen P, Zheng P, Jin M, Lu H, Lin H, Chen K. Effects of air pollution on hospital visits for pneumonia in children: a two-year analysis from China. Environ Sci Pollut R. 2018;25(10):10049–57.
Nhung NTT, Schindler C, Dien TM, Probst-Hensch N, Perez L, Künzli N. Acute effects of ambient air pollution on lower respiratory infections in Hanoi children: an eight-year time series study. Environ Int. 2018;110:139–48.
Wise J. Better air quality reduces respiratory symptoms among children in southern California. BMJ. 2016;353:i2083.
WHO. Ambient (outdoor) air quality and health, vol. 2019: https://www.who.int/en/news-room/fact-sheets/detail/ambient-(outdoor)-air-quality-and-health; 2018.
Chai G, He H, Sha Y, Zhai G, Zong S. Effect of PM2.5 on daily outpatient visits for respiratory diseases in Lanzhou, China. Sci Total Environ. 2019;649:1563–72.
Guan Q, Liu Z, Yang L, Luo H, Yang Y, Zhao R, Wang F. Variation in PM2.5 source over megacities on the ancient silk road, northwestern China. J Clean Prod. 2019;208:897–903.
Song J, Lu M, Zheng L, Liu Y, Xu P, Li Y, Xu D, Wu W. Acute effects of ambient air pollution on outpatient children with respiratory diseases in Shijiazhuang, China. Bmc Pulm Med. 2018;18(1):1–10.
National Bureau Of Statistics D. Gansu Development Yearbook for 2018, vol. 2019; 2018.
Zhang Y, Kang S. Characteristics of carbonaceous aerosols analyzed using a multiwavelength thermal/optical carbon analyzer: a case study in Lanzhou City. Science China Earth Sciences. 2019;62(2):389–402.
Dominici F, McDermott A, Zeger SL, Samet JM. On the use of generalized additive models in time-series studies of air pollution and health. Am J Epidemiol. 2002;156(3):193–203.
Zhao Y, Hu J, Tan Z, Liu T, Zeng W, Li X, Huang C, Wang S, Huang Z, Ma W. Ambient carbon monoxide and increased risk of daily hospital outpatient visits for respiratory diseases in Dongguan, China. Sci Total Environ. 2019;668:254–60.
Li Q, Yang Y, Chen R, Kan H, Song W, Tan J, Xu F, Xu J. Ambient air pollution, meteorological factors and outpatient visits for eczema in Shanghai, China: a time-series analysis. Int J Env Res Pub He. 2016;13(11):1–10.
Guo Q, Liang F, Tian L, Schikowski T, Liu W, Pan X. Ambient air pollution and the hospital outpatient visits for eczema and dermatitis in Beijing: a time-stratified case-crossover analysis. Environ Sci. 2019;21(1):163–73.
Liu Z, Jin Y, Jin H. The effects of different space forms in residential areas on outdoor thermal comfort in severe cold regions of China. Int J Env Res Pub He. 2019;16(20):3960.
Duan Y, Liao Y, Li H, Yan S, Zhao Z, Yu S, Fu Y, Wang Z, Yin P, Cheng J, et al. Effect of changes in season and temperature on cardiovascular mortality associated with nitrogen dioxide air pollution in Shenzhen, China. Sci Total Environ. 2019;697:134051.
Lanzhou Municipal Bureau of Statistics. Analysis of population development of Lanzhou since the 13th five-year plan period; 2019.
Xia X, Zhang A, Liang S, Qi Q, Jiang L, Ye Y. The association between air pollution and population health risk for respiratory infection: a case study of Shenzhen, China. Int J Env Res Pub He. 2017;14(9):950.
Li YR, Xiao CC, Li J, Tang J, Geng XY, Cui LJ, Zhai JX. Association between air pollution and upper respiratory tract infection in hospital outpatients aged 0–14 years in Hefei, China: a time series study. Public Health. 2018;156:92–100.
Zhang H, Niu Y, Yao Y, Chen R, Zhou X, Kan H. The impact of ambient air pollution on daily hospital visits for various respiratory diseases and the relevant medical expenditures in Shanghai, China. Int J Env Res Pub He. 2018;15(3):1–10.
Su Y, Sha Y, Zhai G, Zong S, Jia J. Comparison of air pollution in Shanghai and Lanzhou based on wavelet transform. Environ Sci Pollut R. 2019;26(17):16825–34.
Liu Y, Xie S, Yu Q, Huo X, Ming X, Wang J, Zhou Y, Peng Z, Zhang H, Cui X, et al. Short-term effects of ambient air pollution on pediatric outpatient visits for respiratory diseases in Yichang city, China. Environ Pollut. 2017;227:116–24.
Yang Z, Li X, Deng J, Wang H. Stable sulfur isotope ratios and water-soluble inorganic compositions of PM10 in Yichang City, Central China. Environ Sci Pollut R. 2015;22(17):13564–72.
Jiang Y, Shi L, Guang A, Mu Z, Zhan H, Wu Y. Contamination levels and human health risk assessment of toxic heavy metals in street dust in an industrial city in Northwest China. Environ Geochem Hlth. 2018;40(5SI):2007–20.
Guarnieri M, Balmes JR. Outdoor air pollution and asthma. Lancet. 2014;383(9928):1581–92.
Liu J, Ruan Y, Wu Q, Ma Y, He X, Li L, Li S, Niu J, Luo B. Has the mortality risk declined after the improvement of air quality in an ex-heavily polluted Chinese city-Lanzhou? Chemosphere. 2020;242:125196.
Becklake MR, Kauffmann F. Gender differences in airway behaviour over the human life span. Thorax. 1999;54(12):1119–38.
Hua J, Yin Y, Peng L, Du L, Geng F, Zhu L. Acute effects of black carbon and PM2.5 on children asthma admissions: a time-series study in a Chinese city. Sci Total Environ. 2014;481:433–8.
Zheng P, Wang J, Zhang Z, Shen P, Chai P, Li D, Jin M, Tang M, Lu H, Lin H, et al. Air pollution and hospital visits for acute upper and lower respiratory infections among children in Ningbo, China: a time-series analysis. Environ Sci Pollut R. 2017;24(23):18860–9.
Wang S, Li Y, Niu A, Liu Y, Su L, Song W, Liu J, Liu Y, Li H. The impact of outdoor air pollutants on outpatient visits for respiratory diseases during 2012–2016 in Jinan, China. Respir Res. 2018;19(1):1–8.
Luong LMT, Phung D, Sly PD, Morawska L, Thai PK. The association between particulate air pollution and respiratory admissions among young children in Hanoi, Vietnam. Sci Total Environ. 2017;578:249–55.
Sigmund E, De Ste CM, Miklankova L, Fromel K. Physical activity patterns of kindergarten children in comparison to teenagers and young adults. Eur J Public Health. 2007;17(6):646–51.
Esposito S, Tenconi R, Lelii M, Preti V, Nazzari E, Consolo S, Patria MF. Possible molecular mechanisms linking air pollution and asthma in children. Bmc Pulm Med. 2014;14:1–8.
Liu C, Liu Y, Zhou Y, Feng A, Wang C, Shi T. Short-term effect of relatively low level air pollution on outpatient visit in Shennongjia, China. Environ Pollut. 2019;245:419–26.
Chen W, Yan L, Zhao H. Seasonal variations of atmospheric pollution and air quality in Beijing. Atmosphere-Basel. 2015;6(11):1753–70.
Xiao Q, Ma Z, Li S, Liu Y. The impact of winter heating on air pollution in China. PLoS One. 2015;10(1):e117311.
He J, Lu S, Yu Y, Gong S, Zhao S, Zhou C. Numerical simulation study of winter pollutant transport characteristics over Lanzhou City, Northwest China. Atmosphere-Basel. 2018;9(10):1–8.
Chu PC, Chen Y, Lu S, Li Z, Lu Y. Particulate air pollution in Lanzhou China. Environ Int. 2008;34(5):698–713.
This work was supported by the National Natural Science Foundation of China (4187050043), the foundation of the Ministry of Education Key Laboratory of Cell Activities and Stress Adaptations, Lanzhou University, China (lzujbky-2020-sp21), and the Chengguan Science and Technology Planning Project, Lanzhou, China (2017SHFZ0043).
Yueling Ma and Li Yue contributed equally to this work.
Institute of Occupational Health and Environmental Health, School of Public Health, Lanzhou University, Lanzhou, Gansu, 730000, People's Republic of China
Yueling Ma, Jiangtao Liu, Xiaotao He, Lanyu Li, Jingping Niu & Bin Luo
Gansu Provincial Maternity and Child Health Care Hospital, Lanzhou, Gansu, 730000, People's Republic of China
Li Yue
Shanghai Typhoon Institute, China Meteorological Administration, Shanghai, 200030, China
Bin Luo
Shanghai Key Laboratory of Meteorology and Health, Shanghai Meteorological Bureau, Shanghai, 200030, China
BL, JPN and YLM contributed to idea formulation, study design, data preparation, data analysis, reporting results, data interpretation, and writing of the manuscript. LY and JTL contributed to data preparation and data analysis. XTH and LYL contributed to study design and interpretation of the data. All authors have seen and approved the final version.
Correspondence to Bin Luo.
The environmental data were collected from open-access websites, so consent to participate was not applicable. The hospital admission data were obtained from, and their use approved by, the Lanzhou Center for Disease Control and Prevention with official permission. The study protocol, including data use, was approved by the ethics committee of Lanzhou University (project identification code: IRB190612–1).
Ma, Y., Yue, L., Liu, J. et al. Association of air pollution with outpatient visits for respiratory diseases of children in an ex-heavily polluted Northwestern city, China. BMC Public Health 20, 816 (2020). https://doi.org/10.1186/s12889-020-08933-w
Big Ideas Math Answers Grade 5 Chapter 14 Classify Two-Dimensional Shapes
April 7, 2022 by Vinay Pacha
This page covers the various topics included in Big Ideas Math Answers Grade 5 Chapter 14 Classify Two-Dimensional Shapes. Check your math skills by taking the practice sections provided on this page. If you are looking for the Big Ideas Math Book 5th Grade Answer Key Chapter 14 Classify Two-Dimensional Shapes, you can get it here. Begin your practice immediately using the BIM 5th Grade Answer Key Chapter 14 Classify Two-Dimensional Shapes. Freely access all of our math answers to learn them perfectly, and download the Big Ideas Math Answers Grade 5 Chapter 14 Classify Two-Dimensional Shapes PDF for free.
Big Ideas Math Book 5th Grade Chapter 14 Classify Two-Dimensional Shapes Answer Key
We have listed all the topics to help students prepare for the exam. Practice every given problem to become familiar with the kinds of questions that appear on the exam. Every problem has its own answer and explanation, which makes students' preparation easier. Go through the list given below to know the topics covered in Big Ideas Math Answer Key Grade 5 Chapter 14 Classify Two-Dimensional Shapes. The main topics covered in this chapter, Classify Triangles, Classify Quadrilaterals, and Relate Quadrilaterals, are explained here.
Lesson: 1 Classify Triangles
Lesson 14.1 Classify Triangles
Classify Triangles Homework & Practice 14.1
Lesson: 2 Classify Quadrilaterals
Lesson 14.2 Classify Quadrilaterals
Classify Quadrilaterals Homework & Practice 14.2
Lesson: 3 Relate Quadrilaterals
Lesson 14.3 Relate Quadrilaterals
Relate Quadrilaterals Homework & Practice 14.3
Classify Two-Dimensional Shapes
Classify Two-Dimensional Shapes Performance Task 14
Classify Two-Dimensional Shapes Activity
Classify Two-Dimensional Shapes Chapter Practice 14
Classify Two-Dimensional Shapes Cumulative Practice 1-14
Classify Two-Dimensional Shapes Steam Performance Task 1-14
Explore and Grow
Draw and label a triangle for each description. If a triangle cannot be drawn, explain why.
Draw a triangle that meets two of the descriptions above.
Think and Grow: Classify Triangles
Key Idea
Triangles can be classified by their sides.
An equilateral triangle has three sides with the same length.
An isosceles triangle has two sides with the same length.
A scalene triangle has no sides with the same length.
Triangles can be classified by their angles.
An acute triangle has three acute angles.
An obtuse triangle has one obtuse angle.
A right triangle has one right angle.
An equiangular triangle has three angles with the same measure.
Classify the triangle by its angles and its sides.
The triangle has one ___ angle
and ___ sides with the same length.
So, it is a ___ triangle.
The triangle has one right angle
and no sides with the same length.
So, it is a right triangle.
Show and Grow
Classify the triangle by its angles and its sides
Answer: Equilateral triangle.
Explanation: An equilateral triangle has three sides of the same length.
Answer: Isosceles triangle
Explanation: An isosceles triangle has two sides of the same length, and two of its angles have the same measure.
Answer: Scalene triangle.
Explanation: A scalene triangle has no congruent sides (no two sides are the same length).
Apply and Grow: Practice
Answer: Right triangle.
Explanation: A triangle in which one angle is a right angle (90°) is called a right triangle.
Answer: Isosceles triangle.
Answer: Equiangular triangle
Explanation: In an equilateral triangle, all the lengths of the sides are equal. In such a case, each of the interior angles will have a measure of 60 degrees. Since the angles of an equilateral triangle are the same, it is also known as an equiangular triangle. The figure given below illustrates an equilateral triangle.
Explanation: An isosceles triangle has two sides of the same length, and two of its angles have the same measure.
Explanation: A scalene triangle has no congruent sides (no two sides are the same length), and all of its angles are different.
Explanation: A triangle in which one angle is a right angle (90°) is called a right triangle.
Question 10.
A triangular sign has a 40° angle, a 55° angle, and an 85° angle. None of its sides have the same length. Classify the triangle by its angles and its sides.
Explanation: All three angles (40°, 55° and 85°) are acute and all are different, and no sides have the same length, so the triangle is an acute scalene triangle.
YOU BE THE TEACHER
Your friend says the triangle is an acute triangle because it has two acute angles. Is your friend correct? Explain.
Answer: No, your friend is not correct.
Explanation: Having two acute angles is not enough, because every triangle has at least two acute angles; a triangle is acute only when all three of its angles are acute. The triangle shown, which has no sides of the same length, is a scalene triangle.
DIG DEEPER!
Draw one triangle for each category. Which is the appropriate category for an equiangular triangle? Explain your reasoning.
An equiangular triangle has three angles of the same measure, each 60°, so all of its angles are acute. Therefore the appropriate category for an equiangular triangle is the acute triangle category.
Think and Grow: Modeling Real Life
A bridge contains several identical triangles. Classify each triangle by its angles and its sides. What is the length of the bridge?
Each triangle has ___ angles with the same measure and ___ sides with the same length.
So, each triangle is ___ and ___.
The side lengths of 6 identical triangles meet to form the length of the bridge. So, multiply the side length by 6 to find the length of the bridge.
27 × 6 = ___
So, the bridge is ___ long.
Each triangle has 3 angles with the same measure and 3 sides with the same length.
So, each triangle is equiangular and equilateral.
27 × 6 = 162
So, the bridge is 162 ft long.
The window is made using identical triangular panes of glass. Classify each triangle by its angles and its sides. What is the height of the window?
The length of the two sides of the triangle is the same.
18 in + 18 in = 36 inches
Thus the height of the window is 36 inches
You connect four triangular pieces of fabric to make the kite. Classify the triangles by their angles and their sides. Use a ruler and a protractor to verify your answer.
The blue triangle is an isosceles right triangle: it has one right angle and two sides of the same length.
The red triangle is an isosceles right triangle.
The green triangle is an isosceles right triangle.
The yellow triangle is an isosceles right triangle.
Explanation: A scalene triangle has no congruent sides (no two sides are the same length) and all of its angles are different, so it is called a scalene triangle.
Answer: Equiangular triangle.
In an equilateral triangle, all the lengths of the sides are equal. In such a case, each of the interior angles will have a measure of 60 degrees. Since the angles of an equilateral triangle are the same, it is also known as an equiangular triangle. The figure given below illustrates an equilateral triangle.
A triangular race flag has two 65° angles and a 50° angle. Two of its sides have the same length. Classify the triangle by its angles and its sides.
A triangular measuring tool has a 90° angle and no sides of the same length. Classify the triangle by its angles and its sides.
Draw a triangle with vertices A(2, 2), B(2, 6), and C(6, 2) in the coordinate plane. Classify the triangle by its angles and its sides. Explain your reasoning.
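One way to work this out, using the distance formula (a sketch of the reasoning rather than the official answer key):
\(AB = \sqrt{(2-2)^2+(6-2)^2} = 4\), \(AC = \sqrt{(6-2)^2+(2-2)^2} = 4\), and \(BC = \sqrt{(6-2)^2+(2-6)^2} = \sqrt{32} = 4\sqrt{2}\).
Side AB is vertical and side AC is horizontal, so the angle at A is a right angle. Exactly two sides have the same length and one angle measures 90°, so the triangle is a right isosceles triangle.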
Your friend says that both Newton and Descartes are correct. Is your friend correct? Explain.
Answer: Yes
Explanation: An acute triangle is a triangle in which each angle is an acute angle. Any triangle which is not acute is either a right triangle or an obtuse triangle. All acute triangle angles are less than 90 degrees. For example, an equilateral triangle is always acute, since all angles (which are 60) are all less than 90.
The sum of all the angle measures in a triangle is 180°. A triangle has a 34° angle and a 26° angle. Is the triangle acute, right, or obtuse? Explain.
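A short worked check using the angle-sum fact stated in the problem:
The third angle measures \(180° - 34° - 26° = 120°\). Since \(120° > 90°\), the triangle has one obtuse angle, so it is an obtuse triangle.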
Modeling Real Life
A designer creates the logo using identical triangles. Classify each triangle by its angles and its sides. What is the perimeter of the logo?
The window is made using identical triangular panes of glass. Classify each triangle by its angles and its sides. What are the perimeter and the area of the window? Explain your reasoning.
Explanation: A right triangle is a triangle in which one of the angles is 90 degrees. In a right triangle, the side opposite to the right angle (90-degree angle) will be the longest side and is called the hypotenuse. You may come across triangle types with combined names like right isosceles triangle and such, but this only implies that the triangle has two equal sides with one of the interior angles being 90 degrees. The figure given below illustrates a right triangle
Review & Refresh
Answer: \(\frac{1}{4}\) = 0.25
Explanation: 2 divided by 8 is \(\frac{2}{8}\), which simplifies to \(\frac{1}{4}\), so the answer is \(\frac{1}{4}\) or 0.25.
Answer: \(\frac{15}{4}\) = 3.75
Explanation: 15 divided by 4 is \(\frac{15}{4}\), so the answer is \(\frac{15}{4}\) or 3.75.
Answer: \(\frac{15}{12}\) = \(\frac{5}{4}\) = 1.25
Explanation: 15 divided by 12 is \(\frac{15}{12}\), which simplifies to \(\frac{5}{4}\), so the answer is \(\frac{5}{4}\) or 1.25.
Draw and label a quadrilateral for each description. If a quadrilateral cannot be drawn, explain why
Draw a quadrilateral that meets three of the descriptions above.
Think and Grow: Classify Quadrilaterals
Quadrilaterals can be classified by their angles and their sides.
A trapezoid is a quadrilateral that has exactly one pair of parallel sides.
A parallelogram is a quadrilateral that has two pairs of parallel sides. Opposite sides have the same length.
A rectangle is a parallelogram that has four right angles.
A rhombus is a parallelogram that has four sides with the same length.
A square is a parallelogram that has four right angles and four sides with the same length.
Classify the quadrilateral in as many ways as possible.
Answer: Square
Explanation: A square is a parallelogram that has four right angles and four sides with the same length.
Answer: Trapezoid
Explanation: A trapezoid is a quadrilateral that has exactly one pair of parallel sides
Answer: Parallelogram
Explanation: A parallelogram is a quadrilateral that has two pairs of parallel sides. The opposite sides have the same length.
Answer: Rectangle
Explanation: A rectangle is a parallelogram that has four right angles and diagonals are congruent.
Answer: Rhombus
Explanation: A rhombus is a parallelogram with four congruent sides and A rhombus has all the properties of a parallelogram. The diagonals intersect at right angles.
A sign has the shape of a quadrilateral that has two pairs of parallel sides, four sides with the same length, and no right angles
Answer: The sign is a rhombus. Explanation: A parallelogram has two pairs of parallel opposite sides; a parallelogram with four sides of the same length is a rhombus, and because the sign has no right angles it is not a rectangle or a square.
A tabletop has the shape of a quadrilateral with exactly one pair of parallel sides.
Answer: The tabletop is a trapezoid. Explanation: A trapezoid is a quadrilateral that has exactly one pair of parallel sides, whereas a parallelogram has two pairs of parallel sides.
Your friend says that a quadrilateral with at least two right angles must be a parallelogram. Is your friend correct? Explain.
Answer: No, your friend is not correct. Explanation: A trapezoid is only required to have one pair of parallel sides, and one of the sides connecting the two parallel sides could be perpendicular to them; such a right trapezoid has two right angles but is not a parallelogram.
Which One Doesn't Belong? Which set of lengths cannot be the side lengths of a parallelogram?
Answer: 9 yd, 5 yd, 5 yd, 3 yd
Explanation: In a parallelogram, opposite sides have the same length, so the four side lengths must form two equal pairs. The set 9 yd, 5 yd, 5 yd, 3 yd has only one matching pair, so it cannot be the side lengths of a parallelogram.
The dashed line shows how you cut the bottom of a rectangular door so it opens more easily. Classify the new shape of the door.
Draw the new shape of the door.
The original shape of the door was a rectangle, so it had two pairs of parallel sides. The new shape of the door has exactly one pair of parallel sides. So, the new shape of the door is a trapezoid.
The dashed line shows how you cut the corner of the trapezoidal piece of fabric. The line you cut is parallel to the opposite side. Classify the new shape of the four-sided piece of fabric.
A farmer encloses a section of land using the four pieces of fencing. Name all of the four-sided shapes that the farmer can enclose with the fencing.
Explanation: A parallelogram is a quadrilateral that has two pairs of parallel sides. Opposite sides have the same length, So four-sided shapes of fencing look like Parallelogram.
Explanation: A Trapezoid is a quadrilateral with exactly one pair of parallel sides. (There may be some confusion about this word depending on which country you're in. In India and Britain, they say trapezium; in America, trapezium usually means a quadrilateral with no parallel sides.)
Explanation: The diagonals of a square bisect each other and meet at 90°. The diagonals of a square bisect its angles. The opposite sides of a square are both parallel and equal in length. All four angles of a square are equal (each being 360°/4 = 90°, a right angle).
A name tag has the shape of a quadrilateral that has two pairs of parallel sides and four right angles. Opposite sides are the same length, but not all four sides are the same length.
A napkin has the shape of a quadrilateral that has two pairs of parallel sides, four sides with the same length, and four right angles.
Explanation: A square is a parallelogram that has four right angles and four sides of the same length.
Can you draw a quadrilateral that is not a square, but has four right angles? Explain.
Answer: Yes. A rectangle that is not a square has four right angles, but its four sides are not all the same length, so it is a quadrilateral with four right angles that is not a square.
Plot two more points in the coordinate plane to form a square. What two points can you plot to form a parallelogram? What two points can you plot to form a trapezoid? Do not use the same pair of points twice.
Which quadrilateral can be classified as a parallelogram, and rectangle, square, rhombus? Explain.
Answer: The square. Explanation: A square can be defined as a rhombus that is also a rectangle, in other words a parallelogram with four congruent sides and four right angles, so it can be classified as a parallelogram, a rectangle, a square and a rhombus. (A trapezoid, by contrast, is a quadrilateral with exactly one pair of parallel sides.)
The dashed line shows how you fold the flap of the envelope so it closes. Classify the new shape of the envelope.
A construction worker tapes off a section of land using the four pieces of caution tape. Name all of the possible shapes that the worker can enclose with the tape.
Explanation: A trapezoid is a quadrilateral that has exactly one pair of parallel sides.
Answer: \(\frac{1}{2}\)
Explanation: \(\frac{2}{3}\) − \(\frac{1}{6}\) = \(\frac{4}{6}\) − \(\frac{1}{6}\) = \(\frac{3}{6}\) = \(\frac{1}{2}\).
Answer: \(\frac{1}{9}\) ≈ 0.111
Explanation: \(\frac{1}{2}\) = \(\frac{9}{18}\), so \(\frac{9}{18}\) − \(\frac{7}{18}\) = \(\frac{2}{18}\) = \(\frac{1}{9}\), which is about 0.111.
Answer: \(\frac{13}{45}\) ≈ 0.289
Explanation: \(\frac{2}{5}\) − \(\frac{1}{9}\) = \(\frac{18}{45}\) − \(\frac{5}{45}\) = \(\frac{13}{45}\), which is about 0.289.
Label the Venn diagram to show the relationships among quadrilaterals. The first one has been done for you.
Explain how you decided where to place each quadrilateral.
Think and Grow: Relate Quadrilaterals
The Venn diagram shows the relationships among quadrilaterals.
Tell whether the statement is true or false.
All rhombuses are rectangles.
Rhombuses do not always have four right angles.
So, the statement is ___.
Answer: So, the statement is false.
All rectangles are parallelograms.
All rectangles have two pairs of parallel sides.
Tell whether the statement is true or false. Explain.
Some rhombuses are squares.
Explanation: A rhombus is a quadrilateral (plane figure, closed shape, four sides) with four equal-length sides and opposite sides parallel to each other. All squares are rhombuses, but not all rhombuses are squares. The opposite interior angles of rhombuses are congruent.
All parallelograms are rectangles.
Answer: False
Explanation: A rectangle is a parallelogram with four right angles, so all rectangles are also parallelograms and quadrilaterals. On the other hand, not all quadrilaterals and parallelograms are rectangles. A rectangle has all the properties of a parallelogram
All rectangles are squares.
Explanation: All squares are rectangles, but not all rectangles are squares.
Some parallelograms are trapezoids.
Explanation: A trapezoid has one pair of parallel sides and a parallelogram has two pairs of parallel sides. So a parallelogram is also a trapezoid.
Some rhombuses are rectangles.
Explanation: A rhombus is defined as a parallelogram with four equal sides. Is a rhombus always a rectangle? No, because a rhombus does not have to have 4 right angles. Kites have two pairs of adjacent sides that are equal.
All trapezoids are quadrilaterals.
Explanation: Trapezoids have only one pair of parallel sides; parallelograms have two pairs of parallel sides. The correct answer is that all trapezoids are quadrilaterals. Trapezoids are four-sided polygons, so they are all quadrilaterals
All squares are rhombuses.
Explanation: All squares are rhombuses, but not all rhombuses are squares. The opposite interior angles of rhombuses are congruent
Some trapezoids are squares.
Explanation: A trapezoid is a quadrilateral with at least one pair of parallel sides. In a square, there are always two pairs of parallel sides, so every square is also a trapezoid. Conversely, only some trapezoids are squares
Use the word cards to complete the graphic organizer.
Answer: The first box to be filled with Square, 3d box to be filled with Rectangle,4th box to be filled with trapezoid and final box to be filled with Quadrilateral.
Explanation: A Square can be defined as a rhombus which is also a rectangle, in other words, a parallelogram with four congruent sides and four right angles. A trapezoid is a quadrilateral with exactly one pair of parallel sides.
All rectangles are parallelograms. Are all parallelograms rectangles? Explain.
Answer: No, not all parallelograms are rectangles. Explanation: A parallelogram is a quadrilateral with two pairs of opposite sides that are equal and parallel; a rectangle is a parallelogram that, in addition, has right angles between adjacent sides. A parallelogram without right angles is therefore not a rectangle.
Newton says the figure is a square. Descartes says the figure is a parallelogram. Your friend says the figure is a rhombus. Are all three correct? Explain.
Answer: No
Explanation: A square is a parallelogram with four equal sides and four right angles, while a rhombus only requires four equal sides. If the figure has four equal sides but no right angles, it is a rhombus and a parallelogram but not a square, so not all three are correct.
You use toothpicks to create several parallelograms. You notice that opposite angles of parallelograms have the same measure. For what other quadrilaterals is this also true?
Parallelograms have the property that opposite angles have the same measure. Subcategories of parallelograms must also have this property.
___, ___, and ___ are subcategories of parallelograms.
So, ___, ____, and ____ also have opposite angles with the same measure.
Answer: Rectangles, rhombuses and squares are subcategories of parallelograms. So, rectangles, rhombuses and squares also have opposite angles with the same measure.
You use pencils to create several rhombuses. You notice that diagonals of rhombuses are perpendicular and divide each other into two equal parts. For what other quadrilateral is this also true? Explain your reasoning.
Answer: This is also true for squares. Explanation: Every square is a rhombus, so the diagonals of a square are also perpendicular and divide each other into two equal parts. (In a rectangle or a general parallelogram, the diagonals bisect each other but are not perpendicular.)
You place two identical parallelograms side by side. What can you conclude about the measures of adjacent angles in a parallelogram? For what other quadrilaterals is this also true? Explain your reasoning.
Adjacent angles in a parallelogram are supplementary (they add up to 180°), and opposite angles of a parallelogram are equal.
This is also true for rectangles, rhombuses and squares, because they are all parallelograms.
All trapezoids are parallelograms.
Answer: False. Explanation: The opposite sides of a parallelogram are equal and parallel in two pairs, but a trapezoid has only one pair of parallel sides. Therefore a trapezoid is not a parallelogram, and the statement is false.
Explanation: Each pair of co-interior angles are supplementary, because two right angles add to a straight angle, so the opposite sides of a rectangle are parallel. This means that a rectangle is a parallelogram, so, Its opposite sides are equal and parallel. Its diagonals bisect each other.
All squares are quadrilaterals.
Explanation: A closed figure with four sides. For example, kites, parallelograms, rectangles, rhombuses, squares, and trapezoids are all quadrilaterals. Kite: A quadrilateral with two pairs of adjacent sides that are equal in length; a kite is a rhombus if all side lengths are equal.
Some quadrilaterals are trapezoids.
Answer: True. Explanation: Trapezoids are four-sided polygons, so every trapezoid is a quadrilateral; therefore some quadrilaterals are trapezoids.
Some parallelograms are rectangles.
Explanation: Not all parallelograms are rectangles. A parallelogram is a rectangle if it has four right angles and two pairs of parallel and congruent sides.
All squares are rectangles and rhombuses.
Explanation: No, because all four sides of a rectangle don't have to be equal. However, the sets of rectangles and rhombuses do intersect, and their intersection is the set of squares, all squares are both a rectangle and a rhombus.
Newton says he can draw a quadrilateral that is not a trapezoid and not a parallelogram. Is Newton correct? Explain.
Answer: Yes, Newton is correct. Explanation: A quadrilateral with no parallel sides at all (for example, a kite that is not a rhombus) is neither a trapezoid, which needs exactly one pair of parallel sides, nor a parallelogram, which needs two pairs.
Explain why a parallelogram is not a trapezoid.
Explanation: A parallelogram has two pairs of parallel sides, while a trapezoid has exactly one pair of parallel sides, so a parallelogram is not a trapezoid.
Write always, sometimes, or never to make the statement true? Explain.
A rhombus is ___ a square.
Answer: A rhombus is some times a square
Explanation: The statement "a rhombus is a square" is sometimes true. It is true when the rhombus has four right angles; it is not true when the rhombus does not have any right angles.
A trapezoid is __ a rectangle.
Answer: A trapezoid is sometimes a rectangle.
Explanation: Using the definition of a trapezoid as a quadrilateral with at least one pair of parallel sides, a trapezoid is a rectangle only when it also has a second pair of parallel sides and four right angles, so a trapezoid is only sometimes a rectangle.
A parallelogram is ___ a quadrilateral.
Answer: A parallelogram is always a quadrilateral.
Explanation: A parallelogram must have 4 sides, so they must always be quadrilaterals.
A quadrilateral has exactly three sides that have the same length. Why can the figure not be a rectangle?
Explanation: In a rectangle, opposite sides have the same length, so the side lengths come in two equal pairs. If exactly three sides had the same length, the fourth side would have to match one of them as well, which contradicts "exactly three." So the figure cannot be a rectangle.
You fold the rectangular piece of paper. You notice that the line segments connecting the halfway points of opposite sides are perpendicular. For what other quadrilateral is this also true?
You tear off the four corners of the square and arrange them to form a circle. You notice that the sum of the angle measures of a square is equal to 360°. For what other quadrilaterals is this also true?
Answer: This is true for every quadrilateral: the angle measures of any quadrilateral (parallelogram, rectangle, rhombus, or trapezoid) add up to 360°.
5 pt = ___ c
Answer : 5 pt = 10 c
Convert from pints to cups.
1 pint = 2 cups
5 pints = 5 × 2 cups
5 pints = 10 cups
32 fl oz = ___ c
Answer: 32 fl oz = 4 c
Convert from fl oz to cups.
1 fl oz = 0.125 cups
32 fl oz are equal to 4 c.
20 qt = ___ c
Answer : 20 qt = 80 c
Convert from quarts to cups.
1 quart = 4 cups
20 qt = 20 × 4 cups = 80 cups
A homeowner wants to install solar panels on her roof to generate electricity for her house. A solar panel is 65 inches long and 39 inches wide.
a. The shape of the panel has 4 right angles. Sketch and classify the shape of the solar panel.
Explanation: A rectangle is a parallelogram that has four right angles. So the shape of the solar panel is a rectangle.
b. There are 60 identical solar cells in a solar panel, arranged in an array. Ten cells meet to form the length of the panel, and six cells meet to form the width. Classify the shape of each solar cell. Explain your reasoning.
Answer: A rectangle is a parallelogram that has four right angles. So the shape of the solar panel is a rectangle.
The home owner measures three sections of her roof.
a. Classify the shape of each section in as many ways as possible.
Explanation: In a triangle one of the angle is a right angle (90 Deg ) called as Right triangle
Explanation: Rectangle is a parallelogram with four right angles, so all rectangles are also parallelograms and quadrilaterals. On the other hand, not all quadrilaterals and parallelograms are rectangles.
Answer: Isosceles trapezoid
Explanation: An isosceles trapezoid is a trapezoid whose non-parallel sides are congruent.
b. About how many solar panels can fit on the measured sections of the roof? Explain your reasoning.
One solar panel can produce about 30 kilowatt-hours of electricity each month. The homeowner uses her electric bills to determine that she uses about 1,200 kilowatt-hours of electricity each month.
a. How many solar panels should the homeowner install on her roof?
Answer: 40 Solar panels
Explanation: 40 Solar panels X 30 kilowatt-hours of electricity each month per one solar panel equal to 1,200 kilowatt-hours of electricity per month, So the answer is 40 solar panels.
b. Will all of the solar panels fit on the measured sections of the roof? Explain.
Quadrilateral Lineup
Players take turns spinning the spinner.
On your turn, cover a quadrilateral that matches your spin.
If you land on "Lose a turn," then do not cover a quadrilateral.
The first player to get four in a row twice, horizontally, vertically, or diagonally, wins!
14.1 Classify Triangles
Explanation: In an equilateral triangle, all the side lengths are equal, and each of the interior angles measures 60°, so the triangle is also equiangular.
Explanation: A triangle in which one angle is a right angle (90°) is called a right triangle.
14.2 Classify Quadrilaterals
Explanation: A rectangle is a parallelogram that has four right angles; opposite sides are the same length.
Plot two more points in the coordinate plane to form a rectangle. What two points can you plot to form a trapezoid? What two points can you plot to form a rhombus? Do not use the same pair of points twice.
Can you draw a quadrilateral that has exactly two right angles? Explain.
Answer: Yes. Explanation: A quadrilateral with exactly two right angles can be drawn; for example, a right trapezoid has two right angles where one side meets the pair of parallel sides.
The dashed line shows how you break apart the graham cracker. Classify the new shape of each piece of the graham cracker.
Explanation: The diagonals of a square bisect each other and meet at 90°.
14.3 Relate Quadrilaterals
All rectangles are quadrilaterals.
Explanation: A closed figure with four sides. For example, kites, parallelograms, rectangles, rhombuses, squares, and trapezoids are all quadrilaterals
Some parallelograms are squares.
Explanation: Squares fulfill all criteria of being a rectangle because all angles are right angle and opposite sides are equal. Similarly, they fulfill all criteria of a rhombus, as all sides are equal and their diagonals bisect each other.
All trapezoids are rectangles.
Explanation: Rectangles are defined as a four-sided polygon with two pairs of parallel sides. On the other hand, a trapezoid is defined as a quadrilateral with only one pair of parallel sides.
Some rectangles are rhombuses.
Explanation: A rectangle is a parallelogram with all its interior angles being 90 degrees. A rhombus is a parallelogram with all its sides equal. This means that for a rectangle to be a rhombus, its sides must be equal. A rectangle can be a rhombus only if has extra properties which would make it a square
Some squares are trapezoids.
Explanation: The definition of a trapezoid is that it is a quadrilateral that has at least one pair of parallel sides. A square, therefore, would be considered a trapezoid.
All quadrilaterals are squares.
Answer: False. Explanation: A quadrilateral is any closed figure with four sides; kites, parallelograms, rectangles, rhombuses, squares and trapezoids are all quadrilaterals, while a square is the special case with four equal sides and four right angles. So not all quadrilaterals are squares.
Which model shows 0.4 × 0.2?
A triangle has angle measures of 82°, 53°, and 45°. Classify the triangle by its angles.
Answer: Acute triangle. Explanation: All three angles (82°, 53° and 45°) measure less than 90°, so the triangle is an acute triangle.
Which expressions have an estimated difference of \(\frac{1}{2}\) ?
A rectangular prism has a volume of 288 cubic centimeters. The height of the prism is 8 centimeters. The base is a square. What is a side length of the base?
Explanation: To find the volume of a rectangular prism, multiply its three dimensions: length × width × height. Since 6 × 6 × 8 = 288 cubic centimeters, the side length of the square base is 6 cm.
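The side length can also be found directly from the volume formula, as a quick check: with a square base of side \(s\), \(V = s^2 \times h\), so \(s^2 = 288 \div 8 = 36\) and \(s = \sqrt{36} = 6\) centimeters.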
A sandwich at a food stand costs $3.00. Each additional topping costs the same extra amount. The coordinate plan shows the costs, in dollars, of sandwiches with different numbers of additional toppings. What is the cost of a sandwich with 3 additional toppings?
Which statements are true?
Answer: The following statements are true: option 2, option 3 and option 4.
Explanation:
Option 2: All squares are rectangles and parallelograms. This is true because a square satisfies every requirement of a rectangle (all angles are right angles and opposite sides are equal) and of a parallelogram (two pairs of parallel sides).
Option 3: All squares are rhombuses. This is true because every square has four equal sides; not every rhombus is a square, but that is not what the statement claims.
Option 4: Every trapezoid is a quadrilateral. This is true because a trapezoid is a four-sided polygon.
Your friend makes a volcano for a science project. She uses 10 cups of vinegar. How many pints of vinegar does he use?
Explanation: 1 cup is equal to 0.5 pints, therefore 10 cups are equal to 5 pints.
The volume of the rectangular prism is 432 cubic centimeters. What is the length of the prism?
Explanation: To find the volume of a rectangular prism, multiply its three dimensions: length × width × height.
So, 6 cm × 9 cm × 8 cm = 432 cubic centimeters.
Therefore the length of the prism is 9 cm.
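Taking the 6 cm width and 8 cm height used in the explanation above as the two known dimensions, the length follows directly: \(\text{length} = 432 \div (6 \times 8) = 432 \div 48 = 9\) centimeters.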
Descartes draws a pentagon by plotting another point in the coordinate plane and connecting the points. Which coordinates could he use?
Newton rides to the dog park in a taxi. He owes the driver $12. He calculates the driver's tip by multiplying $12 by 0.15. How much does he pay the driver, including the tip?
Explanation: The fare is $12 and the tip is $12 × 0.15 = $1.80, so the total is $12 + $1.80 = $13.80.
Therefore the answer is $13.80.
A quadrilateral has four sides with the same length, two pairs of parallel sides, and four 90° angles. Classify the quadrilateral in as many ways as possible.
Answer: Square, rectangle, rhombus, parallelogram and quadrilateral.
Explanation: A quadrilateral with four sides of the same length, two pairs of parallel sides and four 90° angles is a square, and every square is also a rectangle, a rhombus and a parallelogram.
Which ordered pair represents the location of a point shown in the coordinate plane?
What is the product of 5,602 and 17?
Answer: 95234
Explanation: 5602 X 17 is equal to 95234.
Which pair of points do not lie on a line that is perpendicular to the x-axis?
Newton has a gift in the shape of a rectangular prism that has a volume of 10,500 cubic inches. The box he uses to ship the gift is shown.
Part A What is the volume of the box?
Part B What is the volume, in cubic inches, of the space inside the box that is not taken up by gift? Explain.?
Which expressions have a product greater than \(\frac{5}{6}\)?
Newton is thinking of a shape that has 4 sides, only one pair of parallel sides, and angle measures of 90°, 40°, 140°, and 90°. Which is Newton's shape?
Explanation: Only one pair of opposite sides is parallel, so Newton's shape is a trapezoid (in fact a right trapezoid, since it has two 90° angles).
Which rectangular prisms have a volume of 150 cubic feet?
Answer: Option 1
Explanation: volume of a rectangular prism, multiply its 3 dimensions: length x width x height. The volume is expressed in cubic feet.
So,2 ft X 25 ft X 3 ft is equal to 150 cubic ft, Therefore the right answer is option 1.
Each student in your grade makes a constellation display by making holes for the stars of a constellation on each side of the display. Each display is a rectangular prism with a square base.
Your science teacher orders a display for each student. The diagram shows the number of packages that can fit in a shipping box.
a. How many displays come in one box?
b. There are 108 students in your grade. How many boxes of displays does your teacher order? Explain.
c. The volume of the shipping box is 48,000 cubic inches. What is the volume of each display?
d. The height of each display is 15 inches. What are the dimensions of the square base?
e. Estimate the dimensions of the shipping box.
f. You paint every side of the display except the bottom. What is the total area you will paint?
g. You need a lantern to light up your display. Does the lantern fit inside of your display? Explain.
On one side of your display, you create an image of the constellation Libra. Each square on the grid is 1 square inch.
a. Classify the triangle formed by the points of the constellation.
b. What are the coordinates of the points of the constellation?
c. What is the height of the constellation on your display?
You use the coordinate plane to create the image of the Big Dipper.
a. Plot the points A(6, 2), B(8, 2), C(7, 6), D(5, 5), E(7, 9), F(6, 12), and G(4, 14).
b. Draw lines connecting the points of quadrilateral ABCD. Draw \(\overline{C E}\), \(\overline{E F}\) and \(\overline{F G}\).
c. Is quadrilateral ABCD a trapezoid? How do you know?
Answer: Yes ABCD is a trapezoid because all sides are not equal and only one pair has parallel sides.
Use the Internet or some other resource to learn more about constellations. Write one interesting thing you learn.
Answer: A constellation is a group of stars that appears to form a pattern or picture like Orion the Great Hunter, Leo the Lion, or Taurus the Bull. Constellations are easily recognizable patterns that help people orient themselves using the night sky. There are 88 "official" constellations.
Sharpen your math skills by practicing the problems from Big Ideas Math Book 5th Grade Answer Key Chapter 14 Classify Two-Dimensional Shapes. All the solutions of Grade 5 Chapter 14 Classify Two-Dimensional Shapes are prepared by the math professionals. Thus you can prepare effectively and score good marks in the exams.
How do I make the conceptual transition from multivariable calculus to differential forms?
One way to define the algebra of differential forms $\Omega(M)$ on a smooth manifold $M$ (as explained by John Baez's week287) is as the exterior algebra of the dual of the module of derivations on the algebra $C^{\infty}(M)$ of smooth functions $M \to \mathbb{R}$. Given that derivations are vector fields, 1-forms send vector fields to smooth functions, and some handwaving about area elements suggests that k-forms should be built from 1-forms in an anticommutative fashion, I am almost willing to accept this definition as properly motivated.
One can now define the exterior derivative $d : \Omega(M) \to \Omega(M)$ by defining $d(f dg_1\ \dots\ dg_k) = df\ dg_1\ \dots\ dg_k$ and extending by linearity. I am almost willing to accept this definition as properly motivated as well.
Now, the exterior derivative (together with the Hodge star and some fiddling) generalizes the three main operators of multivariable calculus: the divergence, the gradient, and the curl. My intuition about the definitions and properties of these operators comes mostly from basic E&M, and when I think about the special cases of Stokes' theorem for div, grad, and curl, I think about the "physicist's proofs." What I'm not sure how to do, though, is to relate this down-to-earth context with the high-concept algebraic context described above.
Question: How do I see conceptually that differential forms and the exterior derivative, as defined above, naturally have physical interpretations generalizing the "naive" physical interpretations of the divergence, the gradient, and the curl? (By "conceptually" I mean that it is very unsatisfying just to write down the definitions and compute.) And how do I gain physical intuition for the generalized Stokes' theorem?
(An answer in the form of a textbook that pays special attention to the relationship between the abstract stuff and the physical intuition would be fantastic.)
dg.differential-geometry ca.classical-analysis-and-odes smooth-manifolds intuition
edited Oct 7 '13 at 2:46
Qiaochu Yuan
$\begingroup$ Really? I would like to award reputation for good answers and I am not necessarily just looking for a list of recommendations; perhaps someone has a clear enough intuition that it can be described in a paragraph or two. $\endgroup$ – Qiaochu Yuan Jan 3 '10 at 11:30
$\begingroup$ Have you seen From Calculus to Cohomology by Madsden and Tornehave? It's not really about physical intuition (which is why I'm making this a comment), but it might be helpful. $\endgroup$ – Akhil Mathew Jan 3 '10 at 15:28
$\begingroup$ I still think this should be community wiki because it's a sorted list. I didn't like an answer, and I'd like to vote it down, but not the user. $\endgroup$ – Harry Gindi Jan 3 '10 at 15:46
$\begingroup$ +1 for "I freaking love this question". $\endgroup$ – B. Bischof Jan 14 '10 at 22:53
$\begingroup$ A 1-form is a function which grows proportionally to how fast you are moving. Thus it doesn't matter how you parametrize the curve you are moving on - you either end up integrating a smaller function for a longer period of time, or a bigger function for a shorter period of time. This is why you can't integrate functions on manifolds - they have no intrinsic "unit speeds", because there are many choices of local coordinates - but you can still integrate differential forms. k-forms just generalize this to higher dimensions. $\endgroup$ – Steven Gubkin Jan 27 '11 at 20:25
Here's a sketch of the relation between div-grad-curl and the de Rham complex, in case you might find it useful.
The first thing to realise is that the div-grad-curl story is inextricably linked to calculus in a three-dimensional euclidean space. This is not surprising if you consider that this stuff used to go by the name of "vector calculus" at a time when a physicist's definition of a vector was "a quantity with both magnitude and direction". Hence the inner product is an essential part of the baggage, as is the three-dimensionality (in the guise of the cross product of vectors).
In three-dimensional euclidean space you have the inner product and the cross product and this allows you to write the de Rham sequence in terms of div, grad and curl as follows: $$ \matrix{ \Omega^0 & \stackrel{d}{\longrightarrow} & \Omega^1 & \stackrel{d}{\longrightarrow} & \Omega^2 & \stackrel{d}{\longrightarrow} & \Omega^3 \cr \uparrow & & \uparrow & & \uparrow & & \uparrow \cr \Omega^0 & \stackrel{\mathrm{grad}}{\longrightarrow} & \mathcal{X} & \stackrel{\mathrm{curl}}{\longrightarrow} & \mathcal{X} & \stackrel{\mathrm{div}}{\longrightarrow} & \Omega^0 \cr}$$ where $\mathcal{X}$ stands for vector fields and the vertical maps are, from left to right, the following isomorphisms:
the identity: $f \mapsto f$
the musical isomorphism $X \mapsto \langle X, -\rangle$
$X \mapsto \omega$, where $\omega(Y,Z) = \langle X, Y \times Z \rangle$
$f \mapsto f \mathrm{dvol}$, where $\mathrm{dvol}(X,Y,Z) = \langle X, Y \times Z\rangle$
up to perhaps a sign here and there that I'm too lazy to chase.
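Spelled out in coordinates (a standard check, included here only for concreteness): for $X = (P,Q,R)$ the musical isomorphism gives $\omega = P\,dx + Q\,dy + R\,dz$, and
$$d\omega = \Big(\tfrac{\partial R}{\partial y} - \tfrac{\partial Q}{\partial z}\Big)\, dy\wedge dz + \Big(\tfrac{\partial P}{\partial z} - \tfrac{\partial R}{\partial x}\Big)\, dz\wedge dx + \Big(\tfrac{\partial Q}{\partial x} - \tfrac{\partial P}{\partial y}\Big)\, dx\wedge dy,$$
whose coefficients are exactly the components of $\mathrm{curl}\, X$ read through the third vertical map. Likewise $df = \tfrac{\partial f}{\partial x}\,dx + \tfrac{\partial f}{\partial y}\,dy + \tfrac{\partial f}{\partial z}\,dz$ corresponds to $\mathrm{grad}\, f$ under the second map, and $d\big(P\, dy\wedge dz + Q\, dz\wedge dx + R\, dx\wedge dy\big) = \big(\tfrac{\partial P}{\partial x} + \tfrac{\partial Q}{\partial y} + \tfrac{\partial R}{\partial z}\big)\, dx\wedge dy\wedge dz$ corresponds to $\mathrm{div}\, X$ under the last one.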
The beauty of this is that, first of all, the two vector calculus identities $\mathrm{div} \circ \mathrm{curl} = 0$ and $\mathrm{curl} \circ \mathrm{grad} = 0$ are now subsumed simply in $d^2 = 0$, and that whereas div, grad, curl are trapped in three-dimensional euclidean space, the de Rham complex exists in any differentiable manifold without any extra structure. We teach the language of differential forms to our undergraduates in Edinburgh in their third year and this is one way to motivate it.
As for the integral theorems, I always found Spivak's Calculus on manifolds to be a pretty good book.
Another answer mentioned Gravitation by Misner, Thorne and Wheeler. Personally I found their treatment of differential forms very confusing when I was a student. I'm happier with the idea of a dual vector space than I am with the "milk crates" they draw to illustrate differential forms. Wald's book on General Relativity had, to my mind, a much nicer treatment of this subject.
José Figueroa-O'Farrill
$\begingroup$ It should be noted that Loring Tu has written a supplementary book to "Differential Forms in Algebraic Topology" which also gives a good explanation of this. $\endgroup$ – Brandon Thomas Van Over Jun 13 '17 at 1:42
There is a book that not many physicists I know of seem to like (except mathematical physicists, of course), but that is a true gem in the eyes of mathematicians: I am referring to V. Arnold Mathematical Methods of Classical Mechanics, here on amazon.
In this book, which is in the short list (number 12, to be precise) of my fundamental math book across all math fields, Chapter VIII is entirely devoted to differential forms.
If you read it, you have, I believe, an excellent answer.
One small suggestion to build understanding: DISCRETIZE. Do not think of fancy integrals; simply think that 0-forms are scalars, 1-forms oriented segments, 2-forms oriented areas, and that integration over them is simply sums. Now "prove" Stokes' theorem for simple tiny cubes, and notice that the definition of the derivative of forms is designed exactly to keep track of faces. At the infinitesimal level, it is just bookkeeping.
If I ever had to teach a basic class on forms, I would do precisely that: discretize first.
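The whole story can even be checked numerically in a few lines. The sketch below (Python with NumPy, offered as an illustration of the "discretize" idea rather than anything canonical) compares the sum of $d\omega$ over small cells of the unit square with the sum of $\omega = P\,dx + Q\,dy$ over the boundary edges, for one arbitrarily chosen 1-form.

```python
import numpy as np

# An arbitrary smooth 1-form  omega = P dx + Q dy  on the plane
P = lambda x, y: x**2 * y
Q = lambda x, y: x + y**2

# Its exterior derivative d(omega) = (dQ/dx - dP/dy) dx^dy, worked out by hand
d_omega = lambda x, y: 1.0 - x**2

n = 2000                              # cells per side of the unit square
h = 1.0 / n
t = (np.arange(n) + 0.5) * h          # midpoints of the small edges/cells

# Sum of d(omega) over the small square cells ("area" side of Stokes)
xm, ym = np.meshgrid(t, t)
interior = np.sum(d_omega(xm, ym)) * h * h

# Sum of omega over the boundary edges, counter-clockwise ("boundary" side)
boundary = (np.sum(P(t, 0.0)) * h        # bottom: dx = +h, dy = 0
            + np.sum(Q(1.0, t)) * h      # right:  dx = 0,  dy = +h
            - np.sum(P(t, 1.0)) * h      # top:    dx = -h, dy = 0
            - np.sum(Q(0.0, t)) * h)     # left:   dx = 0,  dy = -h

print(interior, boundary)   # both tend to 2/3 as n grows
```

Both sums converge to the same value (2/3 for this particular $\omega$) as the grid is refined, which is Stokes' (Green's) theorem in its discretized form.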
Mirco A. Mannucci
$\begingroup$ On the topic of discretization, Peter Saveliev's online material on discrete differential forms may be of interest. $\endgroup$ – J W Apr 12 '14 at 11:27
$\begingroup$ Infinitesimal oriented segments are vectors, so 1-forms are dual to infinitesimal oriented segments, and 2-forms are dual to infinitesimal oriented surfaces, and so on. This leads naturally to the duality between homology and cohomology in topology. Differential forms are roughly the dual space of formal sums of geometric objects, like submanifolds or cycles. $\endgroup$ – Ben McKay Jul 24 '19 at 7:47
$\begingroup$ @JW The link is broken now. Is there a new link? Thanks. $\endgroup$ – Yai0Phah Nov 21 '19 at 8:53
$\begingroup$ @Yai0Phah: Try calculus123.com/wiki/Discrete_calculus $\endgroup$ – J W Nov 21 '19 at 12:23
I have struggled with this question myself, and I couldn't find a perfectly satisfactory answer. In the end, I decided that the definition of a differential form is a rather strange compromise between geometric intuition and algebraic simplicity, and that it cannot be motivated by either of these by itself. Here, by geometric intuition I mean the idea that "differential forms are things that can be integrated" (as in Bachmann's notes), and by algebraic simplicity I mean the idea that they are linear.
The two parts of the definition that make perfect geometric sense are the d operator and the wedge product. The operator d is simply that operator for which Stokes' theorem holds, namely if you integrate d of a n-form over an n+1-dimensional manifold, you get the same thing as if you integrated the form over the n-dimensional boundary.
The wedge product is a bit harder to see geometrically, but it is in fact the proper analogy to the product measure. Here's how it works for one-forms. Suppose you have two one-forms a and b (on a vector space, for simplicity). Think of them as a way of measuring lengths, and suppose you want to measure area. Here's how you do it: pick a vector $\vec v$ such that $a(\vec v) \neq 0$ but $b(\vec v) = 0$ and a vector $\vec w$ s.t. $a(\vec w) = 0$ but $b(\vec w) \neq 0$. Declare the area of the parallelogram determined by $\vec v$ and $\vec w$ to be $a(\vec v) \cdot b(\vec w)$. By linearity, this will determine area of any parallelogram. So, we get a two-form, which is in fact precisely $a \wedge b$.
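To see the prescription in the simplest case, take $a = dx$ and $b = dy$ on $\mathbb{R}^2$, with $\vec v = (v_1, v_2)$ and $\vec w = (w_1, w_2)$. The recipe above, extended by linearity and antisymmetry, gives
$$(dx \wedge dy)(\vec v, \vec w) = v_1 w_2 - v_2 w_1 = \det\begin{pmatrix} v_1 & w_1 \\ v_2 & w_2 \end{pmatrix},$$
the signed area of the parallelogram, which is a special case of the general formula $(a \wedge b)(\vec v, \vec w) = a(\vec v)\, b(\vec w) - a(\vec w)\, b(\vec v)$.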
Now, the part that makes no sense to me geometrically is why the hell differential forms have to be linear. This implies all kinds of things that seem counter-intuitive to me; for example there is always a direction in which a one-form is zero, and so for any one-form you can draw a curve whose "length" with respect to the form is zero. More generally, when I was learning about forms, I was used to measures as those things which we integrate, and I still see no geometric reason as to why measures (and, in particular, areas) are not forms.
However, this does make perfect sense algebraically: we like linear forms, they are simple. For example (according to Bachmann), their linearity is the thing that allows the differential operator d to be defined in such a way that Stokes' theorem holds. Ultimately, however, I think the justification for this are all the short and sweet formulas (e.g. Cartan's formula) that make all kinds of calculations easier, and all depend on this linearity. Also, the crucial magical fact that d-s, wedges, and inner products of differential forms all remain differential forms needs this linearity.
Of course, if we want them to be linear, they will also be signed, and so measures will not be differential forms. To me, this seems like a small sacrifice of geometry for the sake of algebra. Still, I don't believe it's possible to motivate differential forms by algebra alone. In particular, the only way I could explain to myself why we take the "Alt" of a product of forms in the definition of the wedge product is the geometric explanation above.
So, I think the motivation and power behind differential forms is that, without wholly belonging to either the algebraic or geometric worlds, they serve as a nice bridge in between. One thing that made me happier about all this is that, once you accept their definition as a given and get used to it, most of the proofs (again, I'm thinking of Cartan's formula) can be understood with the geometric intuition.
Needless to say, if anybody can improve on any of the above, I'll be very grateful to them.
P.S. For the sake of completeness: I think that "inner products" make perfect algebraic sense, but are easy to see geometrically as well.
Ilya Grigoriev
$\begingroup$ Of course forms have to be signed (indeed, alternating) objects. Think of the classical Stokes' theorem, the direction of the normal vector and the associated direction of the boundary. If you reverse one you have to reverse the other, and the integrals change sign. As for linearity, think of integration over a very small surface – say, a parallelogram. If you double one side, the integral should double too (asymptotically). Similarly, if the sides are parallel, the integral vanishes. From ω(X,X)=0 and linearity you get the alternating property. Oh, and remember the Jacobian determinant? $\endgroup$ – Harald Hanche-Olsen Jan 3 '10 at 22:17
$\begingroup$ IMO the answer to your question as to why we choose forms to be linear goes back to a far simpler observation. That the determinant is the unique alternating multi-linear function on square matrices that takes value $1$ on the identity. This says alternating multi-linear objects measure (signed) volume. A form is just a linear combination of projections followed by determinants, so forms are precisely the objects you need to measure signed volume when you have positive co-dimension. $\endgroup$ – Ryan Budney Jan 4 '10 at 0:18
$\begingroup$ There are "things you can integrate" more general than differential forms, such as arc length or surface area. A fairly general notion of something you can integrate is a density in the sense of Gelfand. To amplify Harald's point on the importance of Stokes' Theorem, according to this MO question, imposing the linearity condition on a density is equivalent to asking that Stokes' Theorem holds. $\endgroup$ – Tim Campion Nov 5 '13 at 14:46
I'm not convinced that this is what you are looking for, Qiaochu, but I think it's worth mentioning anyway.
As someone who has no real sense for "physical intuition," and who -- probably not coincidentally -- hated his multivariable calculus class, I've found (what I've read of) David Bachman's A geometric approach to differential forms to be wonderfully intuitive. Best of all, it's available for free online.
Harrison Brown
$\begingroup$ As of 2012, there's a second edition of Bachman's book, available from Springer. It's aimed at a slightly more advanced audience than the original edition and includes an introductory chapter on differential geometry. $\endgroup$ – J W Apr 12 '14 at 11:17
The great book The Geometry of Physics (2nd edition) by Frankel could be exactly what you need, see excerpts here.
Thomas Sauvaget
I'm not sure if this point of view is taken up in the many references which are named here, but I'll say something about an "elementary" way to discover the exterior derivative which sounds like ordinary calculus. Let's take on the point of view that a $k$-form is something you integrate over a $k$-dimensional submanifold. If you imagine $k$-dimensional submanifolds as being composed of a $k$-dimensional blanket of little $k$-parallelograms, then this is a geometrically natural point of view since the $k$-form will assign a (small) number to each of these parallelograms. To actually realize a submanifold as such a "blanket" is to give a parameterization. (These parallelograms are oriented; this picture is different from surface integration of scalar functions in Riemannian geometry where one simply imagines some distribution of mass on the manifold and the integral is completely measure-theoretic. There the parallelograms have a positive mass given by the $k$-dimensional volume determined by the inner product.)
In one-variable calculus, when $f$ is a function, $df$ tells you the change in $f$ per small change in its input, and if you integrate it over a curve from $a$ to $b$, it expresses the total change in $f$ from $a$ to $b$. Now, a one form $\eta$ is integrated not over points but rather over curves. Still, you can ask, how does $\int_\gamma\eta$ change when you perturb $\gamma$? Well, if you deform a closed curve $\gamma_a$ into another curve $\gamma_b$, the difference between the integrals over $\gamma_b$ and $\gamma_a$ is some derivative we can call "$d\eta$" integrated over the surface swept between the two.
Picturing the case where $\gamma_a$ and $\gamma_b$ bound an annulus is a good thing to consider here; this interpretation tells you how to orient the boundary of the annulus if you want to think of $\int_\Sigma d\eta = \int_{\gamma_b} \eta - \int_{\gamma_a} \eta$ as being $\int_{\partial \Sigma} \eta$. On the other hand, you can take the point of view that the orientation for $\Sigma$ is determined by the requirement that we start at $\gamma_a$ and go to $\gamma_b$ (much like the case for $df$ of a function). You can then contract the inner circle to a point to recover Stokes' theorem for a disk -- the integral over the inner circle will vanish in the limit by the linearity and continuity of the form (a similar thing will happen in higher dimensions but the linearity is needed for the cancellation over the inner, closed surface).
It's not completely necessary that the curve (or $k$-dimensional submanifold) you deform is closed, but by rule the boundary should remain fixed during the deformation or you will miss out on part of the boundary.
Using a specific example like a square/cube, we can get a coordinate representation for $d\eta$ through the fundamental theorem of calculus. (For $0$ forms, every point is closed, so we did not need to worry about the word "closed" before.)
It is easy to see many properties. For example, let's take $\eta$ to be a $1$-form in $3$-space; then $d^2 \eta$ is clearly $0$. Let $\gamma$ be a circle, and let $\Sigma_a$ and $\Sigma_b$ be the upper and lower hemispheres of a ball $B$ whose equator is $\gamma$. Then $\int_{\Sigma_a} d \eta = \int_\gamma \eta = \int_{\Sigma_b} d \eta$ by Stokes' theorem for a disk. On the other hand, the integral of $d^2\eta$ over the ball $B$ is just $\int_{\Sigma_b} d \eta - \int_{\Sigma_a} d\eta = 0$ because you can sweep out $B$ by deforming $\Sigma_a$ to $\Sigma_b$ with the boundary fixed. Since $\int_B d^2 \eta = 0$ for every ball, $d^2 \eta$ is identically $0$. When you execute this proof for a square, you see that mixed partials commute.
I would like to know if the product rule can easily be seen through this interpretation, but I have not thought enough about it to see it clearly yet.
To get an intuitive understanding of Stokes' theorem, I recommend the book by Arnol'd on mechanics. It gives a very intuitive definition of the exterior derivative, in such a way that Stokes' theorem becomes, heuristically, very easy to grasp.
I also find Analysis, manifolds and physics and Geometry, topology and physics to be two great source of inspiration to understand the intertwining between geometry and physics. The first is written by mathematicians, the second by a physicist.
Baez's book "Gauge Fields, Knots, and Gravity" does a good job of geometrically motivating differential forms in the first section on electromagnetism. Unfortunately, I already was happy with forms when I read it, so it may or may not be what you are looking for. It certainly does have all the right infinitesimal drawings to motivate the definitions, though. It might make a good intuitive complement to whatever abstract resource you choose.
You might also want to skim through parts of Hubbard's "Vector Calculus, Linear Algebra, and Differential Forms: a unified approach". This is used at Cornell as a textbook for a 2 semester calc/linear algebra/analysis sequence. About half of the second semester is spent developing and applying differential forms. There is somewhat less intuitive explanation, but some very good motivation for why we should define things the way we do.
Matt Noonan
$\begingroup$ I'd second Hubbard's book. He gives a very natural definition of the exterior derivative of a differential form, which is what it sounds like you're looking for Yuan. IMO it's a significant step-up compared to texts like Spivak or Bachman. $\endgroup$ – Ryan Budney Jan 3 '10 at 18:26
Some very nice textbooks have already been mentioned, but some of my favorites that haven't been are:
William L. Burke, Applied differential geometry
Gabriel Weinreich, Geometrical vectors
Mike Shulman
$\begingroup$ +1.The late William Burke's book has sadly been forgotten over the last decade or so as Frankel's and Nakahara's more comprehensive texts have supplanted it. But I believe it should be required reading by both physicists and mathematicians in training. $\endgroup$ – The Mathemagician Mar 25 '12 at 8:07
The notes by Bachman recommended by Harrison Brown look pretty nice to me, but it seems to me that it is possible to clarify what he says even further by focusing on the simplest cases, namely the integral of a "constant function" over the simplest possible domain.
For the integral over an interval, the simplest case consists of a constant function. You can extend this case to the general case by the additive property of an integral and taking limits. But if you want an integral that is independent of the parameterization of the interval, this leads naturally to the idea that you don't want to integrate just a function $f(x)$ but a "1-form" $f(x) dx$.
This generalizes naturally to an integral of a constant function over a line segment sitting in $R^n$. If you want the concept of an integral that is independent of the choice of a linear parameterization of the segment, as well as the linear co-ordinates on $R^n$, then this leads naturally to the fact that what should be integrated is a "dual vector", i.e. a constant $1$-form. In fact, when developing these ideas, I suggest using an abstract real vector space $V$ as the ambient space instead of $R^n$.
When considering higher dimensions, I suggest focusing on linear embeddings of a $k$-dimensional cube and asking what gives a linear co-ordinate independent additive function of flat $k$-cubes embedded in $R^n$. I have not worked out the details myself, but I suspect that this leads naturally to the concept of constant $k$-forms.
My recollection is that there is a book "Advanced Calculus" by Harold Edwards that presents all of this, but I haven't looked at the book in a very long time.
In particular, it is worth noting that the question asked is really about algebra and not analysis. The analysis arises only when you want to extend the definition of an integral to a more general class of functions beyond constant ones.
ADDED LATER:
My answer above does not address the exterior derivative. I will just add a brief comment about this and leave the details to the reader. My view of the exterior derivative is that, once you decide that exterior forms are indeed the natural objects of integration over a domain (but start with cubes!) in Euclidean space, it is the natural co-ordinate-free algebraic consequence of the fundamental theorem of calculus (or, if you insist, Stokes' theorem). That $d^2 = 0$ is the appropriate co-ordinate-free expression of the basic fact that "partials commute".
Deane Yang
$\begingroup$ The main thing I've never understood about differential forms is their "coordinate independence." I know what the general definition of a differentiable manifold is, so I see why it might be interesting there (topology might obstruct the existence of a global coordinate patch), but I don't understand why differential forms are taught for R^n. Since in R^n there is a natural inner product (by which you mean on the tangent space?) and coordinate system, why do we care? In what way are they "coordinate independent"? Is the coordinate change just if you want to change to polar or something? $\endgroup$ – David Corwin Jan 5 '10 at 3:25
$\begingroup$ It's true that there is a natural coordinate system on $R^n$, but you might well want to use a different one depending on the situation, e.g., polar coordinates for a spherically symmetric problem. When you change coordinates there is a formula telling you how integrals behave. One way to think of "coordinate independence" of differential forms is just that the way differential forms change under change of coordinates neatly encodes the behaviour of integrals. $\endgroup$ – Joel Fine Jan 5 '10 at 8:03
$\begingroup$ Heck, a lot of my research was about or on manifolds, and I rarely used differential forms. My thesis was actually about exterior differential systems (systems of equations defined by exterior differential forms), and even there I used very little of the formalism of differential forms! Use differential forms only if the formalism makes your life easier and not harder. Also, for me a lot of things on $R^n$ make a lot more sense and are much easier to work with, when I see that they do not require the use of a global co-ordinate system or inner product. $\endgroup$ – Deane Yang Jan 5 '10 at 15:07
There is an amazing book called Foundations of Classical Electrodynamics: Charge, Flux, and Metric written by Friedrich Hehl and Yuri Obukhov. The book is completely metric-free and heavily uses differential forms, and if you want physical intuition there is possibly no better representation than this (I quote from page 145):
Faraday-Schouten pictograms of the electromagnetic field in 3-dimensional space. The images of 1-forms are represented by two neighboring planes. The nearer the planes, the stronger the 1-form is. The 2-forms are pictured as flux tubes. The thinner the tubes, the stronger the flow. The difference between a twisted and an untwisted form accounts for the two different types of 1- and 2-forms, respectively.
With this picture it is quite easy to imagine what he means by twisted forms!
pmoduli
$\begingroup$ A problem with this is that not all forms are representable by such a picture -- take $dx \wedge dy + dz \wedge dw$ for example. Dimension 3 has some very nice things going for it but it also restricts your intuition as to what a form is. $\endgroup$ – Ryan Budney Jan 4 '10 at 0:13
First let me say, that I am also trying to gain a understanding of differential forms.
I have found the Geometric Algebra approach for visualizing simple multivectors (k-blades) a good way to visualize the infinitesimal multivectors of differential forms.
The first few chapters of Geometric Algebra for Computer Science do a good job of that. http://www.geometricalgebra.net/tour.html
As far as I can tell (only on chapter 4) GA excludes visualizing general multivectors like $a \wedge b + c \wedge d $, but as Dan Piponi pointed out here: http://homepage.mac.com/sigfpe/Mathematics/forms.pdf you're probably okay thinking of that construction as two parallelograms.
With those resources as the basis of my visual intuition, the second chapter of Manifolds and Differential Forms by Reyer Sjamaar here:
http://www.math.cornell.edu/~sjamaar/papers/manifold.pdf seemed fairly understandable.
Jonathan Fischoff
My personal favorite is Spivak's "Calculus on manifolds". It treats classical theorems and motivations with a lot of respect. The only caveat is that it has its share of annoying misprints.
David Lehavi
Misner, Thorne, and Wheeler's Gravitation is very good at providing a treatment of differential forms that appeals to physicists. But it is no longer the preeminent GR reference (though it's perfectly fine, its size is also an issue), so be warned. Dubrovin, Fomenko, and Novikov's Modern Geometry is also very good, but less structured.
Steve Huntsman
I second the recommendation to at least flip through Gravitation. It has an intimidating size, but easygoing manner. I had a lot of difficulty with Spivak's Calculus on Manifolds (which has essentially no physical intuition outside the Archimedes exercise at the end), but I think I was uncomfortable with the abstract notions of tensor product and dual vector space at the time I was learning from it. At some point I caught on that df was supposed to eat vector fields and produce functions, and things got a little better.
You might try Sternberg's Advanced Calculus (available on line), especially chapters 11 and 13.
Edit: I like to think of abstract forms as "things to integrate" and Stokes's theorem as some kind of adjunction between boundaries and the derivative. This becomes a bit more meaningful when homology and cohomology are introduced. I don't have much advice for connecting with physical intuition, but I have found it useful to:
Decompose div, grad and curl in terms of d and the metric.
Work through some E&M starting from a 1-form (strictly speaking a U(1)-connection) A on $\mathbb{R}^{1,3}$ (see Wikipedia).
S. Carnahan♦
Harold Edwards' book Advanced Calculus: A Differential Forms Approach starts with forms as the basic objects, and gives really nice intuitive explanations. It was written in 1969, as an undergraduate text from an unconventional point of view, and is still available from Birkhauser. But Edwards told me a few years ago that it was probably too hard for today's students (everything is done quite rigorously).
Douglas Lind
I'm a bit surprised that nobody has mentioned Harley Flanders' Differential Forms with Applications to the Physical Sciences (https://archive.org/details/DifferentialForms). It provides intuition with an absolute minimum of required background.
Phil Harmsworth
I found B. Felsager's "Geometry, Particles and Fields" a useful introduction. It has a part I that presents field theory in the language most familiar in physics, and then a part II that puts things into differential geometry language.
Michael Engelhardt
A number of books not mentioned but particularly useful for the relation to physics:
Bamberg & Sternberg "A Course in Mathematics for Students of Physics 1&2"
Volume 2 is all about physical examples and explanations, with some very neat introductory examples at a basic E&M level, such as linear electric network theory. What's quite exceptional here is how strongly the topological flavor is worked out: many of the early examples are on complexes, giving some very solid intuition about the discrete and continuous cases and their relationship from a topological point of view. It is probably the most topological of the introductory texts I know that deal with physics formulated via differential forms.
Burke's unpublished Div Grad and Curl are Dead
Lots of physical settings here, but it has banished div/grad/curl, so it's a way to get direct physical intuition for differential forms without any emphasis on the relation to old-school vector calculus.
Similar to the mentioned book by Arnold (in that it goes towards Hamilton/Lagrangian mechanics using differential forms hence arriving at symplectic geometry):
Abraham & Marsden "Foundations Of Mechanics"
Note that relationships of the generalized Stokes theorem to old school Green/Gauss/Stokes theorems are left as exercises. One can find these derivations worked out in Arfken, Weber, Harris or Abraham, Marsden & Ratiu.
Darling's Differential forms and connections
Provides direct comparisons of old and new Stoke-type theorems. Good for physical examples from Gauge Field Theory.
The underlying alternating constructions mentioned by Qiaochu are actually not about the differential aspect of differential forms per se; they spring from the exterior algebra and hence can be studied in vector spaces. The original source for these ideas is Grassmann; I recommend his second book, translated in 2000, as a decent place to get some ideas about this. A more modern way to get there is through multilinear algebra. The computation with higher-dimensional linear bounded entities is skew-symmetric, which reflects the need to keep track of sign (encoding orientation). The geometric meaning of alternation is the bookkeeping of signs (orientations) as one computes exterior products and other operations.
One could certainly do physics with exterior algebra, Grassmann himself gave some examples. I'm not aware of a good modern text that does physics through exterior algebra outside of differential forms, though it would be very instructive to have it.
Georg Essl
A general-purpose machine-learning force field for bulk and nanostructured phosphorus
Volker L. Deringer (ORCID: orcid.org/0000-0001-6873-0278), Miguel A. Caro (ORCID: orcid.org/0000-0001-9304-4261) & Gábor Csányi (ORCID: orcid.org/0000-0002-8180-2034)
Nature Communications volume 11, Article number: 5461 (2020)
Subjects: Structure of solids and liquids; Theory and computation
Elemental phosphorus is attracting growing interest across fundamental and applied fields of research. However, atomistic simulations of phosphorus have remained an outstanding challenge. Here, we show that a universally applicable force field for phosphorus can be created by machine learning (ML) from a suitably chosen ensemble of quantum-mechanical results. Our model is fitted to density-functional theory plus many-body dispersion (DFT + MBD) data; its accuracy is demonstrated for the exfoliation of black and violet phosphorus (yielding monolayers of "phosphorene" and "hittorfene"); its transferability is shown for the transition between the molecular and network liquid phases. An application to a phosphorene nanoribbon on an experimentally relevant length scale exemplifies the power of accurate and flexible ML-driven force fields for next-generation materials modelling. The methodology promises new insights into phosphorus as well as other structurally complex, e.g., layered solids that are relevant in diverse areas of chemistry, physics, and materials science.
The ongoing interest in phosphorus1 is partly due to its highly diverse allotropic structures. White P, known since alchemical times, is formed of weakly bound P4 molecules2, red P is an amorphous covalent network3,4,5 and black P can be exfoliated to form monolayers, referred to as phosphorene6,7, which have promise for technological applications8. Other allotropes include Hittorf's violet and Ruck's fibrous forms, consisting of cage-like motifs that are covalently linked in different ways9,10,11, P nanorods and nanowires12,13,14 and a range of thus far hypothetical allotropes15,16,17,18. Finally, liquid P has been of fundamental interest due to the observation of a first-order transition between low- and high-density phases19,20,21.
Computer simulations based on quantum-mechanical methods have been playing a central role in understanding P allotropes. Early gas-phase computations were done for a variety of cage-like units22 and for simplified models of red P23; periodic density-functional theory (DFT) with dispersion corrections served to study the bulk allotropes24,25,26,27. DFT modelling of phosphorene quantified strain response28, defect behaviour29 and thermal transport30. Higher-level quantum-chemical investigations were reported for the exfoliation energy of black P31,32, and the latter will be a central theme in the present study as well. For the liquid phases, DFT-driven molecular dynamics (MD) were done in small model systems with 64–128 atoms per cell33,34,35,36.
Whilst having provided valuable insight, these prior studies have been unavoidably limited by the computational cost of DFT. Empirically fitted force fields (interatomic potential models) require much fewer computational resources and have therefore been employed for P as well. Recently, different approaches have been used to parameterise force fields specifically for phosphorene37,38,39,40. For example, a ReaxFF model was used to study the exfoliation of black P, notably including the interaction with molecules in the liquid phase41. However, these empirically fitted force fields can only describe narrow regions of the large space of atomic configurations, which poses a major challenge when very diverse structural environments are present: for example, force fields developed specifically for black P or phosphorene would not be expected to properly describe the liquid phase(s).
Machine-learning (ML) force fields are an emerging answer to this problem42,43,44,45,46,47,48, and they are increasingly used to solve challenging research questions49,50,51. The central idea is to carry out a number of reference computations (typically, a few thousand) for small structures, currently normally based on DFT, and to make an ML-based, non-parametric fit to the resulting data. Alongside the choice of structural representation and the regression task itself, a major challenge in the development of ML force fields is that of constructing a suitable reference database, which must cover relevant atomistic configurations whilst having sufficiently few entries to keep the data generation tractable. Although key properties (such as equations of state and phonons) of crystalline phases can now be reliably predicted with these methods52, and purpose-specific force fields can be fitted on the fly53, it is still much more challenging to develop general-purpose ML force fields that are applicable to diverse situations out-of-the-box—to a large extent, this is enabled (or precluded) by the reference data. Indeed, when fitted to a properly chosen, comprehensive database, ML force fields can describe a wide range of material properties with high fidelity49,50, while being flexible enough for exploration tasks, such as structure prediction54,55,56,57. Phosphorus has been an important demonstration in the latter field more recently, when we constructed a Gaussian approximation potential (GAP) model through iterative random structure searching (RSS) and fitting58.
In the present work, we introduce a general-purpose GAP ML force field for elemental P that can describe the broad range of relevant bulk and nanostructured allotropes. We show how a general reference database can be constructed by starting from an existing GAP–RSS model and complementing it with suitably chosen 3D and 2D structures, thus combining two database-generation approaches that have so far been largely disjoint, and giving exquisite (few meV per atom) accuracy in the most relevant regions of configuration space. We then demonstrate how baseline pair-potentials ("R6") can help to capture the long-range van der Waals (vdW) dispersion interactions that are important in black P24 and other allotropes26, and how this baseline can be combined with a shorter-ranged ML model—together allowing our model to learn from data at the DFT plus many-body dispersion (DFT + MBD) level of theory59,60. The new GAP (more specifically, GAP + R6) force field combines a transferable description of disordered, e.g., liquid P with previously unavailable accuracy in modelling the crystalline phases and their exfoliation. We therefore expect that this ML approach will enable a wide range of simulation studies in the future.
A reference database for phosphorus
The quality of any ML model depends on the quality of its input data. In the past, atomistic reference databases for GAP fitting have been developed either in a manual process (see, e.g., ref. 61) or through GAP–RSS runs62,63—but these two approaches are inherently different, in many ways diametrically opposed, and it has not been fully clear what is the optimal way to combine them. We introduce here a reference database for P, which does indeed achieve the required generality, containing the results of 4798 single-point DFT + MBD computations, which range from small and highly symmetric unit cells to large supercell models of phosphorene. Of course, "large" in this context can mean no more than few hundred atoms per cell, which leads to one of the primary challenges in developing ML force fields: selecting properly sampled reference data to represent much more complex structures.
Whilst details of the database construction are given in Supplementary Note 1, we provide an overview by visualising its composition in Fig. 1. To understand the diversity of structures and the relationships between them, we use the smooth overlap of atomic positions (SOAP) similarity function64,65: we created a 2D map in which the distance between two points reflects their structural distance in high-dimensional space, here obtained from multidimensional scaling. In this 2D map, two SOAP kernels with cut-offs of 5 and 8 Å are linearly combined to capture short- and medium-range order. Every fifth entry of the database is included in the visualisation, for numerical efficiency.
Fig. 1: A GAP fitting database for elemental phosphorus.
The relationships between the structures in the database are visualised through 2D embedding of a SOAP similarity metric. Example structures are shown, and specific points of interest are highlighted by numbers: the closeness between molecular crystalline (white P, 1) and liquid P4, the transition between the molecular and network liquid (2), the similarity between Hittorf's and fibrous P, which both consist of extended tubes and fall in the same island on the plot (3), an isolated set of points corresponding to As-type structures (4) and the exfoliation from black P into bilayers (5) and monolayers. The GAP–RSS dataset from ref. 58, finally, is shown using smaller grey points and spans a wide range of configurations (see also Supplementary Fig. 1). Note that this map does not include the isolated P, P2 and P4 configurations, as it aims to survey the space of extended P structures.
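A minimal sketch of how such a map can be generated, assuming the normalised SOAP kernel matrix for the database structures has already been computed with an external descriptor code (the file name below is a placeholder, and the kernel-to-distance conversion is the usual one for normalised kernels):

```python
import numpy as np
from sklearn.manifold import MDS

# K: (N, N) matrix of normalised SOAP kernel values (K[i, i] = 1), assumed to be
# precomputed for the N database structures, e.g. as the linear combination of
# the 5 A and 8 A cut-off kernels described in the text; file name is a placeholder.
K = np.load("soap_kernel.npy")

# Standard kernel-induced distance: d_ij = sqrt(2 - 2 K_ij)
D = np.sqrt(np.maximum(2.0 - 2.0 * K, 0.0))

# Two-dimensional embedding by multidimensional scaling on the precomputed distances
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
xy = mds.fit_transform(D)     # (N, 2) coordinates used to draw the map
```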
Figure 1 allows us to identify several aspects of the constituent parts of the database. The GAP–RSS structures, taken from ref. 58, are indicated by grey points, and these are widely spread over the 2D space of the map: the initial randomised structures were generated using the same software (buildcell) as in the established Ab Initio Random Structure Searching (AIRSS) framework66, with subsequent relaxations driven by evolving GAP models58. The purpose of including those data is to cover a large variety of different structures, with diversity being more important than accuracy. For the manually constructed part, in contrast, related structures cluster together, e.g., the various distorted unit cells representing white P (top left in Fig. 1). Melting white P leads to a low-density fluid in which P4 units are found as well, and the corresponding points in the 2D visualisation are relatively close to those of the white crystalline form (marked as 1 in Fig. 1). Pressurising the low-density liquid leads to a liquid–liquid-phase transition (LLPT)19,20,21, and accordingly points representing denser liquid structures are also found closer to the centre of the map (the transition between them occurs in the region marked as 2). The high-density liquid itself, remarkably, appears to be structurally rather similar to Hittorf's and fibrous P, and the latter two crystalline allotropes occupy the same cluster of points in Fig. 1 (3)—reflecting the fact that they are built up from very similar, cage-like units10. Rhombohedral (As-type) P is further away from other entries, in line with the fact that no such allotrope is stable at ambient pressure (4)67. Finally, the right-hand side of Fig. 1 prominently features points corresponding to various types of black P and phosphorene-derived structures (an example of a bilayer is marked as 5).
The various parts of the database pose a challenge to the ML algorithm: it needs to achieve a highly accurate fit for the crystalline configurations (blue in Fig. 1), yet retain the ability to interpolate smoothly between liquid configurations (orange). In this, the selection of input data is intimately connected with the regression task itself. A key feature of our approach is the use of a set of expected errors (regularisation), which is required to avoid overfitting (a GAP fit without regularisation would perfectly reproduce the input data, but lead to uncontrolled errors for even slightly different atomistic configurations). We set these values manually, bearing in mind the physical nature of a given set of configurations61: e.g., we use a relatively large value for the highly disordered liquid structures (0.2 eV Å−1 for forces), but a smaller value for the bulk crystals (0.03 eV Å−1). Similarly, large expected errors for the initial GAP–RSS configurations allow the force field to be flexible in that region of configuration space63—thus ensuring that it remains usable for crystal-structure prediction in the future, which constitutes a very active research field for P15,16,17,18 and can be vastly accelerated by ML force fields18,58. Details of the composition of the database developed here and the regularisation are given in Supplementary Notes 1 and 2.
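Schematically, and only as an illustration of the idea (the configuration-type labels are ours, and this is not the actual gap_fit input syntax), the per-configuration expected force errors quoted above amount to a mapping of the kind:

```python
# Expected errors on force components (eV/A) used as regularisation, loose for
# diverse/disordered configurations and tight for the bulk crystals; the numbers
# are those quoted in the text (the full list is in Supplementary Table 1).
force_sigma = {
    "rss_initial": 0.40,   # random GAP-RSS seeds: keep the fit flexible
    "liquid":      0.20,   # network / molecular liquid snapshots
    "crystalline": 0.03,   # bulk crystals: fit tightly
}
```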
GAP + R6 fitting
The next task in development of our ML force field is the choice of structural descriptors. In the case of P, there is a need to accurately describe the long-range vdW interactions between phosphorene sheets or in the molecular liquid—which are weak on an absolute scale, yet crucial for stability and properties. At the same time, the ML model must correctly treat complex, short-ranged, covalent interactions, e.g., in Hittorf's P with its alternating P9 and P8 cages9; it is this length scale (5-Å cut-off) that is typically modelled by finite-range descriptors in ML force fields49,50,51.
Figure 2a–c illustrates the combination of descriptors used to "machine-learn" our force field (details are provided in the "Methods" section). The baseline is a long-range (20-Å cut-off) interaction term as in ref. 68, here fitted to the DFT + MBD exfoliation curve of black P. The latter is taken to be indicative for vdW interactions in P allotropes more generally, and a test for the transferability of this approach to more complex structures (Hittorf's P) is given in one of the following sections. The baseline model is subtracted from the input data, and an ML model is fitted to the energy difference, which is itself composed of two terms: a pair potential and a many-body term, both at short range (5-Å cut-off, Fig. 2a, b), linearly combined and jointly determined during the fit69. The short-range GAP and the long-range baseline model are then added up to give the final model ("Methods" section). Because of the 1/r6 dependence of the long-range part, we refer to this approach as "GAP + R6" in the following.
Fig. 2: A GAP + R6 ML model including long-range dispersion.
a Schematic sketch of the different types of structural descriptors, here illustrated for a pair of partially exfoliated phosphorene sheets—emphasising the medium-range (5 Å) and long-range (20 Å) descriptors that are combined in our approach ("Methods" section)68. b, c Modelling the different length scales: the upper panel shows the cut-off function used to bring the 2-body and SOAP descriptors smoothly to zero between 4 and 5 Å; the lower panel shows the long-range term, VR6, evaluated for an isolated pair of atoms in the absence of the ML terms. d Phosphorene exfoliation curve from our GAP + R6 model (red) compared to the DFT + MBD reference (dashed black line), giving the energy computed for black P (structure from ref. 70) as a function of the interlayer distance. A GAP fit without the long-range "+R6" term, i.e., based only on a 2b+SOAP fit with a 5-Å cut-off, is included for comparison (dashed grey line). To obtain these curves, the sheets have been shifted along the [010] direction without further relaxation, and the energy is referenced to that of a free monolayer. Benchmark results for the exfoliation energy from quantum Monte Carlo (QMC, −81 ± 6 meV/atom, with bars showing the error given by the authors, ref. 31) and coupled-cluster (CC, −92 meV/atom, ref. 32) studies are given by symbols, both plotted at the horizontal zero.
Figure 2d shows the resulting exfoliation curve: we obtain it by scaling the known black P structure70 in small steps along the [010] direction, keeping the individual puckered layers intact and computing the potential energy at each step, with the energy of a free monolayer set as the energy zero. To illustrate the need for a treatment of long-range interactions (here, achieved using the "+R6" baseline), we fitted a GAP without this term, using a 5-Å cut-off and otherwise similar parameters—this model clearly fails to capture the longer-range interactions involved in the exfoliation, as shown by a grey dashed line in Fig. 2d. In contrast, the GAP + R6 result (red) and the DFT + MBD reference data (black) are practically indistinguishable. We also include two benchmark values from high-level quantum chemistry, one from quantum Monte Carlo computations31, one from a coupled-cluster (CC) approach in ref. 32. The GAP + R6 prediction (–85 meV per atom) is in excellent agreement with both, and it matches the DFT + MBD result to within 1% (≈0.8 meV). To place our results into context, we may quote from a recent study27, which compared several computational approaches in regard to how well they describe the exfoliation energy of black P: the results varied widely, from about −10 meV (without any dispersion corrections) to between −86 and −145 meV (all at the PBE0 + D3 level but using different basis sets and damping schemes), and further to −218 meV for one specific combination of methods27. The same study provided initial evidence for the high performance of the MBD method in describing black P27.
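Such a scan is straightforward to script. The sketch below uses ASE with a simplified one-layer-per-cell geometry, so that stretching the stacking vector while keeping Cartesian positions fixed is exactly a rigid-layer scan; the structure file is a placeholder (assumed to be an extended XYZ carrying cell and periodicity information), and a Lennard-Jones stand-in calculator is used so that the sketch runs as written. For the curves in Fig. 2d one would use the AB-stacked black-P cell and the GAP + R6 calculator instead.

```python
import numpy as np
from ase.io import read
from ase.calculators.lj import LennardJones   # stand-in so the sketch runs;
                                              # replace with the GAP + R6 calculator

# Simplified geometry: one phosphorene layer per periodic cell, with the second
# lattice vector along the stacking direction (structure file is a placeholder).
layer = read("phosphorene_layer.xyz")

b0 = np.linalg.norm(layer.cell[1])
curve = []
for delta in np.arange(-0.4, 6.01, 0.2):          # change of interlayer spacing (A)
    atoms = layer.copy()
    cell = np.array(atoms.get_cell())
    cell[1] *= (b0 + delta) / b0                  # stretch the stacking vector only
    atoms.set_cell(cell, scale_atoms=False)       # keep the layer itself rigid
    atoms.calc = LennardJones(sigma=3.0, epsilon=0.01, rc=10.0)
    curve.append((b0 + delta, atoms.get_potential_energy() / len(atoms)))

# Referencing the energies to the large-separation limit turns this into a
# binding/exfoliation curve per atom (Fig. 2d uses the free monolayer as zero).
```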
The most direct way to ascertain the quality of the ML model is to compute energies and forces for a separate test set of structures, and to compare the results to reference computations using DFT + MBD (the ground truth to be learned). We separate the results according to various types of test configurations, which are of a very different nature.
Figure 3a shows such tests for P structures obtained from GAP–RSS58, starting with initial (random) seeds and progressively including more ordered and low-lying structures. The forces in the initial seeds range up to very high absolute values, as a response to atoms having been placed far away from local minima; the datapoints scatter but overall reveal a good correlation between DFT + MBD and GAP + R6. In contrast, Fig. 3b focuses on the manually constructed parts of our database: for the network liquid, there is still notable scatter, but for the molecular liquid and especially for the 2D and crystalline structures, the errors are much smaller. This is expected as these configurations correspond to distorted copies of only a few crystalline structures that are abundantly represented in the database. We emphasise that the test structures are not fully relaxed, on purpose (and neither are those used in the ML fit): they serve to sample slightly distorted environments where there are non-zero forces on atoms.
Fig. 3: Validation of the ML force field.
Scatterplots of Cartesian force components for a test set of structures, which has not been included in the fit, comparing DFT + MBD computations with the prediction from GAP + R6. Data are shown for different sets of the GAP–RSS-generated (panel a) and manually constructed (panel b) parts of the database. The insets show kernel-density estimates ("smoothed histograms") of absolute errors with the same colour coding. Note the difference in absolute scales for the force components between the two panels.
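Once reference and predicted forces have been collected, the comparison behind such scatterplots and the error measures in Table 1 reduces to a few lines (the file names below are placeholders):

```python
import numpy as np

# Reference (DFT+MBD) and predicted (GAP+R6) force components for the test set,
# each of shape (N_atoms_total, 3); the file names are placeholders.
f_ref = np.load("forces_dftmbd.npy")
f_gap = np.load("forces_gap.npy")

err = f_gap - f_ref
rmse = np.sqrt(np.mean(err**2))      # RMSE over all Cartesian components (eV/A)
mae = np.mean(np.abs(err))           # mean absolute error (eV/A)
print(f"force RMSE = {rmse:.3f} eV/A,  MAE = {mae:.3f} eV/A")
```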
Numerical results for the test-set errors are given in Table 1. We emphasise that the initial (random) GAP–RSS configurations are included primarily for structural diversity, and that they experience very large absolute forces, ranging up to about 20 eV Å−1 (Fig. 3a), much more than the test-set error. The much smaller magnitude of errors for the more ordered configurations is consistent with a progressively tightened regularisation of the GAP fit61: for example, we set the force regularisation to 0.4 eV Å−1 for random GAP–RSS configurations, 0.2 eV Å–1 for liquid P, but 0.03 eV Å–1 for bulk crystalline configurations (Supplementary Table 1). The results for the subset describing the crystalline phases are in line with a recent benchmark study for six elemental systems, reporting energy RMS errors in the meV-per-atom region and force RMS errors from 0.01 eV Å–1 (crystalline Li) to 0.16 eV Å–1 (Mo) obtained from GAP fits52. Another recent test for liquid silicon showed errors of about 12 meV at.–1 and 0.2 eV Å–1 for energies and forces, respectively71, which again is qualitatively consistent with our findings—the molecular liquid primarily consists of P4 units, whereas the network liquid contains more diverse coordination numbers and environments, and its quantitative fitting error is therefore larger than that for its molecular counterpart (Table 1). We stress again that in the GAP framework, the ability to achieve good accuracy in one region of configuration space whilst retaining flexibility in others depends strongly on the judicious choice of regularisation parameters (Supplementary Note 2 and Supplementary Table 2).
Table 1 Root mean square error (RMSE) measures for energies and force componentsa.
Crystalline allotropes
Phosphorus crystallises in diverse structures—and a substantial body of literature describes their synthesis and experimental characterisation. Among these crystalline allotropes, black P has been widely studied as the precursor to phosphorene. DFT + MBD describes the structure of bulk black P remarkably well27, reproducing experimental data within any reasonable accuracy (Supplementary Note 3 and Supplementary Table 3). It is then, by extension, satisfying to observe the very high accuracy of the GAP + R6 prediction, which captures even the parameter b, corresponding to the interlayer direction, to within better than 0.5% of the DFT + MBD reference. The two inequivalent covalent bond lengths in black P, after full relaxation, are 2.225/2.255 Å (DFT + MBD) and 2.225/2.260 Å (GAP + R6), showing very good agreement.
Energies and unit-cell volumes of the main crystalline allotropes are given in Table 2. Strikingly, black, fibrous and Hittorf's P are essentially degenerate in their DFT + MBD ground-state energy, coming even closer together than an earlier study with pairwise dispersion corrections had indicated26. This de facto degeneracy is reproduced by our force field (Table 2), with all three structures being similar in energy to within 0.003 eV per atom. In terms of unit-cell volumes, black P is more compact, whereas fibrous and Hittorf's P contain more voluminous tubes and arrive at practically the same volume, as both contain the same repeat unit and only differ in how the tubes are oriented in the crystal structures. GAP + R6 reproduces all these volumes to within about 1%. White P, which we describe by the ordered β rather than the disordered α modification2, is notably higher in energy, as expected for the highly reactive material. We finally include in Table 2 the rhombohedral As-type modification, which is a hypothetical structure at ambient conditions and can only be stabilised under pressure72. It is thus somewhat surprising that DFT + MBD assigns a slightly more negative energy to As-type than to black P (Table 2)—consequently, our ML model faithfully reproduces this feature, to within 0.002 eV per atom.
Table 2 Unit-cell volumes and energies (relative to black P) for relevant allotropes, comparing DFT + MBD and GAP + R6 results.
Hittorf's phosphorus in 3D and 2D
The exfoliation of black P to form phosphorene had already served as a case to illustrate the role of short-ranged versus GAP + R6 models (Fig. 2). Whilst most of the work on 2D phases is currently focused on phosphorene, Schusteritsch et al. suggested to exfoliate Hittorf's P to give "hittorfene"73, and very recently Hittorf-based monolayers11 and nanostructures14 were indeed experimentally realised. It is therefore of interest to ask whether this exfoliation can be described by a force field for P, especially as the process involves more complex structures, making the routine application of DFT + MBD more computationally costly than for phosphorene. The exfoliation of Hittorf's P is also a more challenging test for our method: regarding black P, we had included multiple partially exfoliated mono- and bilayer structures in the database (Fig. 1), whereas for Hittorf's, we only include distorted variants of the experimentally reported bulk structure but no exfoliation snapshots or monolayers. Testing the ML force field on the full exfoliation curve therefore constitutes a more sensitive test for its usefulness in computational practice.
Figure 4 shows the exfoliation similar to Fig. 2d, but now for Hittorf's P, using two different structures. One is the initially reported refinement result by Thurn and Krebs (1969, purple in Fig. 4)9. The other was recently reported by Zhang et al. (2020, cyan)11. The samples in both studies have been synthesised in very different ways: the earlier study followed the original synthesis route by Hittorf74, viz. slow cooling of a melt of white P and excess Pb; the 2020 study used a chemical vapour transport route11, which may have led to slightly different ways in which the tubes are packed.
Fig. 4: Exfoliation of Hittorf's phosphorus.
Exfoliation into monolayer "hittorfene"73, similar to Fig. 2d, but now for a more complex structure where training data are only available around the minimum. Two different experimental structural models are used as a starting point: the initial P2/c structure (1969, ref. 9, magenta), and a very recently proposed P2/n structure (2020, ref. 11, cyan). The results of our GAP + R6 model are given by solid and dashed lines, respectively, and reference DFT + MBD computations are indicated by circles and crosses.
Remarkably, DFT + MBD places the two structures at practically degenerate exfoliation energies (about 35 meV/atom below the respective monolayer), without a discernible preference for one over the other, despite the different synthesis pathways and crystallographically dissimilar structure solutions9,11. Our ML force field fully recovers this degeneracy at around the minimum (corresponding to the bulk phases) and at large interlayer spacing (above + 4 Å), as well as a subtle difference between the phases at intermediate separation. As pointed out by Schusteritsch et al.73, the overall interlayer binding energy of Hittorf's P is very low, notably smaller than that of black P.
Nanoribbons
Akin to graphene nanoribbons, phosphorene can be cut into nanoribbons as well, as predicted computationally75 and later demonstrated in experiment76. Such ribbons have been studied, e.g., in ref. 77, using empirical potential models. In Fig. 5a, we show the two fundamental types of phosphorene nanoribbons, referred to as "armchair" and "zigzag". The latter is clearly favoured among the two, and GAP + R6 reproduces the associated energetics to within 5–6% of the DFT + MBD result. The ratio between the formation energies of the armchair and zigzag ribbon, as the most important indicator for the stability preference, is even better reproduced, viz. 1.75 (DFT + MBD) compared to 1.76 (GAP + R6)75.
Fig. 5: Phosphorene nanoribbons.
a The two fundamental types of ribbons, obtained by cleaving along the two in-plane directions of phosphorene, leading to armchair- (left) and zigzag- (right) type ribbons, with the boundaries of the periodic simulation cells indicated. The energies are given relative to a phosphorene monolayer; all structures are cleaved from the bulk without further relaxation. b Demonstration of the applicability to a much larger system (15,744 atoms), shown for a GAP + R6-driven MD snapshot after 10 ps in the NVT ensemble, with a thermostat set to 300 K, and then another 40 ps of constant-energy (NVE) dynamics. Colour coding indicates the atomic positions in the direction normal to the layer.
The test in Fig. 5a assesses very small ribbons, because the effect of nanostructuring is most pronounced for those—in contrast, larger ribbons are more similar to 2D phosphorene, which is already ubiquitously represented in the database (Fig. 1). However, beyond this initial test, the ML force field brings substantially larger system sizes within reach. Figure 5b shows a zigzag phosphorene nanoribbon that is >80 nm in length, with a width that is consistent with experimental reports76. After a short NVT simulation, the system is allowed to evolve over 40 ps, leading to the visible formation of nanoscale ripples—each extending over several nanometres. This computational task may be compared with an earlier study using an empirical potential to simulate water diffusion on rippled graphene (over much longer timescales)78: with typical system sizes of 15 × 15 nm2, and reaching up to 30 × 30 nm2, such simulations are completely out of reach for quantum-mechanical methods, but they are accessible to ML force fields. Beyond the capability test in Fig. 5b, similar simulation cells, but with added heat sources and sinks, are widely used in computational studies of thermal transport, normally in combination with empirical potentials—as has indeed been shown for phosphorene nanoribbons77. The high accuracy of our ML model for predicting interatomic forces (0.07 eV Å–1 for the 2D configurations, Table 1) allows one to anticipate a good performance for properties that are directly derived from the force constants, viz. phonon dispersions and thermal transport, as demonstrated previously for silicon (see refs. 61,71, and references therein). A rigorous study of phonons and thermal transport in phosphorene with GAP + R6 is envisioned for the future.
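For orientation, a minimal Python sketch of such a two-stage run, written against the ASE and quippy interfaces rather than the LAMMPS setup described in the Methods, might look as follows; the file names, friction coefficient and thermostat choice are illustrative assumptions, and whether the R6 baseline is bundled into the potential XML or has to be added as a separate pair term depends on how the potential is distributed.

from ase.io import read
from ase import units
from ase.md.langevin import Langevin
from ase.md.verlet import VelocityVerlet
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from quippy.potential import Potential

# Load a prebuilt ribbon geometry and attach the GAP model (file names are placeholders).
atoms = read("ribbon_zigzag.xyz")
atoms.calc = Potential(param_filename="gap_P.xml")

# 10 ps of thermalisation at 300 K with a 1 fs timestep ...
MaxwellBoltzmannDistribution(atoms, temperature_K=300)
nvt = Langevin(atoms, timestep=1.0 * units.fs, temperature_K=300, friction=0.02)
nvt.run(10_000)

# ... followed by 40 ps of constant-energy (NVE) dynamics.
nve = VelocityVerlet(atoms, timestep=1.0 * units.fs)
nve.run(40_000)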
Liquid phosphorus
Liquid phases provide a highly suitable test case for the quality of a force field—indeed, the very first high-dimensional ML force field, an artificial neural-network model for silicon, was tested for the RDF of the liquid phase42. Phosphorus is, again, interesting in this regard, because two physically distinct phases and the occurrence of a first-order LLPT have been reported19,20,21. In Fig. 6, we validate our method for both phases, using simulation cells containing 248 atoms. The former (Fig. 6a–c) contains P4 molecules; the latter (Fig. 6d–f) describes a covalently connected network liquid. We performed DFT-MD computations for reference; due to the high computational cost, these had to be carried out at the pairwise dispersion-corrected PBE + TS (rather than MBD) level79. Two different temperatures, 1000 and 2000 K, span the approximate temperature range in which phase transitions in P have been experimentally studied20.
Fig. 6: Liquid phosphorus.
MD simulations in the NVT ensemble, benchmarking the quality of the GAP + R6 ML force field for the description of liquid phosphorus. a Snapshot of a DFT-MD simulation of a system containing 62 P4 molecules at a fixed density of 1.5 g cm−3, corresponding to the low-density liquid (or fluid). b Radial distribution functions for this system at two different temperatures, taken from the last 10 ps of the trajectories. Solid lines indicate GAP-MD simulations, whereas dashed lines show the results of reference DFT-MD trajectories. c Same for angular distribution functions (ADF), computed using a radial cut-off of 2.7 Å. d–f Same but for the network liquid at a much higher density of 2.5 g cm−3. The slightly more "jagged" appearance of the DFT data in panel f is due to the smaller number of structures that are sampled from the trajectory.
Our GAP + R6-driven MD simulations (which we call "GAP-MD" for brevity) describing the low-density molecular phase are in excellent agreement with the DFT-MD reference. The simplest structural fingerprint is the radial distribution function (RDF), plotted in Fig. 6b: there is a clearly defined first peak (corresponding to P–P bonds inside the P4 units, with a maximum at about 2.2 Å) and, separated from it, an almost unstructured heap at larger distances beyond about 3 Å, all indicative of a molecular liquid that consists of well-defined and isolated units. Similarly, the angular distribution functions (ADF) in Fig. 6c show a single peak at ≈60°, consistent with the equilateral triangles that make up the faces of the ideal P4 molecule. The molecules are more diffusive at higher temperature, and therefore, the features in the radial and angular distributions are slightly broadened in the 2000-K data compared to those at 1000 K—but there are no qualitative changes between the two temperature settings, and the GAP-MD simulation reproduces all aspects of the DFT-MD reference.
In Fig. 6d–f, we report the same tests but now for the network liquid. In this case, at 1000 K, the GAP-MD-simulated liquid appears to be slightly more structured than that from DFT-MD, indicated by a larger magnitude of the second RDF peak between 3 and 4 Å, and a somewhat sharper peak in the angular distribution at about 100° in the GAP-MD data. Whether that is a significant difference between DFT and GAP + R6 or merely a consequence of the slightly different dispersion treatments, MD algorithm implementations, etc. remains to be seen—but it does not change the general outcome that all major features of the DFT-based trajectory are well reproduced by the GAP + R6 model. The 2000-K structures generated by DFT-MD and GAP-MD simulations agree very well with each other, likely within the expected uncertainty that is due to finite-system sizes and simulation times. A feature of note in the ADF is a secondary peak at 60°, much smaller than in the molecular liquid (Fig. 6c), but present nonetheless: the liquid, especially at higher temperature, does still contain three-membered ring environments. Comparing the 1000- and 2000-K simulations, the former reveals a clear predominance of bond angles between about 90° and 110°, whereas the bond-angle distribution in the latter is much more spread out, indicating a highly disordered liquid structure.
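As an aside, the radial distribution functions compared here can be recomputed from any such trajectory with a few lines of Python; the sketch below is a generic g(r) estimator, assuming an extended-XYZ trajectory file (placeholder name) with a fixed cell, and it neglects corrections beyond half the box length.

import numpy as np
from ase.io import read

frames = read("liquid_network_2000K.xyz", index=":")   # placeholder trajectory file
n = len(frames[0])
rho = n / frames[0].get_volume()                        # number density, assumed constant

r_max, nbins = 6.0, 120
edges = np.linspace(0.0, r_max, nbins + 1)
hist = np.zeros(nbins)

for atoms in frames:
    d = atoms.get_all_distances(mic=True)               # minimum-image pair distances
    iu = np.triu_indices(n, k=1)                        # each pair counted once
    hist += np.histogram(d[iu], bins=edges)[0]

# Normalise by the ideal-gas expectation for each spherical shell to obtain g(r).
shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
g_r = hist / (len(frames) * 0.5 * n * rho * shell)
r = 0.5 * (edges[1:] + edges[:-1])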
Liquid–liquid-phase transition
We finally carried out a simulation of the LLPT, expanding substantially on prior DFT-based work33,34,35,36 in terms of system size, as shown in Fig. 7. Our initial system contains 496 thermally randomised P4 molecules (1984 atoms in total), which are initially held at the 2000-K and 0.3-GPa state point for 25 ps. We then compress the system with a linear-pressure ramp to 1.5 GPa, over a simulation time of 100 ps. At low densities, the system consists entirely of P4 units, most having distorted tetrahedral shapes (and thus threefold coordination, indicated by light-blue colouring in Fig. 7a). Occasionally during the high-temperature dynamics, tetrahedra open up such that two atoms temporarily lose contact and thus have lower coordination numbers; sometimes two tetrahedra come closer than the distance we use to define bonded contacts (2.7 Å, as in Fig. 6b). All these effects are minor, as seen on the left-hand side of Fig. 7a. Upon compression, the atomistic structure changes drastically: having reached a pressure of 0.81 GPa, the system has transformed into a disordered, covalently bonded network, qualitatively consistent with previous simulations in much smaller unit cells33,34,35,36, but now providing insight for a system size that would have been inaccessible to DFT-MD simulations at this level. To benchmark the computational performance of GAP-MD, we repeated this simulation using 288 cores on the UK national supercomputer, ARCHER, where it required 6 h (corresponding to 0.5 ns of MD per day). The LLPT gives rise to a much larger diversity of atomic coordination environments, seen on the right-hand side of Fig. 7a. We emphasise that the liquid is held at a very high temperature of 2000 K, and therefore substantial deviations from the ideal threefold coordination (that would be found in crystalline P) are to be expected.
Fig. 7: The liquid–liquid-phase transition.
We report a GAP-MD simulation of the liquid–liquid-phase transition (LLPT) in phosphorus, described by compressing a system of 1984 atoms from 0.3 to 1.5 GPa over 100 ps (105 timesteps), with the temperature set to 2000 K. a Consecutive snapshots from the trajectory, with coordination numbers, N, indicated by colour coding. b Evolution of density and atomic connectivity through the LLPT. The former, shown in the upper panel, starting with a low-density, compressible molecular liquid, increases rapidly between about 0.7 and 0.8 GPa, and then reaches much higher densities for the less compressible network liquid. The fraction of 3-coordinated atoms is unity in ideal P4, but strongly lowered because of the LLPT; the network liquid contains much higher- and some lower-coordinated environments (cf. panel a). The count of three-membered rings can similarly be taken as an indicator for the presence of molecular P4 units: in the ideal molecular liquid, only P4 tetrahedra are found (each having four faces, and hence four three-membered rings, one per atom); in the network liquid, three-membered rings are still present, but their count is reduced to about a fifth, making way for larger rings consistent with a covalently bonded network.
We analyse this GAP-MD simulation in Fig. 7b. We first record the density of the system as a function of applied pressure. The molecular liquid is quite compressible, indicated by a density increase of about 40% during compression from 0.3 to 0.7 GPa, consistent with the presence of only dispersive interactions between the molecules. When the system is compressed further, between 0.7 and 0.8 GPa, the density increases rapidly, concomitant with the observation of the LLPT in our simulation (Fig. 7a). The network liquid is much less compressible, and it is predicted to have a density of about 2.6–2.7 g cm−3—very similar to the crystallographic density of black P (2.7 g cm−3 at atmospheric pressure)80, and smaller than 3.5 g cm−3 reported for As-type P at about 6 GPa72, in line with expectations. The transition, in fact, begins to occur earlier in the trajectory, as seen by analysing the count of threefold coordinated atoms and three-membered rings (the latter being a structural signature of the P4 molecules). Coexistence simulations and thermodynamic integration are now planned to map out the high-temperature/high-pressure LLPT in comparison to experimental data20.
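The connectivity analysis behind Fig. 7b amounts to straightforward neighbour counting; the following sketch shows how one might reproduce it for a single snapshot with ASE, using the same 2.7-Å bond cutoff (the file name is a placeholder, and only three-membered rings are counted).

import numpy as np
from ase.io import read
from ase.neighborlist import neighbor_list

atoms = read("llpt_snapshot.xyz")             # placeholder snapshot from the trajectory
i, j = neighbor_list("ij", atoms, 2.7)        # bonded contacts within 2.7 Å (both directions)

coord = np.bincount(i, minlength=len(atoms))  # coordination number per atom
frac_threefold = np.mean(coord == 3)

# Count three-membered rings: for each bond a-b with a < b, count common neighbours c > b,
# so every triangle is counted exactly once. An ideal P4 liquid gives one ring per atom.
nbrs = [set() for _ in range(len(atoms))]
for a, b in zip(i, j):
    nbrs[a].add(b)
rings3 = sum(
    sum(1 for c in (nbrs[a] & nbrs[b]) if c > b)
    for a in range(len(atoms))
    for b in nbrs[a]
    if b > a
)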
We have developed a general-purpose ML force field for atomistic simulations of bulk and nanostructured forms of phosphorus, one of the structurally most complex elemental systems. Our study showed how a largely automatically generated GAP–RSS database can be suitably extended based on chemical understanding (in the ML jargon, "domain knowledge") whenever a highly accurate description of specific materials properties is sought. The present work might therefore serve as a blueprint for how general reference databases for GAP, and in fact other types of ML force fields for materials, can be constructed. In the present case, for example, reference data for layered (phosphorene) structures were added as well as for the LLPT, and our tests suggest the resulting force field to be suitable for simulations of all these practically relevant scenarios. Proof-of-concept simulations were presented for a large (>80-nm-long) phosphorene nanoribbon, as well as for the liquid phases, showcasing the ability of ML-driven simulations to tackle questions that are out of reach for even the fastest DFT codes. Future work will include a more detailed simulation study of the liquid phases, as well as new investigations of red (amorphous) P, now all carried out at the DFT + MBD level of quality and with access to tens of thousands of atoms in the simulation cells. We certainly expect that phosphorus will continue to remain exciting, in the words of a recent highlight article1. We also expect that the approaches described here will be beneficial for the modelling of other systems with complex structural chemistry—including, but not limited to, other 2D materials that are amenable to exfoliation and could be described by GAP + R6 models in the future.
Dispersion-corrected DFT reference data were obtained at two different levels. Initially, we used the pairwise Tkatchenko–Scheffler (TS) correction79 to the Perdew–Burke–Ernzerhof (PBE) functional81, as implemented in CASTEP 8.082. For the final dataset, we employed the MBD approach59,60. We expect that a similar "upgrading" of existing fitting databases with new data at higher levels of theory will be useful in the future, especially as higher levels of computational methods are coming progressively within reach (cf. the emergence of high-level reference computations for black P31,32), as has indeed been shown in the field of molecular ML potentials (see, e.g., ref. 83). PBE + MBD data were computed using the projector-augmented wave method84 as implemented in VASP85,86. The cut-off energy for plane waves was 500 eV; the criterion to break the SCF loop was a 10−8-eV energy threshold. Computations were carried out in spin-restricted mode. We used Γ-point calculations and real-space projectors (LREAL = Auto) for the large supercells representing liquid and amorphous structures; the remainder of the computations was carried out with automatic k-mesh generators with l = 30, where l is a parameter that determines the number of divisions along each reciprocal lattice vector.
The GAP + R6 force field combines short-range ML terms and a long-range baseline (Fig. 2a) as follows. We start by fitting a Lennard–Jones (LJ) potential to the DFT + MBD exfoliation curve of black P at interatomic distances between 4 and 20 Å. We then define a cubic spline model, denoted VR6, using the same idea as in ref. 68. The baseline is described by a cubic spline fit that comprises the point (3.0 Å, 0 eV) together with the LJ potential between 4.0 and 20 Å, using spline points at 0.1-Å spacing up to 4.5 Å, and 0.5-Å spacing beyond that. The derivative of the potential is brought to zero at 3.0 and 20 Å; its shape is plotted in Fig. 2c. The fitted LJ parameters for our model are ϵ6 = 6.2192 eV; ϵ12 = 0 (i.e., only the attractive longer-range part of the LJ potential is used); σ = 1.52128 Å. The baseline model is subtracted from the input data, and an ML model is constructed by fitting to
$$\Delta E = E_{\mathrm{DFT+MBD}} - \sum_{i>j} V_{\mathrm{R6}}(r_{ij}), \qquad (1)$$
where we denote the long-range potential by VR6 for simplicity (because of its 1/R6 term), and i and j are atomic indices. The final model for the machine-learned energy of a given atom, ε(i), thus reads
$$\varepsilon(i) = \left\{ \delta^{(\mathrm{2b})} \sum_{q} \varepsilon_i^{(\mathrm{2b})}(q) + \delta^{(\mathrm{MB})} \sum_{\mathbf{q}'} \varepsilon_i^{(\mathrm{MB})}(\mathbf{q}') \right\} + \frac{1}{2} \sum_{j} V_{\mathrm{R6}}(r_{ij}). \qquad (2)$$
The first two sums in Eq. (2) together constitute the GAP model, combined using a properly scaled linear combination with scaling factors, δ (which are here given as dimensionless), and the last term, VR6, is added to the ML prediction to give the final result. The two-body ("2b") and many-body (Smooth Overlap of Atomic Positions, SOAP64) models are defined by the respective descriptor terms: q is a simple distance between atoms, which enters a squared-exponential kernel, and q′ is the power-spectrum vector constructed from the SOAP expansion coefficients for the atomic neighbour density64. The ML fit itself is carried out using sparse Gaussian process regression as implemented in the GAP framework43, employing a sparsification procedure that includes 15 representative points for the two-body descriptor and 8000 for SOAP. The full descriptor string used in the GAP fit is provided in Listing 1, and together with the data and their associated regularisation parameters (Supplementary Notes 1 and 2), it defines the required input for the model. The potential is described by an XML file (see "Data availability" and "Code availability" statements).
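To make the construction of the baseline concrete, the following Python sketch rebuilds a VR6-like spline from the parameters quoted above, assuming that the ϵ6 term enters as −ϵ6(σ/r)^6 and using SciPy's clamped cubic spline; it illustrates the recipe rather than reproducing the code used for the published potential.

import numpy as np
from scipy.interpolate import CubicSpline

eps6, sigma = 6.2192, 1.52128                 # fitted parameters (eV, Å) quoted in the text

def lj_attractive(r):
    """Attractive 1/r^6 part of the LJ fit (eps12 = 0); assumed form -eps6*(sigma/r)**6."""
    return -eps6 * (sigma / np.asarray(r)) ** 6

# Knots: a zero anchor at 3.0 Å, 0.1-Å spacing from 4.0 to 4.5 Å, 0.5-Å spacing up to 20 Å.
r_knots = np.concatenate(([3.0], np.arange(4.0, 4.51, 0.1), np.arange(5.0, 20.01, 0.5)))
v_knots = np.where(r_knots == 3.0, 0.0, lj_attractive(r_knots))

# Clamp the first derivative to zero at 3.0 and 20 Å, as described in the text.
v_r6 = CubicSpline(r_knots, v_knots, bc_type=((1, 0.0), (1, 0.0)))

# Baseline-subtracted fitting target of Eq. (1) for one configuration with pair distances r_ij:
def delta_e(e_dft_mbd, pair_distances):
    r = np.asarray(pair_distances)
    return e_dft_mbd - v_r6(r[r < 20.0]).sum()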
MD simulations
DFT-MD simulations were done with VASP85,86, using the pairwise TS correction for dispersion interactions79 and an integration timestep of 2 fs. GAP-MD simulations were carried out with LAMMPS87, either at constant volume for comparison with the DFT data (Fig. 6), or using a built-in barostat for pressurisation simulations (Fig. 7)88,89,90. The timestep in all GAP-MD simulations was 1 fs, which was found to improve the quality of the simulations compared to a 2-fs timestep. Whether this is a consequence of the somewhat different thermostats and MD implementations or, in fact, a consequence of the shape of the potential remains to be investigated—for the time being, we are content with running all GAP-MD simulations at the (more computationally costly) timestep of 1 fs.
Listing 1: definition of the descriptor string used in the GAP fit
gap={distance_Nb order=2 cutoff=5.0 n_sparse=15 covariance_type=ard_se delta=2.0 theta_uniform=2.5 sparse_method=uniform compact_clusters=T: soap l_max=6 n_max=12 atom_sigma=0.5 cutoff=5.0 radial_scaling=-0.5 cutoff_transition_width=1.0 central_weight=1.0 n_sparse=8000 delta=0.2 f0=0.0 covariance_type=dot_product zeta=4 sparse_method=cur_points}
Data availability
The potential model described herein as well as the DFT+MBD reference data used for fitting the model are openly available through the Zenodo repository (https://doi.org/10.5281/zenodo.4003703). The unique identifier of the potential is GAP_2020_5_23_60_1_23_12_19. In addition, the (DFT+MBD-computed) testing data used in this paper are available at https://github.com/libAtoms/testing-framework/tree/public/tests/P/.
Code availability
The GAP code, which was used to carry out the fitting of the potential and the validation shown throughout this work, is freely available at https://www.libatoms.org/ for non-commercial research. The interface to LAMMPS (allowing GAPs to be used through a pair_style definition) is provided by the QUIP code, which is freely available at https://github.com/libAtoms/QUIP/.
Pfitzner, A. Phosphorus remains exciting! Angew. Chem. Int. Ed. 45, 699–700 (2006).
Simon, A., Borrmann, H. & Horakh, J. On the polymorphism of white phosphorus. Chem. Ber. 130, 1235–1240 (1997).
Roth, W. L., DeWitt, T. W. & Smith, A. J. Polymorphism of red phosphorus. J. Am. Chem. Soc. 69, 2881–2885 (1947).
Elliott, S. R., Dore, J. C. & Marseglia, E. The structure of amorphous phosphorus. J. Phys. Colloq. 46, C8-349–C8-353 (1985).
Zaug, J. M., Soper, A. K. & Clark, S. M. Pressure-dependent structures of amorphous red phosphorus and the origin of the first sharp diffraction peaks. Nat. Mater. 7, 890–899 (2008).
Liu, H. et al. Phosphorene: an unexplored 2D semiconductor with a high hole mobility. ACS Nano 8, 4033–4041 (2014).
Li, L. et al. Black phosphorus field-effect transistors. Nat. Nanotechnol. 9, 372–377 (2014).
Carvalho, A. et al. Phosphorene: from theory to applications. Nat. Rev. Mater. 1, 16061 (2016).
Thurn, H. & Krebs, H. Über Struktur und Eigenschaften der Halbmetalle. XXII. Die Kristallstruktur des Hittorfschen Phosphors [in German]. Acta Crystallogr. Sect. B 25, 125–135 (1969).
Ruck, M. et al. Fibrous red phosphorus. Angew. Chem. Int. Ed. 44, 7616–7619 (2005).
Zhang, L. et al. Structure and properties of violet phosphorus and its phosphorene exfoliation. Angew. Chem. Int. Ed. 59, 1074–1080 (2020).
Pfitzner, A., Bräu, M. F., Zweck, J., Brunklaus, G. & Eckert, H. Phosphorus nanorods—two allotropic modifications of a long-known element. Angew. Chem. Int. Ed. 43, 4228–4231 (2004).
Smith, J. B., Hagaman, D., DiGuiseppi, D., Schweitzer-Stenner, R. & Ji, H.-F. Ultra-long crystalline red phosphorus nanowires from amorphous red phosphorus thin films. Angew. Chem. Int. Ed. 55, 11829–11833 (2016).
Zhu, Y. et al. A [001]-oriented hittorf's phosphorus nanorods/polymeric carbon nitride heterostructure for boosting wide-spectrum-responsive photocatalytic hydrogen evolution from pure water. Angew. Chem. Int. Ed. 59, 868–873 (2020).
Karttunen, A. J., Linnolahti, M. & Pakkanen, T. A. Icosahedral and ring-shaped allotropes of phosphorus. Chem. Eur. J. 13, 5232–5237 (2007).
Wu, M., Fu, H., Zhou, L., Yao, K. & Zeng, X. C. Nine new phosphorene polymorphs with non-honeycomb structures: a much extended family. Nano Lett. 15, 3557–3562 (2015).
Zhuo, Z., Wu, X. & Yang, J. Two-dimensional phosphorus porous polymorphs with tunable band gaps. J. Am. Chem. Soc. 138, 7091–7098 (2016).
Deringer, V. L., Pickard, C. J. & Proserpio, D. M. Hierarchically structured allotropes of phosphorus from data-driven exploration. Angew. Chem. Int. Ed. 59, 15880–15885 (2020).
Katayama, Y. et al. A first-order liquid–liquid phase transition in phosphorus. Nature 403, 170–173 (2000).
Monaco, G., Falconi, S., Crichton, W. A. & Mezouar, M. Nature of the first-order phase transition in fluid phosphorus at high temperature and pressure. Phys. Rev. Lett. 90, 255701 (2003).
Katayama, Y. Macroscopic separation of dense fluid phase and liquid phase of phosphorus. Science 306, 848–851 (2004).
Böcker, S. & Häser, M. Covalent structures of phosphorus: a comprehensive theoretical study. Z. Anorg. Allg. Chem. 621, 258–286 (1995).
Hohl, D. & Jones, R. O. Amorphous phosphorus: a cluster-network model. Phys. Rev. B 45, 8995–9005 (1992).
Appalakondaiah, S., Vaitheeswaran, G., Lebègue, S., Christensen, N. E. & Svane, A. Effect of van der Waals interactions on the structural and elastic properties of black phosphorus. Phys. Rev. B 86, 035105 (2012).
Qiao, J., Kong, X., Hu, Z.-X., Yang, F. & Ji, W. High-mobility transport anisotropy and linear dichroism in few-layer black phosphorus. Nat. Commun. 5, 4475 (2014).
Bachhuber, F. et al. The extended stability range of phosphorus allotropes. Angew. Chem. Int. Ed. 53, 11629–11633 (2014).
Sansone, G. et al. On the exfoliation and anisotropic thermal expansion of black phosphorus. Chem. Commun. 54, 9793–9796 (2018).
Jiang, J.-W. & Park, H. S. Negative poisson's ratio in single-layer black phosphorus. Nat. Commun. 5, 4727 (2014).
Liu, Y., Xu, F., Zhang, Z., Penev, E. S. & Yakobson, B. I. Two-dimensional mono-elemental semiconductor with electronically inactive defects: the case of phosphorus. Nano Lett. 14, 6782–6786 (2014).
Ong, Z.-Y., Cai, Y., Zhang, G. & Zhang, Y.-W. Strong thermal transport anisotropy and strain modulation in single-layer phosphorene. J. Phys. Chem. C 118, 25272–25277 (2014).
Shulenburger, L., Baczewski, A. D., Zhu, Z., Guan, J. & Tománek, D. The nature of the interlayer interaction in bulk and few-layer phosphorus. Nano Lett. 15, 8170–8175 (2015).
Schütz, M., Maschio, L., Karttunen, A. J. & Usvyat, D. Exfoliation energy of black phosphorus revisited: a coupled cluster benchmark. J. Phys. Chem. Lett. 8, 1290–1294 (2017).
Hohl, D. & Jones, R. O. Polymerization in liquid phosphorus: simulation of a phase transition. Phys. Rev. B 50, 17047–17053 (1994).
Morishita, T. Liquid-liquid phase transitions of phosphorus via constant-pressure first-principles molecular dynamics simulations. Phys. Rev. Lett. 87, 105701 (2001).
Ghiringhelli, L. M. & Meijer, E. J. Phosphorus: first principle simulation of a liquid–liquid phase transition. J. Chem. Phys. 122, 184510 (2005).
Zhao, G. et al. Anomalous phase behavior of first-order fluid-liquid phase transition in phosphorus. J. Chem. Phys. 147, 204501 (2017).
Jiang, J.-W. Parametrization of Stillinger–Weber potential based on valence force field model: application to single-layer MoS2 and black phosphorus. Nanotechnology 26, 315706 (2015).
Midtvedt, D. & Croy, A. Valence-force model and nanomechanics of single-layer phosphorene. Phys. Chem. Chem. Phys. 18, 23312–23319 (2016).
Xiao, H. et al. Development of a transferable reactive force field of P/H systems: application to the chemical and mechanical properties of phosphorene. J. Phys. Chem. A 121, 6135–6149 (2017).
Hackney, N. W., Tristant, D., Cupo, A., Daniels, C. & Meunier, V. Shell model extension to the valence force field: application to single-layer black phosphorus. Phys. Chem. Chem. Phys. 21, 322–328 (2019).
Sresht, V., Pádua, A. A. H. & Blankschtein, D. Liquid-phase exfoliation of phosphorene: design rules from molecular dynamics simulations. ACS Nano 9, 8255–8268 (2015).
Behler, J. & Parrinello, M. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98, 146401 (2007).
Bartók, A. P., Payne, M. C., Kondor, R. & Csányi, G. Gaussian approximation potentials: the accuracy of quantum mechanics, without the electrons. Phys. Rev. Lett. 104, 136403 (2010).
Thompson, A. P., Swiler, L. P., Trott, C. R., Foiles, S. M. & Tucker, G. J. Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials. J. Comput. Phys. 285, 316–330 (2015).
Shapeev, A. V. Moment tensor potentials: a class of systematically improvable interatomic potentials. Multiscale Model. Simul. 14, 1153–1173 (2016).
Smith, J. S., Isayev, O. & Roitberg, A. E. ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost. Chem. Sci. 8, 3192–3203 (2017).
Chmiela, S. et al. Machine learning of accurate energy-conserving molecular force fields. Sci. Adv. 3, e1603015 (2017).
Zhang, L., Han, J., Wang, H., Car, R. & E, W. Deep potential molecular dynamics: a scalable model with the accuracy of quantum mechanics. Phys. Rev. Lett. 120, 143001 (2018).
Behler, J. First principles neural network potentials for reactive simulations of large molecular and condensed systems. Angew. Chem. Int. Ed. 56, 12828–12840 (2017).
Deringer, V. L., Caro, M. A. & Csányi, G. Machine learning interatomic potentials as emerging tools for materials science. Adv. Mater. 31, 1902765 (2019).
Noé, F., Tkatchenko, A., Müller, K.-R. & Clementi, C. Machine learning for molecular simulation. Annu. Rev. Phys. Chem. 71, 361–390 (2020).
Zuo, Y. et al. Performance and cost assessment of machine learning interatomic potentials. J. Phys. Chem. A 124, 731–745 (2020).
Jinnouchi, R., Lahnsteiner, J., Karsai, F., Kresse, G. & Bokdam, M. Phase transitions of hybrid perovskites simulated by machine-learning force fields trained on the fly with Bayesian inference. Phys. Rev. Lett. 122, 225701 (2019).
Deringer, V. L., Csányi, G. & Proserpio, D. M. Extracting crystal chemistry from amorphous carbon structures. ChemPhysChem 18, 873–877 (2017).
Eivari, H. A. et al. Two-dimensional hexagonal sheet of TiO2. Chem. Mater. 29, 8594–8603 (2017).
Tong, Q., Xue, L., Lv, J., Wang, Y. & Ma, Y. Accelerating CALYPSO structure prediction by data-driven learning of a potential energy surface. Faraday Discuss. 211, 31–43 (2018).
Podryabinkin, E. V., Tikhonov, E. V., Shapeev, A. V. & Oganov, A. R. Accelerating crystal structure prediction by machine-learning interatomic potentials with active learning. Phys. Rev. B 99, 064114 (2019).
Deringer, V. L., Proserpio, D. M., Csányi, G. & Pickard, C. J. Data-driven learning and prediction of inorganic crystal structures. Faraday Discuss. 211, 45–59 (2018).
Tkatchenko, A., DiStasio, R. A., Car, R. & Scheffler, M. Accurate and efficient method for many-body van der Waals interactions. Phys. Rev. Lett. 108, 236402 (2012).
Ambrosetti, A., Reilly, A. M., DiStasio, R. A. & Tkatchenko, A. Long-range correlation energy calculated from coupled atomic response functions. J. Chem. Phys. 140, 18A508 (2014).
Bartók, A. P., Kermode, J., Bernstein, N. & Csányi, G. Machine learning a general-purpose interatomic potential for silicon. Phys. Rev. X 8, 041048 (2018).
Deringer, V. L., Pickard, C. J. & Csányi, G. Data-driven learning of total and local energies in elemental boron. Phys. Rev. Lett. 120, 156001 (2018).
Bernstein, N., Csányi, G. & Deringer, V. L. De novo exploration and self-guided learning of potential-energy surfaces. npj Comput. Mater. 5, 99 (2019).
Bartók, A. P., Kondor, R. & Csányi, G. On representing chemical environments. Phys. Rev. B 87, 184115 (2013).
Cheng, B. et al. Mapping materials and molecules. Acc. Chem. Res. 53, 1981–1991 (2020).
Pickard, C. J. & Needs, R. J. Ab initio random structure searching. J. Phys. 23, 053201 (2011).
Jamieson, J. C. Crystal structures adopted by black phosphorus at high pressures. Science 139, 1291–1292 (1963).
Rowe, P., Deringer, V. L., Gasparotto, P., Csányi, G. & Michaelides, A. An accurate and transferable machine learning potential for carbon. J. Chem. Phys. 153, 034702 (2020).
Deringer, V. L. & Csányi, G. Machine learning based interatomic potential for amorphous carbon. Phys. Rev. B 95, 094203 (2017).
Brown, A. & Rundqvist, S. Refinement of the crystal structure of black phosphorus. Acta Cryst. 19, 684–685 (1965).
George, J., Hautier, G., Bartók, A. P., Csányi, G. & Deringer, V. L. Combining phonon accuracy with high transferability in Gaussian approximation potential models. J. Chem. Phys. 153, 044104 (2020).
Scelta, D. et al. Interlayer bond formation in black phosphorus at high pressure. Angew. Chem. Int. Ed. 56, 14135–14140 (2017).
Schusteritsch, G., Uhrin, M. & Pickard, C. J. Single-layered hittorf's phosphorus: a wide-bandgap high mobility 2D material. Nano Lett. 16, 2975–2980 (2016).
Hittorf, W. Zur Kenntniß des Phosphors [in German]. Ann. Phys. Chem. 202, 193–228 (1865).
Zhang, J. et al. Phosphorene nanoribbon as a promising candidate for thermoelectric applications. Sci. Rep. 4, 6452 (2015).
Watts, M. C. et al. Production of phosphorene nanoribbons. Nature 568, 216–220 (2019).
Hong, Y., Zhang, J., Huang, X. & Zeng, X. C. Thermal conductivity of a two-dimensional phosphorene sheet: a comparative study with graphene. Nanoscale 7, 18716–18724 (2015).
Ma, M., Tocci, G., Michaelides, A. & Aeppli, G. Fast diffusion of water nanodroplets on graphene. Nat. Mater. 15, 66–71 (2016).
Tkatchenko, A. & Scheffler, M. Accurate molecular Van Der Waals interactions from ground-state electron density and free-atom reference data. Phys. Rev. Lett. 102, 073005 (2009).
Lange, S., Schmidt, P. & Nilges, T. Au3SnP7@black phosphorus: an easy access to black phosphorus. Inorg. Chem. 46, 4028–4035 (2007).
Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).
Clark, S. J. et al. First principles methods using CASTEP. Z. Krist. 220, 567–570 (2005).
Smith, J. S. et al. Approaching coupled cluster accuracy with a general-purpose neural network potential through transfer learning. Nat. Commun. 10, 2903 (2019).
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994).
Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).
Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59, 1758–1775 (1999).
Plimpton, S. Fast parallel algorithms for short-range molecular dynamics. J. Comput. Phys. 117, 1–19 (1995).
Parrinello, M. & Rahman, A. Polymorphic transitions in single crystals: a new molecular dynamics method. J. Appl. Phys. 52, 7182–7190 (1981).
Martyna, G. J., Tobias, D. J. & Klein, M. L. Constant pressure molecular dynamics algorithms. J. Chem. Phys. 101, 4177–4189 (1994).
Shinoda, W., Shiga, M. & Mikami, M. Rapid estimation of elastic constants by molecular dynamics simulation under constant stress. Phys. Rev. B 69, 134103 (2004).
Hjorth Larsen, A. et al. The atomic simulation environment—a Python library for working with atoms. J. Phys. 29, 273002 (2017).
Momma, K. & Izumi, F. VESTA 3 for three-dimensional visualization of crystal, volumetric and morphology data. J. Appl. Crystallogr. 44, 1272–1276 (2011).
Stukowski, A. Visualization and analysis of atomistic simulation data with OVITO—the open visualization tool. Model. Simul. Mater. Sci. Eng. 18, 015012 (2010).
We thank N. Bernstein and J.R. Kermode for developing substantial parts of the potential testing framework (described in ref. 61), which we have used in the present work. V.L.D. thanks C.J. Pickard and D.M. Proserpio for ongoing valuable discussions and the Leverhulme Trust for an Early Career Fellowship. Parts of this work were carried out during V.L.D.'s previous affiliation with the University of Cambridge (until August 2019) with additional support from the Isaac Newton Trust. V.L.D. and M.A.C. acknowledge travel support from the HPC-Europa3 initiative (in the framework of the European Union's Horizon 2020 research and innovation programme, Grant Agreement 730897). M.A.C. acknowledges personal funding from the Academy of Finland (grant number #310574) and computational resources from CSC—IT Center for Science. This work used the ARCHER UK National Supercomputing Service through EPSRC grant EP/P022596/1. The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work (https://doi.org/10.5281/zenodo.22558). Post processing and visualisation of structural data was made possible by the freely available ASE91, VESTA92 and OVITO93 software.
Department of Chemistry, Inorganic Chemistry Laboratory, University of Oxford, Oxford, OX1 3QR, UK
Volker L. Deringer
Department of Electrical Engineering and Automation, Aalto University, Espoo, 02150, Finland
Miguel A. Caro
Department of Applied Physics, Aalto University, Espoo, 02150, Finland
Engineering Laboratory, University of Cambridge, Cambridge, CB2 1PZ, UK
Gábor Csányi
V.L.D. initiated and coordinated the study. V.L.D. developed the reference database and fitted initial potential versions at the PBE+TS level; M.A.C. performed and analysed the reference computations at the PBE+MBD level; G.C. fitted the final potential version, including the long-range baseline. V.L.D. and G.C. jointly analysed and validated the potential. V.L.D. studied the liquid phases. V.L.D. wrote the paper with input from all authors.
Correspondence to Volker L. Deringer.
G.C. is listed as inventor on a patent filed by Cambridge Enterprise Ltd. related to SOAP and GAP (US patent 8843509, filed on 5 June 2009 and published on 23 September 2014). The remaining authors declare no competing interests.
Peer review information Nature Communications thanks Pablo Piaggi and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Deringer, V.L., Caro, M.A. & Csányi, G. A general-purpose machine-learning force field for bulk and nanostructured phosphorus. Nat Commun 11, 5461 (2020). https://doi.org/10.1038/s41467-020-19168-z
Kilowatt hour
Residential electricity meter located in Canada
The kilowatt hour (symbol kWh, kW·h, or kW h) is a unit of energy equal to 1,000 watt-hours, or 3.6 megajoules.[1][2] If the energy is being transmitted or used at a constant rate (power) over a period of time, the total energy in kilowatt-hours is the product of the power in kilowatts and the time in hours. The kilowatt-hour is commonly used as a billing unit for energy delivered to consumers by electric utilities.
The kilowatt-hour (symbolized kWh) is a unit of energy equivalent to one kilowatt (1 kW) of power expended for one hour.
$$\mathrm{kW\cdot h} = (3600\ \mathrm{s})\,[\mathrm{kW}] = 3600\,[\mathrm{s}]\left[\frac{\mathrm{kJ}}{\mathrm{s}}\right] = 3600\ \mathrm{kJ} = 3.6\ \mathrm{MJ}$$
One watt is equal to 1 J/s. One kilowatt-hour is 3.6 megajoules, which is the amount of energy converted if work is done at an average rate of one thousand watts for one hour.
Note that the International Standard SI unit of energy is the joule. The hour is a unit of time "outside the SI",[3] so the kilowatt-hour is a non-SI unit of energy.
A heater rated at 1000 watts (1 kilowatt), operating for one hour, uses one kilowatt-hour (equivalent to 3.6 megajoules) of energy. A 40-watt light bulb operating for 25 hours uses one kilowatt-hour. Electrical energy is sold in kilowatt-hours; the cost of running equipment is the power in kilowatts multiplied by the running time in hours and the price per kilowatt-hour. The unit price of electricity may depend upon the rate of consumption and the time of day. Industrial users may also face extra charges based on their peak usage and their power factor.
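As a simple illustration of this billing arithmetic in Python (the tariff used below is an assumed value, not a quoted rate):

def energy_cost(power_kw, hours, price_per_kwh):
    """Energy drawn by a constant load, in kWh, and the resulting cost."""
    kwh = power_kw * hours
    return kwh, kwh * price_per_kwh

# A 1 kW heater running for 3 hours at an assumed tariff of $0.15/kWh:
print(energy_cost(1.0, 3, 0.15))   # (3.0, 0.45) -> 3 kWh costing $0.45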
Symbol and abbreviation for kilowatt hour
The symbol "kWh" is most commonly used in commercial, educational, scientific and media publications,[4] and is the usual practice in electrical power engineering.[5]
Other abbreviations and symbols may be encountered:
"kW h" is less commonly used. It is consistent with SI standards (but note that the kilowatt-hour is a non-SI unit). The international standard for SI[3] states that in forming a compound unit symbol, "Multiplication must be indicated by a space or a half-high (centered) dot (·), since otherwise some prefixes could be misinterpreted as a unit symbol" (i.e., kW h or kW·h). This is supported by a voluntary standard[6] issued jointly by an international (IEEE) and national (ASTM) organization. However, at least one major usage guide[7] and the IEEE/ASTM standard allow "kWh" (but do not mention other multiples of the watt hour). One guide published by NIST specifically recommends avoiding "kWh" "to avoid possible confusion".[8]
The US official fuel-economy window sticker for electric vehicles uses the abbreviation "kW-hrs",[9] though the related website uses the more usual "kWh".[10]
Variations in capitalization are sometimes seen: KWh, KWH, kwh etc.
"kW·h" is, like "kW h", also consistent with SI standards, but it is very rarely used in practice.
The notation "kW/h", as a symbol for kilowatt-hour, is not correct.
Conversions
To convert a quantity measured in a unit in the left column to the units in the top row, multiply by the factor in the cell where the row and column intersect.

|                   | joule (J)     | watt hour (W·h) | kilowatt hour (kW·h) | electronvolt (eV) | calorie (cal) |
|-------------------|---------------|-----------------|----------------------|-------------------|---------------|
| 1 J = 1 kg·m²·s⁻² | 1             | 2.77778 × 10⁻⁴  | 2.77778 × 10⁻⁷       | 6.241 × 10¹⁸      | 0.239         |
| 1 W·h             | 3,600         | 1               | 0.001                | 2.247 × 10²²      | 859.8         |
| 1 kW·h            | 3.6 × 10⁶     | 1,000           | 1                    | 2.247 × 10²⁵      | 8.598 × 10⁵   |
| 1 eV              | 1.602 × 10⁻¹⁹ | 4.45 × 10⁻²³    | 4.45 × 10⁻²⁶         | 1                 | 3.827 × 10⁻²⁰ |
| 1 cal             | 4.1868        | 1.163 × 10⁻³    | 1.163 × 10⁻⁶         | 2.613 × 10¹⁹      | 1             |
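The same factors can be applied programmatically; the following small Python helper uses the (rounded) joule equivalents from the table:

J_PER_UNIT = {"J": 1.0, "Wh": 3600.0, "kWh": 3.6e6, "eV": 1.602e-19, "cal": 4.1868}

def convert_energy(value, from_unit, to_unit):
    """Convert between the table's units via their joule equivalents."""
    return value * J_PER_UNIT[from_unit] / J_PER_UNIT[to_unit]

print(convert_energy(1, "kWh", "cal"))   # about 8.598e5, matching the table
print(convert_energy(1, "Wh", "J"))      # 3600.0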
Watt hour multiples and billing units
The kilowatt-hour is commonly used by electrical distribution providers for purposes of billing, since the monthly energy consumption of a typical residential customer ranges from a few hundred to a few thousand kilowatt-hours. Megawatt-hours, gigawatt-hours, and terawatt-hours are often used for metering larger amounts of electrical energy to industrial customers and in power generation. The terawatt-hour and petawatt-hour are large enough to conveniently express annual electricity generation for whole countries.
In India, the kilowatt-hour is often simply called a Unit of energy. A million units, designated MU, is a gigawatt-hour and a BU (billion units) is a terawatt-hour.[11][12]
Confusion of kilowatt-hours (energy) and kilowatts (power)
The terms power and energy are frequently confused. Physical power can be defined as work per unit time, measured in units of joules per second or watts. To produce power over any given period of time requires energy. Either higher levels of power (for a given period) or longer periods of run time (at a given power level) require more energy.
An electrical load (e.g. a lamp, toaster, electric motor, etc.) has a rated "size" in watts. This is its running power level, which equates to the instantaneous rate at which energy must be generated and consumed to run the device. How much energy is consumed at that rate depends on how long you run the device. However, its power level requirements are basically constant while running. The unit of energy for residential electrical billing, kilowatt-hours, integrates changing power levels in use at the residence over the past billing period (nominally 720 hours for a 30-day month), thus showing cumulative electrical energy use for the month.
For another example, when a light bulb with a power rating of 100 watts is turned on for one hour, the energy used is 100 watt hours (W·h), 0.1 kilowatt-hour, or 360 kilojoules. This same amount of energy would light a 40-watt bulb for 2.5 hours, or a 10-watt low-energy bulb for 10 hours. A power station electricity output at any particular moment would be measured in multiples of watts, but its annual energy sales would be in multiples of watt-hours. A kilowatt-hour is the amount of energy equivalent to a steady power of 1 kilowatt running for 1 hour, or 3.6 megajoules.
Whereas individual homes only pay for the kilowatt-hours consumed, commercial buildings and institutions also pay for peak power consumption (the greatest power recorded in a fairly short time, such as 15 minutes). This compensates the power company for maintaining the infrastructure needed to provide higher-than-normal power. These charges show up on electricity bills in the form of demand charges.[13]
Major energy production or consumption is often expressed as terawatt-hours (TWh) for a given period that is often a calendar year or financial year. One terawatt-hour is equal to a sustained power of approximately 114 megawatts for a period of one year.
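Using a 365-day (8760-hour) year, the figure of roughly 114 MW follows directly:

$$\frac{1\ \mathrm{TWh}}{1\ \mathrm{year}} = \frac{10^{12}\ \mathrm{W\cdot h}}{8760\ \mathrm{h}} \approx 1.14 \times 10^{8}\ \mathrm{W} \approx 114\ \mathrm{MW}$$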
Misuse of watts per hour
Power units measure the rate of energy per unit time. Many compound units for rates explicitly mention units of time, for example, miles per hour, kilometers per hour, dollars per hour. Kilowatt-hours are a product of power and time, not a rate of change of power with time. Watts per hour (W/h) is a unit of a change of power per hour. It might be used to characterize the ramp-up behavior of power plants. For example, a power plant that reaches a power output of 1 MW from 0 MW in 15 minutes has a ramp-up rate of 4 MW/h. Hydroelectric power plants have a very high ramp-up rate, which makes them particularly useful in peak load and emergency situations.
The proper use of terms such as watts per hour is uncommon, whereas misuse[14] may be widespread.
Other energy-related units
Several other units are commonly used to indicate power or energy capacity or use in specific application areas. All the SI prefixes may be applied to the watt-hour: a kilowatt-hour is 1,000 W·h (symbols kW·h, kWh or kW h); a megawatt-hour is 1 million W·h (symbols MW·h, MWh or MW h); a milliwatt-hour is 1/1000 W·h (symbols mW·h, mWh or mW h); and so on.
Average annual power production or consumption can be expressed in kilowatt-hours per year; for example, when comparing the energy efficiency of household appliances whose power consumption varies with time or the season of the year, or the energy produced by a distributed power source. One kilowatt-hour per year equals about 114.08 milliwatts applied constantly during one year.
The energy content of a battery is usually expressed indirectly by its capacity in ampere-hours; to convert watt-hours (W·h) to ampere-hour (A·h), the watt-hour value must be divided by the voltage of the power source. This value is approximate since the voltage is not constant during discharge of a battery.
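For example, with purely illustrative values, a battery pack storing 60 W·h at a nominal 12 V would be rated as

$$\frac{60\ \mathrm{W\cdot h}}{12\ \mathrm{V}} = 5\ \mathrm{A\cdot h},$$

subject to the caveat above that the voltage is not constant during discharge.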
The Board of Trade unit (BOTU) is an obsolete UK synonym for kilowatt-hour. The term derives from the name of the Board of Trade which regulated the electricity industry until 1942 when the Ministry of Power took over.[15] The B.O.T.U. should not be confused with the British thermal unit or BTU, which is a much smaller quantity of thermal energy. To further the confusion, at least as late as 1937, the Board of Trade unit was simply abbreviated BTU.
Burnup of nuclear fuel is normally quoted in megawatt-days per tonne (MW·d/MTU), where tonne refers to a metric ton of uranium metal or its equivalent, and megawatt refers to the entire thermal output, not the fraction which is converted to electricity.
Ampere-hour
Orders of magnitude (energy)
Electric energy consumption
Watt second
↑ Thompson, Ambler and Taylor, Barry N. (2008). Guide for the Use of the International System of Units (SI) (Special publication 811). Gaithersburg, MD: National Institute of Standards and Technology. 12.
↑ "Half-high dots or spaces are used to express a derived unit formed from two or more other units by multiplication." Barry N. Taylor. (2001 ed.) The International System of Units. (Special publication 330). Gaithersburg, MD: National Institute of Standards and Technology. 20.
↑ 3.0 3.1 The International System of Units (SI). (2006, 8th ed.) Paris: International Bureau of Weights and Measures. 130.
↑ See for example: Wind Energy Reference Manual Part 2: Energy and Power Definitions, Danish Wind Energy Association. Retrieved 9 January 2008; "Kilowatt-Hour (kWh)" BusinessDictionary.com. Retrieved 9 January 2008; "US Nuclear Power Industry" www.world-nuclear.org. Retrieved 9 January 2008; "Energy. A Beginners Guide: Making Sense of Units" Renew On Line (UK). The Open University. Retrieved 9 January 2008.
↑ American National Standard for Metric Practice IEEE/ASTM SI 10™-2010 (Revision of IEEE/ASTM SI 10-2002), IEEE, NY, 11 April 2011. "The symbols for certain compound units of electrical power engineering are usually written without separation, thus: watthour (Wh), kilowatthour (kWh), voltampere (VA), and kilovoltampere (kVA)"
↑ Standard for the Use of the International System of Units (SI): The Modern Metric System. (1997). (IEEE/ASTM SI 10-1997). New York and West Conshohocken, PA: Institute of Electrical and Electronics Engineers and ASTM. 15.
↑ Chicago Manual of Style. (14th ed., 1993) University of Chicago Press. 482.
↑ Guide for the Use of the International System of Units (SI) p.12
↑ Template:Cite news
↑ " Understanding Electric Demand", National Grid
Power and Energy in the Home: The Trustworthy Cyber Infrastructure for the Power Grid (TCIP) group at the University of Illinois at Urbana-Champaign has developed an applet which illustrates the consumption and cost of energy in the home, and allows the user to see the effects of manipulating the flow of electricity to various household appliances.
Prices per kilowatt hour in the USA, Energy Information Administration
What Is the Coefficient of Variation?
01.24.2023 • 6 min read
Sarah Thomas
Learn all about the coefficient of variation. Included are explanations of the standard deviation and the mean as well as examples and common applications.
The coefficient of variation (CV)—also called the relative standard deviation (RSD)—is the ratio of the standard deviation to the mean. It is a parameter or statistic used to convey the variability of your data in relation to its mean.
You can express the coefficient of variation as a decimal or a percentage. To convert the coefficient into a percentage, just multiply the ratio of the standard deviation to the mean by 100.
For example, if the standard deviation of your data is 5 and the mean value is 50, the coefficient of variation is 5/50 = 0.10, or 0.10 x 100 = 10%.
Unlike absolute measures of dispersion—such as quartiles, mean absolute deviation, variance, and standard deviation—the coefficient of variation is a relative measure of dispersion. It compares how large the standard deviation is relative to the mean in proportional terms rather than absolute terms.
If you find a coefficient of variation of 0.10 or 10%, the standard deviation is one-tenth or 10% of the mean.
What Is the Standard Deviation?
The standard deviation of a data set is the square root of its variance. Both variance and standard deviation are absolute measures of variability. Still, variance measures the dispersion of data in squared units that are hard to interpret. In contrast, the standard deviation converts variance into "standardized," easy-to-interpret units that are the same as the units used in your data.
Standard deviations are always positive or 0. The larger your standard deviation, the more spread out your data. If the standard deviation is zero, your data has no variation. This means all values in the data set are the same.
How To Calculate Standard Deviation
You can calculate standard deviation in 5 steps (a short code sketch follows the list):
You calculate the distances between each point in your data and the mean. You do this by subtracting the mean from each data point.
You square each distance found in the previous step.
You sum all the squared distances.
In the case of a population standard deviation, you divide the sum of squared distances by the number of values in your data set (N). In the case of a sample standard deviation, you divide the sum of squared distances by the number of values in your data set minus one (n -1).
Finally, you take the square root of the value you calculated in Step 4.
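The five steps translate directly into a short Python sketch, extended by one line to return the coefficient of variation as well; the example call uses the Sample Data A values introduced below.

import math

def sample_std_and_cv(data):
    """Sample standard deviation (n - 1 denominator) and coefficient of variation."""
    n = len(data)
    mean = sum(data) / n
    squared_distances = [(x - mean) ** 2 for x in data]   # steps 1 and 2
    variance = sum(squared_distances) / (n - 1)           # steps 3 and 4 (sample version)
    s = math.sqrt(variance)                               # step 5
    return s, s / mean

print(sample_std_and_cv([2, 4, 6, 8, 10]))   # (3.162..., 0.527...), i.e. s ≈ 3.16 and CV ≈ 0.53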
Coefficient of Variation Examples
Now that you know what the coefficient of variation and standard deviation are, let's work through two examples of calculating the CV.
Sample Data A: [ 2, 4, 6, 8, 10 ]
The sample mean is 6
The standard deviation is 3.16
The coefficient of variation is 0.53 (rounded to the nearest hundredth) or 53%.
Remember, this sample data set has 5 numbers, so n=5.
$$\text{Mean}(\bar{x}) = \frac{2+4+6+8+10}{5} = 6$$

$$\text{Standard Deviation}(s) = \sqrt{\frac{\sum (x_i-\bar{x})^2}{n-1}} = 3.16$$

$$\text{Coefficient of Variation}(CV) = \frac{s}{\bar{x}} = \frac{3.16}{6} = 0.53 = 53\%$$
Here is another sample data set. See if you can calculate the coefficient of variation without looking at the answer. This data set only has three values, so n=3.
Sample Data Set B
Answer: \(s = 10\), \(\bar{x} = 100\), \(CV = 0.10\)
Advantages of Coefficient of Variation
The main reason we use the coefficient of variation is that it is dimensionless.
A dimensionless quantity is a number without any units. For example, if you have data for temperatures measured in Fahrenheit, both the mean and standard deviation of the data will be measured in Fahrenheit.
However, when you calculate the coefficient of variation \(\left(\dfrac{s}{\bar{x}}\right)\), the units disappear, and the resulting coefficient is dimensionless.
As a dimensionless quantity, the coefficient of variation offers two main advantages.
You can compare dispersion in data sets with drastically different means
You can compare dispersion of data sets measured in different units
As an example, consider the hypothetical data below. The data on the left shows how the price of a carton of milk varied in a U.S. grocery store over the course of a year. The prices are measured in U.S. dollars. The data on the right shows how the price of a carton of milk varied in a Japanese grocery store over the course of a year. The data is measured in Japanese yen.
PRICE OF A CARTON OF MILK IN THE U.S.: [$0.75, $0.90, $1.50, $2.20, $4.10]
Mean (x̄) = $1.89
Standard Deviation (s) = $1.36
Coefficient of Variation (CV) = 0.72

PRICE OF A CARTON OF MILK IN JAPAN: [¥178, ¥198, ¥205, ¥210, ¥220]
Mean (x̄) = ¥202.20
Standard Deviation (s) = ¥15.72
Coefficient of Variation (CV) = 0.08
If you want to compare the variation in prices in the U.S. grocery store and the Japanese store, you cannot simply compare standard deviations since they are measured in different units—yen and U.S. dollars.
You could convert the Japanese prices into dollars, but that would be more work than simply calculating the coefficient of variation. Comparing the two coefficients of variation instead, the Japanese value of 0.08 is much smaller than the U.S. value of 0.72. This tells you that prices varied far more in the U.S. grocery store than in the Japanese grocery store.
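Because the coefficient of variation is dimensionless, no currency conversion is needed to make the comparison. A short Python sketch using the hypothetical prices above:

```python
import statistics

us_prices = [0.75, 0.90, 1.50, 2.20, 4.10]   # U.S. dollars
jp_prices = [178, 198, 205, 210, 220]        # Japanese yen

def cv(values):
    return statistics.stdev(values) / statistics.mean(values)

print(round(cv(us_prices), 2))  # 0.72 -- prices vary a lot relative to the mean
print(round(cv(jp_prices), 2))  # 0.08 -- prices are comparatively stable
```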
Disadvantages of Coefficient of Variation
The coefficient of variation has three main disadvantages.
You can only use it for data measured on a ratio scale; you cannot use it for data measured on an interval scale.
For data with a mean close to zero, the coefficient of variation will approach infinity.
You cannot use it to construct confidence intervals for the mean.
The coefficient of variation is used in many fields.
In analytical chemistry, researchers use the CV to express the precision and repeatability of an assay.
In applied probability, the CV is used in areas such as renewal theory, queuing theory, and reliability theory.
Archaeologists use the CV to compare the standardization of ancient artifacts.
Economists use the CV as a way of measuring economic inequality.
Engineers use the CV in quality assurance studies.
Financial analysts and investors use the CV to calculate risk-reward ratios and to determine whether the expected return makes up for the volatility of an investment.
Neuroscientists utilize the CV when studying brain activity.
In Excel and Google Sheets, you can calculate the coefficient of variation by first using the built-in functions for the mean and the standard deviation, =AVERAGE() and =STDEV(). Select the range of cells containing your data and place it within the parentheses of each function. Once you have the mean and standard deviation, simply divide the standard deviation by the mean.
In R, create a vector containing your data and give it a name like "data". Base R has no built-in cv() function, so either compute the ratio directly with sd(data) / mean(data), or use a cv() function from an add-on package that provides one (these typically accept an na.rm argument for handling missing values).
|
CommonCrawl
|
Why is Philae not provided with a propulsion system?
Rosetta is en route to rendezvous with Chury. Briefly, the mission comprises an orbiter, and a lander. The latter named Philae.
Wikipedia writes to say
... The lander is designed to touch down on the comet's surface after detaching itself from the main spacecraft body and "falling" towards the comet along a ballistic trajectory. ...
Could a random event (say, a tumble, or even outgassing) potentially deflect Philae from its planned trajectory?
Is its mass of 100 kg on Earth, together with Chury's infinitesimal gravity, adequate to ensure touch-down on the comet along the planned trajectory?
propulsion rosetta lander philae
TildalWave
Everyone
13.3k reputation, 25 silver badges, 131 bronze badges
Let me just expand a bit on @Tildalwave's answer. Most landers on airless bodies need a propulsion system, because they will be going too fast otherwise to land. But that's only because most landings have been done on objects with a lot of gravitational mass. Let's just try and figure out what the escape velocity would be. Wikipedia gives us the following clues, including some comet info:
Size: 4 km diameter
Shape: Irregular, but roughly oval shaped.
Average comet density: 0.6 g/cm^3
Okay, that's not a lot to go on, but what can we glean from that? Well, not a lot, but let's just try and figure out a mass, and then the escape velocity from that. Here are the numbers leading up to that:
Volume: (4 km diameter sphere) $3.4 \times 10^{16}$ cm$^3$
Mass: $2.0 \times 10^{13}$ kg
Escape Velocity (2 km from center of mass): 1.2 m/s
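A quick sketch of that arithmetic (spherical comet assumed, as above; Python used purely as a calculator):

```python
from math import pi, sqrt

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
radius = 2000.0      # m, from the ~4 km diameter
density = 600.0      # kg/m^3, i.e. 0.6 g/cm^3

volume = (4.0 / 3.0) * pi * radius ** 3     # ~3.4e10 m^3 (3.4e16 cm^3)
mass = density * volume                     # ~2.0e13 kg
v_esc = sqrt(2 * G * mass / radius)         # ~1.2 m/s at 2 km from the center

print(f"mass = {mass:.1e} kg, escape velocity = {v_esc:.2f} m/s")
```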
Okay, that's a pretty low escape velocity, walking would more than do it, and you could easily jump off of the comet! So, what would you need to do if you were going to actively try to land on it?
Some sort of a radar system, to know how close you were to the ground, plus thrusters to kill your remaining speed.
Those take power and mass, adding much complexity. The alternative is to have Rosetta put you into a ballistic trajectory, landing at roughly the escape velocity. It's not hard to absorb an impact of 1 m/s. In fact, the Mars Phoenix lander landed at 2.4 m/s on Mars, making this even more realistic.
A thruster system makes the whole thing more complicated. The only real advantages would be some sort of abort capability and a slightly more accurate landing, but those are fairly insignificant compared to the added complexity of the mission. The delta-v required to change from an orbit to a ballistic descent is negligible for Rosetta, which already has that capability anyway. Why go through the bother of adding several more complex systems?
edited Oct 6 '14 at 3:30
pericynthion
PearsonArtPhoto♦
Actually, Philae does have a propulsion system. As explained in this related question, its Active Descent System uses a cold-gas thruster to propel the lander towards the comet if needed.
Hobbes
$\begingroup$ Didn't that break or leak or something? (Oh, heh, they only discovered the problem yesterday, and this post is 7 months old. oops) $\endgroup$ – Mooing Duck Nov 13 '14 at 23:17
It doesn't really need one, and it would needlessly add to its mass. Rosetta will assume a relatively slow and probably highly elliptical orbit around 67P/Churyumov-Gera...aaagh! Chury!, with a periapsis only roughly a kilometer from it. The orbit has not yet been determined though, see my related question and the answer there. It likely won't be until Rosetta transmits its own close-proximity observations of the comet, and it might even change over time as the comet's coma, tails and surface activity in general increase during its approach to the Sun. Rosetta of course also has its own semi-autonomous collision avoidance system onboard, which will enable it to react to any debris in its path and adjust its trajectory.
Anyway, such elliptical orbits leave mission control plenty of leeway to later (when the Rosetta will be in orbit) decide at what point to release Philae on its ballistic trajectory. My guess would be this trajectory will be attempted from the point of Rosetta's flying past the comet in the opposite direction of its movement, to reduce the chance of incoming debris to near zero. The lander also has a harpoon (see the spike in the middle of the legs frame) that will fire towards the comet immediately after the touchdown to hook itself on it. Additionally, each of the three legs have battery powered ice screws in between the two padded gears, to additionally grip to the comet and pads to cushion the collision / landing:
Training sessions for the Philae comet lander (Source: DLR, National Aeronautics and Space Research Centre of Germany)
To answer your questions more directly, though: adding a propulsion system would be needless weight. The Philae lander wouldn't be able to react fast enough with an attitude control system (ACS), if it had one onboard, to avoid any possible obstacles. Instead, the lander will be put on a collision course (or ballistic trajectory, your pick) with the comet at an approximate velocity of 1 m/s. That is, I would say, a higher relative velocity than the lowest achievable with its release system, and the resulting momentum should suffice to negate the effects of any collisions with smaller debris, or of outgassing pushing against its direction of motion, as you mention. The lander's mass is negligible in a gravitational sense, but it still contributes inertia. With a bit of smart decision making from mission control, they ought to have plenty of opportunities to assure its touchdown. The harpoon, soft pads and powered ice screws should do the rest in making sure the lander stays put where it landed.
Of course, there are chances all of this will go sour. One of the main concerns I've seen mentioned online is that the harpoon and the screws themselves create cracks deep enough for the part of the comet on which the lander sits to simply chip away from the comet's main body. That doesn't sound too serious, as the lander could still do all of its experiments (some might become less conclusive though, like for example the CONSERT comet nucleus radio wave sounding experiment), just on a smaller piece of the icy rock. But it could also cause the lander to completely unhook from the comet's surface, or the chipped-away piece could start spinning, squash the lander between itself and the comet's main body, and cause it to lose its grip.
I guess we'll have to wait and see, but adding a propulsion system wouldn't have mitigated the main concerns I've seen mentioned over its chance of success either. In fact, it would only add to the problem, with more parts that could malfunction and a larger lander mass increasing its physical influence on the comet.
TildalWave
$\begingroup$ Wouldn't go ballistic/no-go ballistic be impaired by the lag between Mission Control, and Rosetta? How much mass would a minimal propulsion system add? Albeit that last may qualify as a separate question ... $\endgroup$ – Everyone Oct 2 '13 at 1:26
$\begingroup$ @Everyone - Yup, better a separate question for the other one. As for the first one, all of these go/no-go decisions will be programmed into the Rosetta and Philae before the lander is deployed. They will target a specific landing site from a specific orbital position, none of which are yet determined. I'm also waiting for more data with my mentioned question, I guess we'll both have to wait till April 2014. ;) $\endgroup$ – TildalWave Oct 2 '13 at 1:30
$\begingroup$ Downvoted because Philae does have a propulsion system... $\endgroup$ – Hobbes Mar 25 '14 at 12:03
$\begingroup$ @Hobbes Undeleted because it turns out it doesn't. Well, it was supposed to fire a simple upward pointing thruster as its two harpoons would have fired, but since that didn't happen and it was never meant as an attitude control during descent (NASA's term Active Descent System is misleading), I consider the rest of my answer relevant. Philae did land by help of Newton and that alone, to quote one of the presenters during the live landing event from ESOC. In fact, it did that three times (so far). I'll probably update it at some point in time to make this clearer, time permitting. $\endgroup$ – TildalWave Nov 13 '14 at 15:29
The odds of any unexpected event altering Philae's trajectory are negligible. I suspect that the most likely cause for putting it off-target, ironically, would be a propulsion system malfunction, or a leak in a propellant tank!
Not sure what you mean by the last question, "Is its mass of 100 kg on Earth, together with Chury's infinitesimal gravity, adequate to ensure touch-down on the comet along the planned trajectory?", but Philae is equipped with harpoons to anchor it - otherwise, even if it had thrusters and guidance systems, it would be very difficult to keep it from bouncing off the comet.
Russell Borogove
|
CommonCrawl
|
Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays
Cuiping Yang,
Zuoliang Xiong (ORCID: orcid.org/0000-0003-2113-2013) &
Tianqing Yang
Advances in Difference Equations, volume 2019, Article number: 6 (2019)
In this paper, we offer an approach about the dissipativity of neutral-type memristive neural networks (MNNs) with leakage, additive time, and distributed delays. By applying a suitable Lyapunov–Krasovskii functional (LKF), some integral inequality techniques, linear matrix inequalities (LMIs) and free-weighting matrix method, some new sufficient conditions are derived to ensure the dissipativity of the aforementioned MNNs. Furthermore, the global exponential attractive and positive invariant sets are also presented. Finally, a numerical simulation is given to illustrate the effectiveness of our results.
In recent decades, neural networks have been widely applied in many areas, such as automatic control engineering, image processing, associative memory, pattern recognition, parallel computing, and so on [1, 2]. It is therefore extremely meaningful to study neural networks. Based on the completeness of circuit theory, Chua first proposed the memristor as the fourth fundamental electrical circuit element, alongside the capacitor, inductor and resistor [3]. Subsequently, HP researchers discovered that memristors exist in nanoscale systems [4]. A memristor is a circuit element with a memory function: its resistance changes slowly with the quantity of electric charge that has passed through it when a voltage or current is supplied. The working mechanism of a memristor is similar to that of the human brain. Thus, the study of MNNs is more valuable than we have realized [5, 6].
In the real world, time delays are ubiquitous. They may cause complex dynamical behaviors such as periodic oscillations, dissipation, divergence and chaos [7, 8]. Hence, the dynamic behaviors of neural networks with time delays have received a lot of attention [9,10,11]. The existing studies on delayed neural networks can be divided into four categories, dealing with constant, time-varying, distributed, and mixed delays. While the majority of the literature concentrates on the first three, simpler, cases, mixed delays are more realistic than simple delays for MNNs [12,13,14,15,16]. So MNNs with mixed delays are worth further study.
Dissipativity, a generalization of Lyapunov stability, is a common concept in dynamical systems. It concerns the diverse dynamics of a system, not only its behavior near equilibria. Many systems are stable at their equilibrium points, but in some cases the orbits do not converge to an equilibrium point, or the system may not have any equilibrium point at all. Consequently, dissipative systems play an important role in the field of control: dissipative system theory provides a framework for the design and analysis of control systems based on energy-related considerations [17]. At present there are some studies on the dissipativity of neural networks [18,19,20], but most of the existing work focuses on synchronization [21,22,23,24]. For the dissipativity analysis of neural networks, it is essential to find globally exponentially attractive sets. Some researchers have investigated the global dissipativity of neural networks with mixed delays, giving sufficient conditions for obtaining globally exponentially attractive sets [25, 26]. To the best of our knowledge, few studies have considered the dissipativity of neutral-type memristive neural networks with mixed delays.
In this paper, we investigate the dissipativity of neutral-type memristive neural networks with mixed delays. The highlights of our work include:
We consider not only two additive time-varying delays and distributed time delays, but also time-varying leakage delays.
We obtain the dissipativity of the system by combining a suitable LKF with the reciprocally convex combination method, integral inequality techniques and LMIs, and we derive delay-dependent dissipativity criteria.
Our results are more general than those for the ordinary neural networks.
The paper is organized as follows: in Sect. 2, the preliminaries are presented; in Sect. 3, the dissipative properties of neural network models with mixed delays are analyzed; in Sect. 4, a numerical example is given to demonstrate the effectiveness of our analytical results; in Sect. 5, the work is summarized.
Neural network model and some preliminaries
\(R^{n}\) (resp., \(R^{n\times m}\)) is the n-dimensional Euclidean space (resp., the set of \(n\times m\) matrices) with entries from R; \(X>0\) (resp., \(X\geq 0\)) implies that the matrix X is a real positive-definite matrix (resp., positive semi-definite). When A and B are symmetric matrices, if \(A>B\) then \(A-B\) is a positive definite matrix. The superscript T denotes transpose of the matrix; ∗ denotes the elements below the main diagonal of a symmetric matrix; I and O are the identity and zero matrices, respectively, with appropriate dimensions; \(\operatorname{diag}\{ \ldots \}\) denotes a diagonal matrix; \(\lambda _{\max }(C)\) (resp., \(\lambda _{\min }(C)\)) denotes the maximum (resp., minimum) eigenvalue of matrix C. For any interval \(V\subseteq R\), let \(S\subseteq R^{k}\) (\(1 \leq k \leq n\)), \(C(V,S)=\{\varphi :V\rightarrow S\text{ is continuous}\}\) and \(C^{1}(V,S)=\{\varphi :V\rightarrow S \text{ is continuous differentiable}\}\); \(\operatorname{co}\{b_{1} , b_{2}\}\) represents closure of the convex hull generated by \(b_{1}\) and \(b_{2}\). For constants a, b, we set \(a\vee b = \max \{a, b\}\). Let \(L_{2}^{n}\) be the space of square integrable functions on \(R^{+}\) with values in \(R^{n}\); \(L_{2e}^{n}\) the extended \(L_{2}^{n}\) space defined by \(L_{2e}^{n}=\{f:f\text{ is a measurable function on }R^{+}\}\), \(P_{T}f\in L_{2}^{N}\), \(\forall T \in R^{+}\), where \((P_{T}f)(t)=f(t)\) if \(t \leq T\), and 0 if \(t>T\). For any functions \(x=\{x(t)\}\), \(y=\{y(t)\}\in L_{2e}^{n}\) and matrix Q, we define \(\langle x,Qy\rangle =\int _{0}^{T} x^{T}(t)Qy(t)\,dt\).
In this paper, we consider the following neutral-type memristor neural network model with leakage, as well as two additive time-varying and distributed time-varying delays:
$$ \textstyle\begin{cases} \dot{x_{i}}(t)=-c_{i}x_{i}(t-\eta (t))+\sum_{j=1}^{n}a_{ij}(x_{i}(t))f _{j}(x_{j}(t)) +\sum_{j=1}^{n}b_{ij}(x_{i}(t))f(x_{j}(t-\tau _{j1}(t) \\ \hphantom{\dot{x_{i}}(t)=}{} -\tau _{j2}(t)))+\sum_{j=1}^{n}d_{ij}(x_{i}(t))\int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds +e_{i}\dot{x}_{i}(t-h(t))+u_{i}(t), \\ y_{i}(t)=f_{i}(x_{i}(t)), \\ x_{i}(t)=\phi _{i}(t), \quad t\in (-\tau ^{*},0), \end{cases} $$
where n is the number of cells in a neural network; \(x_{i}(t)\) is the voltage of the capacitor; \(f_{i}(\cdot )\) denotes the neuron activation functions of the ith neuron at time t; \(y_{i}\) is the output of the ith neural cell; \(u_{i}(t)\in L_{\infty }\) is the external constant input of the ith neuron at time t; \(\eta (t)\) denotes the leakage delay satisfying \(0\leq \eta (t)\leq \eta \); \(\tau _{j1}(t)\) and \(\tau _{j2}(t)\) are two additive time varying delays that are assumed to satisfy the conditions \(0\leq \tau _{j1}(t)\leq \tau _{1}<\infty \), \(0\leq \tau _{j2}(t)\leq \tau _{2}<\infty \); \(\delta _{1}(t)\), \(\delta _{2}(t)\) and \(h(t)\) are the time-varying delays with \(0\leq \delta _{1}\leq \delta _{1}(t)\leq \delta _{2}(t)\leq \delta _{2}\), \(0 \leq h(t)\leq h\); η, \(\tau _{1}\), \(\tau _{2}\), \(\delta _{1}\), \(\delta _{2}\) and h are nonnegative constants; \(\tau ^{*}=\eta \vee (\delta _{2} \vee (\tau \vee h))\); \(C=\operatorname{diag}(c_{1},c_{2},\ldots,c_{n})\) is a self-feedback connection matrix; \(E=\operatorname{diag}(e_{1},e_{2},\ldots,e_{n})\) is the neutral-type parameter; \(a_{ij}(t)\), \(b_{ij}(t)\), and \(d_{ij}(t)\) represent the memristive-based weights, which are defined as follows:
$$\begin{aligned}& a_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(1)ij}}{\mathbf{C}_{i}} \times \operatorname {sign}_{ij}, \qquad b_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(2)ij}}{\mathbf{C}_{i}}\times \operatorname {sign}_{ij}, \\& d_{ij} \bigl(x_{i}(t) \bigr)=\frac{\mathbf{W}_{(3)ij}}{\mathbf{C}_{i}}\times \operatorname {sign}_{ij}, \qquad \operatorname{sign}_{ij}= \textstyle\begin{cases} 1, & i\neq j, \\ -1,& i=j. \end{cases}\displaystyle \end{aligned}$$
Here \(\mathbf{W}_{(k)ij}\) denote the memductances of memristors \(\mathbf{R}_{(k)ij}\), \(k=1,2,3\). In view of memristor property, we set
$$\begin{aligned}& a_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{a}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{a}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \quad\quad b_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{b}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{b}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \\& d_{ij} \bigl(x_{i}(t) \bigr)= \textstyle\begin{cases} \hat{d}_{ij}, & \vert x_{i}(t) \vert \leq \gamma _{i}, \\ \check{d}_{ij}, & \vert x_{i}(t) \vert >\gamma _{i}, \end{cases}\displaystyle \end{aligned}$$
where the switching jumps \(\gamma _{i}>0\), \(\hat{a}_{ij}\), \(\check{a} _{ij}\), \(\hat{b}_{ij}\), \(\check{b}_{ij}\), \(\hat{d}_{ij}\) and \(\check{d} _{ij}\) are known constants with respect to memristances.
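To make the switching rule concrete: each memristive weight is just a threshold function of the corresponding state. A minimal Python sketch (the numerical values are placeholders, not parameters from this paper):

```python
def memristive_weight(x_i, w_hat, w_check, gamma_i):
    """Return w_hat when |x_i| <= gamma_i and w_check otherwise,
    mirroring the definitions of a_ij, b_ij and d_ij above."""
    return w_hat if abs(x_i) <= gamma_i else w_check

# The weight switches as the state |x_i| crosses the threshold gamma_i = 1.0
print(memristive_weight(0.5, w_hat=0.8, w_check=0.3, gamma_i=1.0))  # 0.8
print(memristive_weight(1.5, w_hat=0.8, w_check=0.3, gamma_i=1.0))  # 0.3
```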
In recent years, the dissipativity problem of MNNs has received a lot of attention, and substantial important results on dissipativity have been obtained for MNNs. Unfortunately, the work in [27, 28] only considered the leakage delay, while that in [29, 30] considered additive time-varying delays but not distributed delays. In fact, leakage delays and multiple signal transmission delays coexist in MNNs. Because few results on the dissipativity analysis of neutral-type MNNs with multiple time delays are available in the existing literature, this paper attempts to extend our knowledge in this field by studying the dissipativity of such systems, and an example is given to demonstrate the effectiveness of our results. The obtained results thus extend the study of the dynamic characteristics of MNNs.
In many real applications, signals transmitted from one point to another may experience a few segments of networks, which can possibly induce successive delays with different properties due to the variable network transmission conditions, and when \(\tau _{1}(t)+\tau _{2}(t)\) reaches its maximum, \(\tau _{1}(t)\) and \(\tau _{2}(t)\) do not necessarily both reach their maxima at the same time. Therefore, in this paper, we will consider the two additive delay components in (2.1) separately.
Furthermore, the above systems are switching systems whose connection weights vary with their states. Although smooth analysis is suitable for studying continuous nonlinear systems, nonsmooth analysis is the appropriate tool for switching nonlinear systems. Therefore, it is necessary to introduce some notions from nonsmooth theory, such as differential inclusions and set-valued maps.
Let \(\underline{a}_{ij}=\min \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\overline{a}_{ij}=\max \{\hat{a}_{ij}, \check{a}_{ij}\}\), \(\underline{b}_{ij}=\min \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\overline{b}_{ij}=\max \{\hat{b}_{ij}, \check{b}_{ij}\}\), \(\underline{d}_{ij}=\min \{\hat{d}_{ij}, \check{d}_{ij}\}\), \(\overline{d}_{ij}=\max \{\hat{d}_{ij}, \check{d}_{ij}\}\), for \(i,j =1,2,\ldots,n\). By applying the theory of differential inclusions and set-valued maps in system (2.1) [31, 32], it follows that
$$ \textstyle\begin{cases} \dot{x_{i}}(t)\in -c_{i}x_{i}(t-\eta (t))+\sum_{j=1}^{n}\operatorname{co}[ \underline{a}_{ij}, \overline{a}_{ij}]f_{j}(x_{j}(t))+\sum_{j=1}^{n}\operatorname{co}[ \underline{b}_{ij},\overline{b}_{ij}]f(x_{j}(t \\ \hphantom{\dot{x_{i}}(t)\in}{} -\tau _{j1}(t)-\tau _{j2}(t))) +\sum_{j=1}^{n}\operatorname{co}[\underline{d}_{ij}, \overline{d}_{ij}]\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f_{j}(x_{j}(s))\,ds \\ \hphantom{\dot{x_{i}}(t)\in}{}+{e}_{i}\dot{x}_{i}(t-h(t))+u_{i}(t), \\ y_{i}(t)=f_{i}(x_{i}(t)), \\ x_{i}(t)=\phi _{i}(t), \quad t\in (-\tau ^{*},0). \end{cases} $$
Using Filippov's theorem in [33], there exist \(a_{ij}^{\prime }(t)\in \operatorname{co}[\underline{a}_{ij}, \overline{a}_{ij}]\), \(b_{ij}^{\prime }(t)\in \operatorname{co}[\underline{b}_{ij},\overline{b}_{ij}]\), \(d_{ij}^{\prime }(t)\in \operatorname{co}[\underline{d}_{ij}, \overline{d}_{ij}]\), and \(A=(a_{ij}^{\prime }(t))_{n\times n}\), \(B=(b_{ij}^{\prime }(t))_{n\times n}\), \(D=(d_{ij}^{\prime }(t))_{n\times n} \), such that
$$ \textstyle\begin{cases} \dot{x}(t)=-Cx(t-\eta (t))+Af(x(t))+Bf(x(t-\tau _{1}(t)-\tau _{2}(t))) \\ \hphantom{\dot{x}(t)=}{} +D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+E\dot{x}(t-h(t))+u(t), \\ y(t)=f(x(t)), \\ x(t)=\phi (t), \quad t\in (-\tau ^{*},0), \end{cases} $$
where \(x(t)=(x_{1}(t),x_{2}(t),\ldots,x_{n}(t))^{T}\), \(x(t-\eta (t))=(x _{1}(t-\eta (t)), x_{2}(t-\eta (t)),\ldots, x_{n}(t-\eta (t)))^{T}\), \(f(x(t))=(f_{1}(x_{1}(t)), f_{2}(x_{2}(t)),\ldots, f_{n}(x_{1}(n)))^{T}\), \(f(x(t-\tau _{1}(t)-\tau _{2}(t)))=(f_{1}(x_{1}(t- \tau _{11}-\tau _{12})), f_{2}(x_{2}(t-\tau _{21}-\tau _{22})),\ldots,f _{n}(x_{n}(t-\tau _{n1}-\tau _{n2})))^{T}\), \(\dot{x}(t-h(t))=(\dot{x} _{1}(t-h(t)), \dot{x}_{2}(t-h(t)),\ldots, \dot{x}_{n}(t-h(t)))^{T}\), \(u(t)=(u_{1}(t),u_{2}(t),\ldots,u_{n}(t))^{T} \).
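To make the structure of (2.3) concrete, the following is a crude explicit-Euler simulation sketch with constant delays. All matrices, delays and inputs are placeholders chosen by us (they are not the data of the numerical example later in the paper), and a fixed-step Euler scheme is only a rough way to integrate a neutral delay equation:

```python
import numpy as np

n = 2
C = np.diag([1.0, 1.2])
A = np.array([[0.2, -0.1], [0.1, 0.3]])
B = np.array([[-0.1, 0.2], [0.2, -0.1]])
D = 0.05 * np.eye(n)
E = 0.1 * np.eye(n)
f = np.tanh                                   # sector-bounded activation
u = lambda t: np.array([0.1, -0.1])           # bounded external input

dt = 0.01
eta, tau, h = 0.2, 0.3, 0.1                   # leakage, transmission and neutral delays
d1, d2 = 0.1, 0.2                             # distributed-delay window [t - d2, t - d1]
lag = lambda delay: round(delay / dt)         # delay expressed in steps
hist = lag(max(eta, tau, h, d2))              # history length in steps

N = 1000
x = np.zeros((hist + N + 1, n))
dx = np.zeros_like(x)
x[: hist + 1] = 0.5                           # constant initial history phi

for k in range(hist, hist + N):
    t = (k - hist) * dt
    window = f(x[k - lag(d2): k - lag(d1) + 1])      # f(x(s)) over the window
    integral = window.sum(axis=0) * dt               # crude Riemann sum of the distributed term
    rhs = (-C @ x[k - lag(eta)] + A @ f(x[k]) + B @ f(x[k - lag(tau)])
           + D @ integral + E @ dx[k - lag(h)] + u(t))
    dx[k] = rhs                                      # store x-dot for the neutral term
    x[k + 1] = x[k] + dt * rhs

print(np.linalg.norm(x[-1]))    # the trajectory stays bounded for these placeholder values
```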
To prove our main results, the following assumptions, definitions and lemmas are needed.
Assumption 1
The time-varying delays \(\tau _{1}(t)\), \(\tau _{2}(t)\) and \(\eta (t) \) satisfy \(\vert \dot{\tau }_{1}(t)\vert \leq \mu _{1}\), \(\vert \dot{\tau }_{2}(t)\vert \leq \mu _{2}\) and \(\vert \dot{\eta }(t)\vert \leq \mu _{3}\), where \(\mu _{1}\), \(\mu _{2}\) and \(\mu _{3}\) are nonnegative constants, and we denote \(\tau (t)=\tau _{1}(t)+\tau _{2}(t)\), \(\mu =\mu _{1}+\mu _{2} \) and \(\tau =\tau _{1}+\tau _{2}\).
Assumption 2
For all \(\alpha ,\beta \in R\) with \(\alpha \neq \beta \) and each \(i=1,2,\ldots,n\), the activation function \(f_{i}\) is bounded and there exist constants \(k_{i}^{-}\) and \(k_{i}^{+} \) such that
$$ k_{i}^{-}\leq \frac{f_{i}(\alpha )-f_{i}(\beta )}{\alpha -\beta } \leq k_{i}^{+}, $$
where \(F_{i}=\vert k_{i}^{-}\vert \vee \vert k_{i}^{+}\vert \), \(f=(f_{1},f_{2},\ldots,f_{n})^{T}\), and \(f_{i}(0)=0\) for every \(i\in \{1,2,\ldots,n\}\). For presentation convenience, in the following we denote
$$ K_{1}=\operatorname{diag} \bigl\{ {k_{1}^{-}k_{1}^{+},k_{2}^{-}k_{2}^{+}, \ldots,k _{n}^{-}k_{n}^{+}} \bigr\} , \quad \quad K_{2}=\operatorname{diag} \biggl\{ {\frac{k_{1}^{-}+k_{1}^{+}}{2}, \frac{k_{2}^{-}+k _{2}^{+}}{2},\ldots, \frac{k_{n}^{-}+k_{n}^{+}}{2}} \biggr\} . $$
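For a concrete instance, the standard choice \(f_{i}=\tanh \) satisfies Assumption 2 with \(k_{i}^{-}=0\) and \(k_{i}^{+}=1\). The Python sketch below builds \(K_{1}\) and \(K_{2}\) for that case and spot-checks the sector condition at random points (the dimensions and seed are illustrative only):

```python
import numpy as np

k_minus = np.array([0.0, 0.0])   # k_i^- for f_i = tanh
k_plus = np.array([1.0, 1.0])    # k_i^+

K1 = np.diag(k_minus * k_plus)              # diag(k_i^- k_i^+)
K2 = np.diag((k_minus + k_plus) / 2.0)      # diag((k_i^- + k_i^+) / 2)

rng = np.random.default_rng(0)
a, b = rng.normal(size=2), rng.normal(size=2)
slope = (np.tanh(a) - np.tanh(b)) / (a - b)          # difference quotient of f_i
assert np.all(k_minus - 1e-12 <= slope) and np.all(slope <= k_plus + 1e-12)
print(K1, K2, sep="\n")
```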
Assumption 3
\(\phi (t)\in \mathbb{C}^{1}:\mathbb{C}([-\tau ^{*},0],R^{n}) \) is the initial function with the norm
$$ \Vert \phi \Vert _{\tau ^{*}}=\sup_{s\in [-\tau ^{*},0]} \bigl\{ \bigl\vert \phi (s) \bigr\vert , \bigl\vert \dot{\phi }(s) \bigr\vert \bigr\} . $$
Definition 1
([34, 35])
Let \(x(t,0,\phi )\) be the solution of neural network (2.2) through \((0,\phi )\), \(\phi \in \mathbb{C}^{1} \). Suppose there exists a compact set \(S\subseteq R^{n}\) such that for every \(\phi \in \mathbb{C}^{1}\), there exists \(T(\phi )>0\) such that, when \(t\geq T(\phi )\), \(x(t,0,\phi )\subseteq S\). Then the neural network (2.2) is said to be a globally dissipative system, and S is called a globally attractive set. The set S is called positively invariant if for every \(\phi \in S\), it holds that \(x(t,0,\phi ) \subseteq S\) for all \(t\in R_{+}\).
Definition 2
Let S be a globally attractive set of neural network (2.2). The neural network (2.2) is said to be globally exponentially dissipative if there exist a constant \(a>0 \) and a compact set \(S^{*} \supset S\) in \(R^{n}\) such that for every \(\phi \in R^{n} \backslash S^{*} \), there exists a constant \(M(\phi )>0\) such that
$$ \inf_{\tilde{x}\in S } \bigl\{ \bigl\vert x(t,0,\phi )-\tilde{x} \bigr\vert :x\in R^{n} \backslash S^{*} \bigr\} \leq M(\phi )e^{-at},\quad t\in R_{+}. $$
Here \(x\in R^{n}\) but \(x\notin S^{*}\). Set \(S^{*}\) is called a globally exponentially attractive set.
Lemma 1
([36])
Consider a given matrix \(R>0\). Then, for all continuous functions \(\omega (\cdot ):[a,b]\rightarrow R^{n}\), such that the considered integral is well defined, one has
$$ \int _{a}^{b}\omega ^{T}(u)R\omega (u) \,du \geq \frac{1}{b-a} \biggl[ \int _{a}^{b}\omega (u)\,du \biggr] ^{T}R \biggl[ \int _{a}^{b}\omega (u)\,du \biggr]. $$
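As a numerical aside, Lemma 1 is easy to spot-check by discretizing the integrals; the Python sketch below uses an arbitrary positive definite R and a smooth ω on [0, 2] (all values are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2000
M = rng.normal(size=(n, n))
R = M @ M.T + n * np.eye(n)                       # a positive definite R

a, b = 0.0, 2.0
t = np.linspace(a, b, m)
omega = np.vstack([np.sin(3 * t), np.cos(t), t])  # omega(u), shape (n, m)
dt = (b - a) / (m - 1)

quad = np.einsum("im,ij,jm->m", omega, R, omega)  # omega(u)^T R omega(u) at each sample
lhs = quad.sum() * dt                             # int of omega^T R omega
vec = omega.sum(axis=1) * dt                      # int of omega
rhs = vec @ R @ vec / (b - a)

print(lhs >= rhs)                                 # True: Jensen's integral inequality
```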
Lemma 2
For any given matrices H, E, a scalar \(\varepsilon >0\) and F with \(F^{T} F\leq I\), the following inequality holds:
$$ HFE+(HFE)^{T}\leq \varepsilon HH^{T}+\varepsilon ^{-1}E^{T}E. $$
Lemma 3
For any constant matrix \(H\in {R}^{n\times n}\) with \(H=H^{T}\geq 0\) and two scalars \(b\geq a\geq 0\), the following inequality holds:
$$ \begin{aligned} &-\frac{(b^{2}-a^{2})}{2} \int _{-b}^{-a} \int ^{t}_{t+\theta }x^{T}(s)Hx(s)\,ds\,d \theta \\ &\quad \leq - \biggl[ \int _{-b}^{-a} \int ^{t}_{t+\theta }x(s)\,ds\,d\theta \biggr] ^{T} H \biggl[ \int _{-b}^{-a} \int ^{t}_{t+\theta }x(s)\,ds\,d\theta \biggr]. \end{aligned} $$
Lemma 4
Let the functions \(f_{1}(t), f_{2}(t),\ldots, f_{N}(t):R^{m} \rightarrow R\) have positive values in an open subset D of \(R^{m}\) and satisfy
$$ \frac{1}{\alpha _{1}}f_{1}(t)+\frac{1}{\alpha _{2}}f_{2}(t)+ \cdots +\frac{1}{ \alpha _{N}}f_{N}(t):D\rightarrow R^{n}, $$
with \(\alpha _{i}>0\) and \(\sum_{i}\alpha _{i}=1\), then the reciprocal convex combination of \(f_{i}(t)\) over the set D satisfies
$$\begin{aligned}& \forall g_{i,j}(t):R^{m}\rightarrow R^{n},\quad \quad g_{i,j}(t)\doteq g_{j,i}(t), \\& \sum_{i}\frac{1}{\alpha _{i}}f_{i}(t)\geq \sum_{i}f_{i}(t)+\sum _{i \neq j}g_{i,j}(t),\quad\quad \begin{bmatrix} f_{i}(t)&g_{i,j}(t)\\ g_{j,i}(t)&f_{j}(t) \end{bmatrix} \geq 0. \end{aligned}$$
In this section, under Assumptions 1–3 and by using the Lyapunov–Krasovskii functional method and the LMI technique, a delay-dependent dissipativity criterion for system (2.2) is derived in the following theorem.
Theorem 3.1
Under Assumptions 1–3, if there exist symmetric positive definite matrices \(P>0\), \({Q_{i}>0}\), \(V_{i} >0 \), \(U_{i}>0\) (\(i=1,2,3\)), \(R_{j}>0\), \(T_{j}>0\) (\(j=1,2,3,4,5\)), \(G_{k}>0\) (\(k=1,2,3,4\)), \(L_{1}>0\), \(L_{2}>0\), \(S_{2}>0\), \(S_{3}>0\), three \(n\times n \) diagonal matrices \(M>0\), \(\beta _{1}>0\), \(\beta _{2}>0 \), \(n\times n\) real matrix \(S_{1} \) such that the following LMIs hold:
$$ \varPhi _{k}=\varPsi -e^{-2\alpha \tau }\varUpsilon _{k}^{T} \begin{bmatrix} U_{1}&V_{1}&0&0&0&0 \\ *&U_{1}&0&0&0&0 \\ *&*&U_{2}&V_{2}&0&0 \\ *&*&*&U_{2}&0&0 \\ *&*&*&*&U_{3}&V_{3} \\ *&*&*&*&*&U_{3} \end{bmatrix} \varUpsilon _{k}< 0 \quad (k=1,2,3,4), $$
where \(\varPsi =[\psi ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)); \(\psi _{1,1}=-PM-M ^{T}P+2\alpha P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4e ^{-2\alpha \tau _{1}}T_{1}-4e^{-2\alpha \tau _{2}}T_{2}-4e^{-2\alpha \tau }T_{3} -4e^{-2\alpha \eta }T_{4}-4e^{-2\alpha h}T_{5}+\eta ^{2}L _{2}-K_{1}\beta _{1}\), \(\psi _{1,2}=-2e^{-\alpha \tau }G_{3}\), \(\psi _{1,3}=-2e ^{-\alpha \tau _{1}}G_{1}\), \(\psi _{1,4}=-2e^{-\alpha \tau _{2}}G_{2}\), \(\psi _{1,5}=PM-2e^{-2\alpha \eta }G_{4}\), \(\psi _{1,6}=e^{-2\alpha h}T _{5}\), \(\psi _{1,7}=-2e^{-2\alpha \tau }(T_{3}+2G_{3})\), \(\psi _{1,8}=-2e ^{-2\alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{1,9}=-2e^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{1,10}=-PC+S_{1}C-2e^{-2\alpha \eta }(T _{4}+2G_{4})\), \(\psi _{1,11}=PA-S_{1}A+K_{2}\beta _{1}\), \(\psi _{1,12}=PB-S _{1}B\), \(\psi _{1,13}=M^{T}PM-\alpha PM -\alpha M^{T}P\), \(\psi _{1,14}=-6e ^{-2\alpha \eta }G_{4}\), \(\psi _{1,15}=-6e^{-2\alpha \eta }T_{4}\), \(\psi _{1,16}=-6e^{-2\alpha \tau } T_{3}\), \(\psi _{1,17}= -6e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{1,18}=-6e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{1,19}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{1,20}=6e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{1,21}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{1,22}=PD-S _{1}D\), \(\psi _{1,23}=S_{1}\), \(\psi _{1,24}=PE-S_{1}E\), \(\psi _{2,2}=-e ^{-2\alpha \tau }Q_{1}-4e^{-2\alpha \tau }T_{3}\), \(\psi _{2,7}=-2e^{-2 \alpha \tau }(T_{3}+2G_{3})\), \(\psi _{2,16}=6e^{-2\alpha \tau }G_{3}\), \(\psi _{2,19}=6e^{-2\alpha \tau }T_{3}\), \(\psi _{3,3}=-e^{-2\alpha \tau _{1}}Q_{2}-4e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{3,8}=-2e^{-2 \alpha \tau _{1}}(T_{1}+2G_{1})\), \(\psi _{3,17}=6e^{-2\alpha \tau _{1}}G _{1}\), \(\psi _{3,20}=6e^{-2\alpha \tau _{1}}T_{1}\), \(\psi _{4,4}=-e^{-2 \alpha \tau _{2}}Q_{3}-4e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{4,9}=-2e ^{-2\alpha \tau _{2}}(T_{2}+2G_{2})\), \(\psi _{4,18}=6e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{4,21}=6e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{5,5}=-e ^{-2\alpha \eta }R_{2}-4e^{-2\alpha \eta }T_{4}\), \(\psi _{5,10}=-2e ^{-2\alpha \eta }(T_{4}+2G_{4})\), \(\psi _{5,13}=-M^{T}PM\), \(\psi _{5,14}=6e ^{-2\alpha \eta }T_{4}\), \(\psi _{5,15}=6e^{-2\alpha \eta }G_{4}\), \(\psi _{6,6}=-e^{-2\alpha h}T_{5}\), \(\psi _{7,7}=-(1-\mu )e^{-2\alpha \tau }R_{3}-4e^{-2\alpha \tau }(2T_{3}+G_{3})-K_{1}\beta _{2}\), \(\psi _{7,12}=K_{2}\beta _{2}\), \(\psi _{7,16}=6e^{-2\alpha \tau }(T_{3}+G _{3})\), \(\psi _{7,19}=6e^{-2\alpha \tau }(T_{3}+G_{3})\), \(\psi _{8,8}=-(1- \mu _{1})e^{-2\alpha \tau _{1}}R_{4}-4e^{-2\alpha \tau _{1}}(2T_{1}+G _{1})\), \(\psi _{8,17}=6e^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{8,20}=6e ^{-2\alpha \tau _{1}}(T_{1}+G_{1})\), \(\psi _{9,9}=-(1-\mu _{2})e^{-2 \alpha \tau _{2}}R_{5}-4e^{-2\alpha \tau _{2}}(2T_{2}+G_{2})\), \(\psi _{9,18}=6e^{-2\alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{9,21}=6e^{-2 \alpha \tau _{2}}(T_{2}+G_{2})\), \(\psi _{10,13}=M^{T}P{C}\), \(\psi _{10,10}=-(1- \mu _{3})e^{-2\alpha \eta }R_{1}-4e^{-2\alpha \eta }(2T_{4}+G_{4})\), \(\psi _{10,14}=6e^{-2\alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,15}=6e^{-2 \alpha \eta }(T_{4}+G_{4})\), \(\psi _{10,23}=-S_{2}C\), \(\psi _{10,24}=-S _{3}C\), \(\psi _{11,11}=(\delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\psi _{11,13}=-M^{T}PA\), \(\psi _{11,23}=S_{2}A\), \(\psi _{11,24}=S_{3}A\), \(\psi _{12,12}=-\beta _{2}\), \(\psi _{12,13}=-M^{T}PB\), \(\psi _{12,23}=S _{2}B\), \(\psi _{12,24}=S_{3}B\), \(\psi _{13,13}=\alpha M^{T}PM-2e^{-2 \alpha \eta }L_{2}\), \(\psi _{13,22}=-M^{T}PD\), 
\(\psi _{13,24}=-M^{T}PE\), \(\psi _{13,25}=-MP\), \(\psi _{14,14}=-12e^{-2\alpha \eta }T_{4}\), \(\psi _{14,15}=-12e^{-2\alpha \eta }G_{4}\), \(\psi _{15,15}=-12e^{-2 \alpha \eta }T_{4}\), \(\psi _{16,16}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{16,19}=-12e^{-2\alpha \tau }G_{3}\), \(\psi _{17,17}=-12e^{-2 \alpha \tau _{1}}T_{1}\), \(\psi _{17,20}=-12e^{-2\alpha \tau _{1}}G_{1}\), \(\psi _{18,18}=-12e^{-2 \alpha \tau _{2}}T_{2}\), \(\psi _{18,21}=-12e^{-2\alpha \tau _{2}}G_{2}\), \(\psi _{19,19}=-12e^{-2\alpha \tau }T_{3}\), \(\psi _{20,20}=-12e^{-2 \alpha \tau _{1}} T_{1}\), \(\psi _{21,21}=-12e^{-2\alpha \tau _{2}}T_{2}\), \(\psi _{22,22}=-e^{-2\alpha \delta _{2}}L_{1}\), \(\psi _{22,23}=S_{2}D\), \(\psi _{22,24}=S_{3}D\), \(\psi _{23,23}=\frac{\tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3}+\eta ^{2}T _{4}+h^{2}T_{5}\), \(\psi _{23,24}=S_{2}E\), \(\psi _{24,24}=S_{3}E+E^{T}S _{3}+S_{3}\), \(\psi _{25,25}=S_{2}\), \(\varUpsilon _{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), \(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}=\tau (e _{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)), then the neural network (2.2) is exponentially dissipative, and
$$\begin{aligned} S&= \biggl\{ x: \vert x \vert \leq \frac{ \vert (P-S_{1}) \vert +\sqrt{ \vert (P-S_{1}) \vert ^{2} + \lambda _{\min }{(Q_{1})}\lambda _{\max }(S_{3})}}{\lambda _{\min }{(Q_{1})}} \varGamma _{u} \biggr\} \end{aligned}$$
is a positively invariant and globally exponentially attractive set, where \(\varGamma _{u}>0\) is a bound of the external input, i.e. \(\vert u(t)\vert \leq \varGamma _{u}\) on \(R^{+}\). In addition, the exponential dissipativity rate index α enters the conditions through the matrices \(\varPhi _{k}\).
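As an illustration of how the attractive set is evaluated once the LMIs are feasible: the radius of S is a closed-form expression in \(P\), \(S_{1}\), \(Q_{1}\), \(S_{3}\) and \(\varGamma _{u}\). The Python sketch below only evaluates that expression for made-up matrices (it does not solve or check the LMIs \(\varPhi _{k}<0\)), and it reads \(\vert P-S_{1}\vert \) as the spectral norm, which is our assumption:

```python
import numpy as np

# Made-up symmetric matrices standing in for LMI solutions; not from the paper.
P = np.array([[2.0, 0.1], [0.1, 2.5]])
S1 = np.array([[0.5, 0.0], [0.0, 0.4]])
Q1 = np.array([[1.0, 0.0], [0.0, 1.2]])
S3 = np.array([[0.8, 0.0], [0.0, 0.9]])
Gamma_u = 1.0                                   # bound on the external input |u(t)|

norm_PS1 = np.linalg.norm(P - S1, 2)            # |P - S1| taken as the spectral norm
lam_min_Q1 = np.linalg.eigvalsh(Q1).min()
lam_max_S3 = np.linalg.eigvalsh(S3).max()

radius = (norm_PS1 + np.sqrt(norm_PS1 ** 2 + lam_min_Q1 * lam_max_S3)) / lam_min_Q1 * Gamma_u
print(radius)   # every trajectory eventually enters the ball |x| <= radius
```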
Proof
Consider the following Lyapunov–Krasovskii functional:
$$ V \bigl(t,x(t) \bigr)=\sum_{k=1} ^{6}V_{k}(t), $$
$$\begin{aligned} &V_{1} \bigl(t,x(t) \bigr)= \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr]^{T}P \biggl[x(t)-M \int _{t-\eta }^{t}x(s)\,ds \biggr], \\ &\begin{aligned} V_{2} \bigl(t,x(t) \bigr)&= \int _{t-\tau }^{t}e^{2\alpha (s-t)}x^{T}(s)Q_{1}x(s) \,ds + \int _{t-\tau _{1}}^{t}e^{2\alpha (s-t)}x^{T}(s)Q_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau _{2}}^{t}e^{2\alpha (s-t)}x^{T}(s)Q _{3}x(s)\,ds, \end{aligned} \\ & \begin{aligned} V_{3} \bigl(t,x(t) \bigr)&= \int _{t-\eta (t)}^{t}e^{2\alpha (s-t)}x^{T}(s)R_{1}x(s) \,ds + \int _{t-\eta }^{t}e^{2\alpha (s-t)}x^{T}(s)R_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau (t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R_{3}x(s)\,ds + \int _{t-\tau _{1}(t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R_{4}x(s)\,ds \\ &\quad{} + \int _{t-\tau _{2}(t)}^{t}e^{2\alpha (s-t)}x(s)^{T} R _{5}x(s)\,ds, \end{aligned} \\ & V_{4} \bigl(t,x(t) \bigr)=\tau _{1} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}e^{2 \alpha (s-t)}\dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)} \dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +\tau \int _{-\tau }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x} ^{T}(s)T_{3}\dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x} ^{T}(s)T_{4}\dot{x}(s)\,ds\,d\theta \\ & \hphantom{V_{4} \bigl(t,x(t) \bigr)} \quad{} +h \int _{-h}^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)T _{5}\dot{x}(s)\,ds\,d\theta , \\ & \begin{aligned} V_{5} \bigl(t,x(t) \bigr)&=(\delta _{2}- \delta _{1}) \int _{-\delta _{2}}^{-\delta _{1}} \int _{t+\theta }^{t}e^{2\alpha (s-t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds\,d \theta \\ & \quad{} +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}e^{2\alpha (s-t)}x^{T}(s)L _{2}x(s)\,ds\,d\theta , \end{aligned} \\ & \begin{aligned} V_{6} \bigl(t,x(t) \bigr)&=\frac{\tau _{1}^{2}}{2} \int _{-\tau _{1}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\lambda \,d\theta \\ & \quad{} +\frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d \lambda \,d \theta \\ & \quad{} +\frac{\tau ^{2}}{2} \int _{-\tau }^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}e^{2\alpha (s-t)}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d \lambda \,d \theta . \end{aligned} \end{aligned}$$
Calculating the derivative of \(V(t,x(t))\) along the trajectory of neural network (2.2), it can be deduced that
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{1} \bigl(t,x(t) \bigr)&=2 \biggl[x^{T}(t)- \int _{t-\eta }^{t}x^{T}(s)\,ds \times M \biggr]P \biggl[-Cx \bigl(t-\eta (t) \bigr)+Af \bigl(x(t) \bigr) \\ &\quad{} +Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)+D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds+E\dot{x} \bigl(t-h(t) \bigr) \\ &\quad{} +u(t)-Mx(t)+Mx(t-\eta ) \biggr], \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{2} \bigl(t,x(t) \bigr)&\leq x(t)^{T}[ Q_{1}+Q_{2}+Q_{3}]x(t) -e^{-2 \alpha \tau }x(t-\tau )^{T}Q_{1}x(t-\tau ) \\ &\quad{} -e^{-2\alpha \tau _{1}}x(t-\tau _{1})^{T}Q_{2}x(t- \tau _{1}) -e^{-2\alpha \tau _{2}}x(t-\tau _{2})Q_{3}x(t- \tau _{2}) \\ &\quad{} -2\alpha V_{2} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{3} \bigl(t,x(t) \bigr)&\leq x^{T}(t)[R_{1}+R_{2}+R_{3}+R_{4}+R_{5}]x(t) -e^{-2\alpha \eta }x^{T}(t-\eta )R_{2}x(t-\eta ) \\ &\quad{} -(1-\mu _{3})e^{-2\alpha \eta }x^{T} \bigl(t-\eta (t) \bigr)R _{1}x \bigl(t-\eta (t) \bigr) \\ &\quad{} -(1-\mu )e^{-2\alpha \tau }x^{T} \bigl(t-\tau (t) \bigr)R _{3}x \bigl(t-\tau (t) \bigr) \\ &\quad{} -(1-\mu _{1})e^{-2\alpha \tau _{1}(t)}x^{T} \bigl(t- \tau _{1}(t) \bigr)R_{4}x(t-\tau _{1}) \\ &\quad{}-(1-\mu _{2})e^{-2\alpha \tau _{2}}x^{T} \bigl(t-\tau _{2}(t) \bigr)R_{5}x \bigl(t-\tau _{2}(t) \bigr) -2 \alpha V_{3} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \dot{V}_{4} \bigl(t,x(t) \bigr)\leq \tau _{1}^{2}\dot{x}^{T}(t)T_{1}\dot{x}(t)- \tau _{1}e^{-2\alpha \tau _{1}} \int _{t-\tau _{1}}^{t}\dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau _{2}^{2}\dot{x}^{T}(t)T_{2} \dot{x}(t)-\tau _{2}e^{-2\alpha \tau _{2}} \int _{t-\tau _{2}}^{t}\dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\tau ^{2}\dot{x}^{T}(t)T_{3} \dot{x}(t)-\tau e^{-2\alpha \tau } \int _{t-\tau }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +\eta ^{2}\dot{x}^{T}(t)T_{4} \dot{x}(t)-\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds \\ & \hphantom{\dot{V}_{4} \bigl(t,x(t) \bigr)}\quad{} +h^{2}\dot{x}^{T}(t)T_{5} \dot{x}(t)-he^{-2\alpha h} \int _{t-h}^{t} \dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds-2\alpha V_{4} \bigl(t,x(t) \bigr), \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{5} \bigl(t,x(t) \bigr)&\leq (\delta _{2}-\delta _{1})^{2}f^{T} \bigl(x(t) \bigr)L_{1}f \bigl(x(t) \bigr)+ \eta ^{2}x^{T}(t)L_{2}x(t) \\ &\quad{} -e^{2\alpha \delta _{2}} \bigl(\delta _{2}(t)-\delta _{1}(t) \bigr) \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds \\ &\quad{} -\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}x^{T}(s)L_{2}x(s) \,ds-2\alpha V_{5} \bigl(t,x(t) \bigr), \end{aligned} \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \dot{V}_{6} \bigl(t,x(t) \bigr)&\leq \frac{\tau _{1}^{4}}{4}\dot{x}(t)U_{1} \dot{x}(t)+\frac{\tau _{2}^{4}}{4} \dot{x}(t)U_{2}\dot{x}(t) +\frac{\tau ^{4}}{4}\dot{x}(t)U_{3} \dot{x}(t) \\ &\quad{} -\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds \\ &\quad{} -\frac{\tau _{2}^{2}}{2}e^{-2\alpha \tau _{2}} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds \\ &\quad{} -\frac{\tau ^{2}}{2}e^{-2\alpha \tau } \int _{-\tau }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds-2\alpha V_{6} \bigl(t,x(t) \bigr). \end{aligned} \end{aligned}$$
For any matrix \(G_{1}\) with \(\begin{bmatrix} T_{1}&G_{1}\\ *&T_{1} \end{bmatrix}\geq 0\), by using Lemmas 1 and 4, we can obtain the following:
$$\begin{aligned} &-\tau _{1}e ^{-2\alpha \tau _{1}} \int _{t-\tau _{1}}^{t}\dot{x}^{T}(s)T _{1}\dot{x}(s)\,ds \\ &\quad =-\tau _{1}e^{-2\alpha \tau _{1}} \biggl[ \int _{t-\tau _{1}}^{t-\tau _{1}(t)} \dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds + \int _{t-\tau _{1}(t)}^{t}\dot{x}^{T}(s)T _{1}\dot{x}(s)\,ds \biggr] \\ &\quad \leq e^{-2\alpha \tau _{1}} \biggl\{ -\frac{\tau _{1}}{\tau _{1}-\tau _{1}(t)} \bigl[ \vartheta _{1}^{T}(t)T_{1}\vartheta _{1} +3 \vartheta _{2}^{T}(t)T_{1} \vartheta _{2}(t) \bigr] \\ &\quad\quad{} -\frac{\tau _{1}}{\tau _{1}(t)} \bigl[\vartheta _{3}^{T}(t)T_{1} \vartheta _{3}(t)+3\vartheta _{4}^{T}(t)T_{1} \vartheta _{4}(t) \bigr] \biggr\} \\ &\quad \leq e^{-2\alpha \tau _{1}} \bigl[-\vartheta _{1}^{T}(t)T_{1} \vartheta _{1}(t)-3 \vartheta _{2}^{T}(t)T_{1} \vartheta _{2}(t)- \vartheta _{3}^{T}(t)T_{1} \vartheta _{3}(t) \\ &\quad \quad{} -3\vartheta _{4}^{T}(t)T_{1} \vartheta _{4}(t)-2\vartheta _{1} ^{T}(t)G_{1} \vartheta _{3}(t)-6\vartheta _{2}^{T}(t)G_{1} \vartheta _{4}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& \vartheta _{1}(t)=x \bigl(t-\tau _{1}(t) \bigr)-x(t-\tau _{1}); \\& \vartheta _{2}(t)=x \bigl(t- \tau _{1}(t) \bigr)+x(t-\tau _{1})-\frac{2}{\tau _{1}-\tau _{1}(t)} \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds; \\& \vartheta _{3}(t)=x(t)-x \bigl(t-\tau _{1}(t) \bigr); \qquad \vartheta _{4}(t)=x(t)+x \bigl(t-\tau _{1}(t) \bigr)- \frac{2}{\tau _{1}(t)} \int _{t-\tau _{1}(t)}^{t}x(s)\,ds. \end{aligned}$$
Similarly, it holds that
$$\begin{aligned}& -\tau _{2} e^{-2\alpha \tau _{2}} \int _{t-\tau _{2}}^{t}\dot{x}^{T}(s)T _{2}\dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \tau _{2}} \bigl[-\vartheta _{5}^{T}(t)T_{2} \vartheta _{5}(t)-3 \vartheta _{6}^{T}(t)T_{2} \vartheta _{6}(t)- \vartheta _{7}^{T}(t)T_{2} \vartheta _{7}(t)-3\vartheta _{8}^{T}(t)T_{2} \vartheta _{8}(t) \\& \quad \quad{} -2\vartheta _{5}^{T}(t)G_{2} \vartheta _{7}(t)-6\vartheta _{6} ^{T}(t)G_{2} \vartheta _{8}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& -\tau e^{-2\alpha \tau } \int _{t-\tau }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \tau } \bigl[-\vartheta _{9}^{T}(t)T_{3} \vartheta _{9}(t)-3 \vartheta _{10}^{T}(t)T_{3} \vartheta _{10}(t)- \vartheta _{11}^{T}(t)T _{3}\vartheta _{11}(t)-3\vartheta _{12}^{T}(t)T_{3} \vartheta _{12}(t) \\& \quad \quad{} -2\vartheta _{9}^{T}(t)G_{3} \vartheta _{11}(t)-6\vartheta _{10} ^{T}(t)G_{3} \vartheta _{12}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& -\eta e^{-2\alpha \eta } \int _{t-\eta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds \\& \quad \leq e^{-2\alpha \eta } \bigl[-\vartheta _{13}^{T}(t)T_{4} \vartheta _{13}(t)-3 \vartheta _{14}^{T}(t)T_{4} \vartheta _{14}(t)- \vartheta _{15}^{T}(t)T _{4}\vartheta _{15}(t)-3\vartheta _{16}^{T}(t)T_{4} \vartheta _{16}(t) \\& \quad \quad{} -2\vartheta _{13}^{T}(t)G_{4} \vartheta _{15}(t)-6\vartheta _{14} ^{T}(t)G_{4} \vartheta _{16}(t) \bigr], \end{aligned}$$
$$\begin{aligned}& \vartheta _{5}(t)=x \bigl(t-\tau _{2}(t) \bigr)-x(t-\tau _{2}); \\& \vartheta _{6}(t)=x \bigl(t- \tau _{2}(t) \bigr)+x(t-\tau _{2})-\frac{2}{\tau _{2}-\tau _{2}(t)} \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds; \\& \vartheta _{7}(t)=x(t)-x \bigl(t-\tau _{2}(t) \bigr); \quad\quad \vartheta _{8}(t)=x(t)+x \bigl(t-\tau _{2}(t) \bigr)- \frac{2}{\tau _{2}(t)} \int _{t-\tau _{2}(t)} ^{t}x(s)\,ds; \\& \vartheta _{9}(t)=x \bigl(t-\tau (t) \bigr)-x(t-\tau ); \\& \vartheta _{10}(t)=x \bigl(t- \tau (t) \bigr)+x(t-\tau )- \frac{2}{\tau -\tau (t)} \int _{t-\tau }^{t-\tau (t)}x(s)\,ds; \quad\quad \vartheta _{11}(t)=x(t)-x \bigl(t-\tau (t) \bigr); \\& \vartheta _{12}(t)=x(t)+x \bigl(t- \tau (t) \bigr)-\frac{2}{\tau (t)} \int _{t-\tau (t)}^{t}x(s)\,ds; \quad\quad \vartheta _{13}(t)=x \bigl(t-\eta (t) \bigr)-x(t-\eta ); \\& \vartheta _{14}(t)=x \bigl(t- \eta (t) \bigr)+x(t-\eta )- \frac{2}{\eta -\eta (t)} \int _{t-\eta }^{t-\eta (t)}x(s)\,ds; \quad\quad \vartheta _{15}(t)=x(t)-x \bigl(t-\eta (t) \bigr); \\& \vartheta _{16}(t)=x(t)+x \bigl(t- \eta (t) \bigr)-\frac{2}{\eta (t)} \int _{t-\eta (t)}^{t}x(s)\,ds. \end{aligned}$$
Applying Lemma 1 and the Newton–Leibniz formula, we have
$$\begin{aligned} &-he^{-2\alpha h} \int _{t-h}^{t}\dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds \\ &\quad \leq -e^{-2\alpha h} \biggl[ \int _{t-h}^{t}\dot{x}(s)\,ds \biggr] ^{T}T_{5} \biggl[ \int _{t-h}^{t}\dot{x}(s)\,ds \biggr] \\ &\quad \leq \bigl[x(t)-x(t-h) \bigr]^{T} \bigl[-e^{-2\alpha h}T_{5} \bigr] \bigl[x(t)-x(t-h) \bigr]. \end{aligned}$$
$$\begin{aligned} &{-e^{2\alpha \delta _{2}} \bigl(\delta _{2}(t)-\delta _{1}(t) \bigr) \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds} \\ &\quad \leq -e^{2\alpha \delta _{2}} \biggl[ \int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \biggr]^{T} L_{1} \biggl[ \int _{t-\delta _{2}(t)} ^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \biggr], \end{aligned}$$
$$\begin{aligned} &{-e^{2\alpha \eta }\eta \int _{t-\eta }^{t}(x(s)^{T}L_{2}x(s) \,ds} \\ &\quad \leq -e^{2\alpha \eta } \biggl[ \int _{t-\eta }^{t}x(s)\,ds \biggr] ^{T}L_{2} \biggl[ \int _{t-\eta }^{t}x(s)\,ds \biggr]. \end{aligned}$$
The second term of Eq. (3.8) can be written as
$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad =-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{- \tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d \theta \\ &\quad \quad{} -\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta . \end{aligned}$$
By Lemma 3, we obtain
$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2}e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad \leq -\frac{\tau _{1}^{2}}{\tau _{1}^{2}-\tau _{1}^{2}(t)}e^{-2\alpha \tau _{1}} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}(s)\,ds\,d\theta \biggr]^{T}U_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t} \dot{x}(s)\,ds\,d\theta \biggr] \\ &\quad \quad{} -\frac{\tau _{1}^{2}}{\tau _{1}^{2}(t)}e^{-2\alpha \tau _{1}} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+\theta }^{t} \dot{x}(s)\,ds \,d\theta \biggr]^{T}U_{1} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+ \theta }^{t} \dot{x}(s)\,ds \,d\theta \biggr]. \end{aligned}$$
Applying Lemma 4, for any matrix \(V_{1}\) with \(\begin{bmatrix} U_{1}&V_{1}\\ *&U_{1} \end{bmatrix}\geq 0\), the above inequality becomes:
$$\begin{aligned} &-\frac{\tau _{1}^{2}}{2} e^{-2\alpha \tau _{1}} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad \leq e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} U_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d \theta \biggr] \biggr\} \\ &\quad \quad{} +e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} 2V_{1} \biggl[ \int _{-\tau _{1}}^{-\tau _{1}(t)} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr] \biggr\} \\ &\quad\quad{} +e^{-2\alpha \tau _{1}} \biggl\{ - \biggl[ \int _{-\tau _{1}(t)} ^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr]^{T} U_{1} \biggl[ \int _{-\tau _{1}(t)}^{0} \int _{t+\theta }^{t}\dot{x}(s)\,ds\,d\theta \biggr] \biggr\} \\ &\quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{1}^{T}U_{1} \varsigma _{1}-2\varsigma _{1}^{T}V_{1} \varsigma _{2}-\varsigma _{2}^{T} U _{1} \varsigma _{2} \bigr) \\ &\quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{1}^{T}(t)U_{1}\varGamma _{1}(t)-2 \varGamma _{2}^{T}(t)V_{1}\varGamma _{1}(t)-\varGamma _{2}^{T}(t)U_{1} \varGamma _{2}(t) \bigr] \xi (t), \end{aligned}$$
$$\begin{aligned} &\varsigma _{1}= \bigl(\tau _{1}-\tau _{1}(t) \bigr)x(t)- \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds;\quad\quad \varsigma _{2}=\tau _{1}(t)x(t)- \int _{t-\tau _{1}(t)}^{t}x(s)\,ds; \\ &\varGamma _{1}(t)= \bigl(\tau _{1}-\tau _{1}(t) \bigr) (e_{1}-e_{20});\quad \quad \varGamma _{2}(t)=\tau _{1}(t) (e_{1}-e_{17}). \end{aligned}$$
Similarly, by Lemmas 3 and 4, we have
$$\begin{aligned}& -\frac{\tau _{2}^{2}}{2} e^{-2\alpha \tau _{2}} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d\theta \\& \quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{3}^{T}U_{2} \varsigma _{3}-2 \varsigma _{3}^{T}V_{2} \varsigma _{4}-\varsigma _{4}^{T} U_{2} \varsigma _{4} \bigr) \\& \quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{3}^{T}(t)U_{2}\varGamma _{3}(t)-2 \varGamma _{4}^{T}(t)V_{2}\varGamma _{3}(t)-\varGamma _{4}^{T}(t)U_{2} \varGamma _{4}(t) \bigr] \xi (t), \end{aligned}$$
$$\begin{aligned}& -\frac{\tau ^{2}}{2} e^{-2\alpha \tau } \int _{-\tau }^{0} \int _{t+ \theta }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d\theta \\& \quad \leq e^{-2\alpha \tau } \bigl(-\varsigma _{5}^{T}U_{3} \varsigma _{5}-2 \varsigma _{5}^{T}V_{3} \varsigma _{6}-\varsigma _{6}^{T} U_{3} \varsigma _{6} \bigr) \\& \quad =\xi ^{T}(t)e^{-2\alpha \tau } \bigl[-\varGamma _{5}^{T}(t)U_{3}\varGamma _{5}(t)-2 \varGamma _{6}^{T}(t)V_{3}\varGamma _{5}(t)-\varGamma _{6}^{T}(t)U_{3} \varGamma _{6}(t) \bigr] \xi (t), \end{aligned}$$
$$\begin{aligned} &\varsigma _{3}= \bigl(\tau _{2}-\tau _{2}(t) \bigr)x(t)- \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds;\quad \quad \varsigma _{4}=\tau _{2}(t)x(t)- \int _{t-\tau _{2}(t)}^{t}x(s)\,ds; \\ &\varGamma _{3}(t)= \bigl(\tau _{2}-\tau _{2}(t) \bigr) (e_{1}-e_{21});\quad \quad \varGamma _{4}(t)=\tau _{2}(t) (e_{1}-e_{18}); \\ &\varsigma _{5}= \bigl(\tau -\tau (t) \bigr)x(t)- \int _{t-\tau }^{t-\tau (t)}x(s)\,ds; \quad \quad \varsigma _{6}=\tau (t)x(t)- \int _{t-\tau (t)}^{t}x(s)\,ds; \\ &\varGamma _{5}(t)= \bigl(\tau -\tau (t) \bigr) (e_{1}-e_{19}); \quad \quad \varGamma _{6}(t)=\tau (t) (e_{1}-e_{16}). \end{aligned}$$
By using Assumption 2, we can obtain the following:
$$ \bigl[f_{i} \bigl(x_{i}(t) \bigr)-k_{i}^{-}x_{i}(t) \bigr] \bigl[f_{i} \bigl(x_{i}(t) \bigr)-k_{i}^{+}x_{i}(t) \bigr] \leq 0 \quad (i=1,2,\ldots,n), $$
which can be compactly written as
$$\begin{aligned}& \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}&-K_{2}\\ *&I \end{bmatrix} \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} \leq 0, \\& \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}&-K_{2}\\ *&I \end{bmatrix} \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} \leq 0. \end{aligned}$$
Then for any positive diagonal matrices \(\beta _{1}=\operatorname{diag}(\beta _{1s}, \beta _{2s},\ldots,\beta _{ns})\) and \(\beta _{2}=\operatorname{diag}(\tilde{\beta }_{1s},\tilde{\beta }_{2s},\ldots,\tilde{\beta }_{ns})\), the following inequalities hold:
$$\begin{aligned}& \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}\beta _{1}&-K_{2}\beta _{1}\\ *&\beta _{1} \end{bmatrix} \begin{bmatrix} x(t)\\ f(x(t)) \end{bmatrix} \leq 0, \end{aligned}$$
$$\begin{aligned}& \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} ^{T} \begin{bmatrix} K_{1}\beta _{2}&-K_{2}\beta _{2}\\ *&\beta _{2} \end{bmatrix} \begin{bmatrix} x(t-\tau _{1}(t)-\tau _{2}(t))\\ f(x(t-\tau _{1}(t)-\tau _{2}(t))) \end{bmatrix} \leq 0. \end{aligned}$$
Note that
$$ \begin{aligned} &\dot{x}(t) +Cx \bigl(t-\eta (t) \bigr)-Af \bigl(x(t) \bigr)-Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr) \\ &\quad{} -D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds-E\dot{x} \bigl(t-h(t) \bigr)-u(t)=0. \end{aligned} $$
For any appropriately dimensioned matrix \(S_{1}\), the following is satisfied:
$$\begin{aligned}& 2x ^{T}(t)S_{1}\dot{x}(t)+2x^{T}(t)S_{1}Cx \bigl(t-\eta (t) \bigr)-2x^{T}(t)S_{1}Af \bigl(x(t) \bigr) \\& \quad{} -2x^{T}(t)S_{1}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)-2x^{T}(t)S_{1}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \\& \quad{} -2x^{T}(t)S_{1}E\dot{x} \bigl(t-h(t) \bigr)-2x^{T}(t)S_{1}u(t)=0. \end{aligned}$$
Similarly, we have
$$\begin{aligned} &2 \dot{x}^{T}(t)S_{2}\dot{x}(t)+2\dot{x}^{T}(t)S_{2}Cx \bigl(t-\eta (t) \bigr)-2 \dot{x}^{T}(t)S_{2}Af \bigl(x(t) \bigr) \\ &\quad{} -2\dot{x}^{T}(t)S_{2}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr)-2\dot{x}^{T}(t)S _{2}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds \\ &\quad{} -2\dot{x}^{T}(t)S_{2}E\dot{x} \bigl(t-h(t) \bigr)-2 \dot{x}^{T}(t)S_{2}u(t)=0, \end{aligned}$$
$$\begin{aligned} &2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}\dot{x}(t)+2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Cx \bigl(t- \eta (t) \bigr)-2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Af \bigl(x(t) \bigr) \\ &\quad{} -2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}Bf \bigl(x \bigl(t-\tau _{1}(t)-\tau _{2}(t) \bigr) \bigr) -2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}E\dot{x} \bigl(t-h(t) \bigr) \\ &\quad{} -2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}D \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds -2 \dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}u(t)=0. \end{aligned}$$
In addition, it follows from Lemma 2 that for any matrices \(H> 0\), \(N> 0\),
$$\begin{aligned} &2\dot{x}^{T}(t)S_{2}u(t)\leq \dot{x}^{T}(t)H \dot{x}(t)+u^{T}(t)S_{2}H ^{-1}S_{2}u(t), \end{aligned}$$
$$\begin{aligned} &2\dot{x}^{T} \bigl(t-h(t) \bigr)S_{3}u(t) \leq \dot{x}^{T} \bigl(t-h(t) \bigr)N \dot{x} \bigl(t-h(t) \bigr)+u^{T}(t)S_{3}N^{-1}S_{3}u(t). \end{aligned}$$
From Eqs. (3.2)–(3.27), if we let \(H=S_{2}\), \(N=S _{3}\), we can derive that
$$\begin{aligned} &\dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr) \\ & \quad \leq {-x^{T}(t)Q_{1}x(t)}+x^{T}(t)[2P-2S_{1}]u(t)+u^{T}(t)S_{3}u(t)+ \xi ^{T}(t)\varPhi \xi (t), \end{aligned}$$
$$\begin{aligned}& \xi (t)= \biggl[ x(t), x(t-\tau ), x(t-\tau _{1}), x(t-\tau _{2}), x(t-\eta ), x(t-h), x \bigl(t-\tau (t) \bigr), \\& \hphantom{\xi (t)=}{}x \bigl(t-\tau _{1}(t) \bigr),x \bigl(t-\tau _{2}(t) \bigr), x \bigl(t-\eta (t) \bigr), f \bigl(x(t) \bigr), f \bigl(x \bigl(t- \tau (t) \bigr) \bigr), \\& \hphantom{\xi (t)=}{} \int _{t-\eta }^{t}x(s)\,ds, \frac{1}{\eta -\eta (t)} \int _{t-\eta } ^{t-\eta (t)}x(s)\,ds, \frac{1}{\eta (t)} \int _{t-\eta (t)}^{t}x(s)\,ds, \\& \hphantom{\xi (t)=}{} \frac{1}{\tau (t)} \int _{t-\tau (t)}^{t}x(s)\,ds, \frac{1}{\tau _{1}(t)} \int _{t-\tau _{1}(t)}^{t}x(s)\,ds, \frac{1}{\tau _{2}(t)} \int _{t-\tau _{2}(t)}^{t}x(s)\,ds, \\& \hphantom{\xi (t)=}{}\frac{1}{\tau -\tau (t)} \int _{t-\tau }^{t-\tau (t)}x(s)\,ds, \frac{1}{ \tau _{1}-\tau _{1}(t)} \int _{t-\tau _{1}}^{t-\tau _{1}(t)}x(s)\,ds, \\& \hphantom{\xi (t)=}{}\frac{1}{\tau _{2}-\tau _{2}(t)} \int _{t-\tau _{2}}^{t-\tau _{2}(t)}x(s)\,ds, \int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f \bigl(x(s) \bigr)\,ds, \dot{x}(t), \dot{x} \bigl(t-h(t) \bigr), u(t) \biggr]^{T}, \\& \begin{aligned} \varPhi ={} &\varPsi -e^{-2\alpha \tau } \bigl[-\varGamma _{1}^{T}(t)U_{1}\varGamma _{1}(t)-2 \varGamma _{2}^{T}(t)V_{1}\varGamma _{1}(t)-\varGamma _{2}^{T}(t)U_{1} \varGamma _{2}(t) \\ &{}-\varGamma _{3}^{T}(t)U_{2}\varGamma _{3}(t)-2\varGamma _{4}^{T}(t)V_{2} \varGamma _{3}(t)-\varGamma _{4}^{T}(t)U_{2} \varGamma _{4}(t) \\ &{}-\varGamma _{5}^{T}(t)U_{3}\varGamma _{5}(t)-2\varGamma _{6}^{T}(t)V_{3} \varGamma _{5}(t)-\varGamma _{6}^{T}(t)U_{3} \varGamma _{6}(t) \bigr]. \end{aligned} \end{aligned}$$
Evaluating \(\varPhi \) at the four vertices \(\tau _{1}(t)\in \{0,\tau _{1}\}\) and \(\tau _{2}(t)\in \{0,\tau _{2}\}\), we get
$$ \textstyle\begin{cases} \varPhi _{1} = \varPhi (0,0), \\ \varPhi _{2} = \varPhi (0,\tau _{2}), \\ \varPhi _{3} = \varPhi (\tau _{1},0), \\ \varPhi _{4} = \varPhi (\tau _{1},\tau _{2}). \end{cases} $$
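Checking \(\varPhi \) only at these four vertices suffices because, for each fixed \(\xi \), the quadratic form \(\xi ^{T}\varPhi \xi \) depends on \(\tau _{1}(t)\) and \(\tau _{2}(t)\) in a convex way (this is what the \(U_{i}\), \(V_{i}\) block conditions are designed to ensure), so its maximum over the delay rectangle is attained at a vertex. The following is a minimal numerical sketch of this vertex argument, using small hypothetical placeholder matrices rather than the actual \(\varPhi \):

```python
# A minimal sketch (illustrative placeholders, not the actual Phi of Theorem 3.1) of the
# vertex argument: Phi(a, b) = Phi0 + a^2*M1 + b^2*M2 with M1, M2 >= 0 is convex in (a, b),
# so xi^T Phi(a, b) xi attains its maximum over [0, tau1] x [0, tau2] at a vertex.
# Hence negative definiteness at the four vertices implies it for all intermediate delays.
import numpy as np

tau1, tau2 = 0.3, 0.6
Phi0 = -np.eye(3)                       # hypothetical constant part
M1 = np.diag([1.0, 0.5, 0.2])           # positive semidefinite placeholders
M2 = np.diag([0.3, 0.8, 0.1])
Phi = lambda a, b: Phi0 + a**2 * M1 + b**2 * M2

negdef = lambda X: np.all(np.linalg.eigvalsh(X) < 0)
vertices_ok = all(negdef(Phi(a, b)) for a in (0.0, tau1) for b in (0.0, tau2))

rng = np.random.default_rng(0)
interior_ok = all(negdef(Phi(rng.uniform(0, tau1), rng.uniform(0, tau2)))
                  for _ in range(1000))
print(vertices_ok, interior_ok)         # expected: True True
```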
From Eq. (3.2) it is easy to deduce that
$$\begin{aligned} \lambda _{1} \bigl\vert x(t) \bigr\vert ^{2}\leq V \bigl(t,x(t) \bigr)\leq \lambda _{2} \bigl\Vert x(t) \bigr\Vert ^{2}, \end{aligned}$$
$$\begin{aligned} \bigl\Vert x(t) \bigr\Vert _{\tau ^{*}}=\sup_{\theta \in [-\tau ^{*},0]} \bigl\{ \bigl\vert x(t+ \theta ) \bigr\vert , \bigl\vert \dot{x}(t+\theta ) \bigr\vert \bigr\} \end{aligned}$$
$$\begin{aligned} &\lambda _{1}=\lambda _{\min }(P), \\ & \begin{aligned} \lambda _{2}&=\lambda _{\max }(P)+\tau \lambda _{\max }(Q_{1}) +\tau _{1} \lambda _{\max }(Q_{2})+\tau _{2}\lambda _{\max }(Q_{3}) \\ &\quad{} +\eta \lambda _{\max }(R_{1})+\eta \lambda _{\max }(R_{2})+\tau \lambda _{\max }(R_{3})+ \tau _{1}\lambda _{\max }(R_{4}) \\ &\quad {}+\tau _{2}\lambda _{\max }(R_{5})+\tau _{1}^{2}\lambda _{\max }(T_{1}) +\tau _{2}^{2}\lambda _{\max }(T_{2})+\tau ^{2}\lambda _{\max }(T_{3}) \\ &\quad {}+\eta ^{2}\lambda _{\max }(T_{4})+h^{2} \lambda _{\max }(T_{5})+\frac{ \tau _{1}^{3}}{2}\lambda _{\max }(U_{1}) +\frac{\tau _{2}^{3}}{2}\lambda _{\max }(U_{2}) \\ &\quad {}+\frac{\tau ^{3}}{2}\lambda _{\max }(U_{3})+\eta ^{2}\lambda _{\max }(L _{2}) +\max _{j\in \{1,2,\ldots,n\}}F_{j}(\delta _{2}-\delta _{1})^{2}\lambda _{\max }(L_{1}). \end{aligned} \end{aligned}$$
Then according to the LMI (3.1) and Eq. (3.29), we have
$$\begin{aligned}& \dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr) \\& \quad \leq {-x^{T}(t)Q_{1}x(t)}+x^{T}(t)[2P-2S_{1}]u(t)+u^{T}(t)S_{3}u(t) \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl\vert x(t) \bigr\vert ^{2}+2 \bigl\vert x(t) \bigr\vert \cdot \bigl\vert (P-S_{1}) \bigr\vert \cdot \bigl\vert u(t) \bigr\vert +\lambda _{\max }(S_{3}) \bigl\vert u(t) \bigr\vert ^{2} \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl\vert x(t) \bigr\vert ^{2}+2 \bigl\vert x(t) \bigr\vert \cdot \bigl\vert (P-S_{1}) \bigr\vert \cdot \varGamma _{u}+\lambda _{\max }(S_{3})\varGamma _{u}^{2} \\& \quad \leq {-\lambda _{\min }(Q_{1})} \bigl( \bigl\vert x(t) \bigr\vert -\phi _{1} \bigr) \bigl( \bigl\vert x(t) \bigr\vert - \phi _{2} \bigr), \end{aligned}$$
$$\begin{aligned} &\phi _{1}=\frac{ \vert (P-S_{1}) \vert +\sqrt{ \vert (P-S_{1}) \vert ^{2}+{\lambda _{\min }(Q _{1})}\lambda _{\max }(S_{3})}}{{\lambda _{\min }(Q_{1})}}\varGamma _{u}, \\ &\phi _{2}=\frac{ \vert (P-S_{1}) \vert -\sqrt{ \vert (P-S_{1}) \vert ^{2}+{\lambda _{\min }(Q _{1})}\lambda _{\max }(S_{3})}}{{\lambda _{\min }(Q_{1})}}\varGamma _{u}. \end{aligned}$$
Note that \(\phi _{2}\leq 0\) and \(\phi _{2}=0\) if and only if external input \(u=0\). Hence, one may deduce that when \(\vert x(t)\vert >\phi _{1}\), i.e., \(x\notin S\), it holds that
$$\begin{aligned} &\dot{V} \bigl(t,x(t) \bigr)+2\alpha V \bigl(t,x(t) \bigr)\leq 0, \quad t\in R_{+},\quad\quad {V} \bigl(t,x(t) \bigr) \leq {V}(0,\phi )e^{-2\alpha t}, \quad t\in R_{+}, \\ &\lambda _{1} \bigl\vert x(t,0,\phi ) \bigr\vert ^{2} \leq V \bigl(t,x(t) \bigr)\leq V(0,\phi )e^{-2 \alpha t}\leq \lambda _{2}e^{-2\alpha t} \Vert \phi \Vert _{\tau *}^{2}. \end{aligned}$$
Hence when \(x\notin S \), we finally obtain that
$$\begin{aligned} & \bigl\vert x(t,0,\phi ) \bigr\vert \leq \sqrt{\frac{\lambda _{2}}{\lambda _{1}}} \Vert \phi \Vert _{\tau *}e^{-\alpha t}, \quad t\in R_{+}. \end{aligned}$$
Note that \(S\) is a ball; when \({x\notin S}\), setting \(M=\sqrt{\frac{ \lambda _{2}}{\lambda _{1}}}\Vert \phi \Vert _{\tau *}\), we have
$$\begin{aligned} \inf_{\tilde{x}\in S} \bigl\{ \bigl\vert x(t,0,\phi )-\tilde{x} \bigr\vert \bigr\} \leq \bigl\vert x(t,0, \phi )-0 \bigr\vert \leq Me^{-\alpha t}, t\in R_{+}. \end{aligned}$$
According to Definition 2, we conclude that system (2.2) is globally exponentially dissipative, with \(S\) a positively invariant and globally exponentially attractive set. This completes the proof. □
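As a quick illustration of how the radius \(\phi _{1}\) of the attractive set is evaluated once the LMIs have been solved, the following sketch implements the formula above for scalar surrogate quantities. All numbers are hypothetical placeholders, not the values obtained in the example below.

```python
# A minimal sketch of evaluating the attractive-set radius phi_1 of Theorem 3.1.
# Inputs are hypothetical placeholders: lam_min_Q1 = lambda_min(Q1),
# norm_P_S1 = |P - S1|, lam_max_S3 = lambda_max(S3), gamma_u = bound on |u(t)|.
import numpy as np

def phi_1(lam_min_Q1, norm_P_S1, lam_max_S3, gamma_u):
    root = np.sqrt(norm_P_S1**2 + lam_min_Q1 * lam_max_S3)
    return (norm_P_S1 + root) / lam_min_Q1 * gamma_u

print(phi_1(lam_min_Q1=1.0, norm_P_S1=0.3, lam_max_S3=0.1, gamma_u=0.5))
```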
The proof of Theorem 3.1 gives an LMI-based condition guaranteeing the global exponential dissipativity of system (2.2). It is worth mentioning that, in order to derive the globally exponentially attractive set \(S\) and keep the dissipativity criteria practical, we chose the two special but suitable matrices \(H=S_{2}\) and \(N = S_{3}\) in (3.28). Theorem 3.1 shows that the globally exponentially attractive set \(S\) can be obtained directly from the LMIs.
In Theorem 3.1, we first transform system (2.1) into system (2.2) by using a convex combination technique and Filippov's theorem. In addition, we introduce double and triple integral terms in the LKF to account for the leakage, discrete and two additive time-varying delays; this problem was not addressed in [29, 30, 40]. Constructing double and triple integral terms of this form in the LKF is a recent tool for obtaining less conservative results.
If in Theorem 3.1 we take the exponential dissipativity rate index \(\alpha =0\) and replace the exponential-type Lyapunov–Krasovskii functional accordingly, then we can obtain the following theorem (Theorem 3.2).
Under the same conditions as in Theorem 3.1, system (2.2) is globally dissipative, and the set \(S\) given in Theorem 3.1 is positively invariant and globally attractive, provided the following LMI holds:
$$ \varPhi _{k}=\varTheta -\varUpsilon _{k}^{T} \begin{bmatrix} U_{1}&V_{1}&0&0&0&0 \\ *&U_{1}&0&0&0&0 \\ *&*&U_{2}&V_{2}&0&0 \\ *&*&*&U_{2}&0&0 \\ *&*&*&*&U_{3}&V_{3} \\ *&*&*&*&*&U_{3} \end{bmatrix} \varUpsilon _{k}< 0 \quad (k=1,2,3,4), $$
where\(\varTheta =[\varTheta ]_{l\times n}\) (\(l,n=1,2,\ldots,25\)), \(\varTheta _{1,1}=-PM-M ^{T}P+2Q_{1}+Q_{2}+Q_{3}+R_{1}+R_{2}+R_{3}+R_{4}+R_{5} -4T_{1}-4T_{2}-4T _{3}-4T_{4}-4T_{5}+\eta ^{2}L_{2}-K_{1}\beta _{1}\), \(\varTheta _{1,2}=-2G _{3}\), \(\varTheta _{1,3}=-2G_{1}\), \(\varTheta _{1,4}=-2G_{2}\), \(\varTheta _{1,5}=PM-2G _{4}\), \(\varTheta _{1,6}=T_{5}\), \(\varTheta _{1,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{1,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{1,9}=-2(T_{2}+2G_{2})\), \(\varTheta _{1,10}=-PC+S_{1}C-2(T_{4}+2G_{4})\), \(\varTheta _{1,11}={PA-S_{1}A+K_{2} \beta _{1}}\), \(\varTheta _{1,12}=PB-S_{1}B\), \(\varTheta _{1,13}=M^{T}PM\), \(\varTheta _{1,14}=-6G_{4}\), \(\varTheta _{1,15}= -6T_{4}\), \(\varTheta _{1,16}=-6T _{3}\), \(\varTheta _{1,17}=-6T_{1}\), \(\varTheta _{1,18}=-6T_{2}\), \(\varTheta _{1,19}=6G_{3}\), \(\varTheta _{1,20}=6G_{1}\), \(\varTheta _{1,21}=6G_{2}\), \(\varTheta _{1,22}=PD-S_{1}D\), \(\varTheta _{1,23}=-S_{1}\), \(\varTheta _{1,24}=PE-S _{1}E\), \(\varTheta _{2,2}=-Q_{1}-4T_{3}\), \(\varTheta _{2,7}=-2(T_{3}+2G_{3})\), \(\varTheta _{2,18}=6G_{3}\), \(\varTheta _{2,21}=6T_{3}\), \(\varTheta _{3,3}=-Q _{2}-4T_{1}\), \(\varTheta _{3,8}=-2(T_{1}+2G_{1})\), \(\varTheta _{3,19}=6G_{1}\), \(\varTheta _{3,22}=6T_{1}\), \(\varTheta _{4,4}=-Q_{3}-4T_{2}\), \(\varTheta _{4,9}=-2(T _{2}+2G_{2})\), \(\varTheta _{4,20}=6G_{2}\), \(\varTheta _{4,23}=6T_{2}\), \(\varTheta _{5,5}=-R_{2}-4T_{4}\), \(\varTheta _{5,10}=-2(T_{4}+2G_{4})\), \(\varTheta _{5,13}=-M^{T}PM\), \(\varTheta _{5,14}=6T_{4}\), \(\varTheta _{5,15}=6G_{4}\), \(\varTheta _{6,6}=-T_{5}\), \(\varTheta _{7,7}=-(1-\mu )R_{3}-4(2T_{3}+G_{3})-K _{1}\beta _{2}\), \(\varTheta _{7,12}={-K_{2}\beta _{2}}\), \(\varTheta _{7,16}=6(T _{3}+G_{3})\), \(\varTheta _{7,19}=6(T_{3}+G_{3})\), \(\varTheta _{8,8}=-(1-\mu _{1})R_{4}-4(2T_{1}+G_{1})\), \(\varTheta _{8,17}=6(T_{1}+G_{1})\), \(\varTheta _{8,20}=6(T_{1}+G_{1})\), \(\varTheta _{9,9}=-(1-\mu _{2})R_{5}-4(2T _{2}+G_{2})\), \(\varTheta _{9,18}=6(T_{2}+G_{2})\), \(\varTheta _{9,21}=6(T_{2}+G _{2})\), \(\varTheta _{10,10}=-(1-\mu _{3})R_{1}-4(2T_{4}+G_{4})\), \(\varTheta _{10,13}=M ^{T}PC\), \(\varTheta _{10,14}=6(T_{4}+G_{4})\), \(\varTheta _{10,15}=6(T_{4}+G _{4})\), \(\varTheta _{10,23}=-S_{2}C\), \(\varTheta _{10,24}=-S_{3}C\), \(\varTheta _{11,11}=( \delta _{2}-\delta _{1})^{2}L_{1}-\beta _{1}\), \(\varTheta _{11,13}=-M^{T}PA\), \(\varTheta _{11,23}=S_{2}A\), \(\varTheta _{11,24}=-S_{3}A\), \(\varTheta _{12,12}=- \beta _{2}\), \(\varTheta _{12,13}=-M^{T}PB\), \(\varTheta _{12,23}=S_{2}B\), \(\varTheta _{12,24}=-S_{3}B\), \(\varTheta _{13,13}=-2L_{2}\), \(\varTheta _{13,21}=-M ^{T}PD\), \(\varTheta _{13,24}=-M^{T}PE\), \(\varTheta _{13,25}=-2MP\), \(\varTheta _{14,14}=-12T_{4}\), \(\varTheta _{14,15}=-12G_{4}\), \(\varTheta _{15,15}=-12T _{4}\), \(\varTheta _{16,16}=-12T_{3}\), \(\varTheta _{16,19}=-12G_{3}\), \(\varTheta _{17,17}=-12T_{1}\), \(\varTheta _{17,20}=-12G_{1}\), \(\varTheta _{18,18}=-12T _{2}\), \(\varTheta _{18,21}=-12G_{2}\), \(\varTheta _{19,19}=-12T_{3}\), \(\varTheta _{20,20}=-12T_{1}\), \(\varTheta _{21,21}=-12T_{2}\), \(\varTheta _{22,22}=-L _{1}\), \(\varTheta _{22,23}=S_{2}D\), \(\varTheta _{22,24}=-S_{3}D\), \(\varTheta _{23,23}=\frac{ \tau _{1}^{4}}{4}U_{1}+\frac{\tau _{2}^{4}}{4}U_{2} +\frac{\tau ^{4}}{4}U _{3}-S_{2}+\tau _{1}^{2}T_{1}+\tau _{2}^{2}T_{2}+\tau ^{2}T_{3} +\eta ^{2}T_{4}+h^{2}T_{5}\), \(\varTheta _{23,24}=S_{2}E\), \(\varTheta _{24,24}=S_{3}E+E ^{T}S_{3}+S_{3}\), \(\varTheta _{25,25}=S_{2}\), \(\varUpsilon 
_{k}^{T}=[\varGamma _{1k},\varGamma _{2k},\varGamma _{3k},\varGamma _{4k}, \varGamma _{5k},\varGamma _{6k}]^{T}\) (\(k=1,2,3,4\)), \(\varGamma _{11}^{T}=\varGamma _{12}^{T}=\tau _{1}(e_{1}-e_{20})\), \(\varGamma _{13}^{T}=\varGamma _{14}^{T}=\mathbf{0}\), \(\varGamma _{21}^{T}=\varGamma _{22}^{T}=\mathbf{0}\), \(\varGamma _{23}^{T}=\varGamma _{24}^{T}=\tau _{1}(e_{1}-e_{17})\), \(\varGamma _{31}^{T}=\varGamma _{33}^{T}=\tau _{2}(e_{1}-e_{21})\), \(\varGamma _{32} ^{T}=\varGamma _{34}^{T}=\mathbf{0}\), \(\varGamma _{41}^{T}=\varGamma _{43}^{T}=\mathbf{0}\), \(\varGamma _{42}^{T}= \varGamma _{44}^{T}=\tau _{2}(e_{1}-e_{18})\), \(\varGamma _{51}^{T}= \tau (e_{1}-e_{19})\), \(\varGamma _{52}^{T}=\tau _{1}(e_{1}-e_{19})\), \(\varGamma _{53}^{T}=\tau _{2}(e_{1}-e_{19})\), \(\varGamma _{54}^{T}=\varGamma _{61}^{T}=\mathbf{0}\), \(\varGamma _{62}^{T}=\tau _{2}(e_{1}-e_{16})\), \(\varGamma _{63}^{T}=\tau _{1}(e_{1}-e_{16})\), \(\varGamma _{64}^{T}=\tau (e_{1}-e_{19})\), \(e_{i}=[\mathbf{0}_{n\times (i-1)n},\mathbf{I}_{n\times n},\mathbf{0} _{n\times (25-i)n}]\) (\(i=1,2,\ldots,25\)).
Replace the exponential-type Lyapunov–Krasovskii functional in Theorem 3.1 by
$$\begin{aligned}& V_{1} \bigl(t,x(t) \bigr)= \biggl[x(t)-M \int _{t-\eta }^{t}x(t)\,ds \biggr]^{T}P \biggl[x(t)-M \int _{t-\eta }^{t}x(t)\,ds \biggr], \\& \begin{aligned} V_{2} \bigl(t,x(t) \bigr)&= \int _{t-\tau }^{t}x^{T}(s)Q_{1}x(s) \,ds + \int _{t-\tau _{1}}^{t}x^{T}(s)Q_{2}x(s) \,ds \\ &\quad{} + \int _{t-\tau _{2}}^{t}x^{T}(s)Q_{3}x(s) \,ds, \end{aligned} \\& \begin{aligned} V_{3} \bigl(t,x(t) \bigr)&= \int _{t-\eta (t)}^{t}x^{T}(s)R_{1}x(s) \,ds + \int _{t- \eta }^{t}x^{T}(s)R_{2}x(s) \,ds \\ &\quad{}+ \int _{t-\tau (t)}^{t}x(s)^{T} R_{3}x(s)\,ds + \int _{t-\tau _{1}(t)}^{t}x(s)^{T} R_{4}x(s)\,ds \\ &\quad{}+ \int _{t-\tau _{2}(t)}^{t}x(s)^{T} R_{5}x(s)\,ds, \end{aligned} \\& \begin{aligned} V_{4} \bigl(t,x(t) \bigr)&=\tau _{1} \int _{-\tau _{1}}^{0} \int _{t+\theta }^{t} \dot{x}^{T}(s)T_{1} \dot{x}(s)\,ds\,d\theta \\ &\quad{} +\tau _{2} \int _{-\tau _{2}}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{2} \dot{x}(s)\,ds \,d\theta \\ &\quad{} +\tau \int _{-\tau }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{3} \dot{x}(s)\,ds\,d \theta +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{4} \dot{x}(s)\,ds\,d\theta \\ &\quad{} +h \int _{-h}^{0} \int _{t+\theta }^{t}\dot{x}^{T}(s)T_{5} \dot{x}(s)\,ds\,d \theta , \end{aligned} \\& V_{5} \bigl(t,x(t) \bigr)=(\delta _{2}-\delta _{1}) \int _{-\delta _{2}}^{-\delta _{1}} \int _{t+\theta }^{t}f^{T} \bigl(x(s) \bigr)L_{1}f \bigl(x(s) \bigr)\,ds\,d\theta +\eta \int _{-\eta }^{0} \int _{t+\theta }^{t}x^{T}(s)L_{2}x(s) \,ds\,d\theta , \\& \begin{aligned} V_{6} \bigl(t,x(t) \bigr)&=\frac{\tau _{1}^{2}}{2} \int _{-\tau _{1}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{1} \dot{x}(s)\,ds\,d \lambda \,d\theta \\ &\quad{}+\frac{\tau _{2}^{2}}{2} \int _{-\tau _{2}}^{0} \int _{ \theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{2} \dot{x}(s)\,ds\,d \lambda \,d \theta \\ &\quad{} +\frac{\tau ^{2}}{2} \int _{-\tau }^{0} \int _{\theta }^{0} \int _{t+\lambda }^{t}\dot{x}^{T}(s)U_{3} \dot{x}(s)\,ds\,d \lambda \,d \theta . \end{aligned} \end{aligned}$$
The rest of the proof of Theorem 3.2 is similar to that of Theorem 3.1, so the details are omitted. □
In particular, when \(E=0\) and \(D=0\), system (2.2) reduces to system (4) in [19], which is shown there to be dissipative. Furthermore, we discuss the global exponential dissipativity of system (2.2), so our model can be regarded as an extension of system (4) from [19].
If \(\tau _{1}(t)+\tau _{2}(t)=\tau (t)\), \(0\leq \tau (t)\leq \tau \), \(\dot{\tau }(t)\leq \mu \), \(E=0\) and \(\eta (t)=0\), i.e., system (2.2) has neither the two additive time-varying delays, the leakage delay, nor the neutral term, then system (2.2) reduces to the following neural network:
$$ \textstyle\begin{cases} \dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau (t))) \\ \hphantom{\dot{x}(t)=}{} +D\int _{t-\delta _{2}(t)}^{t-\delta _{1}(t)}f(x(s))\,ds+\mu (t), \\ y(t)=f(x(t)), \\ x(t)=\phi (t), t\in (-\tau ^{*},0). \end{cases} $$
In this case the system is no longer a neutral-type memristive neural network. The dissipativity of other types of neural network models has been discussed in [30, 41, 42]; when the corresponding terms are removed from our model, their dissipativity results can be recovered from Theorem 3.1 by utilizing LMIs. In this sense our system is more general.
Example and simulation
In this section, we give a numerical example to illustrate the effectiveness of our results.
Consider the two-dimensional MNNs (2.1) with the following parameters:
$$\begin{aligned}& a_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 1.2,& \vert x_{1}(t) \vert \leq 1, \\ -1, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad a_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.3, & \vert x_{1}(t) \vert \leq 1, \\ 0.5,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & a_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.7,& \vert x_{2}(t) \vert \leq 1, \\ -1, & \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad a_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 2.5, & \vert x_{2}(t) \vert \leq 1, \\ -0.3, & \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \\ & b_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.8,& \vert x_{1}(t) \vert \leq 1, \\ 0.2, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad b_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} 0.05, & \vert x_{1}(t) \vert \leq 1, \\ -0.05,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & b_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.3, & \vert x_{2}(t) \vert \leq 1 , \\ 1,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad b_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 0.9,& \vert x_{2}(t) \vert \leq 1, \\ -0.3,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \\ & d_{11} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} -0.9, & \vert x_{1}(t) \vert \leq 1, \\ 2, & \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \quad\quad d_{12} \bigl(x_{1}(t) \bigr)= \textstyle\begin{cases} -0.5,& \vert x_{1}(t) \vert \leq 1, \\ -0.3,& \vert x_{1}(t) \vert > 1, \end{cases}\displaystyle \\ & d_{21} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 2, & \vert x_{2}(t) \vert \leq 1, \\ 0.3,& \vert x_{2}(t) \vert > 1, \end{cases}\displaystyle \quad\quad d_{22} \bigl(x_{2}(t) \bigr)= \textstyle\begin{cases} 1.5,& \vert x_{2}(t) \vert \leq 1, \\ 1, & \vert x_{2}(t) \vert > 1. \end{cases}\displaystyle \end{aligned}$$
The activation functions are \(f_{1}(s)=\tanh (0.3s) - 0.2\sin (s)\), \(f_{2}(s)=\tanh (0.2s) + 0.3\sin (s)\). Let \(\alpha =0.01\), \(c_{1}=c_{2}=2\), \(e_{1}=e_{2}=0.2\), \(m_{1}=2\), \(m_{2}=3.56\), \(h(t)=0.1\sin (2t) + 0.5\), \(\eta (t) =0.1\sin (2t) + 0.2\), \(\tau _{1}(t)=0.1\sin (t)+0.2\), \(\tau _{2}(t)= 0.1\cos (t) + 0.5\), \(\delta _{1}(t) = 0.4\sin (t) + 0.4\), \(\delta _{2}(t) = 0.4\sin (t) + 0.6\), \(u(t) = [0.5\sin (t); 0.25\cos (t)]^{T}\). So \(\eta =0.4\), \(\bar{h}=0.6\), \(\tau _{1} = 0.3\), \(\tau _{2} = 0.6\), \(\tau = 0.9\), \(\delta _{1} = 0\), \(\delta _{2} = 1\), \(\mu _{1} = 0.1\), \(\mu _{2} = 0.1\), \(\mu = 0.2\). Then \(K_{1}^{-}=-0.2\), \(K_{1}^{+}=0.5\), \(K_{2}^{-}=-0.3\) and \(K_{2}^{+}=0.5\), i.e.,
$$ K_{1}= \begin{bmatrix} -0.1 & 0 \\ 0 & -0.15 \end{bmatrix} ,\quad \quad K_{2}= \begin{bmatrix} 0.15 & 0 \\ 0 & 0.1 \end{bmatrix} . $$
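Before solving the LMIs, one can sanity-check numerically that these activation functions satisfy the sector bounds of Assumption 2. The script below is my own check, not part of the original example; it verifies that \(f_{1}(s)/s\) stays in \([-0.2,0.5]\) and \(f_{2}(s)/s\) in \([-0.3,0.5]\) over a sampled range:

```python
# A quick numerical sanity check (not from the paper) of the sector bounds used above.
import numpy as np

f1 = lambda s: np.tanh(0.3 * s) - 0.2 * np.sin(s)
f2 = lambda s: np.tanh(0.2 * s) + 0.3 * np.sin(s)

# sample s away from zero to avoid dividing by zero
s = np.concatenate([np.linspace(-50, -1e-3, 20000), np.linspace(1e-3, 50, 20000)])
tol = 1e-9
for f, (lo, hi) in ((f1, (-0.2, 0.5)), (f2, (-0.3, 0.5))):
    ratio = f(s) / s
    print(lo - tol <= ratio.min(), ratio.max() <= hi + tol)   # expected: True True
```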
With the above parameters, using the LMI toolbox in MATLAB, we obtain the following feasible solution to the LMIs in Theorem 3.1:
$$\begin{aligned} &P = 1.0\times 10^{-11} \begin{bmatrix} 0.0764 & -0.0110\\ -0.0110 & 0.1583 \end{bmatrix} , \quad\quad Q_{1} = 1.0\times 10^{-11} \begin{bmatrix} -0.6182 & -0.0001\\ -0.0001 & -0.6132 \end{bmatrix} , \\ &Q_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.1918 & 0.0006\\ 0.0006 & 0.2014 \end{bmatrix} ,\quad\quad Q_{3} = 1.0\times 10^{-11} \begin{bmatrix} 0.2056 & -0.0002\\ -0.0002 & 0.2169 \end{bmatrix} , \\ &U_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.2502 & 0.0004\\ 0.0004 & 0.2525 \end{bmatrix} ,\quad\quad U_{2} = 1.0\times 10^{-10} \begin{bmatrix} 0.1844 & 0.0010\\ 0.0010 & 0.1857 \end{bmatrix} , \\ &U_{3} = 1.0\times 10^{-12} \begin{bmatrix} 0.3907 & 0.0451\\ 0.0451 & 0.4122 \end{bmatrix} ,\quad\quad R_{1} = 1.0\times 10^{-11} \begin{bmatrix} 0.3373 & 0.0191\\ 0.0191 & 0.4588 \end{bmatrix} , \\ &R_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.3207& -0.0053\\ -0.0053 & 0.3337 \end{bmatrix} ,\quad\quad R_{3} = 1.0\times 10^{-10} \begin{bmatrix} 0.1887 & -0.0002\\ -0.0002 & 0.1863 \end{bmatrix} , \\ &R_{4} = 1.0\times 10^{-11} \begin{bmatrix} 0.2471 & 0.0008\\ 0.0008 & 0.2573 \end{bmatrix} ,\quad\quad R_{5} = 1.0\times 10^{-11} \begin{bmatrix} 0.3801 & 0.0002\\ 0.0002 & 0.3920 \end{bmatrix} , \\ &T_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.6706 & 0.0008\\ 0.0008 & 0.6752 \end{bmatrix} ,\quad\quad T_{2} = 1.0\times 10^{-11} \begin{bmatrix} 0.3678 & 0.0005\\ 0.0005 & 0.3711 \end{bmatrix} , \\ &T_{3} = 1.0\times 10^{-11} \begin{bmatrix} 0.1644 & 0.0005\\ 0.0005 & 0.1672 \end{bmatrix} ,\quad\quad T_{4} = 1.0\times 10^{-11} \begin{bmatrix} 0.5042 & -0.0251\\ -0.0251 & 0.4591 \end{bmatrix} , \\ &T_{5} = 1.0\times 10^{-12} \begin{bmatrix} -0.4935 & 0.1301\\ 0.1301 & -0.5262 \end{bmatrix} ,\quad\quad G_{1} = 1.0\times 10^{-10} \begin{bmatrix} 0.2745 & 0.0004\\ 0.0004 & 0.2766 \end{bmatrix} , \\ &G_{2} = 1.0\times 10^{-12} \begin{bmatrix} 0.1888 & 0.0007\\ 0.0007 & 0.1930 \end{bmatrix} ,\quad\quad G_{3} = 1.0\times 10^{-12} \begin{bmatrix} -0.3413 & 0.0003\\ 0.0003 & -0.3296 \end{bmatrix} , \\ &G_{4} = 1.0\times 10^{-12} \begin{bmatrix} -0.5025 & -0.0079\\ -0.0079 & -0.5993 \end{bmatrix} ,\quad\quad L_{1} = 1.0\times 10^{-12} \begin{bmatrix} -0.9111 & -0.3262\\ -0.3262 & -0.8944 \end{bmatrix} , \\ &L_{2} = 1.0\times 10^{-9} \begin{bmatrix} 0.1237 & -0.0008\\ -0.0008 & 0.1207 \end{bmatrix} ,\quad\quad S_{2} = 1.0\times 10^{-12} \begin{bmatrix} -0.4977 & 0.1688\\ 0.1688 & -0.2524 \end{bmatrix} , \\ &S_{3} = 1.0\times 10^{-13} \begin{bmatrix} 0.0904 & -0.4569\\ -0.4569 & -0.7191 \end{bmatrix} ,\quad\quad \beta _{1} = 1.0\times 10^{-9} \begin{bmatrix} 0.1429 & 0\\ 0 & 0.1061 \end{bmatrix} , \\ &\beta _{2} = 1.0\times 10^{-9} \begin{bmatrix} 0.2035 & 0\\ 0 & 0.1523 \end{bmatrix} ,\quad\quad S_{1} = 1.0\times 10^{-11} \begin{bmatrix} 0.2004 & -0.0384\\ -0.0303 & 0.1845 \end{bmatrix} , \\ &V_{1} = \begin{bmatrix} 74.2116 & 0\\ 0 &74.2116 \end{bmatrix} ,\quad\quad V_{2} = \begin{bmatrix} 74.2116 & 0\\ 0 & 74.2116 \end{bmatrix} , \\ & V_{3} = \begin{bmatrix} 74.2116 & 0\\ 0 & 74.2116 \end{bmatrix} . \end{aligned}$$
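The feasibility certificate above was computed with the MATLAB LMI toolbox. As a much smaller stand-in illustration of the same kind of check for readers without MATLAB, the sketch below solves a toy \(2\times 2\) Lyapunov equation in Python and verifies positive definiteness; the matrix is a hypothetical Hurwitz test matrix, not a parameter of system (2.1), and the full \(25\)-block LMI of Theorem 3.1 would instead be handled by an SDP modelling tool (e.g., YALMIP or CVXPY).

```python
# A minimal sketch (toy Lyapunov feasibility, not the full LMI of Theorem 3.1):
# for a Hurwitz matrix A_lin, solve A_lin^T P + P A_lin = -Q and verify P > 0.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves a X + X a^H = q

A_lin = np.array([[-2.0, 0.3], [0.7, -2.5]])   # hypothetical Hurwitz test matrix
Q = np.eye(2)
P = solve_continuous_lyapunov(A_lin.T, -Q)      # A_lin^T P + P A_lin = -Q
print("P =\n", P)
print("P positive definite:", np.all(np.linalg.eigvalsh(P) > 0))
print("residual:", np.linalg.norm(A_lin.T @ P + P @ A_lin + Q))
```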
Then system (2.1) is globally exponentially dissipative, with attractive set \(S=\{x:\vert x\vert \leq 8.333\}\). Figure 1 shows the trajectories of the neuron states \(x_{1}(t)\) and \(x_{2}(t)\) of the neutral-type MNN (2.1). Figure 2 shows the trajectories of \(x_{1}(t)\) and \(x_{2}(t)\) in three-dimensional space. It can be seen that the neuron states \(x_{1}(t)\) and \(x_{2}(t)\) become periodic when the external inputs of the neutral-type MNN (2.1) are designed as periodic signals. According to Theorem 3.1 and Definition 2, system (2.1) is globally dissipative. Under the same conditions, if we take the external input \(u(t)=0\), then by Theorem 3.2 the invariant set is \(S=\{0\}\) and system (2.1) is globally stable, as shown in Fig. 3.
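For readers who wish to reproduce trajectories qualitatively similar to Figs. 1–3, the following sketch integrates a simplified two-neuron delayed network with a fixed-step Euler scheme. It keeps the example's activation functions, external input and the delay bound \(\tau =0.9\), but fixes one branch of the memristive parameters and drops the neutral, leakage and distributed-delay terms, so it illustrates the simulation approach rather than the exact system (2.1).

```python
# A minimal sketch, not the exact neutral-type system (2.1): Euler simulation of
#   x'(t) = -C x(t) + A f(x(t)) + B f(x(t - tau)) + u(t)
# with one fixed branch of the memristive parameters and a hypothetical constant history.
import numpy as np

C = np.diag([2.0, 2.0])                               # c1 = c2 = 2
A = np.array([[1.2, 0.3], [0.7, 2.5]])                # a_ij branch for |x_i| <= 1
B = np.array([[0.8, 0.05], [0.3, 0.9]])               # b_ij branch for |x_i| <= 1
f = lambda x: np.array([np.tanh(0.3 * x[0]) - 0.2 * np.sin(x[0]),
                        np.tanh(0.2 * x[1]) + 0.3 * np.sin(x[1])])
u = lambda t: np.array([0.5 * np.sin(t), 0.25 * np.cos(t)])

dt, T, tau = 0.001, 40.0, 0.9
steps, lag = int(T / dt), int(tau / dt)
x = np.zeros((steps + 1, 2))
x[:lag + 1] = np.array([0.5, -0.5])                   # hypothetical constant initial history
for k in range(lag, steps):
    dx = -C @ x[k] + A @ f(x[k]) + B @ f(x[k - lag]) + u(k * dt)
    x[k + 1] = x[k] + dt * dx
print("final state:", x[-1])
```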
State trajectories of \(x_{1}(t)\), \(x_{2}(t)\)
State trajectories of \(x_{1}\), \(x_{2}\) in three-dimensional space
State trajectories of \(x_{1}\), \(x_{2}\) in three-dimensional space when \(u(t)=0\)
This paper has investigated the dissipativity of a neutral-type memristive neural network with two additive time-varying delays as well as distributed and time-varying leakage delays. By applying novel linear matrix inequalities, a Lyapunov–Krasovskii functional and the Newton–Leibniz formula, the dissipativity of the system was established. Even though the dissipativity of MNNs has been reported before, there are few references on the dissipativity of neutral-type MNNs. We have added neutral terms to the model, which makes the model more realistic. Finally, we have given a numerical example to illustrate the effectiveness and exactness of our results. When Markovian jumping is added to this model, how to study the dissipativity of neutral-type MNNs with mixed delays becomes an interesting question. We will extend our work in this direction in the future.
Wang, Z., Liu, Y., Liu, X.: On global asymptotic stability of neural networks with discrete and distributed delays. Phys. Lett. A 345(4–6), 299–308 (2005)
Egmont-Petersen, M., Ridder, D.D., Handels, H.: Image processing with neural networks—a review. Pattern Recognit. 35(10), 2279–2301 (2002)
Chua, L.: Memristor-the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507–519 (1971)
Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453(7191), 80–83 (2008)
Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R.A., Vogel, E.M.: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. Syst. 23(4), 565–573 (2012)
Ding, S., Wang, Z., Zhang, H.: Dissipativity analysis for stochastic memristive neural networks with time-varying delays: a discrete-time case. IEEE Trans. Neural Netw. Learn. Syst. 29(3), 618–630 (2018)
Cheng, J., Park, J.H., Cao, J., Zhang, D.: Quantized \({H^{\infty }}\) filtering for switched linear parameter-varying systems with sojourn probabilities and unreliable communication channels. Inf. Sci. 466, 289–302 (2018)
Zhang, D., Cheng, J., Park, J.H., Cao, J.: Robust \({H^{\infty }}\) control for nonhomogeneous Markovian jump systems subject to quantized feedback and probabilistic measurements. J. Franklin Inst. 355(15), 6992–7010 (2018)
Sun, J., Chen, J.: Stability analysis of static recurrent neural networks with interval time-varying delay. Appl. Math. Comput. 221(9), 111–120 (2013)
Sun, Y., Cui, B.T.: Dissipativity analysis of neural networks with time-varying delays. Neurocomputing 168, 741–746 (2015)
Li, C., Feng, G.: Delay-interval-dependent stability of recurrent neural networks with time-varying delay. Neurocomputing 72, 1179–1183 (2009)
Lv, X., Li, X.: Delay-dependent dissipativity of neural networks with mixed non-differentiable interval delays. Neurocomputing 267, 85–94 (2017)
Wei, H., Li, R., Chen, C., Tu, Z.: Extended dissipative analysis for memristive neural networks with two additive time-varying delay components. Neurocomputing 216, 429–438 (2016)
Zeng, X., Xiong, Z., Wang, C.: Hopf bifurcation for neutral-type neural network model with two delays. Appl. Math. Comput. 282, 17–31 (2016)
Xu, C., Li, P., Pang, Y.: Exponential stability of almost periodic solutions for memristor-based neural networks with distributed leakage delays. Neural Comput. 28(12), 1–31 (2016)
Zhang, Y., Gu, D.W., Xu, S.: Global exponential adaptive synchronization of complex dynamical networks with neutral-type neural network nodes and stochastic disturbances. IEEE Trans. Circuits Syst. I, Regul. Pap. 60(10), 2709–2718 (2013)
Brogliato, B., Maschke, B., Lozano, R., Egeland, O.: Dissipative Systems Analysis and Control. Springer, Berlin (2007)
Huang, Y., Ren, S.: Passivity and passivity-based synchronization of switched coupled reaction–diffusion neural networks with state and spatial diffusion couplings. Neural Process. Lett. 5, 1–17 (2017)
Fu, Q., Cai, J., Zhong, S., Yu, Y.: Dissipativity and passivity analysis for memristor-based neural networks with leakage and two additive time-varying delays. Neurocomputing 275, 747–757 (2018)
Willems, J.C.: Dissipative dynamical systems part I: general theory. Arch. Ration. Mech. Anal. 45(5), 321–351 (1972)
Hong, D., Xiong, Z., Yang, C.: Analysis of adaptive synchronization for stochastic neutral-type memristive neural networks with mixed time-varying delays. Discrete Dyn. Nat. Soc. 2018, 8126127 (2018)
Cheng, J., Park, J.H., Karimi, H.R., Shen, H.: A flexible terminal approach to sampled-data exponentially synchronization of Markovian neural networks with time-varying delayed signals. IEEE Trans. Cybern. 48(8), 2232–2244 (2018)
Zhang, D., Cheng, J., Cao, J., Zhang, D.: Finite-time synchronization control for semi-Markov jump neural networks with mode-dependent stochastic parametric uncertainties. Appl. Math. Comput. 344–345, 230–242 (2019)
Zhang, W., Yang, S., Li, C., Zhang, W., Yang, X.: Stochastic exponential synchronization of memristive neural networks with time-varying delays via quantized control. Neural Netw. 104, 93–103 (2018)
Duan, L., Huang, L.: Global dissipativity of mixed time-varying delayed neural networks with discontinuous activations. Commun. Nonlinear Sci. Numer. Simul. 19(12), 4122–4134 (2014)
Tu, Z., Cao, J., Alsaedi, A., Alsaadi, F.: Global dissipativity of memristor-based neutral type inertial neural networks. Neural Netw. 88, 125–133 (2017)
Manivannan, R., Cao, Y.: Design of generalized dissipativity state estimator for static neural networks including state time delays and leakage delays. J. Franklin Inst. 355, 3990–4014 (2018)
Xiao, J., Zhong, S., Li, Y.: Relaxed dissipativity criteria for memristive neural networks with leakage and time-varying delays. Neurocomputing 171, 708–718 (2016)
Samidurai, R., Sriraman, R.: Robust dissipativity analysis for uncertain neural networks with additive time-varying delays and general activation functions. Math. Comput. Simul. 155, 201–216 (2019)
Lin, W.J., He, Y., Zhang, C., Long, F., Wu, M.: Dissipativity analysis for neural networks with two-delay components using an extended reciprocally convex matrix inequality. Inf. Sci. 450, 169–181 (2018)
Aubin, J.P., Cellina, A.: Differential Inclusions. Springer, Berlin (1984)
Arscott, F.M.: Differential Equations with Discontinuous Righthand Sides. Kluwer Academic, Amsterdam (1988)
Filippov, A.F.: Classical solutions of differential equations with multi-valued right-hand side. SIAM J. Control Optim. 5(4), 609–621 (1967)
Song, Q., Cao, J.: Global dissipativity analysis on uncertain neural networks with mixed time-varying delays. Chaos 18, 043126 (2008)
Liao, X., Wang, J.: Global dissipativity of continuous-time recurrent neural networks with time delay. Phys. Rev. E 68(1 Pt 2), 016118 (2003)
Seuret, A., Gouaisbaut, F.: Wirtinger-based integral inequality: application to time-delay systems. Automatica 49, 2860–2866 (2013)
Wang, Z., Liu, Y., Fraser, K., Liu, X.: Stochastic stability of uncertain Hopfield neural networks with discrete and distributed delays. Phys. Lett. A 354(4), 288–297 (2006)
Kwon, O.M., Lee, S.M., Park, J.H., Cha, E.J.: New approaches on stability criteria for neural networks with interval time-varying delays. Appl. Math. Comput. 218(19), 9953–9964 (2012)
Park, P.G., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47(1), 235–238 (2011)
Xin, Y., Li, Y., Cheng, Z., Huang, X.: Global exponential stability for switched memristive neural networks with time-varying delays. Neural Netw. 80, 34–42 (2016)
Guo, Z., Wang, J., Yan, Z.: Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays. Neural Netw. 48, 158–172 (2013)
Nagamani, G., Joo, Y.H., Radhika, T.: Delay-dependent dissipativity criteria for Markovian jump neural networks with random delays and incomplete transition probabilities. Nonlinear Dyn. 91(4), 2503–2522 (2018)
The authors would like to thank the referees for their valuable comments on an earlier version of this article.
Email address: [email protected] (Cuiping Yang), [email protected] (Zuoliang Xiong), [email protected] (Tianqing Yang).
This work is supported by National Natural Science Foundation of China (No. 61563033).
Department of Math, Nanchang University, Nanchang, China
Cuiping Yang, Zuoliang Xiong & Tianqing Yang
All authors contributed equally to the writing of this paper. All authors of the manuscript have read and agreed to its content and are accountable for all aspects of the accuracy and integrity of the manuscript.
Correspondence to Zuoliang Xiong.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Yang, C., Xiong, Z. & Yang, T. Dissipativity analysis of neutral-type memristive neural network with two additive time-varying and leakage delays. Adv Differ Equ 2019, 6 (2019). https://doi.org/10.1186/s13662-018-1941-z
DOI: https://doi.org/10.1186/s13662-018-1941-z
Neutral-type memristive neural networks
Lyapunov–Krasovskii functional
Dissipativity
Mixed delays
|
CommonCrawl
|
A lost lemma about periodicity in a grid of long exact sequences?
This is a question about finding references and hopefully a larger context for a lemma in homological algebra I proved recently. The motivation is to understand properties of characteristic classes of $T_f$, the mapping torus of a diffeomorphism $f$ of a closed manifold, by applying the lemma to Mayer-Vietoris and a change-of-coefficients sequence for the cohomology of $T_f$.
Let $C_{ij}, 1 \leq i,j \leq 3$ be cochain complexes, and $$ \begin{matrix} & & 0 & & 0 & & 0 & & \\ & & \downarrow & & \downarrow & & \downarrow & & \\ 0 & \to & C_{11} & \stackrel{g}\to & C_{21} & \stackrel{h}\to & C_{31} & \to & 0 \\ & & {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ & & \\ 0 & \to & C_{12} & \stackrel{g}\to & C_{22} & \stackrel{h}\to & C_{32} & \to & 0 \\ & & {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ & & \\ 0 & \to & C_{13} & \stackrel{g}\to & C_{23} & \stackrel{h}\to & C_{33} & \to & 0 \\ & & \downarrow & & \downarrow & & \downarrow & & \\ & & 0 & & 0 & & 0 & & \end{matrix}$$
a commuting diagram where the rows and columns are short exact sequences. Let $\delta_H : H^k(C_{3j}) \to H^{k+1}(C_{1j})$ and $\delta_V : H^k(C_{i3}) \to H^{k+1}(C_{i1})$ denote the boundary homomorphisms in the associated long exact sequences. The long exact sequences can be arranged into a commuting grid
$$ \begin{matrix} H^{k-2}(C_{33}) & \stackrel{\delta_H}\to & H^{k-1}(C_{13}) & \stackrel{g}\to & H^{k-1}(C_{23}) & \stackrel{h}\to & H^{k-1}(C_{33}) & \stackrel{\delta_H}\to & H^k(C_{13}) \\ {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ \\ H^{k-1}(C_{31}) & \stackrel{\delta_H}\to & H^k(C_{11}) & \stackrel{g}\to & H^k(C_{21}) & \stackrel{h}\to & H^k(C_{31}) & \stackrel{\delta_H}\to & H^{k+1}(C_{11}) \\ {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ & & {\scriptstyle u}\downarrow\ \\ H^{k-1}(C_{32}) & \stackrel{\delta_H}\to & H^k(C_{12}) & \stackrel{g}\to & H^k(C_{22}) & \stackrel{h}\to & H^k(C_{32}) & \stackrel{\delta_H}\to & H^{k+1}(C_{12})\\ {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ & & {\scriptstyle v}\downarrow\ \\ H^{k-1}(C_{33}) & \stackrel{\delta_H}\to & H^k(C_{13}) & \stackrel{g}\to & H^k(C_{23}) & \stackrel{h}\to & H^k(C_{33}) & \stackrel{\delta_H}\to & H^{k+1}(C_{13}) \\ {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ & & {\scriptstyle \delta_V}\downarrow\ \ \\ H^k(C_{31}) & \stackrel{\delta_H}\to & H^{k+1}(C_{11}) & \stackrel{g}\to & H^{k+1}(C_{21}) & \stackrel{h}\to & H^{k+1}(C_{31}) & \stackrel{\delta_H}\to & H^{k+2}(C_{11}) \\ \end{matrix}$$
The grid is symmetric under translation by 3 steps up and 3 to the right.
Lemma. If $[\alpha] \in H^k(C_{12})$ and $[\beta] \in H^k(C_{21})$ are classes such that $g[\alpha] = u[\beta] \in H^k(C_{22})$ then there is some $[\gamma] \in H^{k-1}(C_{33})$ such that both $\delta_H[\gamma] = v[\alpha] \in H^k(C_{13})$ and $\delta_V[\gamma] = -h[\beta] \in H^k(C_{31})$.
Proof. Take $\chi \in C^{k-1}_{22}$ such that $d\chi = g\alpha - u\beta$. By the definition of the boundary homomorphisms, $d(v\chi) = g(v\alpha)$ implies that $\delta_H([h(v\chi)]) = [v\alpha]$, and $d(h\chi) = -u(h\beta)$ implies that $\delta_V([v(h\chi)]) = -[h\beta]$. Hence we can set $\gamma = vh\chi$.
Does this lemma look familiar? Do you know some place where it's written down?
Edit: Corrected subscripts in statement of lemma.
Update: Thanks for the alternative proofs. However, what I'm after is rather a bibliography reference that I can cite when writing up my application, just to emphasise that it is an instance of something that someone somewhere has already considered (as I imagine it is).
ac.commutative-algebra at.algebraic-topology homological-algebra
Johannes Nordström
$\begingroup$ $C_{23}$ should be $C_{13}$ and $C_{32}$ should be $C_{31}$ in the statement of the lemma, no? $\endgroup$ – Mariano Suárez-Álvarez Jan 3 '13 at 20:10
$\begingroup$ Look at your grid as a double complex, and let $C$ be the total complex. There is an action of $G=\mathbb Z$ on it by the translation you described, so we can compute hypercohomology $\mathbb H^\bullet(G,C)$. Using one of the two hypercohomology spectral sequences, we see is zero because of $C$ is exact; the other hypercohomology spectral sequence has then $E_2$ page looking like $H^\bullet(H^\bullet(\mathbb Z,C))$ and converges to zero. Since $\mathbb Z$ has global dimension $1$, this spectral sequence has only two rows (columns?) and degenerates at $E_3$; since the limit is zero, ... $\endgroup$ – Mariano Suárez-Álvarez Jan 3 '13 at 20:24
$\begingroup$ ... we really get a lot of isomorphisms. Maybe this is what you are seeing? (I am assuming everything converges; this should follow from the fact that your $C_{i,j}$ are bounded, I think!) $\endgroup$ – Mariano Suárez-Álvarez Jan 3 '13 at 20:25
$\begingroup$ Re: your update: You can refer to this MO question! $\endgroup$ – Mariano Suárez-Álvarez Jan 4 '13 at 17:05
$\begingroup$ It looks like it is the $3\times 3$ lemma (either for complexes in abelian categories, or in triangulated categories). $\endgroup$ – ACL Jan 16 '17 at 20:06
This has a simple interpretation in terms of spectral sequences. Think of the top left 2x2 square of the original square as a triple complex. Call the 3 dimensions $x$ (horizontal), $y$ (vertical), and $z$ ($C_{ij}$ differential). By using either double complex spectral sequence, we see that the total cohomology of the $xy$-plane is just $C_{33}$. Thus the total cohomology of the triple complex is $H^*(C_{33})$.
On the other hand, we can also compute the total cohomology of the triple complex by a spectral sequence that first takes the $z$-cohomology and then takes the $xy$-cohomology. A pair $([\alpha],[\beta])$ in your lemma gives a class that survives this spectral sequence: $g([\alpha])-u([\beta])$ is the $d_1$ differential, and the $d_2$ differential will vanish for degree reasons. The operation taking $([\alpha],[\beta])$ to $[\gamma]$ is just the isomorphism between the limit of this spectral sequence and the total cohomology $H^*(C_{33})$.
Note that in your proof, $\chi$ is only defined up to a cocycle in $C_{22}$, and so $[\gamma]$ will only be defined modulo the image of $vh:H^{k-1}(C_{22})\to H^{k-1}(C_{33})$. This indeterminacy reflects exactly the fact that $([\alpha],[\beta])$ corresponds to an element of the associated graded of a filtration on $H^{k-1}(C_{33})$ (whose first term is the image of $vh$), rather than an element of $H^{k-1}(C_{33})$ itself.
Everything can be reduced to long exact sequences induced by short exact sequences of complexes.
In your setting, there are short exact sequences of complexes as follows
$$0\rightarrow C_{11}\stackrel{(u,g)}\longrightarrow C_{12}\oplus C_{21}\longrightarrow C_{12}\cup_{C_{11}}C_{21}\rightarrow 0$$
$$0\rightarrow C_{12}\cup_{C_{11}}C_{21}\stackrel{(g,-u)}\longrightarrow C_{22}\stackrel{h\nu}\longrightarrow C_{33}\rightarrow 0$$
This produces long exact sequences
$$\cdots\rightarrow H^{k}C_{11}\longrightarrow H^{k}C_{12}\oplus H^{k}C_{21}\longrightarrow H^{k}(C_{12}\cup_{C_{11}}C_{21})\longrightarrow H^{k+1}C_{11}\rightarrow \cdots$$
$$\cdots\rightarrow H^{k}(C_{12}\cup_{C_{11}}C_{21})\longrightarrow H^{k}C_{22}\longrightarrow H^{k}C_{33}\longrightarrow H^{k+1}(C_{12}\cup_{C_{11}}C_{21})\rightarrow \cdots$$
Your hypotheses say that
$$H^{k}C_{12}\oplus H^{k}C_{21}\longrightarrow H^{k}(C_{12}\cup_{C_{11}}C_{21})\longrightarrow H^{k}C_{22}$$
$$([\alpha],[\beta])\mapsto [\alpha-\beta]\mapsto 0$$
therefore there exists $[\gamma]\in H^{k-1}(C_{33})$ such that
$$H^{k-1}C_{33}\longrightarrow H^{k}(C_{12}\cup_{C_{11}}C_{21})$$
$$[\gamma]\mapsto [\alpha-\beta]$$
Now it is enough to compose with the morphism induced in cohomology by
$$\left(\begin{smallmatrix}\nu&0\\0&h\end{smallmatrix}\right)\colon C_{12}\cup_{C_{11}}C_{21}\longrightarrow C_{13}\oplus C_{31}$$
in order to obtain the thesis of your lemma. (BTW, notice that there is a misprint in your subscripts, you must replace two 2s by 1s)
Fernando Muro
One application of your lemma is in differential cohomology. See e.g. Ex. 3.25 in arxiv.
I would be very interested in a generalization of this lemma to triangulated categories. So replace your grid of exact sequences by a grid of triangles in a triangulated category. Instead of cohomology you consider the group Hom(T,...) for a fixed object T. Then you get similar long exact sequence and can state an analogous lemma. Is there a proof in this generality?
ubunke
$\begingroup$ Fernando's proof works for any homological functor on a triangulated category. $\endgroup$ – Eric Wofsey Jan 3 '13 at 21:39
$\begingroup$ Thanks. The claim in the example from your paper looks slightly different to me, but this kind of reference is helpful. $\endgroup$ – Johannes Nordström Jan 3 '13 at 23:19
|
CommonCrawl
|
What are these orientations called in orbit?
Let's say a spacecraft is in an orbit like this one:
If the red arrows point to prograde and retrograde, and the blue arrows point to normal and antinormal, what do the green arrows point to?
In other words, what does one call the orientations that are perpendicular to both the orbit prograde and the orbit normal?
Note that it's not necessarily correct to say "towards the planet" or "away from the planet." In highly eccentric orbits like the one pictured above, both orientations can point "away."
orbital-mechanics orbit terminology
Maxpm
$\begingroup$ In Kerbal Space Program they are often referred to as radial and anti-radial. $\endgroup$ – Avi Jul 29 '17 at 11:48
I'm not sure there's any generally agreed on convention, but spaceships are often regarded as, well, ships, so similar terms will be often used. E.g. for the red arrows that you describe as prograde and retrograde orientation vectors, from a spaceship's point of view and relative to its movement, also ram-facing and wake-facing is often used to describe sides, but could be also forward and aft, or even bow and stern. For the blue orientation vectors, these could then be port for left and starboard for right, relative to the vehicle's movement, facing forward. The green ones are most commonly called the nadir and zenith facing sides, but as the terminology varies depending on who's referring to it, I'd guess there's all kinds of other terms used too, from obvious down, downward, and up, upward, to deck and overhead, or even towards and away from something, in our case the body it orbits around. So for orientation relative to the ship's movement we have:
ram-facing, forward, bow,...
wake-facing, aft, stern,...
starboard, right,...
port, left,...
nadir, deck, down, downward, towards sth,...
zenith, overhead, up, upward, away from sth,...
For example, from Reference guide to the International Space Station, Assembly complete edition, NASA 2010 (PDF), four of these sides are described in the definitions section as:
nadir: Direction directly below (opposite zenith)
port: Direction to the left side (opposite starboard)
starboard: Direction to the right side (opposite port)
zenith: Directly above, opposite nadir
And the remaining two sides mentioned in text as:
ram (forward) or wake (aft) pointing
NASA's Guide to the International Space Station Laboratory Racks Interactive however names direction towards nadir as deck and direction towards zenith overhead, and alternatively also the +/- axial values that follow the right-hand rule more commonly used by astronaut pilots during navigation or to describe station's attitude (such as during docking):
Image above: The International Space Station's coordinate system. Credit: NASA
Alternatively, movement relative to these three axes could be described using aviation terms roll, pitch and yaw to describe attitude of a satellite, but these don't really denote the sides, merely rotation of the body with respect to the x, y, z (in your case red, blue, green) axes in Cartesian coordinate system, respectively.
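To make the roll/pitch/yaw remark concrete, here is a small sketch (my own convention choice; axis order and intrinsic-versus-extrinsic conventions differ between vehicles, so treat this as just one example) that composes a body attitude matrix from rotations about the x, y and z axes:

```python
# A minimal sketch: one common way to build an attitude matrix from roll, pitch and yaw.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def attitude(roll, pitch, yaw):
    # yaw about z, then pitch about y, then roll about x (one common ordering)
    return rot_z(yaw) @ rot_y(pitch) @ rot_x(roll)

print(attitude(np.radians(10), np.radians(5), np.radians(30)))
```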
There might be other terms I didn't think of though, but as always, it will depend on who's using them and if they're referring to the sides from the perspective of the vessel and relative to its movement, or relative to some wider frame of reference, for example with respect to the body it orbits, in which case, alas, I fail to think of other ways these orientation sides could be named, short of describing them with respect to orbital elements in any of the various coordinate systems used, such as Keplerian, Cartesian,... like you did with prograde and retrograde, which are essentially broadly describing orbital inclination with respect to the body's plane of reference.
With specific spacecraft, often its sides are also named by its components or modules, which works irrespective of spacecraft's own movement relative to another object.
TildalWave
$\begingroup$ I'd be surprised if ship terms like port and starboard were used much on spaceships. Because a spaceship does not have a constant attitude relative to its motion vector, "port" can refer to the forward direction one moment, and the nadir direction the next. $\endgroup$ – Hobbes Feb 10 '14 at 17:17
$\begingroup$ Port and starboard are used extensively on the International Space Station, but then, the ISS generally holds a fixed attitude relative to its velocity vector. $\endgroup$ – Tristan Feb 11 '14 at 16:23
$\begingroup$ @Hobbes If you look at this image from inside the Harmony node, for example, you'll notice blue stickers with white text "PORT", "DECK", and "OVHD" for overhead. There's also a sticker with "STBD" for starboard that Greg Chamitoff blocks the view to. In Columbus lab (see e.g. first image here, there are also "FWD" for forward and "AFT" stickers due to its different orientation. $\endgroup$ – TildalWave Apr 29 '15 at 22:19
$\begingroup$ +1 for a good thorough answer. I never heard ram/wake in actual use at JSC, just fwd/aft, but good to include them for completeness. $\endgroup$ – Organic Marble Apr 30 '15 at 1:54
$\begingroup$ Port and starboard were used extensively in the shuttle as well. Here's a switch (pictured from a simulator) using them as labels. imgur.com/a/m6jpbin Note that the attitude of the ship is irrelevant - the port wing is still the port wing. $\endgroup$ – Organic Marble Sep 23 '19 at 1:51
There are 3 directions in any orbit. The typical convention is:
Nadir - the direction towards the center of the planet, straight down. Opposite to nadir is the zenith.
Velocity Vector - the direction of movement. Prograde/retrograde are the common terms: prograde is along the direction of motion in the orbit, retrograde is opposite to it.
Normal - the direction perpendicular to the plane of the orbit, often referred to as the angular momentum vector direction.
See also this PDF; a short vector sketch of these three directions follows below.
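A minimal sketch (my own naming, offered only as an illustration) that builds these three unit vectors from an inertial position and velocity; note that the in-plane third vector points exactly away from the planet's centre only for a circular orbit, which is the point raised in the question:

```python
# Build prograde (red), orbit-normal (blue) and in-plane "radial out" (green) unit vectors.
import numpy as np

def orbit_frame(r, v):
    prograde = v / np.linalg.norm(v)            # along the velocity
    h = np.cross(r, v)                          # orbital angular momentum
    normal = h / np.linalg.norm(h)              # perpendicular to the orbital plane
    radial_out = np.cross(prograde, normal)     # in-plane, perpendicular to the velocity
    return prograde, normal, radial_out

# Example: a slightly eccentric low orbit state vector (hypothetical numbers; km, km/s)
r = np.array([7000.0, 0.0, 0.0])
v = np.array([0.5, 7.4, 0.8])
for name, vec in zip(("prograde", "normal", "radial_out"), orbit_frame(r, v)):
    print(name, vec)
```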
PearsonArtPhoto♦
$\begingroup$ The zenith and nadir always go straight in and out of the planet's center, right? So they're not necessarily perpendicular to the velocity vector? $\endgroup$ – Maxpm Feb 7 '14 at 14:42
$\begingroup$ I'm pretty sure that the velocity vector is perpendicular to the planet's center in any case, at least for relatively uniform gravity objects, like all planets and most large moons. $\endgroup$ – PearsonArtPhoto♦ Feb 7 '14 at 14:45
$\begingroup$ I don't think so. In the question diagram above, the green arrows miss the planet's center. If the planet were smaller, they would miss it entirely. The velocity vector (represented by the red arrows) is only tangential to the planet's surface if the orbit is perfectly circular or the spacecraft is at its periapsis or apoapsis. $\endgroup$ – Maxpm Feb 7 '14 at 14:56
$\begingroup$ For most orbital mechanics calculations, the green arrows will be aligned with the R-bar direction, i.e., pointing directly at the planet's center. This does lead to a convention where the three axes are not mutually orthogonal, but the vector pointing to the planet center is more meaningful than an inward- or outward-pointing vector normal to the velocity vector. $\endgroup$ – Tristan Feb 11 '14 at 16:26
$\begingroup$ @Nickolai, the radius and velocity vectors will only be orthogonal in a circular orbit. $\endgroup$ – Tristan Feb 11 '14 at 21:33
The mathematical names for those directions are tangent (the red arrows), normal (the green arrows), and binormal (the blue arrows). Geometers have made extensive use of these, so much so that these directions are a key part of the Fundamental Theorem of Curves. For example, see http://en.wikipedia.org/wiki/Frenet-Serret_formulas, http://mathworld.wolfram.com/FundamentalTheoremofSpaceCurves.html, and http://math.rice.edu/~hardt/401F03/ftc.pdf.
This theorem isn't of much use in orbital mechanics because torsion involves a third derivative of position with respect to time. Orbital mechanics is a study of second derivatives: F=ma.
The directions along the red arrows (v-bar) are useful in orbital mechanics because these are the directions along which you want to thrust to minimize gravity losses. The blue arrows are useful because angular velocity points in this direction. The green arrows? They're useful for geometers and for describing vehicles flying through an atmosphere. They're not so useful in orbital mechanics, which is perhaps why there isn't a standard orbital mechanics name for this direction.
When looking at the uncertainties in where a spacecraft is, those directions are oftentimes called along track (the red arrows), cross track (the blue arrows), and radial (the green arrows). One can look at "radial" as being either a bit of a misnomer or as being spot on correct. It's a misnomer in the sense that "radial" only points "radially" (toward / away from the planet) in the case of a circular orbit. It's spot on in the sense that "radial" always points toward / away from the instantaneous center of curvature.
A related set of directions is the local vertical / local horizontal system, or LVLH for short. In this system, +Z points to the center of the Earth, +Y points opposite the orbital angular velocity, and +X completes the right hand coordinate system (i.e., $\hat x = \hat y \times \hat z$). This means that $\hat x$ points along the velocity vector in the case of a circular orbit. This labeling a bit arbitrary. The Clohessy-Wilshire equations use +X as pointing away from the Earth, +Z as pointing along the angular momentum vector, and +Y completing the right hand coordinate system. Either the LVLH frame or CW frame used to describe the orbital mechanics of a spacecraft rendezvousing with the ISS.
David Hammen
$\begingroup$ Yes, these 3 unit vectors were missing from answers, but how do you use them to name the 6 orientations or sides of an object in orbit? The same problem is with using radius vectors. How would one say, for example, "please meet me at the [?] of the spacecraft"? Surely, saying "... [negative normal facing side] ..." sounds awkward? $\endgroup$ – TildalWave Feb 28 '14 at 1:16
$\begingroup$ For now, "I'll meet you in Zvezda" (or the Cupola, or Kibo, etc.) works just fine on the space station. The names of the modules don't change with orientation. There's no need to say "I'll meet you at X" in the Soyuz because there's no room in the Soyuz to get displaced. $\endgroup$ – David Hammen Feb 28 '14 at 8:01
In Kerbal Space Program, they are called Radial and Anti-Radial on the Nav-Ball.
Radial and anti-radial
These vectors are parallel to the orbital plane, and perpendicular to the prograde vector. The radial (or radial-in) vector points inside the orbit, towards the focus of the orbit, while the anti-radial (or radial-out) vector points outside the orbit, away from the body. Performing a radial burn will rotate the orbit around the craft like spinning a hula hoop with a stick. Radial burns are usually not an efficient way of adjusting one's path - it is generally more effective to use prograde and retrograde burns. The maximum change in angle is always less than 90°; beyond this point the orbit would pass through the center of mass of the orbited body and the ship would traverse a slow spiral in towards the orbited body.
- KSP Wiki - Maneuver Node Directions
Ehryk
$\begingroup$ If you're curious about their use, I rarely use Radial/Anti-radial burns except during rendezvous with a mostly aligned but poorly timed trajectory as it allows me to 'catch up' or 'fall back' with respect to the other craft. $\endgroup$ – Ehryk Apr 29 '15 at 22:13
$\begingroup$ I came here to post the same thing! +1 $\endgroup$ – Magic Octopus Urn Jul 9 '18 at 16:39
$\begingroup$ prograde / retrograde are for changing periapsis / apoapsis altitudes. Normal / antinormal are for changing orbit inclination. Radial / antiradial are for moving periapsis and apoapsis around the orbit without changing their altitudes. $\endgroup$ – Florian Castellane Aug 4 '18 at 18:28
Green arrows are the "radius" vectors, the inner one points to the center of the planet. Red arrows are velocity vectors. One of them will point in the direction of motion of the satellite, the other one, of course, in the opposite direction. Blue arrows point in the direction of angular momentum, a quantity often labelled as "h".
The radius vector is often called the R-bar and the velocity vector the V-bar. You see this often in discussions about vehicles docking with the International Space Station. When they say a vehicle is making an R-bar approach, it is essentially "climbing up" the green arrow from below (so the ship is between the Earth and the station). V-bar approaches, if I'm not mistaken, typically take place from behind.
Source: AAE 532 at Purdue University, graduate level course in orbital mechanics.
Nickolai
$\begingroup$ The green arrows do not necessarily point to the center of the planet. See my comments on PearsonArtPhoto's answer. $\endgroup$ – Maxpm Feb 10 '14 at 18:22
$\begingroup$ I'm assuming that the coordinate system is orthogonal and in the orbital plane, therefore if the red vector is pointing in the direction of velocity, the green vector must point to the center of the planet. If it's not orthogonal or not in the orbital plane, then it's just a random set of vectors that don't do anyone any good. Unless it's part of some weird scavenger hunt. $\endgroup$ – Nickolai Feb 11 '14 at 19:14
$\begingroup$ Eccentric orbits are ellipses with the planet's center at one of the foci. At any given point on an ellipse, the line perpendicular to the tangent does not necessarily pass through either foci. Your statement would be true for a perfectly circular orbit, but not the one pictured. $\endgroup$ – Maxpm Feb 12 '14 at 4:07
$\begingroup$ You're right, radius and velocity vectors are not perpendicular for an elliptical orbit, they're separated by the flight path angle gamma, which is computed using the "local horizon" which is an imaginary line that is perpendicular to the radius vector. Wow, I'm rustier than I thought! $\endgroup$ – Nickolai Feb 14 '14 at 16:28
During the Apollo moon landings, the astronauts referred to 'forward' and 'down' for the red and green vectors.
In the Gemini 12 voice comms transcript (page 29 of a 500-page PDF), a maneuver is referred to as 'Posigrade up south' (in @Maxpm's diagram these directions refer to red, green, blue in that order).
The spacecraft attitude is described as 'yaw 1 right, pitch 4 up', and they refer to the 'thrusters aft'.
Hobbes
There are specific body-centered coordinate frames describing these directions; a small numerical sketch of both follows the definitions below. The two most commonly used are:
RIC (aka UVW, RTN): Radial, In-Track, Cross-Track
Radial: In the direction of the satellite position vector
In-Track: Cross-Track x Radial (where 'x' denotes the vector cross product)
Cross-Track: Position x Velocity (aka angular momentum vector)
TNW: Tangential, Normal, W (angular momentum)
Tangential: In the direction of the Velocity vector
Normal: Normal to the velocity vector & down - W x T
W: P x V
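As mentioned above, here is a minimal numpy sketch of both conventions (added for illustration; the vectors are made up, and the right-handed orderings shown are one common choice):

```python
import numpy as np

def unit(x):
    return x / np.linalg.norm(x)

def ric_axes(r, v):
    """Radial / In-Track / Cross-Track unit vectors from position r and velocity v."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    radial = unit(r)                          # along the satellite position vector
    cross_track = unit(np.cross(r, v))        # along the angular momentum
    in_track = np.cross(cross_track, radial)  # completes the frame, roughly along the velocity
    return radial, in_track, cross_track

def tnw_axes(r, v):
    """Tangential / Normal / W unit vectors."""
    r, v = np.asarray(r, float), np.asarray(v, float)
    t = unit(v)               # along the velocity
    w = unit(np.cross(r, v))  # angular momentum direction
    n = np.cross(w, t)        # normal to the velocity, pointing "down"
    return t, n, w

# Made-up values (km, km/s) for a slightly eccentric orbit
r, v = [7000.0, 500.0, 0.0], [-0.5, 7.4, 0.0]
print(ric_axes(r, v))
print(tnw_axes(r, v))
```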
CoAstroGeek
Antiplasmodial activity of Vernonia adoensis aqueous, methanol and chloroform leaf extracts against chloroquine sensitive strain of Plasmodium berghei in vivo in mice
Gebreyohannes Zemicheal1 &
Yalemtsehay Mekonnen ORCID: orcid.org/0000-0001-8640-21711
The aim of this study was to investigate the antiplasmodial effects of the crude aqueous, methanol and chloroform extracts of the leaves of Vernonia adoensis in Plasmodium berghei infected Swiss albino mice using Peters' 4-day suppressive test.
Twenty mice (5/group) were used for the toxicity test, and 5 mice per group were used for each extract and control group. The aqueous, methanol and chloroform extracts of V. adoensis leaves showed statistically significant (P < 0.05) suppression of parasitaemia in the treated mice. The highest inhibition was in the methanol extract treated mice (83.36%), followed by the aqueous (72.26%) and chloroform (54.34%) extracts, at an oral dose of 600 mg/kg b.wt. Each extract prevented body weight loss and packed cell volume (PCV) reduction compared to the negative control groups. The survival time of the mice treated with the chloroform extract, based on Kaplan–Meier analysis, was 12.53 ± 0.37 days at 600 mg/kg b.wt, while that of the negative control was 7.93 ± 0.37 days. The LD50 of the extracts was greater than 3000 mg/kg body weight. In conclusion, the crude leaf extracts of V. adoensis demonstrated an antiplasmodial effect in vivo. P. berghei infection was suppressed in a dose-dependent manner, supporting the traditional use of the plant.
Malaria is still a public health problem in many parts of the tropics. Plasmodium falciparum and Plasmodium vivax are the most fatal species [1,2,3,4,5]. Currently, quinine and artemisinin are the two effective drugs obtained from two traditional medicinal plants: quinine was obtained from the bark of the Cinchona tree [6, 7] and artemisinin from the plant Artemisia annua [8, 9]. However, in recent years some degree of resistance to these drugs has been reported [10, 11].
Vernonia adoensis (V. adoensis) is among the numerous traditionally used medicinal plants in Ethiopia. It is used for malaria, gastro-intestinal complaints, muscle spasm and for healing wounds [12,13,14,15]. In Tanzania, a root infusion is taken for stomach-pain and as an anti-tuberculosis remedy, and fresh roots, sliced and cooked, are taken with milk against gonorrhoea [12]. The antimicrobial [9], antioxidant and antipyretic [14] properties of V. adoensis have been reported previously.
In this study, the antiplasmodial effect of the crude leaf extracts of V. adoensis against chloroquine sensitive P. berghei in Swiss albino mice was tested.
Plant material collection
Fresh leaves of V. adoensis, which grows freely around the Gondar Hospital, were collected from Gondar town, Ethiopia, during October 2012. Field study and plant material collection took place upon official authorization, in accordance with the country's laws and following international guidelines. The identification and authentication of the plant specimen was carried out by Professors Ensermu Kelbessa and Sebsebe Demissew at the National Herbarium of the Addis Ababa University (AAU). A voucher specimen (number GZ 02/2012) of the plant sample was deposited at the Herbarium.
Preparation of crude plant extracts
The preparation of crude extracts was done based on our previous published works [16, 17]. In summary the collected plant leaves of V. adoensis were cleaned and air dried at room temperature under a shade in the Biomedical Science Laboratory of AAU. An electrical grinding mill (IEC, 158 VDE 0660, Germany) was used to grind the leaves. Sensitive digital weighing balance (AND: FX-320, Japan) was used to weigh the powdered plant material. Each extract was prepared in 1:10 ratio (w/v). Measurement of the percentage yield of each extract was done.
Experimental animals
We obtained the Swiss albino mice (25–34 g) from the Animal House of the College of Natural Sciences of AAU. The mice were housed in standard cages and fed a standard pellet diet and tap water ad libitum. All experiments were done three times, and the tables represent the mean of the three experiments in each case. The Ethics Committee of the College of Natural Sciences of Addis Ababa University gave approval to run the experiment.
Toxicity test
Acute toxicity (single dose exposure) of the aqueous, methanol and chloroform extracts from the leaves of V. adoensis was evaluated in 3-h fasted Swiss albino mice through oral administration of 2000, 2500 and 3000 mg/kg body weight. The Organization for Economic Cooperation and Development (OECD) guideline 425 procedure was followed to test the toxicity.
Evaluation of the antiplasmodial activity
The in vivo antiplasmodial activity of the plant extracts was evaluated against chloroquine (CQ) sensitive P. berghei in mice using the standard 4-day suppressive method [18]. One ml of blood, with a rising parasitaemia of about 33%, was obtained by gentle heart puncture from a donor mouse anesthetized with chloroform. The 1 ml of blood was then diluted with 4 ml of physiological saline, so that one ml of the diluted blood contained about 5 × 10⁷ infected red blood cells. The procedure followed has been described previously [16, 17]. Twenty-five male mice were infected with P. berghei and randomly divided into five groups of five mice per group: three test groups and two control groups (dH2O or 3% Tween 80 as negative control, CQ as positive control). Each mouse was inoculated intraperitoneally on day 0 (D0) with 0.2 ml of infected blood containing approximately 1 × 10⁷ P. berghei parasitized red blood cells as the standard inoculum. The doses given were 200 mg/kg, 400 mg/kg and 600 mg/kg body weight, and 0.2 ml of CQ at 25 mg/kg. A standard intragastric tube was used to administer the extracts and the controls. Treatment was started 3 h after infection on D0 and continued for four consecutive days on a 24 h schedule.
On the 5th day, blood samples were collected from a tail snip of each mouse. Thin smears were prepared as described previously [16, 17]. The percentage parasitaemia and percentage suppression were calculated as:
$${\text{Percentage parasitaemia}} = \frac{\text{Number of parasitized red blood cells}}{\text{Total number of red blood cells (RBC) examined}} \times 100$$

$${\text{Percentage suppression}} = \frac{\text{Parasitaemia in negative control} - \text{Parasitaemia in treated}}{\text{Parasitaemia in negative control}} \times 100$$
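As a purely illustrative worked example (the numbers are made up, not taken from this study): if the negative control group shows 40% parasitaemia and a treated group shows 10%, then

$${\text{Percentage suppression}} = \frac{40 - 10}{40} \times 100 = 75\%$$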
Determination of body weight and packed cell volume (PCV)
The body weight and PCV of each mouse in all the groups were recorded before and after infection. The average body weight was calculated as:
$${\text{Mean body weight }} = \frac{\text{Total weight of mice in a group}}{\text{Total number of mice in that group}}$$
Heparinized microhematocrit capillary tubes, filled up to 3/4 of their length, were used to collect blood from the tail of each mouse for PCV measurement. The tubes were sealed with crystal seal and placed in the microhematocrit centrifuge (Microhaematocrit Centrifuge, 583298, Hawksley & Sons Ltd, England) with the sealed ends outwards. The samples were centrifuged at 12,000 rpm for 4 min. The volume of the total blood and the volume of red blood cells were measured, and PCV was calculated as:
$${\text{PCV }} = \frac{\text{Volume of erythrocytes in a given volume of blood}}{\text{Total blood volume}} \times 100$$
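For illustration only (made-up values): an erythrocyte column of 18 mm in a total blood column of 40 mm gives

$${\text{PCV}} = \frac{18}{40} \times 100 = 45\%$$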
Determination of mean survival time
Mortality was observed daily, and the number of days from the time of inoculation of the parasite up to death was recorded for each mouse in the treatment and control groups throughout the follow-up period. The mean survival time (MST) for each group was calculated using SPSS version 20, applying Kaplan–Meier analysis.
Results of the study were expressed as mean ± standard error of the mean (M ± SEM). Data obtained for parasitaemia, body weight, PCV and survival times were analyzed using Windows SPSS version 15. One-way analysis of variance (ANOVA) and the paired-samples Student's t test were used to compare results among and within groups for differences between initial and final results. Results were considered statistically significant at the 95% confidence level (P < 0.05).
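As a minimal sketch of the kind of comparisons described, the following Python snippet runs a one-way ANOVA across dose groups and a paired t test between D0 and D4; all group names and values are hypothetical and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical parasitaemia values (%) for a negative control and three dose groups
ctrl = np.array([38.2, 41.5, 39.9, 40.8, 37.6])
d200 = np.array([25.1, 27.3, 24.8, 26.0, 25.5])
d400 = np.array([15.2, 14.8, 16.1, 15.5, 14.9])
d600 = np.array([6.3, 7.1, 6.8, 6.5, 7.0])

# One-way ANOVA across the four groups
f_stat, p_anova = stats.f_oneway(ctrl, d200, d400, d600)

# Paired t test, e.g. body weight (g) of the same mice on D0 vs D4 (hypothetical)
weight_d0 = np.array([28.1, 30.2, 27.5, 29.0, 31.3])
weight_d4 = np.array([28.4, 30.0, 27.9, 29.3, 31.1])
t_stat, p_paired = stats.ttest_rel(weight_d0, weight_d4)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Paired t test: t = {t_stat:.2f}, p = {p_paired:.4f}")
```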
No death occurred in any of the groups at any dose level during the entire 24 h observation period. Furthermore, during the gross physical and behavioral observation of the experimental mice, the groups treated with the aqueous and methanol (99.5%) extracts at the dose of 3000 mg/kg showed some reduction in feeding activity, hair erection and rigidity after administration of the doses. In general, oral administration of the aqueous, methanol and chloroform extracts of V. adoensis at doses of 2000, 2500 and 3000 mg/kg did not produce any significant changes in usual behavior and caused no mortality. Therefore, the LD50 is greater than 3000 mg/kg body weight.
Antiplasmodial activity
The results of this study indicated that the aqueous, methanol and chloroform leaf extracts of V. adoensis exhibited potent in vivo activity against the P. berghei malaria parasite (Table 1). The parasitaemia suppressive effects produced by all the test extracts were significant (P < 0.05) compared with their respective negative control groups. The highest percent suppression, at 600 mg/kg body weight, was 83.36%. The mice treated with the standard drug (chloroquine, 25 mg/kg) were completely free of parasites in all the experiments using the aqueous, methanol and chloroform leaf extracts of V. adoensis (Table 1).
Table 1 Antiplasmodium activity of aqueous, methanol and chloroform extract of V. adoensis leaves against P. berghei in Swiss albino mice
Body weight and packed cell volume (PCV)
The plant extracts prevented body weight loss, and the mean body weight on D0 and day 4 (D4) did not show a statistically significant difference (P > 0.05), except at the lower dose level of 200 mg/kg for both the aqueous and chloroform extracts. Therefore, the prevention of body weight reduction by each extract was dose dependent. The body weight of the mice in the extract treated groups on D4 was significantly higher (P < 0.05) than that of the mice in the negative control group (Table 2).
Table 2 Effect of V. adoensis of aqueous, methanol and chloroform leaf extracts on the body weights of P. berghei infected mice
Table 3 shows that the test doses of the V. adoensis extracts prevented the PCV reduction caused by the parasite infection. PCV in the respective negative control groups was significantly (P < 0.05) reduced on D4, while no significant change (P > 0.05) was observed in the extract treated groups. Furthermore, the analysis of variance performed between the extract treated groups and the corresponding negative control groups showed highly significant variation.
Table 3 Effect of V. adoensis of aqueous, methanol and chloroform leaf extracts on the packed cell volume of P. berghei infected Swiss albino mice
Mean survival time
All the experimental mice treated with the different extracts had increased, dose dependent mean survival times (e.g. the chloroform extract exhibited the maximum MST of 12.53 days at 600 mg/kg b.wt) compared to the respective negative control groups (maximum MST of 7.93 days). All the mice treated with CQ survived for 3 months (Additional file 1).
The aqueous, methanol and chloroform leaf extracts of V. adoensis showed different degrees of parasitaemia inhibition in a dose-related fashion. The highest suppression was recorded in the methanol extract treated mice (83.36% at an oral dose of 600 mg/kg). Previous studies have demonstrated the presence of secondary metabolites such as alkaloids, steroids, saponins, flavonoids, anthraquinones, terpenoids, sterols, diterpenoids, glycosides, tannins and sesquiterpene lactones in Vernonia species [7, 19]. The parasitaemia inhibition could be attributed to the presence of these metabolites.
Furthermore, the present result is in agreement with the results of a previous in vivo study by Abosi and Raseroka [20], which reported a suppressive effect of ethanolic extracts from the leaves and root bark of Vernonia amygdalina: the leaf extract at 500 mg/kg resulted in 67% suppression of parasitaemia, while the root-bark extract exerted 53.5% suppression at the same dose.
Similarly, a study carried out by Melariri and co-workers [21] showed marked growth inhibition of parasites, with values of 85% and 95%, by the combination of dichloromethane extracts of leaves of Cymbopogon citratus and V. amygdalina at dose levels of 400 and 600 mg/kg b.wt, respectively, against a chloroquine sensitive strain of P. berghei in mice. The V. amygdalina leaf dichloromethane extract alone exerted 95.8% parasitaemia suppression at a higher dose of 800 mg/kg. In addition, the aqueous crude extract of the aerial part of V. ambigua produced significant (P < 0.05), dose dependent suppression of parasite growth [10].
The statistical multiple comparisons of the effect of each extract on body weight and PCV among groups on the 5th day post-treatment, and between D0 and D4, showed the two parameters to be within the normal range of values established for mice by Flecknell [15]: adult body weight of 25–40 g and PCV of 32–54%. Therefore, the absence of any significant differences in the body weight and PCV parameters supports the safety (non-toxicity) of V. adoensis at all doses administered to the experimental mice. Previous studies on different species of Vernonia, together with the present work, justify the potential of this genus as an antiplasmodial agent.
In conclusion, the crude leaf extracts of V. adoensis demonstrated an antiplasmodial effect in Swiss albino mice. P. berghei infection was suppressed in a dose-dependent manner. The traditional use of the plant thus has some relevance based on this study.
The study was done only on the leaves of the plant and only on crude extracts. It would be informative to test sub-fractions of the different extracts. In addition, using other parts of the plant, such as the roots and flowers, would have helped to reach a stronger conclusion.
AAU: Addis Ababa University
ANOVA: analysis of variance
b.wt: body weight
CQ: chloroquine
D0: day zero
MST: mean survival time
M ± SEM: mean ± standard error of the mean
NC: negative control
PCV: packed cell volume
RBC: red blood cells
Ayele DG, Zewotir TT, Mwambi HG. Prevalence and risk factors of malaria in Ethiopia. Malaria J. 2012;11:1–9.
RBM progress and impact series. World Malaria Day: Africa Update, Roll Back Malaria. 2010.
Schlagenhauf P, Petersen E. Malaria chemoprophylaxis: strategies for risk groups. Clin Microbiol Rev. 2008;21:466–72.
WHO: Key facts about malaria. Geneva: World Health Organization; 2018.
Geleta G, Ketema T. Severe Malaria Associated with Plasmodium falciparum and P. vivax among Children in Pawe Hospital, Northwest Ethiopia. Malaria Research and Treatment, Hindawi Publishing Corporation. 2016. p. 1–7.
Achan OA, Talisuna A, Erhart A, Yeka A, Tibenderana K, Baliraine N, Rosenthal J, Alessandro D. Quinine, an old anti-malarial drug in a modern world: role in the treatment of malaria. Malaria J. 2011;10:144–55.
Builders IM, Wannang NN, Ajoku AG, Builders FP, Orisadipe A, Aguiyi CJ. Evaluation of the antimalarial potential of Vernonia ambigua Kotschy and Peyr (Asteraceae). Int J Pharmacol. 2011;7:238–47.
Whegang Y, Tahar R, Foumane N, Soula G, Gwét H, Thalabard C, Basco K. Efficacy of non-artemisinin and artemisinin-based combination therapies for uncomplicated falciparum malaria in Cameroon. Malaria J. 2010;9:56–65.
Chitemere A, Mukanganyama S. In vitro antibacterial activity of selected medicinal plants from Zimbabwe. Afr J Plant Sci Biotechnol. 2011;5:1–7.
Dondorp AM, Nosten F, Yi P, Das D, Phyo AP, Tarning J. Artemisinin resistance in Plasmodium falciparum malaria. N Engl J Med. 2009;361(5):455–67.
Amato R, Lim P, Miotto O, Amaratunga C, Dek D. Genetic markers associated with dihydroartemisinin-piperaquine failure in Plasmodium falciparum malaria in Cambodia: a genotype-phenotype association study. Lancet Infect Dis. 2017;17(2):164–73.
Dalziel JM. The useful plants of West Tropical Africa. Being an appendix to the flora of West Tropical Africa. Crown Agents for the Colonies, London. 1955. p. 612.
Fowler G. Traditional fever remedies. A list of zambian plants. 2006.
Opoku R, Nethengwe F, Dludla P, Madida T, Shonhai A, Smith P, Singh M. Larvicidal, antipyretic and antiplasmodial activity of some zulu medicinal plants. Planta Med. 2011;77:1255–62.
Flecknell PA. Non-surgical experimental procedures. In: Tuffery A, editor. Laboratory animals: an introduction for new experimenters. New York: Wiley; 1987. p. 248–9.
Tekalign D, Mekonnen Y, Animut A. In Vivo Antimalarial Activities of Clerodendrum myricoides, Dodonea angustifolia and Aloe debrana Against Plasmodium berghei. Ethiop J Health Dev. 2010;24(1):25–9.
Zerihun TM, Petros B, Mekonnen Y. Evaluation of anti-plasmodial activity of crude and column fractions of extracts from Withania somnifera. Turk J Biol. 2013;37:147–50.
Peter W, Portus H, Robinson L. The four-day suppressive in vivo antimalarial test. Ann Trop Med Parasitol. 1975;69:155–71.
Ajayia E, Adelekeb T, Adewumia M, Adeyemia A. Antiplasmodial activities of ethanol extracts of Euphorbia hirta whole plant and Vernonia amygdalina leaves in Plasmodium berghei-infected mice. J Taibah Univ Sci. 2017;11:831–5.
Abosi AO, Raseroka BH. In vivo antimalarial activity of Vernonia amygdalina. Br J Biomed Sci. 2003;60:89–91.
Melariri P, Campbell W, Etusim P, Smith P. In vitro and in vivo antiplasmodial activities of extracts of Cymbopogon citratus staph and Vernonia amygdalina delile leaves. J Nat Prod. 2011;4:164–72.
GZ wrote the research proposal, collected the plant material and did the experiment and drafted the manuscript. YM helped develop the research proposal and helped in analysis and interpretation of the results and in finalizing the manuscript. Both authors read and approved the final manuscript.
The authors thank the Addis Ababa University, School of Graduate Studies, for financial support. Professors Ensermu Kelbessa and Sebsebe Demissew are thanked for the identification and authentication of the plant material. Our thanks also go to Mrs. Amelework Eyado and Mrs. Tsige Yadessa for their technical assistance in the laboratory.
The animal experiment was done upon approval by the Ethics Committee of the Department of Biology of the College of Natural Sciences, Addis Ababa University.
This work was funded by the School of Graduate Studies of the Addis Ababa University, Ethiopia. The funding body was involved in approving the overall activity of the study.
Department of Biology, College of Natural and Computational Sciences, Addis Ababa University, P.O.Box 1176, Addis Ababa, Ethiopia
Gebreyohannes Zemicheal & Yalemtsehay Mekonnen
Correspondence to Yalemtsehay Mekonnen.
Additional file 1.
Kaplan–Meier analysis.
Zemicheal, G., Mekonnen, Y. Antiplasmodial activity of Vernonia adoensis aqueous, methanol and chloroform leaf extracts against chloroquine sensitive strain of Plasmodium berghei in vivo in mice. BMC Res Notes 11, 736 (2018). https://doi.org/10.1186/s13104-018-3835-2
Vernonia adoensis
Plasmodium berghei
Antiplasmodial
Parasitaemia
Chinese Physics C, 2020, Vol. 44, Issue 2: 024001. DOI: 10.1088/1674-1137/44/2/024001
Experimental study of the elastic scattering of 10Be on 208Pb at the energy of around three times the Coulomb barrier
Fang-Fang Duan 1,2 ,
Yan-Yun Yang 2,3,, ,
Dan-Yang Pang 4 ,
Bi-Tao Hu 1 ,
Jian-Song Wang 5,2,3,, ,
Kang Wang 2 ,
Guo Yang 2,3 ,
Valdir Guimarães 6 ,
Peng Ma 2 ,
Shi-Wei Xu 2,3 ,
Xing-Quan Liu 7 ,
Jun-Bing Ma 2,3 ,
Zhen Bai 2 ,
Qiang Hu 2 ,
Shu-Ya Jin 2,3,1 ,
Xin-Xin Sun 1 ,
Jia-Sheng Yao 5 ,
Hang-Kai Qi 5 ,
Zhi-Yu Sun 2,3
School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
School of Nuclear Science and Technology, University of Chinese Academy of Sciences, Beijing 100080, China
School of Physics and Nuclear Energy Engineering, Beihang University, Beijing 100191, China
School of Science, Huzhou University, Huzhou 313000, China
Instituto de Física, Universidade de São Paulo, Rua do Matão, 1371, São Paulo 05508-090, SP, Brazil
Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, China
Elastic scattering of 10Be on a 208Pb target was measured at $ E_{\rm Lab} $ = 127 MeV, which corresponds to three times the Coulomb barrier. The secondary 10Be beam was produced at the Radioactive Ion Beam Line in Lanzhou of the Heavy-Ion Research Facility in Lanzhou. The angular distribution of elastic scattering in the 10Be + 208Pb system shows a typical Fresnel diffraction peak. Optical model analysis of the angular distribution was performed using the Woods-Saxon, double-folding and global potentials. With the global potential, different density distributions were used. The results indicate that different density distributions for the projectile induce distinct effects in the angular distribution.
elastic scattering ,
radioactive ion beam ,
angular distribution ,
optical model ,
density distribution
[1] H. Geiger and E. Marsden, Philos. Mag., 25: 604-623 (1913) doi: 10.1080/14786440408634197
[2] E. Rutherford, Philos. Mag., 21: 669 (1911) doi: 10.1080/14786440508637080
[3] W. E. Frahn, Nucl. Phys., 75: 577 (1966) doi: 10.1016/0029-5582(66)90979-5
[4] R. Chatterjee and R. Shyam, Phys. Rev. C, 66: 061601 (2002)
[5] F. M. Nunes and I. J. Thmpson, Phys. Rev. C, 57: 2818 (1998) doi: 10.1103/PhysRevC.57.R2818
[6] J. P. Fernández-García, M. Rodríguez-Gallardo, M. A. G. Alvarez et al, Nucl. Phys. A, 840: 19 (2010) doi: 10.1016/j.nuclphysa.2010.03.013
[7] J. J. Kolata, V. Guimarães, and E. F. Aguilera, Eur. Phys. J. A, 52: 123 (2016) doi: 10.1140/epja/i2016-16123-1
[8] L. F. Canto, P. R. S. Gomes, R. Donangelo et al, Physics Reports, 96: 1 (2015)
[9] N. Keeley, N. Alamanos, K. W. Kemper et al, Prog. Part. Nucl. Phys., 63: 396 (2009) doi: 10.1016/j.ppnp.2009.05.003
[10] M. E. Brandan and G. R. Satchler, Physics Reports, 285: 143 (1997) doi: 10.1016/S0370-1573(96)00048-8
[11] E. Liatard, J. F. Bruandet, F. Glasser et al, Eur. Lett., 13: 401 (1990) doi: 10.1209/0295-5075/13/5/004
[12] I. Tanihata, T. Kobayashi, O. Yamakawa et al, Phys. Lett. B, 206: 592 (1988) doi: 10.1016/0370-2693(88)90702-2
[13] Y. Y. Yang, X. Liu, D. Y. Pang et al, Phys. Rev. C, 98: 044608 (2018) doi: 10.1103/PhysRevC.98.044608
[14] Y. Y. Yang, J. S. Wang, Q. Wang et al, Phys. Rev. C, 87: 044613 (2013) doi: 10.1103/PhysRevC.87.044613
[16] S. L. Jin, J. S. Wang, Y. Y. Yang et al, Phys. Rev. C, 91: 054617 (2015) doi: 10.1103/PhysRevC.91.054617
[17] W. H. Ma, J. S. Wang, Y. Y. Yang et al, Nucl. Sci. Tech., 28: 177 (2017) doi: 10.1007/s41365-017-0334-4
[18] A. Arazi, J. Casal, M. Rodríguez-Gallardo et al, Phys. Rev. C, 97: 044609 (2018) doi: 10.1103/PhysRevC.97.044609
[19] P. Descouvemont and N. Itagaki, Phys. Rev. C, 97: 014612 (2018) doi: 10.1103/PhysRevC.97.014612
[20] M. Mazzocco, N. Keeley, A. Boiano et al, Phys. Rev. C, 100: 024602 (2019) doi: 10.1103/PhysRevC.100.024602
[21] A. Di Pietro, V. Scuderi, A. M. Moro et al, Phys. Rev. C, 85: 054607 (2012) doi: 10.1103/PhysRevC.85.054607
[22] L. Acosta, M. A. G. Álvarez, M. V. Andrés et al, Eur. Phys. J. A, 42: 461 (2009) doi: 10.1140/epja/i2009-10822-6
[23] W. Y. So, K. S. Choi, M.-K. Cheoun et al, Phys. Rev. C, 93: 054624 (2016) doi: 10.1103/PhysRevC.93.054624
[24] Y. Y. Yang, X. Liu, D. Y. Pang, Phys. Rev. C, 94: 034614 (2016) doi: 10.1103/PhysRevC.94.034614
[25] Y. Kanada-En'yo, H. Horiuchi, A. Doté, Phys. Rev. C, 60: 064304 (1999) doi: 10.1103/PhysRevC.60.064304
[26] D. Dell'Aquila, L. Acosta, F. Amorini et al, EPJ Web of Conferences, 117: 06011 (2016) doi: 10.1051/epjconf/201611706011
[27] Y. Kanada-En'yo, Phys. Rev. C, 94: 024326 (2016) doi: 10.1103/PhysRevC.94.024326
[28] N. Itagaki, S. Hirose, T. Otsuka et al, Phys. Rev. C, 65: 044302 (2002)
[29] J. J. Kolata, E. F. Aguilera, F. D. Becchetti et al, Phys. Rev. C, 69: 047601 (2004) doi: 10.1103/PhysRevC.69.047601
[30] C. Signorini, A. Andrighetto, M. Ruan et al, Phys. Rev. C, 61: 061603(R) (2000) doi: 10.1103/PhysRevC.61.061603
[31] S. Raman, C. W. Nestor Jr, P. Tikkanen, At. Data Nucl. Data Tables, 78: 1 (2001) doi: 10.1006/adnd.2001.0858
[32] M. Dasgupta, P. R. S. Gomes, D. J. Hinde et al, Phys. Rev. C, 70: 024606 (2004) doi: 10.1103/PhysRevC.70.024606
[33] H. J. Votava, T. B. Clegg, E. J. Ludwig et al, Nucl. Phys. A, 204: 529 (1973) doi: 10.1016/0375-9474(73)90393-X
[34] A. M. Moro and R. Crespo, Phys. Rev. C, 85: 054613 (2012) doi: 10.1103/PhysRevC.85.054613
[35] V. Pesudo, M. J. G. Borge, A. M. Moro et al, Phys. Rev. Lett., 118: 152502 (2017) doi: 10.1103/PhysRevLett.118.152502
[36] Z. Y. Sun, W. L. Zhan, Z. Y. Guo et al, Chin. Phys. Lett., 15: 790 (1998) doi: 10.1088/0256-307X/15/11/004
[37] Y. Y. Yang, J. S. Wang, Q. Wang et al, Nucl. Instrum. Methods Phys. Res. Sect. A, 503: 496 (2003) doi: 10.1016/S0168-9002(03)01005-2
[38] J. W. Xia, W. L. Zhan, B. W. Wei et al, Nucl. Instrum. Methods Phys. Res. Sect A, 488: 11 (2002) doi: 10.1016/S0168-9002(02)00475-8
[39] W. L. Zhan, H. S. Xu, G. Q. Xiao et al, Nucl. Phys. A, 834: 694c (2010) doi: 10.1016/j.nuclphysa.2010.01.126
[40] J. Cub, C. Gund, D. Pansegrau et al, Nucl. Instrum. Methods Phys. Res. Sect A, 453: 522 (2000) doi: 10.1016/S0168-9002(00)00461-7
[41] H. Kumagai, T. Ohnishi, N. Fukuda et al, Nucl. Instrum. Meth. Phys. Res. Sect B, 317: 717 (2013) doi: 10.1016/j.nimb.2013.08.050
[42] H. Hui, D. X. Jiang, X. Q. Li et al, Nucl. Instrum. Methods Phys. Res. Sect A, 481: 160 (2002) doi: 10.1016/S0168-9002(01)01333-X
[43] Y. Y. Yang, J. S. Wang, Q. Wang et al, Nucl. Instrum. Methods Phys. Res. Sect A, 701: 1 (2013) doi: 10.1016/j.nima.2012.10.088
[44] I. J. Thompson, Comput. Phys. Rep., 7: 167 (1988) doi: 10.1016/0167-7977(88)90005-6
[45] L. C. Chamon, D. Pereira, M. S. Hussein et al, Phys. Rev. Lett., 79: 5218 (1997) doi: 10.1103/PhysRevLett.79.5218
[46] L. C. Chamon, B. V. Carlson, L. R. Gasques et al, Phys. Rev. C, 66: 014610 (2002) doi: 10.1103/PhysRevC.66.014610
[47] Y. P. Xu and D. Y. Pang, Phys. Rev. C, 87: 044605 (2013) doi: 10.1103/PhysRevC.87.044605
[48] E. Bauge, J. P. Delaroche, and M. Girod, Phys. Rev. C, 63: 024607 (2001) doi: 10.1103/PhysRevC.63.024607
[49] A. Ozawa, T. Suzuki, and I. Tanihata, Nucl. Phys. A, 693: 32 (2001) doi: 10.1016/S0375-9474(01)01152-6
[50] B. A. Brown, Phys. Rev. C, 58: 220 (1998) doi: 10.1103/PhysRevC.58.220
Received: 2019-09-03
Corresponding author: Yan-Yun Yang, [email protected]
Corresponding author: Jian-Song Wang, [email protected]
1. Introduction
From the early days of nuclear physics, elastic scattering has been widely used to investigate the structure of the interacting nuclei [1, 2]. It is the simplest nuclear process with quite a large cross-section. Usually, the angular distribution of elastic scattering is plotted as the ratio to the Rutherford scattering. In this representation, the angular distributions for stable and ordinary nuclei at energies close to the Coulomb barrier are pure Rutherford ($ \sigma/\sigma_{\rm R}\approx 1 $) at forward angles. As the scattering angle increases, a typical Fresnel oscillatory diffraction pattern, or the so-called Coulomb rainbow, may appear due to the interference between the partial waves refracted by the Coulomb and short-range nuclear potentials [3]. At larger angles, the absorption component of the optical model potential exponentially damps the $ \sigma/\sigma_{\rm R} $ ratio [4, 5]. For lighter projectile and target nuclei, the Coulomb force is smaller and the diffractive pattern changes from the Fresnel to Fraunhofer oscillations. In reactions induced by weakly-bound nuclei on a heavy target, coupling effects from breakup channels may be important, and the angular distribution of the elastic scattering may show different features. Due to the Coulomb field of a heavy target and possible long-range component of the nuclear potential, the Fresnel peak may be reduced or sometimes completely damped. Deformation and strong cluster structure of the projectile and/or target can also play an important role in modifying the elastic scattering angular distribution, for instance, by deviating the elastic flow from the forward to backward angles [6]. Although several studies of elastic scattering of light nuclei on heavy targets have been carried out [7-9], the origin and characteristics of the long-range component of the nuclear potential in elastic scattering due to deformation, cluster configuration or low binding energy of the projectile, are yet to be fully understood. For instance, it would be interesting to investigate if the common behavior (e.g. the reduction of the Fresnel peak) of heavy-ion elastic scattering at energies close to the barrier is present for collisions at a few times the Coulomb barrier.
The description of the angular distribution of elastic scattering is very sensitive to the choice of the interaction potential. Optical model (OM) potentials are usually used in the analysis of the angular distribution, where the potential is constrained by the data [10]. The angular distribution may also be affected by the cluster structure, size and density of the interacting nuclei, which are quite different in the tightly-bound stable and weakly-bound unstable nuclei. Weakly-bound nuclei may have an extended radial distribution due to the valence particles. The size of the nuclei has been deduced from interaction and reaction cross-section measurements [11, 12]. The extended radial distribution or strong cluster configuration of weakly-bound nuclei, and large deformations of stable nuclei, may considerably affect the surface density. Thus, it is worthwhile to test different density distributions in OM in order to better understand the angular distribution of elastic scattering.
In previous publications, our group has studied the elastic scattering of 7Be [13], 8B [14], 10B [15] and 9,10,11C [13, 15] on a lead target, and the breakup reactions of 8B [16] and 9Li [17] on carbon and lead targets, respectively. In the present work we report the study of elastic scattering of 10Be + 208Pb at the energy of about three times the Coulomb barrier. Elastic scattering induced by beryllium isotopes (7Be, 9Be, 10Be and 11Be) as projectiles on heavy targets can serve as an interesting case for investigating several effects that may emerge in the interactions. For instance, 9Be is a weakly-bound nucleus which has a Borromean cluster configuration given by $ \alpha $-$ \alpha $-n with a neutron valence separation energy of $ S_{n} $ = 1.574 MeV. Two recent works related to the elastic scattering in the 9Be + 120Sn [18] and 9Be + 208Pb systems [19] showed the importance of considering the three-body cluster model ($ \alpha $-$ \alpha $-n) of this projectile for the description of the elastic data. A clear Fresnel peak observed in the data of proton-rich radioactive 7Be beam on 208Pb, measured at the energy of about three times the barrier [13], was reduced to the angular distribution measured at energies close to the barrier (note that this was only confirmed by calculations since the experimental setup did not allow the collection of data at the required scattering angles) [20], probably due to the coupling to continuum. The coupling to continuum was found to be more pronounced for a weakly-bound projectile as observed, for instance, for 11Be [21]. Data for elastic scattering of the neutron-rich weakly-bound radioactive projectile 11Be on heavy targets is quite scarce. One such study is related to the elastic scattering of 11Be + 120Sn [22], and, despite the limited angular range measured, the authors inferred a strong damping of the Fresnel peak. These data were analyzed in terms of the short and long-range potentials, giving evidence of a large radius of the 11Be nucleus [23]. Proton-rich nuclei can also induce damping of the Fresnel peak, and the difference in the elastic scattering of proton and neutron-rich light nuclei on a heavy target was discussed in Ref. [24]. The other beryllium isotope is the 10Be nucleus. 10Be is a tightly-bound nucleus with a neutron separation energy of $ S_{n} $ = 6.812 MeV. Theoretical studies predicted a $ \alpha $-$ \alpha $-n-n cluster configuration of its ground-state [25]. This nucleus is an interesting case of possible coexistence of molecular orbital structure and cluster structure [26-28]. Data for 10Be + 208Pb elastic scattering, measured at energies close to the barrier, were reported in Ref. [29]. From the OM analysis, the authors obtained a larger total reaction cross-section for 10Be compared to 9Be on 209Bi [30]. This result is quite surprising considering that 10Be is much more bound than 9Be. Although doubt was raised in Ref. [29] about the experimental result of 9Be + 209Bi [30], one may wonder if the difference in the deformation of 10Be and 9Be may have a role in this discussion. The deformation parameter of 10Be was found to be $ \beta_2 $ = 1.14(6) [31], which gives $ \delta_2 $ = 2.947 fm ($ r_{0} $ = 1.2), while for 9Be the parameters are $ \beta_2 $ = 0.92 [32] with $ \delta_2 $ = 2.296 fm ($ r_{0} $ = 1.2), or $ \beta_2 $ = 1.1 [33] with $ \delta_2 $ = 2.45 fm ($ r_{0} $ = 1.2). Discussion of these low energy data is out of the scope of this paper. 
These results suggest that more experimental studies are needed to fully understand the reactions induced by these Be isotopes. It is also important to mention that 10Be is the core of the halo 11Be nucleus, and the excitation of 10Be was found to be important for describing the 11Be breakup data [34, 35].
The present work reports the measurements of the angular distribution of elastic scattering of 10Be on a 208Pb target at 127 MeV, which is around three times the Coulomb barrier. A detailed description of the experimental setup and of the data analysis is given in Sec. 2. The measured angular distribution of elastic scattering of 9,10Be and the optical model analysis are presented in Sec. 3 and Sec. 4. In Sec. 5, the summary and conclusions are presented.
2. Experimental details
9Be and 10Be were produced as secondary beams by fragmentation of the 54.2 MeV/nucleon 13C primary beam at the Radioactive Ion Beam Line in Lanzhou (RIBLL) [36, 37] of the Heavy-Ion Research Facility in Lanzhou (HIRFL) [38, 39]. The schematic view of the beam line is shown in Fig. 1. The 13C primary beam impinged on a 4500 μm thick beryllium target placed in the production chamber (T0) of RIBLL. An aluminum wedge, 1510 μm thick, was placed at the first focal plane (C1) of RIBLL as a degrader. The energies of the secondary beams at the center of the 208Pb target were E$ _{\rm Lab} $ = 88 and 127 MeV for 9Be and 10Be, respectively. These secondary beams were identified and discriminated using a combination of time-of-flight and energy loss (ToF-ΔE) signals. The ToF detectors consisted of two plastic scintillators (C9H10), 50 μm thick, installed at T1 and T2, giving a total flight length of 17 m. A 317 μm thick silicon detector (SD) was placed at T2 and used as the ΔE detector. After the particles had been identified, SD was removed from the beam line. The average intensities of 9Be and 10Be were 7×10³ and 6×10³ pps, with purities of 98% and 98.5%, respectively. The 208Pb target consisted of a self-supporting foil, 8.52 mg/cm² thick, which was made by evaporation and measured by weighing.
Figure 1. (color online) Schematic view of the low-energy radioactive ion beam line at the RIBLL facility.
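As a rough, hedged illustration of how the time-of-flight measurement constrains the beam velocity and energy, the following Python sketch converts a flight time over the stated 17 m path into a kinetic energy per nucleon (the 350 ns value is a made-up example; the actual particle identification combined the ToF with the ΔE signal):

```python
import numpy as np

C_MM_PER_NS = 299.792458   # speed of light in mm/ns
AMU_MEV = 931.494          # atomic mass unit in MeV/c^2

def kinetic_energy_per_nucleon(flight_path_mm, tof_ns):
    """Relativistic kinetic energy per nucleon from a time-of-flight measurement."""
    beta = flight_path_mm / (tof_ns * C_MM_PER_NS)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return (gamma - 1.0) * AMU_MEV  # MeV per nucleon

# Made-up example: 17 m flight path, 350 ns time of flight -> roughly 12.5 MeV/nucleon,
# of the order expected for a 10Be beam of about 127 MeV total energy
print(kinetic_energy_per_nucleon(17_000.0, 350.0))
```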
For scattering experiments, an accurate measurement of the scattering angle and the direction and position of the incident particles is crucial. For this purpose, collimating detectors are placed before the reaction target to determine the incident direction of the beam particles. The detectors in the setup should have the ability to effectively identify the scattered particles with high resolution for both energy and position measurements. In the present experiment we used several double-sided silicon strip detectors (DSSDs). The sketch of the setup used in the experiment is shown in Fig. 2. Two DSSDs, with 16 horizontal and 16 vertical strips and 74 μm and 87 μm thick (denoted as SiA and SiB), were used to give the precise position and direction of the incident beam particles. These detectors were set 669 mm and 69 mm away from the target position, respectively, as indicated in Fig. 2.
Figure 2. (color online) Experimental setup used for measuring the elastic scattering in the reaction 9,10Be + 208Pb.
An array of three $ {\Delta E-E} $ particle telescopes, named $ \mathit{\rm Tel1} $, $ \mathit{\rm Tel2} $ and $ \mathit{\rm Tel3} $, was used to detect the scattered particles. DSSDs with a thickness of 301, 129, and 144 μm, respectively, were used as the $ {\Delta E} $ detectors in each telescope. Each ${\Delta E} $ detector consisted of 32 elements (X position) on the junction side and 32 elements (Y position) on the ohmic side, giving a total active area of 64 mm×64 mm. For the E signal, SDs with a thickness of 1536, 1535, and 1528 μm, respectively, and with the same effective area as DSSDs, were used. The full detector array was mounted 267 mm downstream from the target, covering an angular range of 5° to 27°. The angular resolution of DSSDs was about 0.4°. The position of the scattered particles was extracted from DSSDs with an accuracy of 2 mm×2 mm. The amplification of the signal from each DSSD strip was not identical, and each was individually calibrated with the 9Be and 10Be beams. To improve the signal-to-noise ratio, the detector system was cooled by circulation of cold alcohol at a temperature of −20 °C. Typical $ {\Delta E-E} $ particle identification spectra for the 9Be and 10Be beams on 208Pb target are shown in Fig. 3. Points inside the solid red ellipsoid curve in Fig. 3(a) and Fig. 3(b) represent elastic scattering events of 9Be and 10Be, respectively.
Figure 3. (color online) The calibrated two-dimensional ${\Delta E-E}$ spectra for 9Be + 208Pb at $E_{\rm Lab}$ = 88 MeV (a), and for 10Be + 208Pb at $E_{\rm Lab}$ = 127 MeV (b), obtained from Tel3.
3. Data analysis and results
The scattering angles were obtained by extrapolation of the target position and the positions of particles hitting DSSDs, as shown in Fig. 4. For each scattered particle, the incident track $ \overrightarrow{AC}$ was determined by a combination of the hit positions in the SiA and SiB detectors, $ \overrightarrow{AB} $, which was extrapolated to the hit position C in the target. The particle is then scattered by the target and hits point D of DSSDs. The track $ \overrightarrow{CD} $ defines the outgoing path of the scattered particle. The angle between the incident direction $ \overrightarrow{AC} $ and the scattered direction $ \overrightarrow{CD} $ is the scattering angle $ \theta_{\rm Lab} $, which was calculated on the event-by-event basis. The beam spot on the target was large ($ \approx $ 30 mm) and asymmetrical. Therefore, a Monte Carlo simulation, taking into account the detector geometry and the beam distribution in the target, was used to evaluate the absolute differential cross-section.
Figure 4. (color online) Diagram for calculating the scattering angle.
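A minimal Python sketch of the event-by-event angle reconstruction described above (the hit coordinates and the assumption of detector planes at fixed z are illustrative only, not the geometry code actually used in the analysis):

```python
import numpy as np

def scattering_angle_deg(hit_a, hit_b, hit_d, z_target=0.0):
    """Lab scattering angle from hits in SiA (A), SiB (B) and a telescope DSSD (D).

    The incident track A->B is extrapolated to the target plane z = z_target to get
    point C; the angle between A->C and C->D is the scattering angle."""
    a, b, d = (np.asarray(p, dtype=float) for p in (hit_a, hit_b, hit_d))
    direction_in = b - a
    t = (z_target - a[2]) / direction_in[2]
    c = a + t * direction_in                    # extrapolated hit position on the target
    direction_out = d - c
    cos_theta = np.dot(direction_in, direction_out) / (
        np.linalg.norm(direction_in) * np.linalg.norm(direction_out))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Made-up geometry (mm): SiA at z = -669, SiB at z = -69, target at z = 0, DSSD at z = 267
print(scattering_angle_deg([1.0, 2.0, -669.0], [1.5, 2.2, -69.0], [40.0, 10.0, 267.0]))
```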
Another important issue considered was the contamination of the data from particles scattered by DSSDs (SiA and SiB). In previous experiments, we used Parallel-Plate Avalanche Counters (PPACs) to measure the position and direction of the beam particles [40-42]. Compared to silicon detectors, PPAC introduces almost no disturbance of RIB. In this experiment, we used two thin DSSDs replacing PPACs, which caused some contamination of the data from scattering in the detectors. The events coming from the detector system were measured with the target moved out. To check the impact of the target-out events on the data, we performed a simulation assuming that the particles are scattered in the silicon detectors before scattering in the 208Pb target. Only scattered particles in SiB were considered in the simulation. The incident direction of the beam is calculated after scattering in SiA, and thus particles scattered by SiA do not affect the data. Since $ \mathit{\rm Tel1} $ was placed at a large angle, particles that do not originate in the target can hardly hit it. Thus, the target-out events mostly come from $ {\mathit {\rm Tel2}} $ and $ {\mathit {\rm Tel3}} $. The results of the simulations for the 9Be and 10Be beams as a function of the angle $ \theta $, with (blue line) and without (red line) scattered particles in Si$ _{B} $, are shown in Fig. 5. The scattering events in Si$ _{B} $ for both 9Be and 10Be beams account for less than 5% of all scattering events. We conclude that the contribution of the target-out events affects the data very little, basically in the forward angles, which can be considered negligible in our experiment. Also, the effect of the small contribution at forward angles was diluted by the normalization method.
Figure 5. (color online) Simulation results for 9Be (left) and 10Be (right) as a function of $ \theta $ with and without scattering in SiB.
The elastic scattering differential cross-section as the ratio to the Rutherford cross-section is obtained by:
$ \begin{aligned}\frac{\sigma(\theta)}{\sigma_{\rm Ruth}(\theta)}=\frac{{\rm d}\sigma(\theta)/{\rm d}\Omega}{{\rm d}\sigma_{\rm Ruth}(\theta)/{\rm d}\Omega}= \frac{\dfrac{N(\theta)_{\rm exp}}{N_{\rm in}N_{\rm target}{\rm d}\Omega}}{\dfrac{N(\theta)_{\rm Ruth}}{N_{\rm in}N_{\rm target}{\rm d}\Omega}}= C\times\frac{N(\theta)_{\rm exp}}{N(\theta)_{\rm Ruth}}, \end{aligned} $
where $ \mathit{C} $ is the normalization constant, $ N_{\rm in} $ is the number of incident particles, $ N_{\rm target} $ is the number of target nuclei per unit area, and $ N(\theta)_{\rm exp} $ and $ N(\theta)_{\rm Ruth} $ are the yields at a given angle in the data and from the simulations, respectively. The normalization constant $ \mathit{C} $ for the 9Be angular distribution was obtained by normalizing the experimental cross-section to the simulation results for angles below 20°, where the elastic scattering is assumed to be pure Rutherford scattering. This overall normalization was also applied to the cross-section of the 10Be + 208Pb system. With this method, the cross-sections are obtained in a straightforward way, and the influence of the systematic errors of the measured total number of incident particles, target thickness and solid angle determination is avoided. To minimize the systematic errors, small corrections for the detector misalignment were also performed. The details of this procedure can be found in Ref. [43].
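A minimal Python sketch of this normalization (illustrative only; in the experiment the constant C was fixed with the 9Be data below 20° and then applied to the 10Be system, whereas the made-up yields below are normalized within a single data set):

```python
import numpy as np

def normalize_to_rutherford(theta_deg, n_exp, n_ruth, theta_max=20.0):
    """Return sigma/sigma_Ruth, with the mean ratio below theta_max
    (where the scattering is assumed to be pure Rutherford) set to 1."""
    theta_deg = np.asarray(theta_deg, dtype=float)
    ratio_raw = np.asarray(n_exp, dtype=float) / np.asarray(n_ruth, dtype=float)
    forward = theta_deg < theta_max
    c = 1.0 / np.mean(ratio_raw[forward])   # normalization constant C
    return c * ratio_raw

# Made-up binned yields (counts) from data and from the Monte Carlo simulation
theta = np.array([12.0, 14.0, 16.0, 18.0, 20.0, 22.0, 24.0, 26.0])
n_exp = np.array([5200, 4100, 3150, 2500, 2100, 1500, 900, 450])
n_rut = np.array([5100, 4050, 3120, 2480, 1950, 1250, 700, 320])
print(normalize_to_rutherford(theta, n_exp, n_rut))
```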
It is important to mention that, in principle, the elastic and inelastic scattering from the excited states of the lead target nuclei could not be discriminated and the data are quasi-elastic in nature. However, the contributions from the excited states of the lead target were found to be negligible in several other experiments with a similar energy and angular range [13]. For this reason, we consider in the present work that the data are for elastic scattering.
The differential cross-sections for elastic scattering were normalized to the differential cross-section of Rutherford scattering, and are plotted as a function of the scattering angle. The elastic scattering angular distributions for 9Be + 208Pb at the energy $ E_{\rm Lab} $ = 88 MeV, and for 10Be + 208Pb at the energy $ E_{\rm Lab} $ = 127 MeV, are shown in Fig. 6. As can be seen, the ratio $ \sigma \ / \sigma_{\rm Ruth} $ for 9Be is close to unity, since Rutherford scattering is dominant within the measured angular range. In the angular distribution of 10Be + 208Pb, shown in Fig. 6(b), the typical Fresnel diffraction pattern can be observed.
Figure 6. (color online) The experimental elastic scattering angular distributions for the 9Be + 208Pb system at the energy $E_{\rm Lab}$ = 88 MeV (a) , and for the 10Be + 208Pb system at the energy $E_{\rm Lab}$ = 127 MeV (b).
4. Optical model analysis
Optical model analysis of the elastic scattering differential cross-section data was performed. All the calculations were performed with the code FRESCO [44]. We first considered the complex Woods-Saxon (WS) potential, which has six parameters, namely, the real (imaginary) potential depth V (W), radius $ r_{v} $ ($ r_{w} $) and diffuseness $ a_{v} $ ($ a_{w} $). The reduced radii have to be multiplied by the mass term $ (A_{P}^{1/3} + A_{T}^{1/3}) $, where $ A_{P} $ = 10 and $ A_{T} $ = 208, to give the radii of the real and imaginary potentials. The WS potential for 10Be + 208Pb was obtained by adjusting the six parameters to best reproduce the elastic scattering data. The parameters from the fit procedure are listed in Table 1, and were obtained with the minimum $ \chi^2 $ criterion given by:
Table 1. The Woods-Saxon parameters obtained by fitting the experimental data.
System | V/MeV | $r_{v}$/fm | $a_{v}$/fm | W/MeV | $r_{w}$/fm | $a_{w}$/fm | $\chi^2$
10Be + 208Pb | 18.33 | 1.251 | 0.636 | 20.27 | 1.255 | 0.744 | 0.493
$ \chi^2 = \frac{1}{N}\sum\limits_{i = 1}^N\frac{[\sigma_{i}^{\rm exp}-\sigma_{i}^{\rm th}]^2}{\Delta\sigma_{i}^2}, $
in which N is the number of data points, $ \sigma_{i}^{\rm exp} $ and $ \sigma_{i}^{\rm th} $ are the experimental and the calculated differential cross-sections, and $ \Delta\sigma_{i} $ is the uncertainty of the experimental cross-section. The results of the OM analysis with the WS potential are shown in Fig. 7 by the black dashed line. As can be seen, the agreement with the data is good, in particular at the Fresnel peak. The total reaction cross-section obtained with the WS potential is 3370 mb. However, since the experimental data were obtained in a relatively limited angular range, the potential parameters are not unique.
Figure 7. (color online) Elastic scattering angular distribution for the 10Be + 208Pb system at 127 MeV. The lines are the results of the optical model analysis with the WS potential and the double-folding SPP.
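For concreteness, the following is a minimal sketch of the $ \chi^2 $ criterion defined above; the function name and the array-based interface are assumptions.

```python
import numpy as np

def chi2_per_point(sigma_exp, sigma_th, dsigma_exp):
    """Chi-square per data point, as defined in the text."""
    sigma_exp = np.asarray(sigma_exp, dtype=float)
    sigma_th = np.asarray(sigma_th, dtype=float)
    dsigma_exp = np.asarray(dsigma_exp, dtype=float)
    return np.mean(((sigma_exp - sigma_th) / dsigma_exp) ** 2)
```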
To decrease the number of free parameters, and thus the ambiguities of the potentials in the OM analysis, folding potentials have been developed. The results of the OM analysis with the double-folding São Paulo Potential (SPP) are shown in Fig. 7 by the red solid line. The total reaction cross-section obtained with SPP is 3240 mb. SPP is a "folding-type" effective nucleon-nucleon interaction with fixed parametrized nucleon density distributions in the projectile and target. It can be used in association with the OM, with $ N_{\rm R} $ and $ N_{\rm I} $ as normalizations of the real and imaginary parts [45]. From a large set of systematic values, $ N_{\rm R} $ = 1.00 and $ N_{\rm I} $ = 0.78 were proposed [46]. With these standard values of the normalization, SPP reproduces the data well in the measured angular region, as can be seen in Fig. 7. The SPP fit is more sensitive to the backward angles, where the influence of the absorption of flux by direct reactions is more important. A measurement at more backward angles for 10Be would be highly desirable for a more rigorous test of this potential.
We also considered a global nucleus-nucleus potential obtained from a systematic optical potential analysis by Xu and Pang (X&P) [47]. This global potential can reasonably reproduce the elastic scattering and total reaction cross-sections for projectiles with mass numbers up to $ A\lesssim40 $, including stable and unstable nuclei, at energies above the Coulomb barrier. It is obtained by folding the semi-microscopic Bruyères Jeukenne-Lejeune-Mahaux (JLMB) nucleon-nucleus potential with the nucleon density distribution of the projectile nucleus [47]. The JLMB potential itself employs single-folding of the effective nucleon-nucleon interaction with the nucleon density distribution of the target nucleus [48]. Hence, the X&P potential is single-folding in nature, but it also requires nucleon density distributions in both the projectile and target nuclei. The results with JLMB are close to SPP, but in some cases it may overestimate the differential cross-section at large angles. This may be caused by the special treatment of the Pauli nonlocality in SPP, which is important at low incident energies [47]. The density distribution in the projectile can be deduced from the observed interaction and total reaction cross-sections using the Glauber model [49], or from a Hartree-Fock calculation [50]. The proton and neutron density distributions in the target nuclei are obtained from the Hartree-Fock calculation with the SkX interaction [47]. Optical model results with this global nucleus-nucleus potential, using different density distributions in 10Be, and the comparison with the data for 10Be + 208Pb are shown in Fig. 8. The root-mean-square (RMS) radii of the proton, neutron and nuclear matter distributions used in these calculations are summarized in Table 2. The first row in Table 2 gives the RMS radii derived from the Glauber model with harmonic oscillator distributions, which result in the RMS radius $ R_{\rm HO} = 2.299 $ fm for 10Be [49]. In row 2, the RMS radius $ R_{\rm Liatard} = 2.479 $ fm was determined from a Glauber model analysis of the total reaction cross-section of 10Be on a carbon target by Liatard et al. [11]. $ R_{\rm Liatard2} $ and $ R_{\rm Liatard3} $ are two artificial density distributions, obtained by stretching the distribution proposed by Liatard et al., so that the radius of 10Be is increased by 10% and 20%, respectively. The elastic scattering angular distributions calculated with the X&P potential using these density distributions are shown in Fig. 8 together with the experimental data. From these results, it can be concluded that a change of the RMS radius by 10% induces a shift of the angular distribution by about 0.7 degrees at angles where $ \sigma/\sigma_{\rm Ruth} = 0.5 $. In other words, a precision of the angular distribution measurement of 0.1 degrees, which is feasible with modern techniques, would allow the RMS radius of a nucleus like 10Be to be determined with a precision of around 1.4%. Given the rather large uncertainties in the RMS radii of light heavy ions (see, e.g., the compilation in Ref. [49]), it might be interesting to measure the RMS radii of these nuclei in elastic scattering experiments. Of course, more effort is needed at both the experimental and theoretical level to fully establish the precision of this method. Better statistics, which require higher beam intensities and high-performance detector arrays, would also be needed.
Table 2. The RMS radii of the proton, neutron and nuclear matter distributions used, in units of fm, together with the resulting total reaction cross-sections and references.
Distribution | $\langle r_{p}^2 \rangle^{1/2}$ | $\langle r_{n}^2 \rangle^{1/2}$ | $\langle r^2 \rangle^{1/2}$ | Ref. | σ/mb
$R_{\rm HO}$ | 2.186 | 2.311 | 2.299 | [49] | 3029
$R_{\rm Liatard}$ | 2.311 | 2.585 | 2.479 | [11] | 3138
$R_{\rm Liatard2}$ | 2.541 | 2.844 | 2.727 | [11] | 3285
Figure 8. (color online) Comparison of the experimental data and the optical model calculations using nucleus-nucleus potentials with different density distributions for 10Be. The angular distribution shown by the green dashed line was calculated with $R_{\rm HO}$ [49]. The angular distributions shown by the black dotted, red solid, and orange dash-dotted lines were calculated with the density distributions $R_{\rm Liatard}$ [11], $R_{\rm Liatard2}$ and $R_{\rm Liatard3}$, respectively.
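A quick back-of-the-envelope check of the sensitivity estimate quoted above (a 0.7° shift per 10% radius change, and a 0.1° angular precision), written out as a short Python snippet; the variable names are arbitrary.

```python
# Sensitivity quoted in the text: a 10% change in the RMS radius shifts the
# angular distribution by about 0.7 deg (at sigma/sigma_Ruth = 0.5).
shift_per_10_percent = 0.7      # degrees
angular_precision = 0.1         # degrees, feasible with modern techniques

radius_precision = 10.0 * angular_precision / shift_per_10_percent
print(f"approximate RMS-radius precision: {radius_precision:.1f}%")   # ~1.4%
```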
Comparison of the experimental data and the results of optical model calculations using the SPP and X&P potentials was also made for the 9Be + 208Pb system. The results are shown in Fig. 9. The Coulomb dominance is clearly seen in the range of scattering angles of our experiment. For this reason, a similar analysis as for 10Be was not made. For stable nuclei like 9Be, both SPP and X&P reproduce the elastic scattering data quite well.
Figure 9. (color online) Comparison of the experimental data and the optical model calculations using the SPP (red solid line) and X&P (black dotted line) potentials for the 9Be + 208Pb system.
5. Summary and conclusions
Measurements of elastic scattering of 9Be and 10Be on 208Pb at energies above the Coulomb barrier were performed at HIRFL-RIBLL. The elastic scattering angular distributions for 9Be and 10Be were measured at ELab = 88 and 127 MeV, respectively. For the stable 9Be nucleus, the elastic scattering is pure Rutherford scattering in the angular range measured, and was used for normalization of the 10Be + 208Pb data. The present data show that the detection system is a powerful setup for performing elastic scattering measurements. The angular distribution of the 10Be + 208Pb system was analyzed with the optical model using the Woods-Saxon and the double-folding São Paulo potentials. The measured angular distribution is well described by these potentials. Optical model analysis using a global nucleus-nucleus potential, based on a single-folding potential, was also performed. In this analysis, different density distributions were used, showing that the angular distribution is sensitive to the choice of the projectile radius.
In conclusion, we performed measurements of elastic scattering of 10Be on a lead target at an energy above the Coulomb barrier. The obtained angular distribution shows a Fresnel peak. However, it would be desirable to have measurements at more backward angles, where higher sensitivity could be obtained for the choice of potential and for the influence of other mechanisms, such as coupling to excited states and direct reactions.
We would like to acknowledge the staff of HIRFL for the operation of the cyclotron and friendly collaboration.
Probability Seminar
= Spring 2020 =

<b>Thursdays in 901 Van Vleck Hall at 2:30 PM</b>, unless otherwise noted.
<b>We usually end for questions at 3:20 PM.</b>

If you would like to sign up for the email list to receive seminar announcements then please send an email to
[mailto:[email protected] [email protected]]

== January 23, 2020, [https://www.math.wisc.edu/~seppalai/ Timo Seppalainen] (UW Madison) ==

'''Non-existence of bi-infinite geodesics in the exponential corner growth model'''

Whether bi-infinite geodesics exist has been a significant open problem in first- and last-passage percolation since the mid-80s. A non-existence proof in the case of directed planar last-passage percolation with exponential weights was posted by Basu, Hoffman and Sly in November 2018. Their proof utilizes estimates from integrable probability. This talk describes an independent proof completed 10 months later that relies on couplings, coarse graining, and control of geodesics through planarity and increment-stationary last-passage percolation. Joint work with Marton Balazs and Ofer Busani (Bristol).

== January 30, 2020, [https://www.math.wisc.edu/people/vv-prof-directory Scott Smith] (UW Madison) ==

'''Quasi-linear parabolic equations with singular forcing'''

The classical solution theory for stochastic ODE's is centered around Ito's stochastic integral. By intertwining ideas from analysis and probability, this approach extends to many PDE's, a canonical example being multiplicative stochastic heat equations driven by space-time white noise. In both the ODE and PDE settings, the solution theory is beyond the scope of classical deterministic theory because of the ambiguity in multiplying a function with a white noise. The theory of rough paths and regularity structures provides a more quantitative understanding of this difficulty, leading to a more refined solution theory which efficiently divides the analytic and probabilistic aspects of the problem, and remarkably, even has an algebraic component.

In this talk, we will discuss a new application of these ideas to stochastic heat equations where the strength of the diffusion is not constant but random, as it depends locally on the solution. These are known as quasi-linear equations. Our main result yields the deterministic side of a solution theory for these PDE's, modulo a suitable renormalization. Along the way, we identify a formally infinite series expansion of the solution which guides our analysis, reveals a nice algebraic structure, and encodes the counter-terms in the PDE. This is joint work with Felix Otto, Jonas Sauer, and Hendrik Weber.

== February 6, 2020, [https://sites.google.com/site/cyleeken/ Cheuk-Yin Lee] (Michigan State) ==

'''Sample path properties of stochastic partial differential equations: modulus of continuity and multiple points'''

In this talk, we will discuss sample path properties of stochastic partial differential equations (SPDEs). We will present a sharp regularity result for the stochastic wave equation driven by an additive Gaussian noise that is white in time and colored in space. We prove the exact modulus of continuity via the property of local nondeterminism. We will also discuss the existence problem for multiple points (or self-intersections) of the sample paths of SPDEs. Our result shows that multiple points do not exist in the critical dimension for a large class of Gaussian random fields including the solution of a linear system of stochastic heat or wave equations.

== February 13, 2020, [http://www.jelena-diakonikolas.com/ Jelena Diakonikolas] (UW Madison) ==

== February 20, 2020, [https://math.berkeley.edu/~pmwood/ Philip Matchett Wood] (UC Berkeley) ==

== February 27, 2020, No seminar ==

== March 5, 2020, [https://www.ias.edu/scholars/jiaoyang-huang Jiaoyang Huang] (IAS) ==

== March 12, 2020, No seminar ==

== March 19, 2020, Spring break ==

== March 26, 2020, [https://math.cornell.edu/philippe-sosoe Philippe Sosoe] (Cornell) ==

== April 2, 2020, [http://pages.cs.wisc.edu/~tl/ Tianyu Liu] (UW Madison) ==

== April 9, 2020, [http://stanford.edu/~ajdunl2/ Alexander Dunlap] (Stanford) ==

== April 16, 2020, [https://statistics.wharton.upenn.edu/profile/dingjian/ Jian Ding] (University of Pennsylvania) ==

== April 22-24, 2020, [http://frg.int-prob.org/ FRG Integrable Probability] meeting ==

3-day event in Van Vleck 911

== April 23, 2020, [http://www.hairer.org/ Martin Hairer] (Imperial College) ==

[https://www.math.wisc.edu/wiki/index.php/Colloquia Wolfgang Wasow Lecture] at 4pm in Van Vleck 911

== April 30, 2020, [http://willperkins.org/ Will Perkins] (University of Illinois at Chicago) ==

= Spring 2019 =

<b>Thursdays in 901 Van Vleck Hall at 2:25 PM</b>, unless otherwise noted.
<b>We usually end for questions at 3:15 PM.</b>

If you would like to sign up for the email list to receive seminar announcements then please send an email to
[mailto:[email protected] [email protected]]

== January 31, [https://www.math.princeton.edu/people/oanh-nguyen Oanh Nguyen], [https://www.math.princeton.edu/ Princeton] ==

Title: '''Survival and extinction of epidemics on random graphs with general degrees'''

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$.

Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

== <span style="color:red"> Wednesday, February 6 at 4:00pm in Van Vleck 911</span>, [https://lc-tsai.github.io/ Li-Cheng Tsai], [https://www.columbia.edu/ Columbia University] ==

Title: '''When particle systems meet PDEs'''

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

== February 7, [http://www.math.cmu.edu/~yug2/ Yu Gu], [https://www.cmu.edu/math/index.html CMU] ==

Title: '''Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime'''

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.

== February 14, [https://www.math.wisc.edu/~seppalai/ Timo Seppäläinen], UW-Madison ==

Title: '''Geometry of the corner growth model'''

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

== February 21, [https://people.kth.se/~holcomb/ Diane Holcomb], KTH ==

Title: '''On the centered maximum of the Sine beta process'''

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

== Probability related talk in PDE Geometric Analysis seminar: <br> Monday, February 22 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison ==

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
== <span style="color:red"> Wednesday, February 27 at 1:10pm</span> [http://www.math.purdue.edu/~peterson/ Jon Peterson], [http://www.math.purdue.edu/ Purdue] ==
<b><span style="color:red">Please note the unusual day and time.</span></b>
Title: '''Functional Limit Laws for Recurrent Excited Random Walks'''
Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that determines many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
== March 21, Spring Break, No seminar ==
== March 28, [https://www.math.wisc.edu/~shamgar/ Shamgar Gurevitch] [https://www.math.wisc.edu/ UW-Madison]==
Title: '''Harmonic Analysis on GLn over finite fields, and Random Walks'''
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the ''character ratio'':
$\text{trace}(\rho(g))/\text{dim}(\rho),$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant ''rank''. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
== April 4, [https://www.math.wisc.edu/~pmwood/ Philip Matchett Wood], [http://www.math.wisc.edu/ UW-Madison] ==
Title: '''Outliers in the spectrum for products of independent random matrices'''
Abstract: For a fixed positive integer m, we consider the product of m independent n by n random matrices with iid entries, in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded-rank deterministic error. However, the bounded-rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
== April 11, [https://sites.google.com/site/ebprocaccia/ Eviatar Procaccia], [http://www.math.tamu.edu/index.html Texas A&M] ==
'''Title: Stabilization of Diffusion Limited Aggregation in a Wedge.'''
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
== April 18, [https://services.math.duke.edu/~agazzi/index.html Andrea Agazzi], [https://math.duke.edu/ Duke] ==
Title: '''Large Deviations Theory for Chemical Reaction Networks'''
The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence, these results also allow for the estimation of transition times between metastable states of this class of processes.
== April 25, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==
== April 26, Colloquium, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==
Title: Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer's theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.
== May 2, TBA ==
==<span style="color:red"> Friday, August 10, 10am, B239 Van Vleck </span> András Mészáros, Central European University, Budapest ==
Title: '''The distribution of sandpile groups of random regular graphs'''
We study the distribution of the sandpile group of random <math>d</math>-regular graphs. For the directed model we prove that it follows the Cohen-Lenstra heuristics, that is, the probability that the <math>p</math>-Sylow subgroup of the sandpile group is a given <math>p</math>-group <math>P</math>, is proportional to <math>|\operatorname{Aut}(P)|^{-1}</math>. For finitely many primes, these events get independent in limit. Similar results hold for undirected random regular graphs, there for odd primes the limiting distributions are the ones given by Clancy, Leake and Payne.
Our results extend a recent theorem of Huang, which says that the adjacency matrices of random <math>d</math>-regular directed graphs are invertible with high probability, to the undirected case.
==September 20, [http://math.columbia.edu/~hshen/ Hao Shen], [https://www.math.wisc.edu/ UW-Madison] ==
Title: '''Stochastic quantization of Yang-Mills'''
"Stochastic quantization" refers to a formulation of quantum field theory as stochastic PDEs. Interesting progress has been made these years in understanding these SPDEs, examples including Phi4 and sine-Gordon. Yang-Mills is a type of quantum field theory which has gauge symmetry, and its stochastic quantization is a Yang-Mills flow perturbed by white noise.
In this talk we start by an Abelian example where we take a symmetry-preserving lattice regularization and study the continuum limit. We will then discuss non-Abelian Yang-Mills theories and introduce a symmetry-breaking smooth regularization and restore the symmetry using a notion of gauge-equivariance. With these results we can construct dynamical Wilson loop and string observables. Based on [S., arXiv:1801.04596] and [Chandra,Hairer,S., work in progress].
[[Past Seminars]]
Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2015: systems biology
Structured sparse CCA for brain imaging genetics via graph OSCAR
Lei Du1,
Heng Huang2,
Jingwen Yan1,
Sungeun Kim1,
Shannon Risacher1,
Mark Inlow3,
Jason Moore4,
Andrew Saykin1,
Li Shen1 &
for the Alzheimer's Disease Neuroimaging Initiative
Recently, structured sparse canonical correlation analysis (SCCA) has received increased attention in brain imaging genetics studies. It can identify bi-multivariate imaging genetic associations as well as select relevant features with desired structure information. Existing SCCA methods either use the fused lasso regularizer to induce smoothness between ordered features, or use the signed pairwise difference, which depends on the estimated sign of the sample correlation. Besides, several other structured SCCA models use the group lasso or graph fused lasso to encourage group structure, but they require the structure/group information to be provided in advance, which is sometimes unavailable.
We propose a new structured SCCA model, which employs the graph OSCAR (GOSCAR) regularizer to encourage highly correlated features to have similar or equal canonical weights. Our GOSCAR based SCCA has two advantages: 1) It does not require the sign of the sample correlation to be pre-defined, and thus could reduce the estimation bias. 2) It could pull highly correlated features together no matter whether they are positively or negatively correlated. We evaluate our method using both synthetic data and real data. Using the 191 ROI measurements of amyloid imaging data and 58 genetic markers within the APOE gene, our method identifies a strong association between APOE SNP rs429358 and the amyloid burden measure in the frontal region. In addition, the estimated canonical weights present a clear pattern which is preferable for further investigation.
Our proposed method shows better or comparable performance on the synthetic data in terms of the estimated correlations and canonical loadings. It has successfully identified an important association between an Alzheimer's disease risk SNP rs429358 and the amyloid burden measure in the frontal region.
In recent years, the bi-multivariate analyses techniques [1], especially the sparse canonical correlation analysis (SCCA) [2–8], have been widely used in brain imaging genetics studies. These methods are powerful in identifying bi-multivariate associations between genetic biomarkers, e.g., single nucleotide polymorphisms (SNPs), and the imaging factors such as the quantitative traits (QTs).
Witten et al. [3, 9] first employed the penalized matrix decomposition (PMD) technique to handle the SCCA problem, which had a closed-form solution. This SCCA imposed the ℓ 1-norm on the traditional CCA model to induce sparsity. Since the ℓ 1-norm only randomly chooses one of a set of correlated features, it performs poorly in finding the structure information which usually exists in biological data. Witten et al. [3, 9] also implemented the fused lasso based SCCA, which penalizes adjacent features in order. This SCCA could capture some structure information, but it requires the features to be ordered. As a result, a number of structured SCCA approaches have arisen. Lin et al. [7] imposed the group lasso regularizer on the SCCA model, which makes use of non-overlapping group information. Chen et al. [10] proposed a structure-constrained SCCA (ssCCA) which used a graph-guided fused ℓ 2-norm penalty for one canonical loading according to the features' biological relationships. Du et al. [8] proposed a structure-aware SCCA (S2CCA) to identify group-level bi-multivariate associations, which combined both the covariance matrix information and the prior group information via the group lasso regularizer. These structured SCCA methods, on one hand, can generate good results when the prior knowledge fits the hidden structure within the data well. On the other hand, they become inapplicable when the prior knowledge is incomplete or not available. Moreover, it is hard to precisely capture the prior knowledge in real-world biomedical studies.
To facilitate structural learning via grouping the weights of highly correlated features, graph theory has been widely utilized in sparse regression analysis [11–13]. Recently, graph theory has also been employed to address the grouping issue in SCCA. Let each graph vertex correspond to one feature, and let ρ ij be the sample correlation between features i and j. Chen et al. [4, 5] proposed a network-structured SCCA (NS-SCCA) which used the ℓ 1-norm of |ρ ij |(u i −sign(ρ ij )u j ) to pull positively correlated features together, and to fuse negatively correlated features in opposite directions. The knowledge-guided SCCA (KG-SCCA) [14] is an extension of both NS-SCCA [4, 5] and S2CCA [8]. It used the ℓ 2-norm of \(\rho _{ij}^{2}(u_{i}-sign(\rho_{ij})u_{j})\) for one canonical loading, similar to what Chen proposed, and employed the ℓ 2,1-norm penalty for the other canonical loading. Both NS-SCCA and KG-SCCA can be used as group-pursuit methods if the prior knowledge is not available. However, one limitation of both models is that they depend on the sign of the pairwise sample correlation to recover the structure pattern. This may incur undesirable bias, since the sign of the correlations could be wrongly estimated due to possible graph misspecification caused by noise [13].
To address the issues above, we propose a novel structured SCCA which requires neither prior knowledge nor the sign of the sample correlations to be specified. It also works well if prior knowledge is provided. GOSC-SCCA, named from Graph Octagonal Selection and Clustering algorithm for Sparse Canonical Correlation Analysis, is inspired by the outstanding feature grouping ability of the octagonal selection and clustering algorithm for regression (OSCAR) [11] regularizer and the graph OSCAR (GOSCAR) [13] regularizer in regression tasks. Our contributions can be summarized as follows. 1) GOSC-SCCA can pull highly correlated features together when no prior knowledge is provided. While positively correlated features are encouraged to have similar weights, negatively correlated ones are also encouraged to have similar weights but with different signs. 2) GOSC-SCCA can reduce the estimation bias, given that there is no requirement to specify the sign of the sample correlation. 3) We provide a theoretical quantitative description of the grouping effect of GOSC-SCCA. We use both synthetic data and real imaging genetic data to evaluate GOSC-SCCA. The experimental results show that our method is better than or comparable to the state-of-the-art methods, i.e., L1-SCCA, FL-SCCA [3] and KG-SCCA [14], in identifying stronger imaging genetic correlations and more accurate and cleaner canonical loading patterns. Note that the PMA software package was used to implement the L1-SCCA (SCCA with lasso penalty) and FL-SCCA (SCCA with fused lasso penalty) methods. Please refer to http://cran.r-project.org/web/packages/PMA/ for more details.
We denote a vector as a boldface lowercase letter, and denote a matrix as a boldface uppercase letter. \(\mathbf{m}^{i}\) indicates the i-th row of matrix \(\mathbf{M}=(m_{ij})\). Matrices \(\mathbf {X} = \{\mathrm {\mathbf {x}}^{1}; \ldots ; \mathrm {\mathbf {x}}^{n}\} \subseteq \mathbb {R}^{p}\) and \(\mathbf {Y} = \{\mathrm {\mathbf {y}}^{1}; \ldots ; \mathrm {\mathbf {y}}^{n}\} \subseteq \mathbb {R}^{q}\) denote two separate datasets collected from the same population. Imposing the lasso penalty on the traditional CCA model [15], the L1-SCCA model is formulated as follows [3, 9]:
$$ \begin{aligned} & \min_{\mathbf{u},\mathbf{v}} -\mathbf{u}^{T} \mathbf{X}^{T} \mathbf{Y} \mathbf{v},\\ & s.t. ||\mathbf{u}||_{2}^{2} = 1, ||\mathbf{v}||_{2}^{2} = 1, ||\mathbf{u}||_{1} \leq c_{1}, ||\mathbf{v}||_{1} \leq c_{2}, \end{aligned} $$
where \(||\mathbf{u}||_{1} \leq c_{1}\) and \(||\mathbf{v}||_{1} \leq c_{2}\) are sparsity penalties controlling the complexity of the SCCA model. The fused lasso [2–4, 9] can also be used instead of the lasso. In order to make the problem convex, the equality constraints are usually replaced by inequality constraints, i.e., \(||\mathbf {u}||_{2}^{2} \leq 1, ||\mathbf {v}||_{2}^{2} \leq 1\) [3].
The graph OSCAR regularization
The OSCAR regularizer was first introduced by Bondell et al. [11], and has been shown to be able to group features automatically by encouraging highly correlated features to have similar weights. Formally, the OSCAR penalty is defined as follows,
$$ \begin{aligned} ||\mathbf{u}||_{\text{OSCAR}} = \sum\limits_{i<j}\max\{|u_{i}|,|u_{j}|\}, \\ ||\mathbf{v}||_{\text{OSCAR}} = \sum\limits_{i<j}\max\{|v_{i}|,|v_{j}|\}. \end{aligned} $$
Note that this penalty is applied to each feature pair.
To make OSCAR be more flexible, Yang et al. [13] introduce the GOSCAR,
$$ \begin{aligned} ||\mathbf{u}||_{\text{GOSCAR}} = \sum\limits_{(i,j) \in E_{u}}\max\{|u_{i}|,|u_{j}|\}, \\ ||\mathbf{v}||_{\text{GOSCAR}} = \sum\limits_{(i,j) \in E_{v}}\max\{|v_{i}|,|v_{j}|\}. \end{aligned} $$
where E u and E v are the edge sets of the u-related and v-related graphs, respectively. Obviously, the GOSCAR will reduce to OSCAR when both graphs are complete [13].
Applying \(\max \{|u_{i}|,|u_{j}|\} = \frac {1}{2}(|u_{i}-u_{j}|+|u_{i}+u_{j}|)\), the GOSCAR regularizer takes the following form,
$${} \begin{aligned} ||\mathbf{u}||_{\text{GOSCAR}} = \frac{1}{2}\sum\limits_{(i,j) \in E_{u}} (|u_{i}-u_{j}|) + \frac{1}{2}\sum\limits_{(i,j) \in E_{u}} (|u_{i}+u_{j}|), \\ ||\mathbf{v}||_{\text{GOSCAR}} = \frac{1}{2}\sum\limits_{(i,j) \in E_{v}} (|v_{i}-v_{j}|) + \frac{1}{2}\sum\limits_{(i,j) \in E_{v}} (|v_{i}+v_{j}|). \end{aligned} $$
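As a small illustration, the following sketch evaluates the GOSCAR penalty over an edge set and checks the identity used above; the function names and the toy vector are assumptions.

```python
import numpy as np

def goscar_penalty(u, edges):
    """GOSCAR penalty: sum over edges (i, j) of max(|u_i|, |u_j|).
    With a complete edge set this reduces to the OSCAR penalty."""
    u = np.asarray(u, dtype=float)
    return sum(max(abs(u[i]), abs(u[j])) for i, j in edges)

def goscar_penalty_split(u, edges):
    """Equivalent form 0.5 * sum(|u_i - u_j| + |u_i + u_j|) used above."""
    u = np.asarray(u, dtype=float)
    return 0.5 * sum(abs(u[i] - u[j]) + abs(u[i] + u[j]) for i, j in edges)

u = np.array([0.3, -0.3, 1.2])
edges = [(0, 1), (1, 2)]
assert np.isclose(goscar_penalty(u, edges), goscar_penalty_split(u, edges))
```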
The GOSC-SCCA model
Since the grouping effect is also an important consideration in SCCA learning, we propose to expand L1-SCCA to GOSC-SCCA by imposing GOSCAR instead of L1 only as follows.
$$ \begin{aligned} & \min_{\mathbf{u},\mathbf{v}} -\mathbf{u}^{T} \mathbf{X}^{T} \mathbf{Y} \mathbf{v}\\ & s.t. ~||\mathbf{Xu}||_{2}^{2} \leq 1, ||\mathbf{Yv}||_{2}^{2} \leq 1, ||\mathbf{u}||_{1} \leq c_{1}, ||\mathbf{v}||_{1} \leq c_{2},\\ & ||\mathbf{u}||_{\text{GOSCAR}} \leq c_{3}, ||\mathbf{v}||_{\text{GOSCAR}} \leq c_{4}. \end{aligned} $$
where (c 1,c 2,c 3,c 4) are parameters that control the solution path of the canonical loadings. Since S2CCA [8] has shown that the covariance matrix information can help improve the prediction ability, we also use \(||\mathbf {Xu}||_{2}^{2} \leq 1\) and \(||\mathbf {Yv}||_{2}^{2} \leq 1\) rather than \(||\mathbf {u}||_{2}^{2} \leq 1, ||\mathbf {v}||_{2}^{2} \leq 1\).
As a structured sparse model, GOSC-SCCA will encourage \(u_{i} \doteq u_{j}\) if the i-th feature and the j-th feature are highly correlated. We will give a quantitative description for this later.
The proposed algorithm
We can write the objective function into unconstrained formulation via the Lagrange multiplier method, i.e.
$${} \begin{aligned} \mathbf{\mathcal{L}(u,v)} &= -\mathbf{u}^{T} \mathbf{X}^{T} \mathbf{Y} \mathbf{v} + \lambda_{1}||\mathbf{u}||_{\text{GOSCAR}}+\lambda_{2}||\mathbf{v}||_{\text{GOSCAR}} \\ &\quad +\frac{\beta_{1}}{2}||\mathbf{u}||_{1}+\frac{\beta_{2}}{2}||\mathbf{v}||_{1}+\frac{\gamma_{1}}{2}||\mathbf{Xu}||_{2}^{2}+\frac{\gamma_{2}}{2}||\mathbf{Yv}||_{2}^{2} \end{aligned} $$
where (λ 1,λ 2,β 1,β 2) are tuning parameters, and they have a one-to-one correspondence to parameters (c 1,c 2,c 3,c 4) in GOSC-SCCA model [4].
Taking the derivative regarding u and v respectively, and letting them be zero, we obtain,
$$\begin{array}{@{}rcl@{}} -\mathbf{X}^{T}\mathbf{Yv} +\lambda_{1} \mathbf{L}_{1}\mathbf{u} +\lambda_{1} \hat{\mathbf{L}}_{1}\mathbf{u}+ \beta_{1}\mathbf{\Lambda_{1}} +\gamma_{1}\mathbf{X}^{T}\mathbf{X}\mathbf{u}=0, \end{array} $$
$$\begin{array}{@{}rcl@{}} -\mathbf{Y}^{T}\mathbf{Xu}+\lambda_{2} \mathbf{L}_{2}\mathbf{v}+\lambda_{2} \hat{\mathbf{L}}_{2}\mathbf{v}+\beta_{2}\mathbf{\Lambda_{2}}+\gamma_{2}\mathbf{Y}^{T}\mathbf{Y}\mathbf{v}=0. \end{array} $$
where Λ 1 is a diagonal matrix with the k 1-th element as \(\frac {1}{2||u_{k_{1}}||_{1}} (k_{1} \in [1,p])\), and Λ 2 with the k 2-th element as \(\frac {1}{2||v_{k_{2}}||_{1}} (k_{2} \in [1,q])\); L 1 is the Laplacian matrix which can be obtained from L 1=D 1−W 1; \(\hat {\mathbf {L}}_{1}\) is a matrix which is from \(\hat {\mathbf {L}}_{1} = \hat {\mathbf {D}}_{1} + \hat {\mathbf {W}}_{1}\). L 2 and \(\hat {\mathbf {L}}_{2}\) have the same entries as L 1 and \(\hat {\mathbf {L}}_{1}\) separately based on v.
In the initialization, both W 1 and \(\hat {\mathbf {W}}_{1}\) have the same entries, with each off-diagonal element equal to \(\frac {1}{2}\). However, W 1 and \(\hat {\mathbf {W}}_{1}\) become different after each iteration, i.e.,
$$ w_{ij} = \frac{1}{2|u_{i}-u_{j}|}, ~~~{\hat{w}}_{ij} = \frac{1}{2|u_{i}+u_{j}|}. $$
If ||u i −u j ||1=0, the corresponding element of matrix W 1 is undefined. Therefore, we regularize it as \(\frac {1}{2\sqrt {||u_{i}-u_{j}||_{1}^{2}+\zeta }}\) (ζ is a very small positive number) when ||u i −u j ||1=0. We also approximate ||u i ||1=0 with \(\sqrt {||u_{i}||_{1}^{2}+\zeta }\) for Λ 1. Then the objective function regarding u is \(\mathbf {\mathcal {L^{*}}(u)} = \sum _{i=1}^{p} (-u^{i} \mathbf {x}_{i}^{T} \mathbf {Y} \mathbf {v} + \lambda _{1}\sum || \sqrt {||u_{i}||_{1}^{2}+\zeta }||_{\text {GOSCAR}}+\frac {\beta_{1}}{2}\sqrt {||u_{i}||_{1}^{2}+\zeta } +\frac {\gamma _{1}}{2}||\mathbf {x}_{i}u_{i}||_{2}^{2})\). It is easy to prove that \(\mathcal {L^{*}}(\mathbf {u})\) reduces to problem (6) with respect to u when ζ→0. The cases of ||v i ||1=0 and ||v i −v j ||1=0 can be addressed using a similar regularization.
D 1 is a diagonal matrix and its i-th diagonal element is obtained by summing the i-th row of W 1, i.e. \(d_{i} = \sum _{j} w_{ij}\). The diagonal matrix \(\hat {\mathbf {D}}_{1}\) is also obtained from \({\hat {d}}_{i} = \sum _{j} {\hat {w}}_{ij}\). Likewise, we can calculate W 2, \(\hat {\mathbf {W}}_{2}\), D 2 and \(\hat {\mathbf {D}}_{2}\) by the same method in terms of v.
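A minimal sketch of how the re-weighting matrices described above could be assembled from the current estimate of u; the function name, the edge-list representation of the graph, and the default value of ζ are assumptions.

```python
import numpy as np

def reweighting_matrices(u, edges, zeta=1e-8):
    """Build L1 = D1 - W1, L1_hat = D1_hat + W1_hat and the diagonal Lambda1
    from the current estimate of u; zero differences/sums and zero entries of
    u are smoothed with a small zeta, as described in the text."""
    u = np.asarray(u, dtype=float)
    p = u.size
    W = np.zeros((p, p))
    W_hat = np.zeros((p, p))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (2.0 * np.sqrt((u[i] - u[j]) ** 2 + zeta))
        W_hat[i, j] = W_hat[j, i] = 1.0 / (2.0 * np.sqrt((u[i] + u[j]) ** 2 + zeta))
    L = np.diag(W.sum(axis=1)) - W              # L1 = D1 - W1
    L_hat = np.diag(W_hat.sum(axis=1)) + W_hat  # L1_hat = D1_hat + W1_hat
    Lambda = np.diag(1.0 / (2.0 * np.sqrt(u ** 2 + zeta)))
    return L, L_hat, Lambda
```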
Then according to Eqs. (7-8), we can obtain the solution to our problem with respect to u and v separately.
$$\begin{array}{@{}rcl@{}} \mathbf{u}=(\lambda_{1} (\mathbf{L}_{1}+\hat{\mathbf{L}}_{1}) +\beta_{1}\mathbf{\Lambda_{1}}+\gamma_{1}\mathbf{X^{T}X})^{-1}\mathbf{X^{T}Yv}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \mathbf{v}=(\lambda_{2} (\mathbf{L}_{2}+\hat{\mathbf{L}}_{2}) +\beta_{2}\mathbf{\Lambda_{2}}+\gamma_{2}\mathbf{Y^{T}Y})^{-1}\mathbf{Y^{T}Xu}. \end{array} $$
We observe that L 1, \(\hat {\mathbf {L}}_{1}\) and Λ 1 depend on the unknown variable u, and that L 2, \(\hat {\mathbf {L}}_{2}\) and Λ 2 depend on the unknown variable v. Thus we propose an effective iterative algorithm to solve this problem: we first fix v and solve for u, and then fix u and solve for v.
Algorithm 1 exhibits the pseudo code of the proposed GOSC-SCCA algorithm. For the key calculation steps, i.e., Step 5 and Step 10, we solve a system of linear equations instead of explicitly computing the matrix inverse, which is more efficient. Thus the whole algorithm works with the desired efficiency. In addition, the algorithm is guaranteed to converge, and we will prove this in the next subsection.
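Since Algorithm 1 itself is not reproduced here, the following is only a sketch of one u-update (Eq. (10)) via a direct linear solve; the rescaling step and all names are assumptions.

```python
import numpy as np

def update_u(X, Y, v, L, L_hat, Lambda, lam1, beta1, gamma1):
    """One u-update following Eq. (10): solve the linear system directly
    instead of forming the matrix inverse."""
    A = lam1 * (L + L_hat) + beta1 * Lambda + gamma1 * (X.T @ X)
    u = np.linalg.solve(A, X.T @ (Y @ v))
    # Rescale so that ||Xu||_2 = 1 (an assumption: the model constrains
    # ||Xu||_2^2 <= 1, and Algorithm 1 is not reproduced in the text).
    return u / np.linalg.norm(X @ u)
```

The v-update of Eq. (11) is symmetric, with the roles of X and Y (and the corresponding graph matrices) exchanged.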
Convergence analysis
We first introduce the following lemma.
Lemma 1
For any two nonzero real numbers \(\tilde {u}\) and u, we have
$$ ||\tilde{u}||_{1}-\frac{||\tilde{u}||_{1}^{2}}{2||u||_{1}} \leq ||u||_{1}-\frac{||u||_{1}^{2}}{2||u||_{1}}. $$
Given the lemma in [16], we have \(||\tilde {\mathbf {u}}||_{2}-\frac {||\tilde {\mathbf {u}}||_{2}^{2}}{2||\mathbf {u}||_{2}} \leq ||\mathbf {u}||_{2}-\frac {||\mathbf {u}||_{2}^{2}}{2||\mathbf {u}||_{2}}\) for any two nonzero vectors. We also have \(||\tilde {u}||_{1}=||\tilde {u}||_{2}\) and ||u||1=||u||2 for any two nonzero real numbers, which completes the proof. □
Based on Lemma 1, we have
$$\begin{array}{@{}rcl@{}} \!\!\!\! ||\tilde{u}' - u'||_{1}-\frac{||\tilde{u}' - u'||_{1}^{2}}{2||\tilde{u} - u||_{1}} \leq ||\tilde{u} - u||_{1}-\!\frac{||\tilde{u} - u||_{1}^{2}}{2||\tilde{u} - u||_{1}}, \end{array} $$
$$\begin{array}{@{}rcl@{}} \!\!\!\! ||\tilde{u}' + u'||_{1}-\frac{||\tilde{u}' + u'||_{1}^{2}}{2||\tilde{u} + u||_{1}} \leq ||\tilde{u} + u||_{1}-\!\frac{||\tilde{u} + u||_{1}^{2}}{2||\tilde{u} + u||_{1}}, \end{array} $$
when \(|\tilde {u}' - u'|\), \(|\tilde {u} - u|\), \(|\tilde {u}' + u'|\) and \(|\tilde {u} + u|\) are nonzero.
We now have the following theorem regarding GOSC-SCCA algorithm.
Theorem 1
The objective function value of GOSC-SCCA will monotonically decrease in each iteration till the algorithm converges.
The proof consists of two parts.
(1) Part 1: From Step 3 to Step 7 in Algorithm 1, u is the only unknown variable to be solved. The objective function (6) can be equivalently transferred to
$${} \mathbf{\mathcal{L}(\!u,v\!)} = -\mathbf{u}^{T} \mathbf{X}^{T} \mathbf{Y} \mathbf{v} + \lambda_{1}||\mathbf{u}||_{\text{GOSCAR}}+\frac{\beta_{1}}{2}||\mathbf{u}||_{1}+\frac{\gamma_{1}}{2}||\mathbf{Xu}||_{2}^{2} $$
According to Step 5 we have
$$\begin{aligned} & -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Yv}+\lambda_{1}\tilde{\mathbf{u}}^{\mathbf{T}}\tilde{\mathbf{L}}_{\mathbf{1}}\tilde{\mathbf{u}} +\lambda_{1}\tilde{\mathbf{u}}^{\mathbf{T}}{\tilde{\hat{\mathbf{L_1}}}}\tilde{\mathbf{u}}\\ &+\beta_{1}\tilde{\mathbf{u}}^{\mathbf{T}} \mathbf{\Lambda_{1}} \tilde{\mathbf{u}}+\gamma_{1}\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}X}\tilde{\mathbf{u}}\\ & \leq -\mathbf{u^{T}X^{T}Yv}+\lambda_{1}\mathbf{u^{T}}\mathbf{L_{1}}\mathbf{u} +\lambda_{1}\mathbf{u^{T}}\hat{\mathbf{L_{1}}}\mathbf{u}\\ &+\beta_{1}\mathbf{u^{T}\Lambda_{1} u}+\gamma_{1}\mathbf{u^{T}X^{T}Xu} \end{aligned} $$
where \(\tilde {\mathbf {u}}\) is the updated u.
It is known that \(\mathbf {u^{T}}\mathbf {L}\mathbf {u} = \sum w_{ij} ||u_{i}-u_{j}||_{1}^{2}\) if L is the laplacian matrix [17]. Similarly, \(\mathbf {u^{T}}\hat {\mathbf {L}}\mathbf {u} = \sum w_{ij} ||u_{i}+u_{j}||_{1}^{2}\). Then according to Eq. (9), we obtain
$${} \begin{aligned} & -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Yv}+2\lambda_{1}\sum w_{ij}\frac{||\tilde{u}_{i}-\tilde{u}_{j}||_{1}^{2}}{2||u_{i}-u_{j}||_{1}}\\ & +2\lambda_{1}\sum {\hat{w}}_{ij}\frac{||\tilde{u}_{i}+\tilde{u}_{j}||_{1}^{2}}{2||u_{i}+u_{j}||_{1}}+\beta_{1}\sum \frac{||\tilde{u}_{i}||_{1}^{2}}{2||u_{i}||_{1}}+\gamma_{1}\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}X}\tilde{\mathbf{u}}\\ & \leq -\mathbf{u^{T}X^{T}Yv}+2\lambda_{1}\sum w_{ij} \frac{||u_{i}-u_{j}||_{1}^{2}}{2||u_{i}-u_{j}||_{1}}+\\ & 2\lambda_{1}\sum {\hat{w}}_{ij} \frac{||u_{i}+u_{j}||_{1}^{2}}{2||u_{i}+u_{j}||_{1}} +\beta_{1}\sum \frac{||u_{i}||_{1}^{2}}{2||u_{i}||_{1}}+\gamma_{1}\mathbf{u^{T}X^{T}Xu} \end{aligned} $$
We first multiply both sides of Eq. (13) by 2λ 1 w ij for each feature pair separately, and similarly multiply both sides of Eq. (14) by 2λ 1 ŵ ij. After that, we multiply both sides of Eq. (12) by β 1. Finally, by adding all these inequalities to the corresponding sides of Eq. (15), we arrive at
$${} \begin{aligned} & -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Yv}+2\lambda_{1} \sum w_{ij}|{\tilde{u}}_{i}- {\tilde{u}}_{j}|+2\lambda_{1} \sum {\hat{w}}_{ij}|{\tilde{u}}_{i} + {\tilde{u}}_{j}|\\ & +\beta_{1}||\tilde{\mathbf{u}}||_{1}+\gamma_{1}||{\mathbf{X}}{\tilde{\mathbf{u}}}||_{2}^{2} \\ & \leq -\mathbf{u^{T}X^{T}Yv}+2\lambda_{1}\sum w_{ij}|u_{i}-u_{j}|+2\lambda_{1}\sum {\hat{w}}_{ij}|u_{i}+u_{j}| \\ & +\beta_{1}||\mathbf u||_{1}+\gamma_{1}||\mathbf{Xu}||_{2}^{2}. \end{aligned} $$
Let \(\lambda _{1}^{*} = 2\lambda _{1}\), \(\gamma _{1}^{*} = 2\gamma _{1},\beta _{1}^{*} = 2\beta _{1}\), we have
$${} \begin{aligned} -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Yv}+\frac{\lambda_{1}^{*}}{2}||\tilde{\mathbf{u}}||_{\text{GOSCAR}}+\frac{\beta_{1}^{*}}{2}||\tilde{\mathbf{u}}||_{1}+\frac{\gamma_{1}^{*}}{2}||\mathbf{X}\tilde{\mathbf{u}}||_{2}^{2} \\ \leq -\mathbf{u^{T}X^{T}Yv}+\frac{\lambda_{1}^{*}}{2}||\mathbf{u}||_{\text{GOSCAR}}+\frac{\beta_{1}^{*}}{2}||\mathbf{u}||_{1}+\frac{\gamma_{1}^{*}}{2}\mathbf{||Xu||}_{2}^{2}. \end{aligned} $$
Therefore, GOSC-SCCA will decrease the objective function in each iteration, i.e., \(\mathbf {\mathcal {L}(}\tilde {\mathbf {u}}\mathbf {,v)} \leq \mathbf {\mathcal {L}(u,v)}\).
(2) Part 2: From Step 8 to Step 12, the only unknown variable is v. Similarly, we can arrive at
$${} \begin{aligned} -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Y} \tilde{\mathbf{v}}+\frac{\lambda_{2}^{*}}{2}||\tilde{\mathbf{v}}||_{\text{GOSCAR}}+\frac{\beta_{2}^{*}}{2}||\tilde{\mathbf{v}}||_{1}+\frac{\gamma_{2}^{*}}{2}||\mathbf{Y}\tilde{\mathbf{v}}||_{2}^{2} \\ \leq -\tilde{\mathbf{u}}^{\mathbf{T}}\mathbf{X^{T}Yv}+\frac{\lambda_{2}^{*}}{2}||\mathbf{v}||_{\text{GOSCAR}}+\frac{\beta_{2}^{*}}{2}||\mathbf{v}||_{1}+\frac{\gamma_{2}^{*}}{2}||\mathbf{Yv}||_{2}^{2}. \end{aligned} $$
Thus GOSC-SCCA also decreases the objective function in each iteration during the second phase, i.e., \(\mathbf {\mathcal {L}}(\tilde {\mathbf {u}},\tilde {\mathbf {v}}) \leq \mathbf {\mathcal {L}}(\tilde {\mathbf {u}},\mathbf {v})\).
Based on the analysis above, we easily have \(\mathbf {\mathcal {L}}(\tilde {\mathbf {u}},\tilde {\mathbf {v}}) \leq \mathbf {\mathcal {L}(u,v)}\) according to the transitive property of inequalities. Therefore, the objective value monotonically decreases in each iteration. Note that the CCA objective \(\mathbf {\frac {u^{T}X^{T}Yv}{\sqrt {u^{T}X^{T}Xu}\sqrt {v^{T}Y^{T}Yv}}}\) ranges over [-1,1], and both u T X T X u and v T Y T Y v are constrained to be at most 1. Thus −u T X T Y v is lower bounded by -1, and so Eq. (6) is lower bounded by -1. In addition, Eqs. (16–17) imply that the KKT condition is satisfied. Therefore, the GOSC-SCCA algorithm will converge to a local optimum. □
Based on the convergence analysis, to facilitate the GOSC-SCCA algorithm, we set the stopping criterion of Algorithm 1 as max{|δ|∣δ∈(u t+1−u t )}≤τ and max{|δ|∣δ∈(v t+1−v t )}≤τ, where τ is a predefined estimation error. Here we set τ=10−5 empirically from the experiments.
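A direct transcription of this stopping criterion; the function name is an assumption.

```python
import numpy as np

def converged(u_new, u_old, v_new, v_old, tau=1e-5):
    """Stopping criterion: the largest absolute change in both canonical
    loadings must fall below the tolerance tau."""
    return (np.max(np.abs(u_new - u_old)) <= tau and
            np.max(np.abs(v_new - v_old)) <= tau)
```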
The grouping effect of GOSC-SCCA
For structured sparse learning in the high-dimensional setting, the automatic feature grouping property is of great importance [18]. In regression analysis, Zou and Hastie [18] suggested that a regression method exhibits a grouping effect when it assigns similar coefficients to features belonging to the same group. This is also the case for structured SCCA methods. Therefore, it is important and meaningful to investigate the theoretical bound of the grouping effect.
We have the following theorem in terms of GOSC-SCCA.
Let X and Y be two data sets, and let (λ,β,γ) be the pre-tuned parameters. Let \(\tilde {\mathbf {u}}\) be the solution to our SCCA problem of Eqs. (10–11). Suppose the i-th feature and the j-th feature are linked only to each other on the graph, \(\tilde {u}_{i}\) and \(\tilde {u}_{j}\) are their optimal solutions, and \(\text {sgn}(\tilde {u}_{i}) = \text {sgn}(\tilde {u}_{j})\) holds. Then the solutions \(\tilde {u}_{i}\) and \(\tilde {u}_{j}\) satisfy
$$ |\tilde{u}_{i}-\tilde{u}_{j}| \leq \frac{2\lambda_{1} w_{ij}}{\gamma_{1}}+ \frac{1}{\gamma_{1}}\sqrt{2(1-\rho_{ij})} $$
where ρ ij is the sample correlation between features i and j, and w i,j is the corresponding element in u-related matrix W 1.
Let \(\tilde {\mathbf {u}}\) be the solution to our problem Eq. (6), we have the following equations after taking the partial derivative with respect to \(\tilde u_{i}\) and \(\tilde u_{j}\), respectively.
$$\begin{aligned} (\lambda_{1} \mathbf{L}_{1}^{i}+\lambda_{1} {\hat{\mathbf{L}}}_{1}^{i} +\beta_{1}{\Lambda_{1}}_{ii}+\gamma_{1}\mathbf{x}_{i}^{T}\mathbf{x}_{i})\tilde{u}_{i}=\mathbf{x}_{i}^{T}\mathbf{Yv},\\ (\lambda_{1} \mathbf{L}_{1}^{j}+\lambda_{1} {\hat{\mathbf{L}}}_{1}^{j} +\beta_{1}{\Lambda_{1}}_{jj}+\gamma_{1}\mathbf{x}_{j}^{T}\mathbf{x}_{j})\tilde{u}_{j}=\mathbf{x}_{j}^{T}\mathbf{Yv}. \end{aligned} $$
We know that features i and j are linked only to each other, thus D ii =D jj =A ij =w ij for the intermediate matrices. Besides, we also know that \(\text {sgn}(\tilde u_{i})=\frac {\tilde u_{i}}{|\tilde u_{i}|}\), \(\text {sgn}(\tilde u_{i})=\text {sgn}(\tilde u_{j})\), \(\mathbf {x}_{i}^{T}\mathbf {x}_{i} = \rho _{ii} = 1\) and \(\mathbf {x}_{j}^{T}\mathbf {x}_{j} = \rho _{jj} = 1\). Then, according to the definitions of L 1, \({\hat {\mathbf {L}}}_{1}\) and Λ 1, we arrive at
$${} \begin{aligned} \lambda_{1} w_{ij}\text{sgn}(\tilde{u}_{i}-\tilde{u}_{j}) + \lambda_{1} {\hat{w}}_{ij}\text{sgn}(\tilde{u}_{i}+\tilde{u}_{j}) +\beta_{1}\text{sgn}(\tilde{u}_{i})+\gamma_{1}\tilde{u}_{i}\\ =\mathbf{x}_{i}^{T}\mathbf{Yv},\\ \lambda_{1} w_{ij}\text{sgn}(\tilde{u}_{j}-\tilde{u}_{i}) + \lambda_{1} {\hat{w}}_{ij}\text{sgn}(\tilde{u}_{i}+\tilde{u}_{j}) +\beta_{1}\text{sgn}(\tilde{u}_{j})+\gamma_{1}\tilde{u}_{j}\\ =\mathbf{x}_{j}^{T}\mathbf{Yv}. \end{aligned} $$
Subtracting these two equations, we obtain
$$ \begin{aligned} \gamma_{1}(\tilde{u}_{i}-\tilde{u}_{j}) = 2\lambda_{1}w_{ij}\text{sgn}(\tilde{u}_{j}-\tilde{u}_{i}) + (\mathbf{x}_{i}-\mathbf{x}_{j})^{T}\mathbf{Yv} \end{aligned} $$
Then we take ℓ 2-norm on both sides of Eq. (20), apply the triangle inequality, and use the equality \(||(\mathbf {x}_{i}-\mathbf {x}_{j})||_{2}^{2} = 2(1-\rho _{ij})\),
$$ \begin{aligned} \gamma_{1}|\tilde{u}_{i}-\tilde{u}_{j}| \leq 2\lambda_{1}w_{ij} + \sqrt{2(1-\rho_{ij})}\sqrt{||\mathbf{Yv}||_{2}^{2}} \end{aligned} $$
We have known that our problem implies \(||\mathbf {Yv}||_{2}^{2} \leq 1\), thus we arrive at
$$ \begin{aligned} |\tilde{u}_{i}-\tilde{u}_{j}| \leq \frac{2\lambda_{1} w_{ij}}{\gamma_{1}}+ \frac{1}{\gamma_{1}}\sqrt{2(1-\rho_{ij})} \end{aligned} $$
Now the upper bound for the canonical loadings v can also be obtained, i.e.
$$ \begin{aligned} |\tilde{v}_{i}-\tilde{v}_{j}| \leq \frac{2\lambda_{2} w'_{ij}}{\gamma_{2}}+ \frac{1}{\gamma_{2}}\sqrt{2(1-\rho'_{ij})} \end{aligned} $$
where \(\rho'_{ij}\) is the sample correlation between the i-th and j-th features associated with v, and \(w'_{ij}\) is the corresponding element of the v-related matrix \(\mathbf{W}_{2}\).
Theorem 2 provides a theoretical upper bound for the difference between the estimated coefficients of the i-th and j-th features. At first glance the bound may not appear tight. However, the bound is deliberately loose when \(\rho_{ij} \ll 1\): it does not force the pairwise difference of features i and j to be small, which is desirable for two irrelevant features [19]. If two features have very small correlation, i.e. \(\rho_{ij} \approx 0\), their coefficients need not be the same or similar, so there is no reason to impose a tight bound on their pairwise difference. This quantitative description of the grouping effect makes the GOSCAR penalty an ideal choice for structured SCCA.
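For concreteness, the bound of Theorem 2 can be evaluated numerically as in the sketch below (illustrative parameter values only; \(\lambda_{1}\), \(\gamma_{1}\) and \(w_{ij}\) are assumed to be known). It makes the behaviour discussed above visible: the bound shrinks as \(\rho_{ij}\) approaches 1 and loosens as the correlation weakens.

import numpy as np

def grouping_bound(lam1, gamma1, w_ij, rho_ij):
    # upper bound on |u_i - u_j| from Theorem 2
    return 2.0 * lam1 * w_ij / gamma1 + np.sqrt(2.0 * (1.0 - rho_ij)) / gamma1

for rho in (0.99, 0.9, 0.5, 0.0):
    print(rho, grouping_bound(lam1=1.0, gamma1=10.0, w_ij=1.0, rho_ij=rho))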
We compare GOSC-SCCA with several state-of-the-art SCCA and structured SCCA methods, including L1-SCCA [3], FL-SCCA [3] and KG-SCCA [14]. We do not compare GOSC-SCCA with S2CCA [8], ssCCA [7] or CCA-SG (CCA Sparse Group) [10], since they require prior knowledge to be available in advance. We do not choose NS-SCCA [5] as a benchmark either, for two reasons. (1) NS-SCCA generates many intermediate variables during its iterative procedure; as its authors state, its per-iteration complexity is linear in (p+|E|), which becomes \(O(p^{2})\) in the group pursuit mode. (2) Its penalty term is similar to that of KG-SCCA, which has been selected for comparison.
There are six parameters to be decided before running GOSC-SCCA, so tuning them blindly would take too much time. We tune the parameters following two principles. On one hand, Chen and Liu [5] found that the result is not very sensitive to \(\gamma_{1}\) and \(\gamma_{2}\), so we choose them from the small set {0.1, 1, 10}. On the other hand, if the parameters are too small, the SCCA reduces to CCA because the penalties have little influence, while too-large parameters over-penalize the results. Therefore, we tune the remaining parameters within the range \(\{10^{-3},10^{-2},10^{-1},10^{0},10^{1},10^{2},10^{3}\}\). In this study, we conduct all experiments using a nested 5-fold cross-validation strategy, and the parameters are tuned on the training set only. To save time, we tune these parameters only on the first run of the cross-validation, i.e. when the first four folds are used as the training set, and then use the tuned parameters for all remaining experiments. All methods use the same cross-validation partition in the experiments.
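A hedged sketch of this tuning protocol is given below; it is not the authors' script, and fit_gosc_scca and cca_corr are hypothetical placeholders for the solver and the held-out correlation metric.

import itertools
import numpy as np

gammas = [0.1, 1, 10]                       # candidate values for gamma1, gamma2
others = [10.0 ** k for k in range(-3, 4)]  # 1e-3 ... 1e3 for the remaining four parameters
grid = list(itertools.product(others, others, others, others, gammas, gammas))

def five_fold_indices(n, seed=0):
    # one fixed 5-fold partition shared by all methods
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), 5)

# folds = five_fold_indices(X.shape[0])
# train, test = np.concatenate(folds[:4]), folds[4]     # tuning on the first CV run only
# scores = {p: cca_corr(X[test], Y[test], *fit_gosc_scca(X[train], Y[train], *p)) for p in grid}
# best = max(scores, key=scores.get)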
Evaluation on synthetic data
We generate four synthetic datasets to investigate the performance of GOSC-SCCA and the benchmarks. Following [4, 5], each dataset is generated in four steps: 1) We predefine the group structures and use them to create u and v, respectively. 2) We create a latent vector z from \(N(0,\mathbf{I}_{n\times n})\). 3) We create X with each \(\mathbf{x}_{i} \sim N(z_{i}\mathbf{u},\Sigma_{x})\), where \((\Sigma_{x})_{jk}=e^{-|u_{j}-u_{k}|}\), and Y with each \(\mathbf{y}_{i} \sim N(z_{i}\mathbf{v},\Sigma_{y})\), where \((\Sigma_{y})_{jk}=e^{-|v_{j}-v_{k}|}\). 4) For the first group of nonzero features in u, we change half of their signs and also change the signs of the corresponding data. Since the synthetic datasets are order-independent, this setup is equivalent to randomly changing a portion of the feature signs in u. Because we change the signs of both the coefficients and the data simultaneously, we still have \(\mathbf{X}'\mathbf{u}'=\mathbf{X}\mathbf{u}\), where \(\mathbf{X}'\) and \(\mathbf{u}'\) denote the data and coefficients after the sign swap. We do the same on the Y side to make the simulation more challenging [13]. In addition, all four datasets have n=80, p=100 and q=120, but different correlation coefficients and different group structures. The simulation is therefore designed to cover a diverse set of cases for a fair comparison.
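A minimal sketch of this simulation procedure is given below, under the assumption that the true loadings u and v have already been built from the chosen group structures; the sign flip on the Y side is analogous and omitted, and exactly which indices are flipped is illustrative.

import numpy as np

def simulate(u, v, n=80, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)                                  # step 2: latent vector
    cov_x = np.exp(-np.abs(u[:, None] - u[None, :]))        # (Sigma_x)_jk = exp(-|u_j - u_k|)
    cov_y = np.exp(-np.abs(v[:, None] - v[None, :]))
    X = np.array([rng.multivariate_normal(zi * u, cov_x) for zi in z])   # step 3
    Y = np.array([rng.multivariate_normal(zi * v, cov_y) for zi in z])
    # step 4: flip the signs of some nonzero loadings in u and of the matching
    # columns of X, so that X'u' = Xu still holds (the Y side is analogous)
    flip = np.flatnonzero(u)[: np.count_nonzero(u) // 2]
    u_true = u.copy()
    u_true[flip] *= -1
    X[:, flip] *= -1
    return X, Y, u_true, v

# X, Y, u_true, v_true = simulate(u, v, n=80)   # with p = len(u) = 100, q = len(v) = 120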
The estimated correlation coefficients of each method on the four datasets are shown in Table 1. The best values, and those not significantly worse than the best, are shown in bold. On the training results, GOSC-SCCA either estimates the largest correlation coefficients (Dataset 1 and Dataset 4) or is not significantly worse than the best method (Dataset 2 and Dataset 3), and it has the best average correlation coefficients. On the testing results, GOSC-SCCA also outperforms the benchmarks in terms of the average correlation coefficients, although KG-SCCA does not perform significantly worse than our method. For the overall average across the four datasets, GOSC-SCCA obtains better correlation coefficients than the competing methods on both the training and the testing sets.
Table 1 5-fold cross-validation results on synthetic data
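For reference, the training and testing correlation coefficients reported in Table 1 can be computed from estimated loadings as sketched below; the exact metric (here the absolute Pearson correlation between Xu and Yv) is our assumption, not a statement of the authors' code.

import numpy as np

def canonical_corr(X, Y, u_hat, v_hat):
    # absolute Pearson correlation between the two canonical variates
    a, b = X @ u_hat, Y @ v_hat
    return abs(np.corrcoef(a, b)[0, 1])

# train_cc = canonical_corr(X_train, Y_train, u_hat, v_hat)
# test_cc  = canonical_corr(X_test,  Y_test,  u_hat, v_hat)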
Figure 1 shows the estimated canonical loadings of all four SCCA methods in a typical run. As we can see, L1-SCCA cannot accurately recover the true signals; it fails to recognize the coefficients whose signs were swapped. FL-SCCA slightly improves on L1-SCCA but cannot identify the sign-changed coefficients either. Our GOSC-SCCA successfully groups the nonzero features together and accurately recognizes the coefficients whose signs were changed. No matter what structures are contained in the dataset, GOSC-SCCA estimates signals that are very close to the ground truth. Although KG-SCCA also recognizes the sign-swapped coefficients, it is unable to recover every group of nonzero coefficients; for example, it misses two groups of nonzero features in v for the second dataset. The results on the synthetic datasets reveal that GOSC-SCCA not only estimates stronger correlation coefficients than the competing methods, but also identifies more accurate and cleaner canonical loadings.
Canonical loadings estimated on four synthetic datasets. The first column is for Dataset 1, the second column for Dataset 2, and so forth. For each dataset, the weights of u are shown on the left panel and those of v on the right. Each row, from top to bottom, corresponds to: (1) Ground Truth. (2) L1-SCCA. (3) FL-SCCA. (4) KG-SCCA. (5) GOSC-SCCA
Evaluation on real neuroimaging genetics data
Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The ADNI was launched in 2003 as a public-private partnership, led by Principal Investigator Michael W. Weiner, MD. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer's disease (AD). For up-to-date information, see www.adni-info.org.
Table 2 summarizes the characteristics of the ADNI dataset used in this work. Participants included 568 non-Hispanic Caucasian subjects: 196 healthy control (HC), 343 MCI and 28 AD participants. However, many participants' data are incomplete due to various factors such as data loss. After removing participants with incomplete information, we retained 282 participants for our experiments. The genotype data were downloaded from LONI (adni.loni.usc.edu), and the preprocessed [11C] Florbetapir PET scans (i.e., amyloid imaging data) were also obtained from LONI. The amyloid imaging data had been preprocessed before the experiment; the specific pipeline can be found in [14]. These imaging measures were adjusted by removing the effects of baseline age, gender, education and handedness via regression weights derived from the HC participants. We finally obtained 191 region-of-interest (ROI) level amyloid measurements extracted from the MarsBaR AAL atlas. We included four genetic markers, i.e., rs429358, rs439401, rs445925 and rs584007, from the known AD risk gene APOE. We intend to investigate whether GOSC-SCCA can identify the widely known association between amyloid deposition and these APOE SNPs.
Table 2 Real data characteristics
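A hedged sketch of the covariate-adjustment step described above is given below; variable names are illustrative, and the exact regression convention used in the original preprocessing may differ.

import numpy as np

def adjust_for_covariates(roi, covars, is_hc):
    # roi: (n, m) ROI measures; covars: (n, k) covariates; is_hc: boolean HC mask
    C = np.column_stack([np.ones(len(covars)), covars])           # intercept + covariates
    beta, *_ = np.linalg.lstsq(C[is_hc], roi[is_hc], rcond=None)  # weights fitted on HC only
    return roi - covars @ beta[1:]                                # subtract the covariate effect

# adjusted = adjust_for_covariates(amyloid_rois,
#                                  np.column_stack([age, sex, edu, hand]),
#                                  diagnosis == "HC")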
Shown in Table 3 are the 5-fold cross-validation results of the various SCCA methods. We observe that GOSC-SCCA and KG-SCCA obtain similar correlation coefficients on every run, in both training and testing performance. Moreover, both are significantly better than L1-SCCA and FL-SCCA, which is consistent with the analysis in [14]. This result shows that GOSC-SCCA improves the ability to identify interesting imaging genetic associations compared with L1-SCCA and FL-SCCA.
Table 3 5-fold cross-validation results on real data
Figure 2 shows the estimated canonical loadings obtained from the 5-fold cross-validation. To facilitate interpretation, we use heat maps for the real data. Each row denotes a method; u (genetic markers) is shown on the left panel and v (imaging markers) on the right. On the genetic side, all four SCCA methods exhibit a similar canonical loading pattern. Since every SCCA method here incorporates the lasso (\(\ell_{1}\)-norm), they select only the APOE e4 SNP (rs429358), a widely known AD risk marker, and discard the irrelevant SNPs to assure sparsity. On the imaging side, L1-SCCA identifies many signals, which are hard to interpret. FL-SCCA fuses adjacent features together due to its pairwise smoothness, which can easily be observed from the figure, but its result is difficult to interpret as well. GOSC-SCCA and KG-SCCA again perform similarly in this run; they both identify imaging signals in accordance with the findings in [20]. It is easy to observe that they estimate a very clean signal pattern, which makes further investigation straightforward. Recalling the results in Table 3, the association between the marker rs429358 and amyloid accumulation in the brain is relatively strong, and thus the signal is well captured by both KG-SCCA and GOSC-SCCA. In addition, the correlations among the imaging variables and those among the genetic variables are high enough that the signs of these correlations can hardly be corrupted by noise; that is, the signs of the sample correlations tend to be correctly estimated. Therefore, KG-SCCA does not suffer from the sign directionality issue and performs similarly to GOSC-SCCA. However, if some sample correlations are not very strong and their signs are mis-estimated, KG-SCCA may not work very well (see the results on the second synthetic dataset). In summary, this reveals that our method has better generalization ability and can identify biologically meaningful imaging genetic associations.
Canonical loadings estimated on the real dataset. Each row corresponds to a SCCA method: (1) L1-SCCA. (2) FL-SCCA. (3) KG-SCCA. (4) GOSC-SCCA. For each row, the estimated weights of u are shown on the left figure, and those of v on the right
In this paper, we have proposed a structured SCCA method, GOSC-SCCA, which is intended to reduce the estimation bias caused by incorrectly specified signs of sample correlations. GOSC-SCCA employs the GOSCAR (Graph OSCAR) regularizer, an extension of the popular OSCAR penalty. GOSC-SCCA can pull highly correlated features together no matter whether they are positively or negatively correlated. We also provide a theoretical quantitative description of the grouping effect of our SCCA method. An effective algorithm was proposed to solve the GOSC-SCCA problem, and the algorithm is guaranteed to converge.
We evaluated GOSC-SCCA and three other popular SCCA methods on both synthetic datasets and a real imaging genetics dataset. The synthetic datasets had different ground truths, i.e. different correlation coefficients and canonical loadings. GOSC-SCCA consistently identified strong correlation coefficients on both the training and testing sets, and either outperformed or performed similarly to the competing methods. Moreover, GOSC-SCCA recovered the signals closest to the ground truth among the compared methods.
The results on the real data showed that both GOSC-SCCA and KG-SCCA could find an important association between the APOE SNPs and the amyloid burden measure in the frontal region of the brain. KG-SCCA performed similarly to GOSC-SCCA on this real data largely because of the strong correlations among the variables within the genetic data, as well as among those within the imaging data. In this case, the signs of the correlation coefficients between these variables tend to be calculated correctly, so KG-SCCA does not face the sign directionality issue. On the other hand, if the correlations among some variables are not very strong, the performance of KG-SCCA can be affected by mis-estimated correlation signs. In that case GOSC-SCCA, which is designed to overcome the sign directionality issue, is expected to perform better than KG-SCCA, as already validated by the results on the second synthetic dataset.
The satisfactory performance of GOSC-SCCA, coupled with its theoretical convergence and grouping effect, demonstrates its promise as an effective structured SCCA method for identifying meaningful bi-multivariate imaging genetic associations. A few possible future directions follow. (1) The identified pattern between the APOE genotype and amyloid deposition is a well-known and relatively strong imaging genetic association; one direction is therefore to apply GOSC-SCCA to more complex imaging genetic data to reveal novel but less obvious associations. (2) The data tested in this study are brain-wide but targeted only at APOE SNPs. Another direction is to apply GOSC-SCCA to imaging genetic data of higher dimensionality, where more effective and efficient strategies for parameter tuning and cross-validation warrant further investigation. (3) The third direction is to employ GOSC-SCCA as a knowledge-driven approach, where pathways, networks or other relevant biological knowledge can be incorporated into the model to aid association discovery. In this case, comparative studies can also be done between GOSC-SCCA and other state-of-the-art knowledge-guided SCCA methods in bi-multivariate imaging genetics analyses.
We have presented a new structured sparse canonical correlation analysis (SCCA) model for analyzing brain imaging genetics data and identifying interesting imaging genetic associations. The model employs a regularization term based on the graph octagonal selection and clustering algorithm for regression (GOSCAR). The goal is twofold: (1) encourage highly correlated features to have similar canonical weights, and (2) reduce the estimation bias by removing the requirement to pre-define the sign of the sample correlation. As a result, the model can pull highly correlated features together no matter whether they are positively or negatively correlated. Empirical results on both synthetic and real data demonstrate the promise of the proposed method.
Vounou M, Nichols TE, Montana G. Discovering genetic associations with high-dimensional neuroimaging phenotypes: A sparse reduced-rank regression approach. NeuroImage. 2010; 53(3):1147–59.
Parkhomenko E, Tritchler D, Beyene J. Sparse canonical correlation analysis with application to genomic data integration. Stat Appl Genet Mol Biol. 2009; 8(1):1–34.
Witten DM, Tibshirani R, Hastie T. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics. 2009; 10(3):515–34.
Chen X, Liu H, Carbonell JG. Structured sparse canonical correlation analysis. In: International Conference on Artificial Intelligence and Statistics, JMLR Proceedings 22, JMLR.org: 2012.
Chen X, Liu H. An efficient optimization algorithm for structured sparse cca, with applications to eqtl mapping. Stat Biosci. 2012; 4(1):3–26.
Chi EC, Allen G, Zhou H, Kohannim O, Lange K, Thompson PM, et al. Imaging genetics via sparse canonical correlation analysis. In: Biomedical Imaging (ISBI), 2013 IEEE 10th Int Sym On: 2013. p. 740–3, doi:http://dx.doi.org/10.1109/ISBI.2013.6556581.
Lin D, Calhoun VD, Wang YP. Correspondence between fMRI and SNP data by group sparse canonical correlation analysis. Medical image analysis. 2014; 18(6):891–902.
Du L, Yan J, Kim S, Risacher SL, Huang H, Inlow M, Moore JH, Saykin AJ, Shen L. A novel structure-aware sparse learning algorithm for brain imaging genetics. In: International Conference on Medical Image Computing and Computer Assisted Intervention. Berlin, Germany: Springer: 2014. p. 329–36.
Witten DM, Tibshirani RJ. Extensions of sparse canonical correlation analysis with applications to genomic data. Stat Appl Genet Mol Biol. 2009; 8(1):1–27.
Chen J, Bushman FD, et al. Structure-constrained sparse canonical correlation analysis with an application to microbiome data analysis. Biostatistics. 2013; 14(2):244–58.
Bondell HD, Reich BJ. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with oscar. Biometrics. 2008; 64(1):115–23.
Li C, Li H. Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics. 2008; 24(9):1175–82.
Yang S, Yuan L, Lai YC, Shen X, Wonka P, Ye J. Feature grouping and selection over an undirected graph. In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, USA: ACM: 2012. p. 922–30.
Yan J, Du L, Kim S, Risacher SL, Huang H, Moore JH, Saykin AJ, Shen L. Transcriptome-guided amyloid imaging genetic analysis via a novel structured sparse learning algorithm. Bioinformatics. 2014; 30(17):564–71.
Hardoon D, Szedmak S, Shawe-Taylor J. Canonical correlation analysis: An overview with application to learning methods. Neural Comput. 2004; 16(12):2639–64.
Nie F, Huang H, Cai X, Ding CH. Efficient and robust feature selection via joint ℓ2,1-norms minimization. In: Advances in Neural Information Processing Systems. Massachusetts, USA: The MIT Press: 2010. p. 1813–21.
Grosenick L, Klingenberg B, Katovich K, Knutson B, Taylor JE. Interpretable whole-brain prediction analysis with graphnet. NeuroImage. 2013; 72:304–21.
Zou H, Hastie T. Regularization and variable selection via the elastic net. J Royal Stat Soc Ser B (Stat Method). 2005; 67(2):301–20.
Lorbert A, Eis D, Kostina V, Blei DM, Ramadge PJ. Exploiting covariate similarity in sparse regression via the pairwise elastic net. In: International Conference on Artificial Intelligence and Statistics, JMLR Proceedings 9, JMLR.org: 2010. p. 477–84.
Ramanan VK, Risacher SL, Nho K, Kim S, Swaminathan S, Shen L, Foroud TM, Hakonarson H, Huentelman MJ, Aisen PS, et al. Apoe and bche as modulators of cerebral amyloid deposition: a florbetapir pet genome-wide association study. Mole psychiatry. 2014; 19(3):351–7.
At Indiana University, this work was supported by NIH R01 LM011360, U01 AG024904, RC2 AG036535, R01 AG19771, P30 AG10133, UL1 TR001108, R01 AG 042437, R01 AG046171, and R03 AG050856; NSF IIS-1117335; DOD W81XWH-14-2-0151, W81XWH-13-1-0259, and W81XWH-12-2-0012; NCAA 14132004; and CTSI SPARC Program. At University of Texas at Arlington, this work was supported by NSF CCF-0830780, CCF-0917274, DMS-0915228, and IIS-1117965. At University of Pennsylvania, the work was supported by NIH R01 LM011360, R01 LM009012, and R01 LM010098.
Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
Publication charges for this article have been funded by the corresponding author.
This article has been published as part of BMC Systems Biology Volume 10 Supplement 3, 2016: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2015: systems biology. The full contents of the supplement are available online at
http://bmcsystbiol.biomedcentral.com/articles/supplements/volume-10-supplement-3.
Data used in the preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu).
LD, JM, AS, and LS: overall design. LD, HH, and MI: modeling and algorithm design. LD and JY: experiments. SK, SR, and AS: data preparation and result evaluation. LD, JY and LS: manuscript writing. All authors read and approved the final manuscript.
School of Medicine, Indiana University, Indianapolis, USA
Lei Du, Jingwen Yan, Sungeun Kim, Shannon Risacher, Andrew Saykin & Li Shen
Computer Science & Engineering, University of Texas at Arlington, Arlington, USA
Heng Huang
Terre Haute, USA
Mark Inlow
School of Medicine, University of Pennsylvania, Philadelphia, USA
Lei Du
Jingwen Yan
Sungeun Kim
Shannon Risacher
Andrew Saykin
Correspondence to Li Shen.
Alzheimer's Disease Neuroimaging Initiative, Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf
Du, L., Huang, H., Yan, J. et al. Structured sparse CCA for brain imaging genetics via graph OSCAR. BMC Syst Biol 10, 68 (2016). https://doi.org/10.1186/s12918-016-0312-1
Brain imaging genetics
Canonical correlation analysis
Structured sparse model
Approximation by algebraic polynomials in metric spaces $L_{\psi}$
T.A. Agoshkova (Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan), http://orcid.org/0000-0002-8030-7445
In the space $L_{\psi}[-1;1]$ of non-periodic functions with metric $\rho(f,0)_{\psi} = \int\limits_{-1}^1 \psi(|f(x)|)dx$, where $\psi$ is a function of modulus-of-continuity type, we study the Jackson inequality for the modulus of continuity of $k$-th order in the case of approximation by algebraic polynomials. It is proved that the direct Jackson theorem holds if and only if the lower dilation index of the function $\psi$ is not equal to zero.
the direct Jackson theorem; modulus of continuity; modulus of continuity of k-th order; the lower dilation index; an algebraic polynomial
Sample records for continuous operations control
Controlled time of arrival windows for already initiated energy-neutral continuous descent operations
Dalmau Codina, Ramon; Prats Menéndez, Xavier
Continuous descent operations with controlled times of arrival at one or several metering fixes could enable environmentally friendly procedures without compromising terminal airspace capacity. This paper focuses on controlled time of arrival updates once the descent has already been initiated, assessing the feasible time window (and associated fuel consumption) of continuous descent operations requiring neither thrust nor speed-brake usage along the whole descent (i.e. only elevator control ...
A Motion Control of a Robotic Walker for Continuous Assistance during Standing, Walking and Seating Operation
Chugo, Daisuke; Takase, Kunikatsu
In this paper, we develop an active walker system that provides continuous assistance during standing, walking and seating operations and that cooperates with the developed standing assistance system safely and stably. To realize this, our walker coordinates its assisting position in cooperation with the standing assistance manipulator according to the posture of the patient. Furthermore, our walker adjusts the seating position when the patient sits down, an operation that carries a high risk of falling. Using our proposed syst...
Toward demonstrating controlled-X operation based on continuous-variable four-partite cluster states and quantum teleporters
Wang Yu; Su Xiaolong; Shen Heng; Tan Aihong; Xie Changde; Peng Kunchi
One-way quantum computation based on measurement and multipartite cluster entanglement offers the ability to perform a variety of unitary operations only through different choices of measurement bases. Here we present an experimental study toward demonstrating the controlled-X operation, a two-mode gate in which continuous variable (CV) four-partite cluster states of optical modes are utilized. Two quantum teleportation elements are used for achieving the gate operation of the quantum state transformation from input target and control states to output states. By means of the optical cluster state prepared off-line, homodyne detection and electronic feedforward, the information carried by the input control state is transformed to the output target state. The presented scheme of the controlled-X operation based on teleportation can be implemented nonlocally and deterministically. The distortion of the quantum information resulting from the imperfect cluster entanglement is estimated with the fidelity.
Training to Operate a Simulated Micro-Unmanned Aerial Vehicle With Continuous or Discrete Manual Control
Durlach, Paula J; Neumann, John L; Billings, Deborah R
.... They were then given training missions during which performance was measured. Eight conditions were investigated, formed by crossing three 2-level factors: input device (mouse vs. game controller...
An Operational Foundation for Delimited Continuations
Biernacka, Malgorzata; Biernacki, Dariusz; Danvy, Olivier
We present an abstract machine and a reduction semantics for the lambda-calculus extended with control operators that give access to delimited continuations in the CPS hierarchy. The abstract machine is derived from an evaluator in continuation-passing style (CPS); the reduction semantics (i.......e., a small-step operational semantics with an explicit representation of evaluation contexts) is constructed from the abstract machine; and the control operators are the shift and reset family. We also present new applications of delimited continuations in the CPS hierarchy: finding list prefixes...
LANL continuity of operations plan
Senutovitch, Diane M [Los Alamos National Laboratory]
The Los Alamos National Laboratory (LANL) is a premier national security research institution, delivering scientific and engineering solutions for the nation's most crucial and complex problems. Our primary responsibility is to ensure the safety, security, and reliability of the nation's nuclear stockpile. LANL emphasizes worker safety, effective operational safeguards and security, and environmental stewardship, while outstanding science remains the foundation of work at the Laboratory. In addition to supporting the Laboratory's core national security mission, our work advances bioscience, chemistry, computer science, earth and environmental sciences, materials science, and physics disciplines. To accomplish LANL's mission, we must ensure that the Laboratory's essential functions (EFs) continue to be performed during a continuity event, including localized acts of nature, accidents, technological or attack-related emergencies, and pandemic or epidemic events. The LANL Continuity of Operations (COOP) Plan documents the overall LANL COOP Program and provides the operational framework to implement continuity policies, requirements, and responsibilities at LANL, as required by DOE O 150.1, Continuity Programs, May 2008. LANL must maintain its ability to perform the nation's primary mission essential functions (PMEFs), which are: (1) maintain the safety and security of nuclear materials in the DOE Complex at fixed sites and in transit; (2) respond to a nuclear incident, both domestically and internationally, caused by terrorist activity, natural disaster, or accident, including mobilizing the resources to support these efforts; and (3) support the nation's energy infrastructure. This plan supports Continuity of Operations for LANL. It issues LANL policy as directed by DOE O 150.1, Continuity Programs, and provides direction for the orderly continuation of LANL EFs for 30 days of closure or 60 days for a pandemic/epidemic event. Initiation of COOP operations may
78 FR 21245 - Continuity of Operations Plan
...; Order No. 778] Continuity of Operations Plan AGENCY: Federal Energy Regulatory Commission, DOE. ACTION: Final rule. SUMMARY: In this Final Rule the Commission revises its Continuity of Operations Plan... Commission's Continuity of Operations Plan (COOP) regulations to incorporate its regional offices into the...
Continuity of Accelerator Operations during an Extended Pandemic
Noel Okay
The Operations group for the Continuous Electron Accelerator Facility in Newport News, Virginia has developed a Continuity of Operations plan for pandemic conditions, when high absenteeism may impact accelerator control room operations. Protocols are presented that address both the potential spread of illness in the control room environment and the maintenance of minimum staffing requirements for contiguous accelerator operation. During acute pandemic conditions, local government restrictions may prevent continued operations, but during extended periods of high absenteeism accelerator operations can continue when some added precautionary measures and staffing adjustments are made in the way business is done.
Analytic continuation of Toeplitz operators
Bommier-Hato, H.; Engliš, Miroslav; Youssfi, E.-H.
Roč. 25, č. 4 (2015), s. 2323-2359 ISSN 1050-6926 R&D Projects: GA MŠk(CZ) MEB021108 Institutional support: RVO:67985840 Keywords: Toeplitz operator * Bergman space * strictly pseudoconvex domain Subject RIV: BA - General Mathematics Impact factor: 1.109, year: 2015 http://link.springer.com/article/10.1007%2Fs12220-014-9515-0
...; Order No. 765] Continuity of Operations Plan AGENCY: Federal Energy Regulatory Commission, DOE. ACTION... Operations Plan to allow the Commission the discretion to better address not only long-term and catastrophic... discretion regarding: the activation and deactivation of the Continuity of Operations Plan and any suspension...
Operator continued fraction and bound states
Pindor, M.
The effective Hamiltonian of the model space perturbation theory (multilevel Rayleigh-Schroedinger theory) is expressed as an operator continued fraction. In the case of a nondegenerate model space the expression becomes an operator branched continued fraction. The method is applied to the harmonic oscillator with the kinetic energy treated as the perturbation and to the anharmonic oscillator
Comparative analysis of the operation efficiency of the continuous and relay control systems of a multi-axle wheeled vehicle suspension
Zhileykin, M. M.; Kotiev, G. O.; Nagatsev, M. V.
In order to improve the efficiency of multi-axle wheeled vehicles (MWVs), automotive engineers are increasing their cruising speed. One of the promising ways to improve the ride comfort of an MWV is the development of dynamic active suspension systems and of control laws for such systems. Here, by dynamic control systems we mean systems operating in real time and using the current (instantaneous) values of the state variables. The aim of this work is to develop optimal control laws for the MWV suspension that reduce vibrations at the driver's seat under kinematic excitation. The authors have developed optimal control laws for damping the oscillations of the MWV body; these laws reduce the vibrations at the driver's seat, allow an increase in the maximum speed of the vehicle, and generate the control inputs in real time. The authors demonstrate the efficiency of the proposed control laws by mathematical simulation of the MWV driving over an unpaved road with kinematic excitation. The proposed optimal control laws can be used in MWV suspension control systems with magnetorheological shock absorbers or controlled hydropneumatic springs. A further line of research is the development of energy-efficient MWV suspension control systems with continuous control input on the vehicle body.
Singular continuous spectrum for palindromic Schroedinger operators
Hof, A.; Knill, O.; Simon, B.
We give new examples of discrete Schroedinger operators with potentials taking finitely many values that have purely singular continuous spectrum. If the hull X of the potential is strictly ergodic, then the existence of just one potential x in X for which the operator has no eigenvalues implies that there is a generic set in X for which the operator has purely singular continuous spectrum. A sufficient condition for the existence of such an x is that there is a z ∈ X that contains arbitrarily long palindromes. Thus we can define a large class of primitive substitutions for which the operators are purely singular continuous for a generic subset of X. The class includes well-known substitutions like Fibonacci, Thue-Morse, Period Doubling, binary non-Pisot and ternary non-Pisot. We also show that the operator has no absolutely continuous spectrum for all x ∈ X if X derives from a primitive substitution. For potentials defined by circle maps, x_n = 1_J(θ_0 + nα), we show that the operator has purely singular continuous spectrum for a generic subset of X for all irrational α and every half-open interval J. (orig.)
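As a small illustration of the substitutions mentioned in this record (not part of the original abstract), the following sketch iterates the Fibonacci substitution a → ab, b → a and checks a finite prefix of the resulting word for palindromic factors:

FIBONACCI = {"a": "ab", "b": "a"}   # the Fibonacci substitution

def iterate(sub, word="a", n=12):
    # apply the substitution n times to the seed word
    for _ in range(n):
        word = "".join(sub[c] for c in word)
    return word

def has_palindrome_of_length(word, k):
    return any(word[i:i + k] == word[i:i + k][::-1] for i in range(len(word) - k + 1))

w = iterate(FIBONACCI)
print([k for k in range(1, 30) if has_palindrome_of_length(w, k)])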
An Operational Foundation for Delimited Continuations in the CPS Hierarchy
Workload Control with Continuous Release
Phan, B. S. Nguyen; Land, M. J.; Gaalman, G. J. C.
Workload Control (WLC) is a production planning and control concept which is suitable for the needs of make-to-order job shops. Release decisions based on the workload norms form the core of the concept. This paper develops continuous time WLC release variants and investigates their due date
Criteria for approving equipment for continued operation
Narayanan, T.V.
In May 1988, the Pressure Vessel Research Committee (PVRC) of the Welding Research Council (WRC) initiated four projects in support of ASME's efforts to develop Codes and Standards for life prediction and life extension of nuclear and fossil power plant components. These projects are: (1) Criteria for Approving Equipment for Continued Operation (2) Guidelines and Procedures for Evaluating Piping for Continued Operation (3) Nondestructive Evaluation of Material Degradation (4) Operation and Maintenance History and Life Cycle Management. The PVRC awarded a contract to Foster Wheeler Development Corporation to undertake the first of these projects. The specific objective was to develop a program plan that will lead to development of ''Criteria for Approving Equipment for Continued Operation.'' The program is divided into the following four tasks: Task 1: Literature Search; Task 2: Telephone Interview and Consultation; Task 3: Program Plan Development; Task 4: Preparation of a Summary Report. This report is in fulfillment of the above project. As part of this study, the author reviewed about 145 reports, papers and books relating to various aspects of life extension. Various experts were also consulted who are involved in EPRI, NRC, ASME, PVRC, MPC, and utility studies as well as other research projects. The conclusions and recommendations for Code-related activities are summarized
Justification for Continued Operation for Tank 241-Z-361
BOGEN, D.M.
This justification for continued operations (JCO) summarizes analyses performed to better understand and control the potential hazards associated with Tank 241-Z-361. This revision of the JCO has been prepared to identify and control the hazards associated with sampling the tank using techniques developed and approved for use in the Tank Waste Remediation System (TWRS) at Hanford.
Continuous Air Monitor Operating Experience Review
Cadwallader, L.C.; Bruyere, S.A.
Continuous air monitors (CAMs) are used to sense radioactive particulates in room air of nuclear facilities. CAMs alert personnel of potential inhalation exposures to radionuclides and can also actuate room ventilation isolation for public and environmental protection. This paper presents the results of a CAM operating experience review of the DOE Occurrence Reporting and Processing System (ORPS) database from the past 18 years. Regulations regarding these monitors are briefly reviewed. CAM location selection and operation are briefly discussed. Operating experiences reported by the U.S. Department of Energy and in other literature sources were reviewed to determine the strengths and weaknesses of these monitors. Power losses, human errors, and mechanical issues cause the majority of failures. The average 'all modes' failure rate is 2.65E-05/hr. Repair time estimates vary from an average repair time of 9 hours (with spare parts on hand) to 252 hours (without spare parts on hand). These data should support the use of CAMs in any nuclear facility, including the National Ignition Facility and the international ITER experiment
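For orientation (a back-of-the-envelope conversion, not stated in the record), the quoted all-modes failure rate corresponds to a mean time between failures of roughly
$$ \mathrm{MTBF} = \frac{1}{\lambda} = \frac{1}{2.65\times 10^{-5}\,\mathrm{hr}^{-1}} \approx 3.8\times 10^{4}\,\mathrm{hr} \approx 4.3\ \text{years}. $$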
Experience of developing and introduction of the integrated systems for accounting, control and physical protection of nuclear materials under conditions of continuously operating production
Filatov, O.N.; Rogachev, V.E.
Improvements of the integrated systems for accounting, control and physical protection (ACPP) of nuclear materials under the conditions of a practically continuous production cycle are described. As a result of the development and introduction of the improved means and technologies, the developed systems successfully realized the requirements for reliable ACPP of nuclear materials [ru
Continuity of operations/continuity of government for state-level transportation organizations : brief.
As a result of a federal requirement, all non-federal entities that own or operate critical infrastructure are required to develop Continuity of Operations/Continuity of Government (COOP/COG) Plans. Transportation is a critical infrastructure com...
High level waste facilities - Continuing operation or orderly shutdown
Decker, L.A.
Two options for the Environmental Impact Statement No Action alternatives describe operation of the radioactive liquid waste facilities at the Idaho Chemical Processing Plant at the Idaho National Engineering and Environmental Laboratory. The first alternative describes continued operation of all facilities as planned and budgeted through 2020; institutional control for 100 years would follow shutdown of the operational facilities. Alternatively, the facilities would be shut down in an orderly fashion without completing planned activities. The facilities and associated operations are described. Remaining sodium-bearing liquid waste will be converted to solid calcine in the New Waste Calcining Facility (NWCF) or will be left in the waste tanks. The calcine solids will be stored in the existing Calcine Solids Storage Facilities (CSSF). Regulatory and cost impacts are discussed.
Access control system operation
Barnes, L.D.
An automated method for the control and monitoring of personnel movement throughout the site was developed under contract to the Department of Energy by Allied-General Nuclear Services (AGNS) at the Barnwell Nuclear Fuel Plant (BNFP). These automated features provide strict enforcement of personnel access policy without routine patrol officer involvement. Identification methods include identification by employee ID number, identification by voice verification and identification by physical security officer verification. The ability to grant each level of access authority is distributed over the organization to prevent any single individual at any level in the organization from being capable of issuing an authorization for entry into sensitive areas. Each access event is recorded. As access events occur, the inventory of both the entered and the exited control area is updated so that a current inventory is always available for display. The system has been operated since 1979 in a development mode and many revisions have been implemented in hardware and software as areas were added to the system. Recent changes have involved the installation of backup systems and other features required to achieve a high reliability. The access control system and recent operating experience are described
Operational Aspects of Continuous Pharmaceutical Production
Mitic, Aleksandar
Introduction of the Process Analytical Technology (PAT) Initiative, the Quality by Design (QbD) approach and the Continuous Improvement (CI) methodology/philosophy is considered a huge milestone in the modern pharmaceutical industry. The above concepts, when applied to a pharmaceutical...... satisfaction of the demands defined by the PAT Initiative. This approach could be considered as establishing a Lean Production System (LPS), which is usually supported with tools associated with Process Intensification (PI) and Process Optimization (PO). Development of continuous processes is often connected...... tools, such as microwave-assisted organic synthesis (MAOS), ultrasounds, meso-scale flow chemistry and microprocess technology. Furthermore, development of chemical catalysts and enzymes enabled further acceleration of some chemical reactions that were known to be very slow or impossible to perform...
Continuous anti-Stokes Raman laser operation
Feitisch, A.; Muller, T.; Welling, H.; Wellegehausen, B.
The anti-Stokes Raman laser (ASRL) process has proved to be a method that works well for frequency upconversion and for the generation of powerful tunable narrowband (pulsed) laser radiation in the UV and VUV spectral range. This conversion process allows large frequency shifts in a single step, high output energies, and high efficiencies. A basic requirement is population inversion on a two-photon transition, where, in general, the upper level of the transition should be metastable. Up to now the ASRL technique has only been demonstrated in the pulsed regime, where the necessary population inversion was generated by photodissociation or inner-shell photoionization. These inversion techniques, however, cannot be transferred to cw operation of an ASRL, and therefore other inversion techniques have to be developed. Here a novel approach for creating the necessary population inversion is proposed that uses well-known cw gas lasers as the active material for the conversion process. The basic idea is to use either existing two-photon population inversions in a cw laser material or to generate the necessary population inversion by applying a suitable population transfer process to the material. A natural two-photon inversion situation in a laser material is evident whenever a cascade laser can be operated. Cascade laser-based anti-Stokes schemes are possible in a He-Ne laser discharge, and investigations of these schemes are discussed.
Continuous Descent Operations using Energy Principles
De Jong, P.M.A.
During today's aircraft descents, Air Traffic Control (ATC) commands aircraft to descend to specific altitudes and directions to maintain separation and spacing from other aircraft. When the aircraft is instructed to maintain an intermediate descent altitude, it requires engine thrust to maintain
Nonlinear analysis and control of a continuous fermentation process
Szederkényi, G.; Kristensen, Niels Rode; Hangos, K.M
Different types of nonlinear controllers are designed and compared for a simple continuous bioreactor operating near optimal productivity. This operating point is located close to a fold bifurcation point. Nonlinear analysis of stability, controllability and zero dynamics is used to investigate o...... are recommended for the simple fermenter. Passivity-based controllers have been found to be globally stable and not very sensitive to uncertainties in the reaction rate and the controller parameter, but they require full nonlinear state feedback.
Continuous tokamak operation with an internal transformer
Singer, C.E.; Mikkelsen, D.R.
A large improvement in efficiency of current drive in a tokamak can be obtained using neutral beam injection to drive the current in a plasma which has low density and high resistivity. The current established under such conditions acts as the primary of a transformer to drive current in an ignited high-density plasma. In the context of a model of plasma confinement and fusion reactor costs, it is shown that such transformer action has substantial advantages over strict steady-state current drive. It is also shown that cycling plasma density and fusion power is essential for effective operation of an internal transformer cycle. Fusion power loading must be periodically reduced for intervals whose duration is comparable to the maximum of the particle confinement and thermal inertia timescales for plasma fueling and heating. The design of neutron absorption blankets which can tolerate reduced power loading for such short intervals is identified as a critical problem in the design of fusion power reactors
Discrete-continuous bispectral operators and rational Darboux transformations
Boyallian, Carina; Portillo, Sofia
In this Letter we construct examples of discrete-continuous bispectral operators obtained by rational Darboux transformations applied to a regular pseudo-difference operator with constant coefficients. Moreover, we give an explicit procedure to write down the differential operators involved in the bispectral situation corresponding to the pseudo-difference operator obtained by the Darboux process.
Engineering Process Monitoring for Control Room Operation
Bätz, M
A major challenge in process operation is to reduce costs and increase system efficiency whereas the complexity of automated process engineering, control and monitoring systems increases continuously. To cope with this challenge the design, implementation and operation of process monitoring systems for control room operation have to be treated as an ensemble. This is only possible if the engineering of the monitoring information is focused on the production objective and is lead in close coll...
General predictive control using the delta operator
Jensen, Morten Rostgaard; Poulsen, Niels Kjølstad; Ravn, Ole
This paper deals with two discrete-time operators, the conventional forward shift operator and the δ-operator. Both operators are treated with a view to constructing suitable solutions of the Diophantine equation for the purpose of prediction. A general step-recursive scheme is presented. Finally... a general predictive control (GPC) is formulated and applied adaptively to a continuous-time plant...
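For reference (a standard definition, not restated in the record), the δ-operator is related to the forward shift operator q and the sampling period Δ by
$$ \delta = \frac{q-1}{\Delta}, \qquad \delta x(k) = \frac{x(k+1)-x(k)}{\Delta}, $$
so that δ tends to the continuous-time derivative operator as Δ → 0, which is what makes δ-domain predictors numerically well conditioned at fast sampling rates.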
Matrix Wings: Continuous Process Improvement an Operator Can Love
key processes in our normal operations. In addition to the almost inevitable resistance to change, one of the points of pushback is that members of... Dr. A. J. Briding, Colonel, USAF, Retired... Operations for the 21st Century (AFSO21), the latest comprehensive effort at finding the right approach for implementing a continuous process
Alertness, performance and off-duty sleep on 8-hour and 12-hour night shifts in a simulated continuous operations control room setting
Baker, T.L. [Institute for Circadian Physiology, Boston, MA (United States)]
A growing number of nuclear power plants in the United States have adopted routine 12-hr shift schedules. Because of the potential impact that extended work shifts could have on safe and efficient power plant operation, the U.S. Nuclear Regulatory Commission funded research on 8-hr and 12-hr shifts at the Human Alertness Research Center (HARC) in Boston, Massachusetts. This report describes the research undertaken: a study of simulated 8-hr and 12-hr work shifts that compares alertness, speed, and accuracy at responding to simulator alarms, and relative cognitive performance, self-rated mood and vigor, and sleep-wake patterns of 8-hr versus 12-hr shift workers.
Sleep/Wakefulness Management in Continuous/Sustained Operations
......There is an antinomy between the physiological requirement and the operational requirement. To be able to continue the mission but also to preserve our security and the security of the crew we need an appropriate sleep-wakefulness management...
National Geospatial Data Asset (NGDA) Continuously Operating Reference Stations (CORS)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The National Geodetic Survey (NGS), an office of NOAA's National Ocean Service, manages a network of Continuously Operating Reference Stations (CORS) that provide...
Report: EPA Needs to Improve Continuity of Operations Planning
Report #10-P-0017, October 27, 2009. EPA has limited assurance that it can successfully maintain continuity of operations and execute its mission essential functions during a significant national event such as a pandemic influenza outbreak.
The overlap Dirac operator as a continued fraction
Wenger, U.; Deutsches Elektronen-Synchrotron
We use a continued fraction expansion of the sign-function in order to obtain a five dimensional formulation of the overlap lattice Dirac operator. Within this formulation the inverse of the overlap operator can be calculated by a single Krylov space method and nested conjugate gradient procedures are avoided. We point out that the five dimensional linear system can be made well conditioned using equivalence transformations on the continued fractions. (orig.)
Lp-continuity for Calderón–Zygmund operator
Given a Calderón–Zygmund operator which satisfies the Hörmander condition, we prove that if it maps all characteristic atoms to WL^1, then it is continuous from L^p to L^p (1 < p < ∞). So the study of strong continuity on arbitrary functions in L^p is reduced to the study of weak continuity on ...
Continuity of operations/continuity of government for state-level transportation organizations.
The Homeland Security Presidential Directive 20 (HSPD-20) requires all local, state, tribal and territorial government agencies, and private sector owners of critical infrastructure and key resources (CI/KR), to create a Continuity of Operations/Con...
Linearizing control of continuous anaerobic fermentation processes
Babary, J.P. [Centre National d'Etudes Spatiales (CNES), 31 - Toulouse (France). Laboratoire d'Analyse et d'Architecture des Systemes]; Simeonov, I. [Institute of Microbiology, Bulgarian Academy of Sciences (Bulgaria)]; Ljubenova, V. [Institute of Control and System Research, BAS (Country unknown/Code not available)]; Dochain, D. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium)]
Biotechnological processes (BTP) involve living organisms. In anaerobic fermentation (the biogas production process) the organic matter is mineralized by microorganisms into biogas (methane and carbon dioxide) in the absence of oxygen. The biogas is an additional energy source. Generally this process is carried out as a continuous BTP. It has been widely used in practice and has been confirmed as a promising method for solving some energy and ecological problems in agriculture and industry. Because of the very restrictive on-line information, the control of this process in continuous mode is often reduced to controlling the biogas production rate or the concentration of the polluting organic matter (de-pollution control) at a desired value in the presence of some perturbations. Investigations show that classical linear controllers perform well only in the linear zone of the strongly non-linear input-output characteristics. More sophisticated robust and variable-structure (VSC) controllers have also been studied, but due to the strongly non-linear dynamics of the process the performance of the closed-loop system may degrade in these cases. The aim of this paper is to investigate different linearizing algorithms for control of a continuous non-linear methane fermentation process using the dilution rate as a control action and taking into account some practical implementation aspects. (authors) 8 refs.
Identification of efforts required for continued safe operation of KANUPP
Ghafoor, M.A.; Hashmi, J.A.; Siddiqui, Z.H.
Kanupp, the first commercial CANDU PHWR, rated at 137 MWe, was built on a turnkey basis by the Canadian General Electric Company for the Pakistan Atomic Energy Commission, and went operational in October, 1972 near Karachi. It has operated since then with a lifetime average availability factor of 51.5% and capacity factor of 25%. In 1976, Kanupp suffered loss of technical support from its original vendors due to the Canadian embargo on export of nuclear technology. Simultaneously, the world experienced the most explosive development and advancement in electronic and computer technology, accelerating the obsolescence of such equipment and systems installed in Kanupp. Replacement upgrading of obsolete computers, control and instrumentation was thus the first major set of efforts realized as essential for continued safe operation. On the other hand, Kanupp was able to cope with the normal maintenance of its process, mechanical and electrical equipment till the late 80's. But now many of these components are reaching the end of their useful life, and developing chronic problems due to ageing, which can only be solved by complete replacement. This is much more difficult for custom-made nuclear process equipment, e.g. the reactor internals and the fuelling machine. Public awareness and international concern about nuclear safety have increased significantly since the TMI and Chernobyl events. Corresponding realization of the critical role of human factors and the importance of operational experience feedback has helped Kanupp by opening international channels of communication, including renewed cooperation on CANDU technology. The safety standards and criteria for CANDU as well as other NPPs have matured and evolved gradually over the past two decades. First Kanupp has to ensure that its present ageing-induced equipment problems are resolved to satisfy the original safety requirements and public risk targets which are still internationally acceptable. But as a policy, we
Ghafoor, M A; Hashmi, J A; Siddiqui, Z H [Karachi Nuclear Power Plant, Karachi (Pakistan)
Continuous restraint control systems: safety improvement for various occupants
Laan, E. van der; Jager, B. de; Veldpaus, F.; Steinbuch, M.; Nunen, E. van; Willemsen, D.
Occupant safety can be significantly improved by continuous restraint control systems. These restraint systems adjust their configuration during the impact according to the actual operating conditions, such as occupant size, weight, occupant position, belt usage and crash severity. In this study,
A major challenge in process operation is to reduce costs and increase system efficiency, whereas the complexity of automated process engineering, control and monitoring systems increases continuously. To cope with this challenge, the design, implementation and operation of process monitoring systems for control room operation have to be treated as an ensemble. This is only possible if the engineering of the monitoring information is focused on the production objective and is led in close collaboration of control room teams, exploitation personnel and process specialists. In this paper some principles for the engineering of monitoring information for control room operation are developed using the example of the exploitation of a particle accelerator at the European Laboratory for Nuclear Research (CERN).
A continued fraction representation of the mass operator
Saraswati, D.K.
We explore some further possibilities of application of the projection operator method of Zwanzig to the theory of Green's functions of quantum statistical mechanics, initiated by Ichiyanagi, and present a continued fraction representation of the mass operator involving a hierarchy of the random forces. As an application of the theory, we calculate the polarization operator of the phonon Green's function of the Fröhlich Hamiltonian in the first approximation, which corresponds to the assumption that the electron momenta are orthogonal to the phonon momentum. (author)
... eleventh and last lecture. Measures like phototherapy and adapted social environments are discussed, along with problems associated with the use of chronobiotics... 1-1 Individual Differences in Vigilance and Performance during Continuous/Sustained Operations, Maria Casagrande, Dipartimento di Psicologia, Università ... Carver CS, Scheier MF, Weintraub JK (1989) Assessing coping strategies: a theoretically based approach, Journal of Personality and Social Psychology
Operational Control of Internal Transport
J.R. van der Meer (Robert)
Operational Control of Internal Transport considers the control of guided vehicles in vehicle-based internal transport systems found in facilities such as warehouses, production plants, distribution centers and transshipment terminals. The author's interest of research having direct use
IT Strategic and Operational Controls
Kyriazoglou, J
This book provides a comprehensive guide to implementing an integrated and flexible set of IT controls in a systematic way. It can help organisations to formulate a complete culture for all areas which must be supervised and controlled; allowing them to simultaneously ensure a secure, high standard whilst striving to obtain the strategic and operational goals of the company.
Reactor operating procedures for start up of continuously operated chemical plants
Verwijs, J.W.; Kösters, P.H.; van den Berg, Henderikus; Westerterp, K.R.; Kosters, P.G.H.
Rules are presented for the startup of an adiabatic tubular reactor, based on a qualitative analysis of the dynamic behavior of continuously-operated vapor- and liquid-phase processes. The relationships between the process dynamics, operating criteria, and operating constraints are investigated,
SIMULTANEOUS SCHEDULING AND OPERATIONAL OPTIMIZATION OF MULTIPRODUCT, CYCLIC CONTINUOUS PLANTS
A. Alle
The problems of scheduling and optimization of operational conditions in multistage, multiproduct continuous plants with intermediate storage are simultaneously addressed. An MINLP model, called TSPFLOW, which is based on the TSP formulation for product sequencing, is proposed to schedule the operation of such plants. TSPFLOW yields a one-order-of-magnitude CPU time reduction as well as the solution of instances larger than those formerly reported (Pinto and Grossmann, 1994). Secondly, processing rates and yields are introduced as additional optimization variables in order to state the simultaneous problem of scheduling with operational optimization. Results show that trade-offs are very complex and that the development of a straightforward (rule-of-thumb) method to optimally schedule the operation is less effective than the proposed approach.
Continuous operation of a pilot plant for the production of beryllium oxide
Costa, T C; Amaral, S; Silveira, C M.S.; de Oliveira, A P [Instituto de Tecnologia, Governador Valadares (Brazil)
A method of obtaining beryllium oxide with a purity of 99.2% was developed in a pilot plant with a capacity of 7 tons per month, designed to operate continuously. The operation, market prospects and control of production, with the objective of obtaining international technical grade beryllium oxide, are discussed.
Control systems engineering in continuous pharmaceutical manufacturing. May 20-21, 2014 Continuous Manufacturing Symposium.
Myerson, Allan S; Krumme, Markus; Nasr, Moheb; Thomas, Hayden; Braatz, Richard D
This white paper provides a perspective of the challenges, research needs, and future directions for control systems engineering in continuous pharmaceutical processing. The main motivation for writing this paper is to facilitate the development and deployment of control systems technologies so as to ensure quality of the drug product. Although the main focus is on small-molecule pharmaceutical products, most of the same statements apply to biological drug products. An introduction to continuous manufacturing and control systems is followed by a discussion of the current status and technical needs in process monitoring and control, systems integration, and risk analysis. Some key points are that: (1) the desired objective in continuous manufacturing should be the satisfaction of all critical quality attributes (CQAs), not for all variables to operate at steady-state values; (2) the design of start-up and shutdown procedures can significantly affect the economic operation of a continuous manufacturing process; (3) the traceability of material as it moves through the manufacturing facility is an important consideration that can at least in part be addressed using residence time distributions; and (4) the control systems technologies must assure quality in the presence of disturbances, dynamics, uncertainties, nonlinearities, and constraints. Direct measurement, first-principles and empirical model-based predictions, and design space approaches are described for ensuring that CQA specifications are met. Ways are discussed for universities, regulatory bodies, and industry to facilitate working around or through barriers to the development of control systems engineering technologies for continuous drug manufacturing. Industry and regulatory bodies should work with federal agencies to create federal funding mechanisms to attract faculty to this area. Universities should hire faculty interested in developing first-principles models and control systems technologies for
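Point (3) above mentions residence time distributions (RTDs) as one way to address material traceability. A hedged sketch of the basic bookkeeping, using synthetic tracer data rather than anything from the paper: the RTD E(t) is the normalized outlet tracer response, and its first moment gives the mean residence time.

```python
import numpy as np

# Synthetic outlet tracer concentration after a pulse injection (hypothetical data)
t = np.linspace(0.0, 60.0, 601)                 # minutes
c = np.exp(-(t - 12.0) ** 2 / (2 * 4.0 ** 2))   # arbitrary bell-shaped response

E = c / np.trapz(c, t)                          # RTD: normalize so that integral of E dt = 1
t_mean = np.trapz(t * E, t)                     # mean residence time (first moment)
var = np.trapz((t - t_mean) ** 2 * E, t)        # spread of residence times

print(f"mean residence time ≈ {t_mean:.1f} min, variance ≈ {var:.1f} min^2")
```

In a traceability context, the spread of E(t) bounds how far a disturbance at the inlet can smear out across downstream product.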
Purely absolutely continuous spectrum for almost Mathieu operators
Chulaevsky, V.; Delyon, F.
Using a recent result of Sinai, the authors prove that the almost Mathieu operators acting on ℓ²(Z), (H_{α,λ}Ψ)(n) = Ψ(n + 1) + Ψ(n − 1) + λ cos(ωn + α) Ψ(n), have a purely absolutely continuous spectrum for almost all α provided that ω is a good irrational and λ is sufficiently small. Furthermore, the generalized eigenfunctions are quasiperiodic.
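A minimal numerical sketch of the operator's action: a finite N x N truncation with a Dirichlet-type cutoff. This only illustrates the definition; finite matrices cannot exhibit the absolutely continuous spectrum discussed above, and the golden-ratio frequency below is just a customary example of a "good irrational".

```python
import numpy as np

def almost_mathieu(N, lam, omega, alpha):
    """Finite N x N truncation of (H psi)(n) = psi(n+1) + psi(n-1) + lam*cos(omega*n + alpha)*psi(n)."""
    n = np.arange(N)
    H = np.diag(lam * np.cos(omega * n + alpha))        # quasi-periodic potential on the diagonal
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # discrete Laplacian part
    return H

H = almost_mathieu(N=500, lam=0.5, omega=2 * np.pi * (np.sqrt(5) - 1) / 2, alpha=0.3)
eigs = np.linalg.eigvalsh(H)
print(eigs.min(), eigs.max())   # eigenvalues lie within [-2 - lam, 2 + lam]
```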
Generic singular continuous spectrum for ergodic Schrödinger operators
Avila, Artur; Damanik, David
We consider Schrödinger operators with ergodic potential $V_\omega(n) = f(T^n(\omega))$, $n \in \mathbb{Z}$, $\omega \in \Omega$, where $T: \Omega \to \Omega$ is a non-periodic homeomorphism. We show that for generic $f \in C(\Omega)$, the spectrum has no absolutely continuous component. The proof is based on approximation by discontinuous potentials which can be treated via Kotani theory.
Pilot plant for flue gas treatment - continuous operation tests
Chmielewski, A.G.; Tyminski, B.; Iller, E.; Zimek, Z.; Licki, J.; Radzio, B.
Tests of continuous operation have been performed on the pilot plant at EPS Kaweczyn over a wide range of SO2 concentrations (500-3000 ppm). A bag filter has been applied for aerosol separation. High efficiencies of SO2 and NOx removal, approximately 90%, were obtained, influenced by such process parameters as dose, gas temperature and ammonia stoichiometry. The main apparatus of the pilot plant (e.g. both accelerators) have proved their reliability in hard industrial conditions. (Author)
Human Performance in Continuous Operations. Volume 3. Technical Documentation
... completed for the U.S. Commander, V Corps Artillery, by Manning (1978). Manning collected information which bears on the following three questions: Can... performance data were not collected in these preliminary studies. Field Studies of Continuous Tank Operations: To simulate a combat... on routine, monotonous tasks tends to show rapid and severe decrement after periods of more than 24 hours without sleep. Increasing task complexity
Acute gallbladder torsion - a continued pre-operative diagnostic dilemma
Desrochers Randal
Acute gallbladder volvulus remains a relatively uncommon process, manifesting itself usually during exploration for an acute surgical abdomen with a presumptive diagnosis of acute cholecystitis. The pathophysiology is that of mechanical organo-axial torsion along the gallbladder's longitudinal axis involving the cystic duct and cystic artery, with a pre-requisite of local mesenteric redundancy. The demographic tendency is toward women in their seventies and eighties, and the overall incidence is increasing, this being attributed to increasing life expectancy. We discuss two cases of elderly, fragile women presenting to the emergency department complaining of sudden onset right upper quadrant abdominal pain. Their subsequent evaluation suggested acute cholecystitis. Ultimately both were taken to the operating room where the correct diagnosis of gallbladder torsion was made. Pre-operative diagnosis continues to be a major challenge, with only 4 cases reported in the literature diagnosed with pre-operative imaging; the remainder were found intra-operatively. Consequently, a delay in diagnosis can have devastating patient outcomes. Herein we propose a necessary high index of suspicion for gallbladder volvulus in the outlined patient demographic with symptoms and signs mimicking acute cholecystitis.
Nonlinear MIMO Control of a Continuous Cooling Crystallizer
Pedro Alberto Quintana-Hernández
In this work, a feedback control algorithm was developed based on geometric control theory. A nonisothermal seeded continuous crystallizer model was used to test the algorithm. The control objectives were the stabilization of the third moment of the crystal size distribution (μ3) and the crystallizer temperature (T); the manipulated variables were the stirring rate and the coolant flow rate. The nonlinear control (NLC) was tested at operating conditions established within the metastable zone. Step changes of magnitudes ±0.0015 and ±0.5°C were introduced into the set point values of the third moment and crystallizer temperature, respectively. In addition, a step change of ±1°C was introduced as a disturbance in the feeding temperature. Closed-loop stability was analyzed by calculating the eigenvalues of the internal dynamics. The system presented a stable dynamic behavior when the operating conditions maintain the crystallizer concentration within the metastable zone. Closed-loop simulations with the NLC were compared with simulations that used a classic PID controller. The PID controllers were tuned by minimizing the integral of the absolute value of the error (IAE) criterion. The results showed that the NLC provided a suitable option for continuous crystallization control. For all analyzed cases, the IAEs obtained with the NLC were smaller than those obtained with the PID controller.
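The IAE criterion used for the comparison above is simply the time integral of the absolute error. A hedged sketch of how two closed-loop responses could be scored on that basis, using synthetic first-order responses rather than the crystallizer model:

```python
import numpy as np

def iae(t, y, setpoint):
    """Integral of the Absolute Error, IAE = integral of |setpoint - y(t)| dt (trapezoidal rule)."""
    return np.trapz(np.abs(setpoint - y), t)

t = np.linspace(0.0, 50.0, 2001)
setpoint = 1.0
y_fast = 1.0 - np.exp(-t / 2.0)    # hypothetical well-tuned loop
y_slow = 1.0 - np.exp(-t / 8.0)    # hypothetical sluggish loop

print(f"IAE, fast loop: {iae(t, y_fast, setpoint):.2f}")   # roughly the time constant, ~2
print(f"IAE, slow loop: {iae(t, y_slow, setpoint):.2f}")   # ~8
```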
Combined Noncyclic Scheduling and Advanced Control for Continuous Chemical Processes
Damon Petersen
A novel formulation for combined scheduling and control of multi-product, continuous chemical processes is introduced, in which nonlinear model predictive control (NMPC) and noncyclic continuous-time scheduling are efficiently combined. A decomposition into nonlinear programming (NLP) dynamic optimization problems and mixed-integer linear programming (MILP) problems, without iterative alternation, allows for computationally light solution. An iterative method is introduced to determine the number of production slots for a noncyclic schedule during a prediction horizon. A filter method is introduced to reduce the number of MILP problems required. The formulation's closed-loop performance with both process disturbances and updated market conditions is demonstrated through multiple scenarios on a benchmark continuously stirred tank reactor (CSTR) application with fluctuations in market demand and price for multiple products. Economic performance surpasses cyclic scheduling in all scenarios presented. Computational performance is sufficiently light to enable online operation in a dual-loop feedback structure.
Control Transfer in Operating System Kernels
... microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating
Toward continuous-wave operation of organic semiconductor lasers
Sandanayaka, Atula S. D.; Matsushima, Toshinori; Bencheikh, Fatima; Yoshida, Kou; Inoue, Munetomo; Fujihara, Takashi; Goushi, Kenichi; Ribierre, Jean-Charles; Adachi, Chihaya
The demonstration of continuous-wave lasing from organic semiconductor films is highly desirable for practical applications in the areas of spectroscopy, data communication, and sensing, but it still remains a challenging objective. We report low-threshold surface-emitting organic distributed feedback lasers operating in the quasi–continuous-wave regime at 80 MHz as well as under long-pulse photoexcitation of 30 ms. This outstanding performance was achieved using an organic semiconductor thin film with high optical gain, high photoluminescence quantum yield, and no triplet absorption losses at the lasing wavelength combined with a mixed-order distributed feedback grating to achieve a low lasing threshold. A simple encapsulation technique greatly reduced the laser-induced thermal degradation and suppressed the ablation of the gain medium otherwise taking place under intense continuous-wave photoexcitation. Overall, this study provides evidence that the development of a continuous-wave organic semiconductor laser technology is possible via the engineering of the gain medium and the device architecture. PMID:28508042
Research continues on zebra mussel control
Researchers are working on many fronts to learn methods for controlling and combatting zebra mussels, a species of mussel that can attach to the inside of water intakes at hydroelectric and thermal power plants, and can reduce or block water flow. Biologists at the University of Toledo in Ohio report that compounds from the African soapberry plant called lemmatoxins are lethal to zebra mussels. In laboratory tests, researchers have determined that 1 to 2 milligrams of purified lemmatoxins per liter will kill the mussels. In field tests, biologist Harold Lee flushed water through a mussel-infested pipe. He found that the berry extract killed mussels in four to eight hours, making continuous treatment of water intake pipes unnecessary, according to a report in New Scientist. The University of Toledo participated in another project, funded by the American Water Works Association Research Foundation. That project team included the cities of Toledo and Cleveland, Ohio, Finkbeiner, Pettis & Strout, Ltd. consulting engineers, and researchers from Ohio's Case Western Reserve University. The team identified a chemical oxidant, sodium hypochlorite, as a cost-effective agent for controlling zebra mussels at water treatment plant intakes. Toledo has used the sodium hypochlorite and reports the chemical has cleared colonies of zebra mussels that had attached to the intake of its water treatment plant
Lithium control during normal operation
Suryanarayan, S.; Jain, D.
Periodic increases in lithium (Li) concentrations in the primary heat transport (PHT) system during normal operation are a generic problem at CANDU® stations. Lithiated mixed bed ion exchange resins are used at stations for pH control in the PHT system. Typically tight chemistry controls including Li concentrations are maintained in the PHT water. The reason for the Li increases during normal operation at CANDU stations such as Pickering was not fully understood. In order to address this issue a two pronged approach was employed. Firstly, PNGS-A data and information from other available sources was reviewed in an effort to identify possible factors that may contribute to the observed Li variations. Secondly, experimental studies were carried out to assess the importance of these factors in order to establish reasons for Li increases during normal operation. Based on the results of these studies, plausible mechanisms/reasons for Li increases have been identified and recommendations made for proactive control of Li concentrations in the PHT system. (author)
Management and Operational Control of Criticality
Daniels, J. T. [Authority Health and Safety Branch, United Kingdom Atomic Energy Authority, Risley, Lancs. (United Kingdom)
The evidence of the six process criticality accidents that have been reported to date shows that, without exception, they have been due to the failure of operational controls. In no instance has a criticality accident in processing been due to the use of wrong data or inaccurate calculation. Criticality accidents are least likely to occur in the production stream and are more likely to be associated with ancillary equipment and operations. Important as correct criticality calculations are, there are many other considerations which require the exercise of judgement in establishing the operational environment. No operation involving fissile material should be permitted without a formal review resulting in a documented statement of (a) the environmental assessment, (b) the nuclear safety arguments which demonstrate safety under that environment, and (c) the operational requirements which will ensure the validity of (b) under the conditions of (a). To ensure the continued viability of the environmental assessment and the continued reliability of clearance conditions there should be close supervision by operating management, and periodic checks made by site nuclear safety staff. Additionally, there should be periodic and systematic examinations by competent persons who are not responsible to the overall management of the site. (author)
Osmotic membrane bioreactor for phenol biodegradation under continuous operation
Praveen, Prashant; Loh, Kai-Chee, E-mail: [email protected]
Highlights: • Osmotic membrane bioreactor was used for phenol biodegradation in continuous mode. • Extractant impregnated membranes were used to alleviate substrate inhibition. • Phenol removal was achieved through both biodegradation and membrane rejection. • Phenol concentrations up to 2500 mg/L were treated at HRT varying in 2.8–14 h. • A biofilm removal strategy was formulated to improve bioreactor sustainability. - Abstract: Continuous phenol biodegradation was accomplished in a two-phase partitioning osmotic membrane bioreactor (TPPOMBR) system, using extractant impregnated membranes (EIM) as the partitioning phase. The EIMs alleviated substrate inhibition during prolonged operation at influent phenol concentrations of 600–2000 mg/L, and also at spiked concentrations of 2500 mg/L phenol restricted to 2 days. Filtration of the effluent through forward osmosis maintained high biomass concentration in the bioreactor and improved effluent quality. Steady state was reached in 5–6 days at removal rates varying between 2000 and 5500 mg/L-day under various conditions. Due to biofouling and salt accumulation, the permeate flux varied from 1.2–7.2 LMH during 54 days of operation, while maintaining an average hydraulic retention time of 7.4 h. A washing cycle, comprising 1 h osmotic backwashing using 0.5 M NaCl and 2 h washing with water, facilitated biofilm removal from the membranes. Characterization of the extracellular polymeric substances (EPS) through FTIR showed peaks between 1700 and 1500 cm⁻¹, 1450–1450 cm⁻¹ and 1200–1000 cm⁻¹, indicating the presence of proteins, phenols and polysaccharides, respectively. The carbohydrate to protein ratio in the EPS was estimated to be 0.3. These results indicate that TPPOMBR can be promising in continuous treatment of phenolic wastewater.
On Chinese National Continuous Operating Reference Station System of GNSS
CHEN Junyong
Objective: A Global Navigation Satellite System (GNSS) Continuously Operating Reference Station (CORS) system can maintain an accurate, 3D, geocentric and dynamic reference coordinate frame in the corresponding area and can provide positioning and navigation services. It can also serve meteorology, geodynamics, earthquake monitoring and location-based services (LBS) in the same area. Until now, China has not been able to provide a national CORS system serving every profession and trade, and a national platform for sharing CORS system resources has not been established. This paper therefore discusses how to construct the national CORS system in China. Method: The construction goal, service objects, CORS distribution, the geographic, geological and communication environment, and other factors are major considerations for constructing the national CORS system. Constructing an individual GNSS CORS is more specific and is considered mainly from four aspects, namely site selection, civil construction, security measures and equipment selection. Outcome: A project for constructing the national GNSS CORS system in China is put forward and is discussed in terms of goals, principles and project organization. Some thoughts on how to construct the national CORS system are submitted. Conclusion: The GNSS CORS system in China lacks unified planning and design at the national level. So far, a national CORS system serving all walks of life has not been provided, and a national platform for sharing CORS system resources has not been established. The primary mission of the GNSS CORS system in China is as follows: using GNSS data sets and receiving, transporting, processing, integrating and transmitting information and
49 CFR 236.777 - Operator, control.
Title 49, Transportation (2010-10-01), Section 236.777, ... Maintenance, and Repair of Signal and Train Control Systems, Devices, and Appliances, Definitions: § 236.777 Operator, control. An employee assigned to operate the control machine of a traffic control system.
A compilation of necessary elements for a local government continuity of operations plan
Cashen, Kevin M.
National and state homeland security strategies call for continuity of operations plan development. The 2006 Nationwide Plan Review Phase II Report identifies continuity of operations plan development as a state and local goal, with a federal goal of providing continuity of operations plan development support. Most local governments do not have a continuity of operations plan, or the plan they have needs to be updated. Continuity of operations plan guidance is provided by a variety of intern...
Belt conveyor dynamics in transient operation for speed control
He, D.; Pang, Y.; Lodewijks, G.
Belt conveyors play an important role in continuous dry bulk material transport, especially in the mining industry. Speed control is expected to reduce the energy consumption of belt conveyors. Transient operation is the operation of increasing or decreasing conveyor speed for speed control. According to the literature review, current research rarely takes the conveyor dynamics in transient operation into account. However, in belt conveyor speed control, the conveyor dynamic behaviors are signifi...
Repetitive learning control of continuous chaotic systems
Chen Maoyin; Shang Yun; Zhou Donghua
Combining a shift method and the repetitive learning strategy, a repetitive learning controller is proposed to stabilize unstable periodic orbits (UPOs) within chaotic attractors in the sense of least mean square. If the nonlinear parts of the chaotic systems satisfy a Lipschitz condition, the proposed controller can be simplified into a simple proportional repetitive learning controller.
Characteristics of switched reluctance motor operating in continuous and discontinuous conduction mode
Ćalasan Martin P.
This paper presents the mechanical characteristics of a Switched Reluctance Motor (SRM) when it operates in Discontinuous Conduction Mode (DCM) or in Continuous Conduction Mode (CCM), i.e. when the current through the phase coils (windings) flows discontinuously or continuously. Firstly, in order to maximize the output power of the SRM, optimization of its control parameters was performed such that the peak and RMS values of the current do not exceed the predefined values. The optimal control parameters vs. rotation speed are presented, as well as the corresponding characteristics of torque, power and efficiency. It is shown that with CCM the machine torque (power) at high speed can be increased.
Solidification control in continuous casting of steel
Solidification in continuous casting (CC) technology is initiated in a water- ... to fully austenitic solidification, and FP between 0 and 1 indicates mixed mode. ... the temperature interval (LIT – TSA) corresponding to fs = 0.9 → 1 is in reality the ...
Optimization and control of a continuous polymerization reactor
L. A. Alvarez
This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy deals with the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor is modified significantly over the operation time and the control system has to search for this optimum point, besides keeping the reactor system stable at any possible point. The approach considered here consists of three layers: Real Time Optimization (RTO), Model Predictive Control (MPC) and a Target Calculation (TC) layer that coordinates the communication between the two other layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to deal with disturbances while the economic objective is optimized.
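A structural sketch of the three-layer idea described above, with toy placeholder functions: the drifting economic optimum, the scalar plant model and the controller tuning below are all hypothetical, not the styrene reactor model or the authors' algorithm.

```python
import numpy as np

# Toy plant: scalar first-order process x+ = 0.9 x + 0.1 u, with a drifting economic optimum.
def rto_layer(k):
    """Real Time Optimization: return the (slowly drifting) economically optimal state."""
    return 2.0 + 0.5 * np.sin(0.01 * k)          # hypothetical optimum

def target_calculation(x_opt):
    """Target Calculation: translate the RTO optimum into a reachable (x_ss, u_ss) pair."""
    x_ss = x_opt
    u_ss = (1.0 - 0.9) / 0.1 * x_ss              # steady state of x+ = 0.9 x + 0.1 u
    return x_ss, u_ss

def mpc_layer(x, x_ss, u_ss):
    """Control layer, reduced here to a proportional move toward the target (illustrative only)."""
    return u_ss + 2.0 * (x_ss - x)

x = 0.0
for k in range(500):
    x_opt = rto_layer(k)                         # slow economic layer
    x_ss, u_ss = target_calculation(x_opt)       # coordination layer
    u = mpc_layer(x, x_ss, u_ss)                 # fast control layer
    x = 0.9 * x + 0.1 * u                        # plant update
print(round(x, 2), round(x_ss, 2))               # the state tracks the drifting target
```

The point of the middle layer is visible even in this toy: the controller never acts on the raw economic optimum, only on a target that is consistent with the plant's steady-state relation.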
Specificity of continuous auditing approach on information technology internal controls
Kaćanski Slobodan
The contemporary business world cannot be imagined without the use of information technology in all aspects of business. The use of information technology in the activities of manufacturing and non-production companies can greatly facilitate and accelerate the processes of operation and control. Because of their complexity, these systems possess vulnerable areas and provide space for the emergence of accidental and intentional frauds that can materially affect the business decisions made by the companies' management. Implementation of internal controls can greatly reduce the level of errors that could contribute to making wrong decisions. In order to protect the operating system, the company's management implements an internal audit to periodically examine the fundamental quality of the internal control systems. Since the internal audit, by its nature, only periodically checks the quality of internal control systems and information technologies and then reports to the managers, a problem arises in that the management structures of the business entity are not informed in a timely manner. To eliminate this problem, management implements a special approach to internal audit, called continuous auditing.
EDS operator and control software
Ott, L.L.
The Enrichment Diagnostic System (EDS) was developed at Lawrence Livermore National Laboratory (LLNL) to acquire, display and analyze large quantities of transient data for a real-time Advanced Vapor Laser Isotope Separation (AVLIS) experiment. Major topics discussed in this paper are the EDS operator interface (SHELL) program, the data acquisition and analysis scheduling software, and the graphics software. The workstation concept used in EDS, the software used to configure a user's workstation, and the ownership and management of a diagnostic are described. An EDS diagnostic is a combination of hardware and software designed to study specific aspects of the process. Overall system performance is discussed from the standpoint of scheduling techniques, evaluation tools, optimization techniques, and program-to-program communication methods. EDS is based on a data driven design which keeps the need to modify software to a minimum. This design requires a fast and reliable data base management system. A third party data base management product, Berkeley Software System Database, written explicitly for HP1000's, is used for all EDS data bases. All graphics is done with an in-house graphics product, Device Independent Graphics Library (DIGLIB). Examples of devices supported by DIGLIB are: Versatec printer/plotters, Raster Technologies Graphic Display Controllers, and HP terminals (HP264x and HP262x). The benefits derived by using HP hardware and software as well as obstacles imposed by the HP environment are presented in relation to EDS development and implementation
76 FR 79271 - Genesee & Wyoming Inc.-Continuance in Control Exemption-Hilton & Albany Railroad, Inc.
... Inc.--Continuance in Control Exemption-Hilton & Albany Railroad, Inc. AGENCY: Surface Transportation.... (GWI), a noncarrier, to continue in control of Hilton & Albany Railroad, Inc. (HAL), upon HAL's... Railway Company (NSR) and operation of a 55.5-mile rail line between Hilton and Albany, Ga.\\1\\ GWI's...
Sensor for automatic continuous emission control of gases
Becker, M
For continuous in-situ measurements of exhaust gases, a laboratory model of a gas sensor has been designed and constructed with a particular view to maintenance-free operation in adverse environments. The equipment operates on the basis of specific, frequency-selective gas absorption in the infrared and uses the single beam dual wavelength method, thus achieving a high degree of independence from external interferences like intensity loss by window contamination or dust within the absorption path. Additional function control circuits enable maintenance-free operation also over longer time periods. The equipment is in principle capable of operating in a wide wavelength range. By selecting the SO2 absorption band at 4.0 µm wavelength and by a unique design of the electronic signal-processing circuits, measurements of SO2 concentrations within exhaust ducts have been made possible which are free from interference from other gas constituents also present, like CO, O2, NOx, and water. The measuring range with an absorption path of 10 m covers concentrations from 0.2 to 5 g/Nm³ at a maximum uncertainty of 2.5 percent of the maximum value. The equipment has been tested inside a chimney of a 150 MW power plant burning fossil fuel.
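A hedged sketch of the single-beam, dual-wavelength principle: Beer–Lambert attenuation at the absorption wavelength, with broadband losses (dust, dirty windows) modeled as scaling both channels equally so that they cancel in the ratio. The absorption coefficient and intensities below are made-up values, not calibration data for the instrument described.

```python
import numpy as np

eps = 0.35      # hypothetical absorption coefficient at the SO2 band, 1/((g/Nm^3) * m)
L = 10.0        # absorption path length, m
I0_ratio = 1.0  # calibrated ratio of source intensities at the two wavelengths

def concentration(I_meas, I_ref):
    """Estimate gas concentration from the measured/reference intensity ratio.
    Broadband attenuation scales both intensities and cancels in the ratio."""
    R = I_meas / I_ref
    return -np.log(R / I0_ratio) / (eps * L)

# Simulated measurement: true c = 2.0 g/Nm^3 with 40% broadband attenuation on both channels
c_true, broadband = 2.0, 0.6
I_ref = 1.0 * broadband
I_meas = 1.0 * broadband * np.exp(-eps * L * c_true)
print(round(concentration(I_meas, I_ref), 3))   # recovers ≈ 2.0 despite the attenuation
```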
Contextual control of discriminated operant behavior.
Bouton, Mark E; Todd, Travis P; León, Samuel P
Previous research has suggested that changing the context after instrumental (operant) conditioning can weaken the strength of the operant response. That result contrasts with the results of studies of Pavlovian conditioning, in which a context switch often does not affect the response elicited by a conditioned stimulus. To begin to make the methods more similar, Experiments 1-3 tested the effects of a context switch in rats on a discriminated operant response (R; lever pressing or chain pulling) that had been reinforced only in the presence of a 30-s discriminative stimulus (S; tone or light). As in Pavlovian conditioning, responses and reinforcers became confined to presentations of the S during training. However, in Experiment 1, after training in Context A, a switch to Context B caused a decrement in responding during S. In Experiment 2, a switch to Context B likewise decreased responding in S when Context B was equally familiar, equally associated with reinforcement, or equally associated with the training of a discriminated operant (a different R reinforced in a different S). However, there was no decrement if Context B had been associated with the same response that was trained in Context A (Experiments 2 and 3). The effectiveness of S transferred across contexts, whereas the strength of the response did not. Experiment 4 found that a continuously reinforced response was also disrupted by context change when the same response manipulandum was used in both training and testing. Overall, the results suggest that the context can have a robust general role in the control of operant behavior. Mechanisms of contextual control are discussed.
Advanced control room caters for the operator
George, C.R.; Rygg, D.E.
In existing control rooms the operators' efficiency is often limited by widely scattered and sometimes illogically arranged controls which tend to increase the potential for outages or equipment damage. The advanced control room described allows instant and ready access to preselected information and control by one or two operators from a seated or standing position. (author)
Operation control device under radiation exposure
Kimura, Kiichi; Murakami, Toichi.
The device of the present invention performs smooth progress of operation by remote control for a plurality of operations in periodical inspections in controlled areas of a nuclear power plant, thereby reducing the operator's exposure dose. Namely, the device monitors the progressing state of the operation by displaying the progress of operation on a CRT of a centralized control device present in a low dose area remote from an operation field through an ITV camera disposed in the vicinity of the operation field. Further, operation sequence and operation instruction procedures previously inputted in the device are indicated to the operation field through an operation instruction outputting device (field CRT) in accordance with the progress of the operation steps. On the other hand, the operation progress can be aided by inputting information from the operation field such as start or completion of the operation steps. Further, the device of the present invention can monitor the change of operation circumstances and exposure dose of operators based on the information from a radiation dose measuring device disposed in the operation circumstance and to individual operators. (I.S.)
The VEPP-2000 Collider Control System: Operational Experience
Senchenko, A I; Lysenko, A P; Rogovsky, Yu A; Shatunov, P Yu
The VEPP-2000 collider was commissioned and operated successfully in 2010-2013. During the operation the facility underwent continuous updates and experience in maintenance was acquired. Strong cooperation between the staff of the accelerator complex and the developers of the control system proved effective for implementing the necessary changes in a short time.
Long-term operation in Korea - Continued operation of Wolsong 1 Long-term operation of existing reactors in Switzerland
Bae, Su Hwan; Straub, Ralf
Session 6 identified some key stakeholder concerns or interests that shape their considerations on renewing a nuclear power plant licence or extending facility lifetime. These included the safety of long-term operations, the potential need for upgrades or additional investment, and the timing and implementation of such investments. Mr Bae of the Korea Hydro and Nuclear Power Company presented the current nuclear power programme in Korea and the company's experience with stakeholder involvement, specifically related to the licence renewal of Wolsong unit 1 that included a formal agreement between Korea Hydro and Nuclear Power Company and the local communities around the plant. Mr Straub, of the Swiss Federal Department of the Environment, Transport, Energy and Communications, provided insight on the current restructuring of the Swiss energy strategy, and the Swiss form of 'direct democracy' that involves frequent public referenda. The proposed energy strategy to be assessed by voters in May 2017 would include a gradual phase-out of nuclear power. Citizens' perception of safe operations, the competence and openness of nuclear actors and the benefits that nuclear plants bring to the local population play a role in their judgement of whether facilities should continue with long-term operations. While for a new facility there is not as much time to establish the relationship and build a rapport and reputation with the community, in the case of existing plants there is history and experience either to build on or to overcome. Each set of decisions has a number of stakeholders, but the general public living around the plant was highlighted as a primary stakeholder. In the case of Korea Hydro and Nuclear Power's licence renewal efforts at Wolsong 1, gaining and maintaining the support of the surrounding communities is critical. The company applied lessons learnt from past experiences and in a year-long process pursued an agreement with representatives appointed by the
The CEBAF [Continuous Electron Beam Accelerator Facility] control system architecture
Bork, R.
The focus of this paper is on CEBAF's computer control system. This control system will utilize computers in a distributed, networked configuration. The architecture, networking and operating system of the computers, and preliminary performance data are presented. We will also discuss the design of the operator consoles and the interfacing between the computers and CEBAF's instrumentation and operating equipment
REACTOR CONTROL ROD OPERATING SYSTEM
Miller, G.
A nuclear reactor control rod mechanism is designed which mechanically moves the control rods into and out of the core under normal conditions but rapidly forces the control rods into the core by catapultic action in the event of an emergency. (AEC)
Automated and continuously operating acid dew point measuring instrument for flue gases
Reckmann, D.; Naundorf, G.
The design and operation of a sulfuric acid dew point indicator for continuous flue gas temperature control are explained. The indicator operated successfully in trial tests over several years with brown coal, gas and oil combustion in a measurement range of 60 to 180 °C. The design is regarded as uncomplicated and easy to manufacture. Its operating principle is based on electric conductivity measurement on a surface on which sulfuric acid vapor has condensed. A ring electrode and a PtRh/Pt thermal element as central electrode are employed. A scheme of the equipment design is provided. The accuracy of the indicator was compared to manual dew point sondes manufactured by Degussa and showed a maximum deviation of 5 °C. Manual cleaning after a number of weeks of operation is required. Fly ash with a high lime content increases dust buildup and requires more frequent cleaning cycles.
Continuous operation of RODOS in case of long lasting releases
Raskob, W.; Paesler-Sauer, J.; Rafat, M.
users who want to explore the impact of long term countermeasures in more detail. The user also has the option to set the time (in days after the first release) at which they consider the release to be ended or all significant deposition to have occurred. This will be entered interactively via the user interface. This will enable the user to start the consideration of countermeasures after the majority of the release has occurred, as some releases might have a very long, low level tail, and it would be unrealistic to expect decision makers to wait until deposition had completely stopped in such circumstances. This approach also recognizes that deposition will not end everywhere on the same day. The demonstration will exemplify the continuous automatic operation of RODOS using a release scenario lasting over several days. On-line meteorological data measured at Forschungszentrum Karlsruhe, assumed to be the release point, will be used as input for the diagnostic calculations performed automatically within a distance range of 160 km x 160 km every 10 minutes. Numerical weather forecasts from the German Weather Service, updated every 12 hours, will be used for prognostic calculations repeated in the same calculation area every hour. In parallel, the Europe-wide numerical weather forecasts calculated by the ALADIN model, run by the Austrian Weather Service and updated every 12 hours, are used in interactive RODOS runs to calculate the longer distance contamination. All functions of RODOS and its broad spectrum of results will be presented by the demonstration team. In particular, the necessity of emergency actions and countermeasures and their consequences in terms of areas affected, radiation doses and resources needed can be tracked with the ongoing activity release. (author)
Experimental Verification of Dynamic Operation of Continuous and Multivessel Batch Distillation
Wittgens, Bernd
This thesis presents a rigorous model based on first principles for dynamic simulation of the composition dynamics of staged high-purity continuous distillation columns, and experiments performed to verify it. The thesis also demonstrates the importance of tray hydraulics in obtaining good agreement between simulation and experiment, and derives analytic expressions for dynamic time constants for use in simplified liquid and vapour dynamics. A newly developed multivessel batch distillation column consisting of a reboiler, intermediate vessels and a condenser vessel provides a generalization of previously proposed batch distillation schemes. The total reflux operation of this column was presented previously, and the present thesis proposes a simple feedback control strategy for its operation based on temperature measurements. The feasibility of this strategy is demonstrated by simulations and verified by laboratory experiments. It is concluded that the multivessel column can be easily operated with simple temperature controllers, where the holdups are only controlled indirectly. For a given set of temperature setpoints, the final product compositions are independent of the initial feed composition. When the multivessel batch distillation column is compared to a conventional batch column, both operated under feedback control, it is found that the energy required to separate a multicomponent mixture into highly pure products is much less for the multivessel system. This system is also the simplest one to operate.
Adaptive Controller Design for Continuous Stirred Tank Reactor
K. Prabhu; V. Murali Bhaskaran
The Continuous Stirred Tank Reactor (CSTR) is an important unit in chemical processes and the subject of a wide range of research in the area of chemical engineering. Temperature control of the CSTR has been an issue in chemical control engineering since the process is described by highly non-linear, complex equations. This study presents the problem of temperature control of a CSTR with an adaptive controller. The simulation is done in MATLAB and the results show that the adaptive controller is an efficient controller for temperature control of C...
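A hedged sketch of one classical adaptive scheme, for orientation only: MIT-rule adaptation of a feedforward gain on a toy first-order plant, discretized by Euler. This is a textbook construction with made-up tuning, not the controller or the CSTR model of the study.

```python
import numpy as np

# Plant: dy/dt = -y + k*u with unknown gain k; reference model: dym/dt = -ym + k0*r.
k_true, k0 = 3.0, 1.0
gamma, dt = 0.5, 0.01          # adaptation gain and step size (hypothetical tuning)

y = ym = 0.0
theta = 0.0                    # adjustable feedforward gain, u = theta * r
for step in range(20000):
    t = step * dt
    r = 1.0 if int(t / 10) % 2 == 0 else -1.0      # square-wave setpoint
    u = theta * r
    y  += dt * (-y + k_true * u)                    # plant
    ym += dt * (-ym + k0 * r)                       # reference model
    e = y - ym
    theta += dt * (-gamma * e * ym)                 # MIT rule: dtheta/dt = -gamma * e * ym
print(round(theta, 3))          # approaches k0 / k_true ≈ 0.333
```

The adaptation drives the model-following error to zero without knowing the plant gain, which is the basic promise of adaptive control that the abstract appeals to.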
The role of the control room operator
Williams, M.C.
A control room operator at an Ontario Hydro nuclear power plant operates a reactor-turbine unit according to approved procedures, within imposed constraints, to meet the objectives of the organization. A number of operating and administrative tasks make up this role. Control room operators spend approximately six percent of their time physically operating equipment exclusive of upset conditions, and another one percent operating in upset conditions. Testing occupies five percent of an operator's time. Operators must be trained to recognize the entire spectrum of inputs available to them and use them all effectively. Any change in system or unit state is always made according to an approved procedure. Extensive training is required; operators must be taught and practised in what to do, and must know the reasons behind their actions. They are expected to memorize emergency procedures, to know when to consult operating procedures, and to have sufficient understanding and practice to perform these procedures reliably
Trainer module for security control center operations
Bernard, E.A.
An operator trainer module has been developed to be used with the security control center equipment to be installed as part of a safeguards physical protection system. The module is designed to provide improved training and testing capabilities for control center operators through the use of simulations for perimeter equipment operations. Operators, through the trainer module, can be challenged with a variety of realistic situations which require responsive action identical to that needed in an actual system. This permits a consistent evaluation and confirmation of operator capabilities prior to assignment as an operator and allows for periodic retesting to verify that adequate performance levels are maintained
Control of a hydraulically actuated continuously variable transmission
Pesgens, M.F.M.; Vroemen, B.G.; Stouten, B.; Veldpaus, F.E.; Steinbuch, M.
Vehicular drivelines with hierarchical powertrain control require good component controller tracking, enabling the main controller to reach the desired goals. This paper focuses on the development of a transmission ratio controller for a hydraulically actuated metal push-belt continuously variable
Quantization of edge currents for continuous magnetic operators
Kellendonk, J
For a magnetic Hamiltonian on a half-plane, given as the sum of the Landau operator with Dirichlet boundary conditions and a random potential, a quantization theorem for the edge currents is proven. This shows that the concept of edge channels also makes sense in the presence of disorder. Moreover, Gaussian bounds on the heat kernel and its covariant derivatives are obtained.
Improving training tools for continuing operator qualification in Spain
Marti, F.; San Antonio, S.
There are currently nine nuclear power plants in service in Spain; the most recent started commercial operation in 1988. Spanish legislation requires operators to have an academic technical background of at least 3 yr. The turnover rate is <5%, and in recent years, symptom-based emergency procedures have been introduced. These facts have given rise to a situation in which Spanish licensed operators are demanding more in-depth training to avoid a stagnant routine and boredom. In responding to this challenge, Tecnatom has had to significantly update its two simulators for boiling water reactor (BWR) and pressurized water reactor (PWR) plants to ensure coverage of the emergency procedures, and has had to create a tool, the Interactive Graphics Simulator, that allows these problems to be ameliorated. With a view to updating its simulators, Tecnatom initiated in 1985 a project known as advanced simulation models (MAS), which was completed at the end of 1990. The TRACS code is a real-time advanced thermohydraulic code for upgrading Tecnatom's nuclear plant simulators. The Interactive Graphics Simulator (SGI) is a system that provides a graphic display of the models of a full-scope simulator by means of color monitors. The two new tools are enabling higher levels of motivation to be achieved among the plant operations personnel, especially with respect to requalification
Closed-loop helium circulation system for actuation of a continuously operating heart catheter pump.
Karabegovic, Alen; Hinteregger, Markus; Janeczek, Christoph; Mohl, Werner; Gföhler, Margit
Currently available, pneumatic-based medical devices are operated using closed-loop pulsatile or open continuous systems. Medical devices utilizing gases with a low atomic number in a continuous closed loop stream have not been documented to date. This work presents the construction of a portable helium circulation addressing the need for actuating a novel, pneumatically operated catheter pump. The design of its control system puts emphasis on the performance, safety and low running cost of the catheter pump. Static and dynamic characteristics of individual elements in the circulation are analyzed to ensure a proper operation of the system. The pneumatic circulation maximizes the working range of the drive unit inside the catheter pump while reducing the total size and noise production. Separate flow and pressure controllers position the turbine's working point into the stable region of the pressure creation element. A subsystem for rapid gas evacuation significantly decreases the duration of helium removal after a leak, reaching subatmospheric pressure in the intracorporeal catheter within several milliseconds. The system presented in the study offers an easy control of helium mass flow while ensuring stable behavior of its internal components.
Co-operatives and Normative Control
Bregn, Kirsten; Jagd, Søren
This paper explores the conditions for applying normative control in co-operatives. For normative control to be effective two conditions are found particularly important: Individuals must be morally involved and the organization must have a system making it possible to link common norms and individual action. If these conditions are not fulfilled, as may be the case in many co-operatives, normative control cannot be expected to work. The problems of normative control in co-operatives may then not be caused by the use of normative control as such, but may instead be a problem of securing...
Continuity and general perturbation of the Drazin inverse for closed linear operators
N. Castro González
We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
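For finite matrices (the closed-operator setting of the paper is more delicate), one well-known way to compute the Drazin inverse goes through the Moore–Penrose pseudoinverse: for k at least the index of A, A^D = A^k (A^(2k+1))^+ A^k. A small numerical sketch under that assumption:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, valid when k is at least
    the index of A (the smallest k with rank A^k = rank A^(k+1))."""
    Ak = np.linalg.matrix_power(A, k)
    middle = np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1))
    return Ak @ middle @ Ak

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # a 1x1 invertible block plus a nilpotent block; index 2
AD = drazin(A, k=2)
print(np.round(AD, 6))            # [[0.5, 0, 0], [0, 0, 0], [0, 0, 0]]
# Defining properties to check: A @ AD == AD @ A, AD @ A @ AD == AD, A^(k+1) @ AD == A^k
```

Taking k equal to the matrix dimension is always safe, since the index never exceeds it.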
The design and operation of a continuous ion-exchange demonstration plant for the recovery of uranium
Craig, W.M.; Douglas, M.E.E.; Louw, G.D.
A description is given of the design of the continuous ion-exchange demonstration plant at Blyvooruitzicht Gold Mine, including details of the process design, the column construction, and the control system. The operating and process results gathered over a period of seventeen months are summarized, and development work and modifications to the process are discussed. It is concluded that the system comprising continuous loading and continuous elution is technically feasible and can be scaled up with confidence
Controlling operational risk: Concepts and practices
van den Tillaart, A.H.A.J.
The subject of this thesis is controlling 'operational risk' in banks. Operational risk is defined as the risk of losses resulting from inadequate or failed internal processes, people, systems, or from external events. Within this very broad subject, we focus on the place of operational risk
Power generation, operation and control
Wood, Allen J; Sheblé, Gerald B
Since publication of the second edition, there have been extensive changes in the algorithms, methods, and assumptions in energy management systems that analyze and control power generation. This edition is updated to acquaint electrical engineering students and professionals with current power generation systems. Algorithms and methods for solving integrated economic, network, and generating system analysis are provided. Also included are the state-of-the-art topics undergoing evolutionary change, including market simulation, multiple market analysis, multiple interchange contract analysis, c
Computer control for remote wind turbine operation
Manwell, J.F.; Rogers, A.L.; Abdulwahid, U.; Driscoll, J. [Univ. of Massachusetts, Amherst, MA (United States)
Lightweight wind turbines located in harsh, remote sites require particularly capable controllers. Based on extensive operation of the original ESI-807 moved to such a location, a much more sophisticated controller than the original one has been developed. This paper describes the design, development and testing of that new controller. The complete control and monitoring system consists of sensor and control inputs, the control computer, control outputs, and additional equipment. The control code was written in Microsoft Visual Basic on a PC type computer. The control code monitors potential faults and allows the turbine to operate in one of eight states: off, start, run, freewheel, low wind shutdown, normal wind shutdown, emergency shutdown, and blade parking. The controller also incorporates two "virtual wind turbines," including a dynamic model of the machine, for code testing. The controller can handle numerous situations for which the original controller was unequipped.
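A hedged sketch of how the eight operating states listed above might be organized as a simple state machine. The transition thresholds (cut-in and cut-out wind speeds) and the fault logic are hypothetical placeholders, not the ESI-807 controller logic.

```python
from enum import Enum, auto

class TurbineState(Enum):
    OFF = auto()
    START = auto()
    RUN = auto()
    FREEWHEEL = auto()
    LOW_WIND_SHUTDOWN = auto()
    NORMAL_WIND_SHUTDOWN = auto()
    EMERGENCY_SHUTDOWN = auto()
    BLADE_PARKING = auto()

def next_state(state, wind_speed, fault, stop_requested):
    """Very simplified transition logic; the 4 and 25 m/s limits are made-up values."""
    if fault:
        return TurbineState.EMERGENCY_SHUTDOWN
    if state == TurbineState.OFF and 4.0 <= wind_speed <= 25.0:
        return TurbineState.START
    if state == TurbineState.START:
        return TurbineState.RUN
    if state == TurbineState.RUN:
        if stop_requested:
            return TurbineState.NORMAL_WIND_SHUTDOWN
        if wind_speed < 4.0:
            return TurbineState.LOW_WIND_SHUTDOWN
        if wind_speed > 25.0:
            return TurbineState.NORMAL_WIND_SHUTDOWN
        return TurbineState.RUN
    if state in (TurbineState.LOW_WIND_SHUTDOWN, TurbineState.NORMAL_WIND_SHUTDOWN):
        return TurbineState.FREEWHEEL
    if state == TurbineState.EMERGENCY_SHUTDOWN:
        return TurbineState.BLADE_PARKING
    if state == TurbineState.FREEWHEEL and 4.0 <= wind_speed <= 25.0:
        return TurbineState.START
    return state

s = TurbineState.OFF
for wind in [3.0, 6.0, 6.0, 12.0, 30.0, 10.0]:
    s = next_state(s, wind, fault=False, stop_requested=False)
    print(wind, s.name)
```

A "virtual turbine" for code testing, as mentioned in the abstract, would simply feed simulated wind and fault signals into next_state instead of sensor readings.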
Model based Control of a Continuous Yeast Fermentation
Andersen, Maria Yolanda; Brabrand, Henrik; Jørgensen, Sten Bay
Control of a continuous fermentation with Saccharomyces cerevisiae is performed by manipulation of the feed flow rate using an ethanol measurement in the exit gas. The process is controlled at the critical dilution rate with a low ethanol concentration of 40-50 mg/l. A standard PI controller is able...
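Editorial sketch (not the authors' implementation): the loop described above, a PI controller that manipulates feed flow rate from an exit-gas ethanol measurement, can be illustrated as follows. The gains, sample time, actuator limits and the 45 mg/l setpoint are assumptions for illustration only.

    # Discrete PI controller: feed flow rate manipulated from an ethanol measurement.
    def make_pi_controller(kp=0.002, ki=0.0005, dt=60.0, setpoint=45.0,
                           u_min=0.0, u_max=2.0):
        integral = 0.0
        def step(ethanol_mg_per_l):
            nonlocal integral
            error = setpoint - ethanol_mg_per_l
            integral += error * dt
            u = kp * error + ki * integral      # candidate feed flow rate (l/h)
            return min(max(u, u_min), u_max)    # clamp to actuator limits
        return step

    controller = make_pi_controller()
    feed_rate = controller(52.0)   # ethanol above setpoint -> controller reduces feed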
Organization of Control Units with Operational Addressing
Alexander A. Barkalov; Roman M. Babakov; Larysa A. Titarenko
The use of an operational addressing unit as a block of the control unit is proposed. A new structural model of a Moore finite-state machine with reduced hardware amount is developed. The generalized structure of the operational addressing unit is suggested. An example of the synthesis process for a Moore finite-state machine with an operational addressing unit is given. Analytical research on the proposed control unit structure is carried out.
Automatic operation device for control rods
Sekimizu, Koichi
Purpose: To enable automatic operation of control rods based on the reactor operation planning and, particularly, to decrease the operator's load upon start-up and shutdown of the reactor. Constitution: Operation plans, demands for automatic operation, break point setting values, power and reactor core flow rate changes, demands for operation interrupt, demands for restart, demands for forecasting and the like are input to an input device, and an overall judging device performs a long-term forecast as far as the break point with a long-term forecasting device based on the operation plans. The automatic reactor operation or the like is carried out based on the long-term forecast, and a short-term forecast of the change in the reactor core status due to the control rod operation sequence is performed based on the control rod pattern and the operation planning. It is then judged, based on the result of the short-term forecast, whether the operation of the intended control rod is possible. (Aizawa, K.)
A continuous-time control model on production planning network ...
DEA Omorogbe, MIU Okunsebor
In this paper, we give a slightly detailed review of the Graves and Hollywood constant inventory tactical planning model for a job shop. The limitations of this model are pointed out and a continuous time ...
Continuity of Operations Planning (COOP): A Strategy for Implementation
...a position of building an alternate command and control site from ground zero, with little time or thought going into the functions, capacities and...above, there are two other approaches available to leaders in selecting a site. One option is to allow employees to telecommute and work from home
Method of controlling the reactor operation
Ishiguro, Akira; Nakakura, Hiroyuki.
Purpose: To moderate the vibratory response due to delayed operation and thereby obtain a stable controlled response in the operation control of a PWR type reactor. Method: The reactor operation is controlled by axial power distribution control, regulating the boron concentration in the primary coolant with a boron concentration control system and controlling the average temperature of the primary coolant with the control rod control system. In this case, the control operation and the control response become unstable due to transmission delays, etc., of the aqueous boric acid injection into the primary coolant circuits, resulting in a vibratory response. In the present invention, signals are prepared by adding an amount proportional to the rate of change of the xenon concentration, obtained from the measured reactor power, to the conventional axial power distribution parameter deviation, and these are used as the input signals for the boron concentration control system. As a result, the instability due to the transmission delay of the aqueous boric acid injection is improved by this anticipatory control proportional to the rate of change of the xenon concentration. An advantageous effect can be expected for load following operation during the daytime according to the present invention. (Kamimura, M.)
Operating control techniques for maglev transport systems
Kraft, K H; Schnieder, E
The technical and operational possibilities of magnetic levitation transport systems can only be fully exploited by introducing 'intelligent' control systems which ensure automatic and trouble-free train running. The solution of exacting requirements in the fields of traction dynamics, security and control, as well as information gathering, transmission and processing, is an important precondition in that respect. The authors report here on the present state of research and development in operating control techniques applicable to maglev transport systems.
Computer control of shielded cell operations
Jeffords, W.R. III.
This paper describes in detail a computer system to remotely control shielded cell operations. System hardware, software, and design criteria are discussed. We have designed a computer-controlled buret that provides a tenfold improvement over the buret currently in service. A computer also automatically controls cell analyses, calibrations, and maintenance. This system improves conditions for the operators by providing a safer, more efficient working environment and is expandable for future growth and development
Process automation using combinations of process and machine control technologies with application to a continuous dissolver
Spencer, B.B.; Yarbro, O.O.
Operation of a continuous rotary dissolver, designed to leach uranium-plutonium fuel from chopped sections of reactor fuel cladding using nitric acid, has been automated. The dissolver is a partly continuous, partly batch process that interfaces at both ends with batchwise processes, thereby requiring synchronization of certain operations. Liquid acid is fed and flows through the dissolver continuously, whereas chopped fuel elements are fed to the dissolver in small batches and move through the compartments of the dissolver stagewise. Sequential logic (or machine control) techniques are used to control discrete activities such as the sequencing of isolation valves. Feedback control is used to control acid flowrates and temperatures. Expert systems technology is used for on-line material balances and diagnostics of process operation. 1 ref., 3 figs
Nuclear thermal rocket engine operation and control
Gunn, S.V.; Savoie, M.T.; Hundal, R.
The operation of a typical Rover/Nerva-derived nuclear thermal rocket (NTR) engine is characterized and the control requirements of the NTR are defined. A rationale for the selection of a candidate diverse redundant NTR engine control system is presented and the projected component operating requirements are related to the state of the art of candidate components and subsystems. The projected operational capabilities of the candidate system are delineated for the startup, full-thrust, shutdown, and decay heat removal phases of the engine operation. 9 refs
Intelligent tutors for control center operator training
Vale, Z.A. [Porto Univ. (Portugal). Dept. of Electrical and Computer Engineering; Fernandes, M.F.; Marques, A. [Electricity of Portugal, Sacavem (Portugal)
Power systems are presently operated and controlled remotely from control centers that receive on-line information about the power system state. Control center operators have highly demanding tasks, which makes their training a key issue for the performance of the whole power system. Simulators are usually used by electrical utilities for this purpose, but they are very expensive applications and their use requires the preparation of the training sessions by qualified training staff, which is a very time-consuming task. Due to this, these simulators are only used a few times a year. Intelligent Tutoring Systems (ITS) provide some new possibilities for control center operator training, making it easier to use without much assistance from the teaching staff. On the other hand, an expert system in use in a control center can be adapted to an ITS to train operators without much effort. 18 refs
18 CFR 376.209 - Procedures during periods of emergency requiring activation of the Continuity of Operations Plan.
... periods of emergency requiring activation of the Continuity of Operations Plan. 376.209 Section 376.209... of the Continuity of Operations Plan. (a)(1) The Commission's Continuity of Operations Plan is...) During periods when the Continuity of Operations Plan is activated, the Commission will continue to act...
47 CFR 78.51 - Remote control operation.
... 47 Telecommunication 4 2010-10-01 2010-10-01 false Remote control operation. 78.51 Section 78.51 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES CABLE TELEVISION RELAY... shall also be equipped with suitable devices for observing the overall characteristics of the...
"Batch" kinetics in flow: online IR analysis and continuous control.
Moore, Jason S; Jensen, Klavs F
Currently, kinetic data is either collected under steady-state conditions in flow or by generating time-series data in batch. Batch experiments are generally considered to be more suitable for the generation of kinetic data because of the ability to collect data from many time points in a single experiment. Now, a method that rapidly generates time-series reaction data from flow reactors by continuously manipulating the flow rate and reaction temperature has been developed. This approach makes use of inline IR analysis and an automated microreactor system, which allowed for rapid and tight control of the operating conditions. The conversion/residence time profiles at several temperatures were used to fit parameters to a kinetic model. This method requires significantly less time and a smaller amount of starting material compared to one-at-a-time flow experiments, and thus allows for the rapid generation of kinetic data. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
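Editorial sketch (not the authors' code or data): the parameter-fitting step described above can be illustrated by fitting conversion/residence-time profiles at several temperatures to a first-order Arrhenius model. The rate law, the synthetic data and the starting guesses below are assumptions.

    import numpy as np
    from scipy.optimize import curve_fit

    R = 8.314  # J/(mol*K)

    # First-order plug-flow model: X = 1 - exp(-k(T)*tau), with k(T) = A*exp(-Ea/(R*T)).
    def conversion(inputs, lnA, Ea):
        tau, T = inputs                       # residence time (s), temperature (K)
        k = np.exp(lnA - Ea / (R * T))
        return 1.0 - np.exp(-k * tau)

    # Synthetic profiles standing in for the measured time-series data.
    tau = np.array([30.0, 60.0, 120.0, 30.0, 60.0, 120.0])
    T = np.array([323.0, 323.0, 323.0, 343.0, 343.0, 343.0])
    X_meas = np.array([0.18, 0.33, 0.55, 0.42, 0.66, 0.88])

    (lnA_fit, Ea_fit), _ = curve_fit(conversion, (tau, T), X_meas, p0=(10.0, 5.0e4))
    print(f"A = {np.exp(lnA_fit):.2e} 1/s, Ea = {Ea_fit / 1000:.1f} kJ/mol")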
Intelligent Control and Operation of Distribution System
Bhattarai, Bishnu Prasad
methodology to ensure efficient control and operation of the future distribution networks. The major scientific challenge is thus to develop control models and strategies to coordinate responses from widely distributed controllable loads and local generations. Detailed models of key Smart Grid (SG) elements...... in this direction but also benefit distribution system operators in the planning and development of the distribution network. The major contributions of this work are described in the following four stages: In the first stage, an intelligent Demand Response (DR) control architecture is developed for coordinating...... the key SG actors, namely consumers, network operators, aggregators, and electricity market entities. A key intent of the architecture is to facilitate market participation of residential consumers and prosumers. A Hierarchical Control Architecture (HCA) having primary, secondary, and tertiary control...
Modeling Control Situations in Power System Operations
Saleem, Arshad; Lind, Morten; Singh, Sri Niwas
for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system......Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...
Development of stereo endoscope system with its innovative master interface for continuous surgical operation.
Kim, Myungjoon; Lee, Chiwon; Hong, Nhayoung; Kim, Yoon Jae; Kim, Sungwan
Although robotic laparoscopic surgery has various benefits when compared with conventional open surgery and minimally invasive surgery, it also has issues to overcome and one of the issues is the discontinuous surgical flow that occurs whenever control is swapped between the endoscope system and the operating robot arm system. This can lead to problems such as collision between surgical instruments, injury to patients, and increased operation time. To achieve continuous surgical operation, a wireless controllable stereo endoscope system is proposed which enables the simultaneous control of the operating robot arm system and the endoscope system. The proposed system consists of two improved novel master interfaces (iNMIs), a four-degrees of freedom (4-DOFs) endoscope control system (ECS), and a simple three-dimensional (3D) endoscope. In order to simultaneously control the proposed system and patient side manipulators of da Vinci research kit (dVRK), the iNMIs are installed to the master tool manipulators of dVRK system. The 4-DOFs ECS consists of four servo motors and employs a two-parallel link structure to provide translational and fulcrum point motion to the simple 3D endoscope. The images acquired by the endoscope undergo stereo calibration and rectification to provide a clear 3D vision to the surgeon as available in clinically used da Vinci surgical robot systems. Tests designed to verify the accuracy, data transfer time, and power consumption of the iNMIs were performed. The workspace was calculated to estimate clinical applicability and a modified peg transfer task was conducted with three novice volunteers. The iNMIs operated for 317 min and moved in accordance with the surgeon's desire with a mean latency of 5 ms. The workspace was calculated to be 20378.3 cm³, which exceeds the reference workspace of 549.5 cm³. The novice volunteers were able to successfully execute the modified peg transfer task designed to evaluate the proposed system's overall
Enhancing Safety at Airline Operations Control Centre
Lukáš Řasa
In recent years a new term, Safety Management System (SMS), has been introduced into aviation legislation. This system is being adopted by airline operators. One of the ground-based actors of everyday operations is the Operations Control Centre (OCC). The goal of this article has been to identify and assess risks and dangers which occur at the OCC and to create a template for OCC implementation into SMS.
Operational Assessment of Controller Complexity, Phase I
National Aeronautics and Space Administration — In today's operations, acceptable levels of controller workload are maintained by assigning sector capacities based on simple aircraft count and a capacity threshold...
Developing control room operator selection procedures
Bosshardt, M.J.; Bownas, D.A.
PDRI is performing a two-year study to identify the tasks performed and attributes required in electric power generating plant operating jobs, focusing on the control room operator position. Approximately 65 investor-owned utilities are participating in the study.
Automatic Control of Freeboard and Turbine Operation
Kofoed, Jens Peter; Frigaard, Peter Bak; Friis-Madsen, Erik
The report deals with the modules for automatic control of freeboard and turbine operation on board the Wave Dragon, Nissum Bredning (WD-NB) prototype, and covers what has been going on up to ultimo 2003.
Terminal sliding mode control for continuous stirred tank reactor
Zhao, D.; Zhu, Q.; Dubbeldam, J.
A continuous stirred tank reactor (CSTR) is a typical example of chemical industrial equipment, whose dynamics represent an extensive class of second-order nonlinear systems. Designing a good control algorithm for the CSTR is very challenging due to its high complexity. The two difficult issues in CSTR control are state estimation and external disturbance attenuation. In general, a fast and robust response is essential in industrial process control. Driven by these ...
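Editorial note (a generic form, not necessarily the exact surface used by the authors): in terminal sliding mode control of second-order error dynamics with tracking error e, the sliding surface is commonly taken as

    s = \dot{e} + \beta\, e^{q/p}, \qquad \beta > 0, \quad p > q > 0 \text{ odd integers},

and once the state is confined to s = 0 the error converges to zero in the finite time t_s = \frac{p}{\beta (p - q)} |e(0)|^{(p-q)/p}, in contrast with the asymptotic convergence obtained from a linear sliding surface.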
Operational trade-offs in reservoir control
Georgakakos, Aris P.
Reservoir operation decisions require constant reevaluation in the face of conflicting objectives, varying hydrologic conditions, and frequent operational policy changes. Optimality is a relative concept very much dependent on the circumstances under which a decision is made. More than anything else, reservoir management authorities need the means to assess the impacts of various operational options. It is their responsibility to define what is desirable after a thorough evaluation of the existing circumstances. This article presents a model designed to generate operational trade-offs common among reservoir systems. The model avoids an all-encompassing problem formulation and distinguishes three operational modes (levels) corresponding to normal, drought, and flood operations. Each level addresses only relevant system elements and uses a static and a dynamic control module to optimize turbine performance within each planning period and temporally. The model is used for planning the operation of the Savannah River System.
Estimated Costs of Continuing Operations in Iraq and Other Operations of the Global War on Terrorism
Holtz-Eakin, Douglas
At the request of Senator Conrad, the Congressional Budget Office (CBO) has estimated the costs of military operations in Iraq and Afghanistan and other operations associated with the global war on terrorism (GWOT...
Adding control to arbitrary unknown quantum operations
Zhou, Xiao-Qi; Ralph, Timothy C.; Kalasuwan, Pruet; Zhang, Mian; Peruzzo, Alberto; Lanyon, Benjamin P.; O'Brien, Jeremy L.
Although quantum computers promise significant advantages, the complexity of quantum algorithms remains a major technological obstacle. We have developed and demonstrated an architecture-independent technique that simplifies adding control qubits to arbitrary quantum operations—a requirement in many quantum algorithms, simulations and metrology. The technique, which is independent of how the operation is done, does not require knowledge of what the operation is, and largely separates the problems of how to implement a quantum operation in the laboratory and how to add a control. Here, we demonstrate an entanglement-based version in a photonic system, realizing a range of different two-qubit gates with high fidelity. PMID:21811242
Absence of singular continuous spectrum for certain self-adjoint operators
Mourre, E.
A sufficient condition is given for a self-adjoint operator to exhibit, in the vicinity of a point E of its spectrum, the following properties: its point spectrum is finite; its singular continuous spectrum is empty. As new applications, the absence of singular continuous spectrum is demonstrated in the following two cases: perturbations of pseudo-differential operators, and Schroedinger operators of a three-body system.
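Editorial note (standard background, not part of the record): the condition in question is the Mourre estimate. For a self-adjoint operator H and a conjugate operator A, one requires on a neighbourhood \Delta of E that

    E_\Delta(H)\, [H, iA]\, E_\Delta(H) \;\ge\; \alpha\, E_\Delta(H) + K, \qquad \alpha > 0, \; K \text{ compact},

where E_\Delta(H) is the spectral projection of H on \Delta. Under suitable regularity of H with respect to A, this yields at most finitely many eigenvalues, each of finite multiplicity, and no singular continuous spectrum in \Delta.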
Applying interactive control to waste processing operations
Grasz, E.L.; Merrill, R.D.; Couture, S.A.
At present, waste and residue processing includes steps that require human interaction. The risk of exposure to unknown hazardous materials and the potential for radiation contamination motivates the desire to remove operators from these processes. Technologies that facilitate this include glove box robotics, modular systems for remote and automated servicing, and interactive controls that minimize human intervention. LLNL is developing an automated system which is designed to supplant the operator for glove box tasks, thus protecting the operator from the risk of radiation exposure and minimizing operator-associated waste. Although most of the processing can be automated with minimal human interaction, there are some tasks where intelligent intervention is both desirable and necessary to adapt to unexpected circumstances and events. These activities require that the operator interact with the process using a remote manipulator which provides or reflects a natural feel to the operator. The remote manipulation system which was developed incorporates sensor fusion and interactive control, and provides the operator with an effective means of controlling the robot in a potentially unknown environment. This paper describes recent accomplishments in technology development and integration, and outlines the future goals of Lawrence Livermore National Laboratory for achieving this integrated interactive control capability
Plant status control - with an operational focus
Lane, L.A.
In the Nuclear industry, we have done a very good job of designing, developing, constructing, and improving our nuclear facilities. We have, however, often been inconsistent in documenting the details of our facilities, clearly addressing the rules around facility operation, and controlling and tracking the temporary, or permanent changes to our facilities. The reality is, that once we build a facility, we then must operate the facility, for it to be viable. Further we must operate it safely and efficiently for the facility to produce its product, and be acceptable to the public. Unfortunately, when we design and build these large, complicated facilities, we cannot project all the nuances of facility operation, although we can recognize this potential gap, and prepare for it. In order to allow for the complexities of the real world, we must provide the individuals who are tasked with operating our nuclear facilities, with the tools and processes to deal with 'all the nuances' of facility operation. This discussion will focus on the concepts behind a key process for ensuring that we meet our design and operating needs for our facilities, as well as recognizing and dealing with the potential gaps. The key process is 'Plant Status Control', and the discussion will have a primary focus on the needs of the end users, that being the individuals that have the immediate and current accountability for control and safety of the facility, the equipment, the staff, and ultimately the public, that being our Operations staff, and the Shift Manager. (author)
Operation control device for nuclear power plants
Suto, Osamu.
Purpose: To make the controlling functions of the central control console more centralized by constituting the operation controls for a nuclear power plant from computer systems having substantially independent functions, such as plant monitoring and control, reactor monitoring and management, and CRT display, and by decreasing the interactions between each of the systems. Constitution: An input/output device for the input of process data for the nuclear power plant and indication data for the plant control console is connected to a plant supervisory and control computer system and a display computer system. The plant supervisory and control computer system and a reactor monitoring and management computer system are connected with a CRT display control device, a printer and a CRT display input/output device, and the display computer system is connected with the CRT display control device and the CRT display unit on the central control console, whereby process input can be processed and displayed at high speed. (Yoshino, Y.)
Hydrocarbon control strategies for gasoline marketing operations
Norton, R.L.; Sakaida, R.R.; Yamada, M.M.
This informational document provides basic and current descriptions of gasoline marketing operations and methods that are available to control hydrocarbon emissions from these operations. The three types of facilities that are described are terminals, bulk plants, and service stations. Operational and business trends are also discussed. The potential emissions from typical facilities, including transport trucks, are given. The operations which lead to emissions from these facilities include (1) gasoline storage, (2) gasoline loading at terminals and bulk plants, (3) gasoline delivery to bulk plants and service stations, and (4) the refueling of vehicles at service stations. Available and possible methods for controlling emissions are described with their estimated control efficiencies and costs. This report also includes a bibliography of references cited in the text, and supplementary sources of information.
CV controls from design to operation
Blanc, D
The Cooling and Ventilation (CV) group emphasised the need to redefine its organisational structure at the end of 1998. The main objective of this operation was to ensure that the CV group remains competitive and efficient through the growing tasks of the LHC project. The main change brought by this reorganisation is that the new structure is more project oriented and operates along three distinct axes: Design, Work and Operation. Process control project management requires complete and early interaction and participation of all the actors involved. To be efficient and constructive, this procedure must be considered and performed not only during the design stage but throughout the project planning phases, and must go beyond the completion of the work to include the process control operation activity. The paper explains the present project management for process control. It describes the present constraints and gives suggestions for a different approach to these projects to improve the performance and efficiency of a contr...
Hydrate Control for Gas Storage Operations
Jeffrey Savidge
The overall objective of this project was to identify low-cost hydrate control options to help mitigate and solve hydrate problems that occur in moderate- and high-pressure natural gas storage field operations. The study includes data on a number of flow configurations, fluids and control options that are common in natural gas storage field flow lines. The final phase of this work brings together data and experience from the hydrate flow test facility and multiple field and operator sources. It includes a compilation of basic information on operating conditions as well as candidate field separation options. Lastly, the work is integrated with the initial work to provide a comprehensive view of gas storage field hydrate control for field operations and storage field personnel.
Control of TFTR during DT operations
Pearson, G.G.; Alling, P.D.; Blanchard, W.; Camp, R.A.; Hawryluk, R.J.; Hosea, J.C.; Nagy, A.
Since beginning routine D-T operations in December, 1993, there have been more than 500 DT plasmas and approximately 600,000 Ci of tritium processed through TFTR culminating in greater than 10 MW of fusion power produced in a single discharge in November, 1994. These performance levels were achieved while maintaining the highest levels of personnel and equipment safety. Prior to D-T operations, a Chain of Command structure and a TFTR Shift Supervisor (TFTRSS) position were developed for centralized control of the facility with all subsystems reporting to this position. A comprehensive surveillance system was incorporated such that the TFTR SS could easily review the operational readiness of subsystems for D-T operations. A TFTR SS Station was constructed to facilitate monitoring and control of TFTR. This station includes a camera system, FAX, a networked personal computer, a computerized tritium monitor and control system and a hardware interlock system. In the transition from D-D to D-T operations, TFTR's procedures were reviewed/revised and a number of additional procedures developed for control of activities at the facility. This paper details the equipment, administrative and organizational configurations used for controlling TFTR during D-T operations
Modifications in the operational conditions of the IEA-R1 reactor under continuous 48 hours operation
Moreira, Joao Manoel Losada; Frajndlich, Roberto
This work shows the required changes in the IEA-R1 reactor for operation at 2 MW, 48 hours continuously. The principal technical change regards the operating conditions of the reactor, namely the required excess reactivity, which will now amount to 4800 pcm in order to compensate for the Xe poisoning at equilibrium at 2 MW. (author). 6 refs, 1 fig, 1 tab
How do Continuous Climb Operations affect the capacity of a Terminal Manoeuvre Area?
Perez Casan, J.A.
Continuous climb operations are the next step in optimising departure trajectories, with the goals of minimising fuel consumption and pollutant and noise emissions in the airports' neighbourhood, although, due to the intrinsic nature of these procedures, their integration requires the development of a new framework for airline operators and air traffic control. Based on the BADA model developed by EUROCONTROL, three activities have been carried out: simulation of several continuous climbs for three aircraft types (Light, Medium and Heavy); analysis of the different separations applied throughout the climb from the runway up to cruise level; and, as a third activity, definition of new separation minima to ensure that the minimum separations are not violated with these new procedures along the climb. This work presents the results of modelling three continuous climb types (constant true airspeed, constant climb angle and constant vertical speed) and new time-based separations for the most-used aircraft models in Palma TMA, which is the case-study scenario. Finally, this theoretical analysis has been applied to a real scenario in Palma de Mallorca TMA in order to compare how the capacity copes with the introduction of this new procedure alongside standard departures, where a standard departure is understood as a departure with a level-off at a determined altitude and with the possibility of being affected by any ATC action. First outcomes are promising because capacity, theoretically, would not be grossly diminished, as might initially have been expected based on previous studies on continuous descent approaches, although these results should be considered cautiously because the model lacks several factors of uncertainty associated with a real climb. (Author)
Status of Siemens steam generator design and measures to assure continuous long-term reliable operation
Hoch, G.
Operating pressurized water reactors with U-tube steam generators have encountered difficulties with one or a combination of inadequate material selection, poor design or manufacturing, and insufficient water chemistry control, which resulted in excessive tube degradation. In contrast to the above-mentioned problems, steam generators from Siemens/KWU are proving through operating experience that all measures undertaken at the design stage as well as during the operating and maintenance phase were effective enough to counteract any tube corrosion phenomena or other steam generator related problems. An Integrated Service Concept has been developed, applied and, wherever necessary, improved in order to ensure reliable steam generator operation. The performance of the steam generators is updated continuously, evaluated and implemented in lifetime databases. The main indicator of steam generator integrity is the results of the eddy current testing of the steam generator tubes. Tubes with indications are rated against lifetime threshold values and, if necessary, plugged, based on individual assessment criteria. (author)
Continuous operation of an ultra-low-power microcontroller using glucose as the sole energy source.
Lee, Inyoung; Sode, Takashi; Loew, Noya; Tsugawa, Wakako; Lowe, Christopher Robin; Sode, Koji
An ultimate goal for those engaged in research to develop implantable medical devices is to develop mechatronic implantable artificial organs such as an artificial pancreas. Such devices would comprise at least a sensor module, an actuator module, and a controller module. For the development of optimal mechatronic implantable artificial organs, these modules should be self-powered and autonomously operated. In this study, we aimed to develop a microcontroller using the BioCapacitor principle. A direct electron transfer type glucose dehydrogenase was immobilized onto mesoporous carbon, and then deposited on the surface of a miniaturized Au electrode (7 mm²) to prepare a miniaturized enzyme anode. The enzyme fuel cell was connected with a 100 μF capacitor and a power boost converter as a charge pump. The voltage of the enzyme fuel cell was increased in a stepwise manner by the charge pump from 330 mV to 3.1 V, and the generated electricity was charged into a 100 μF capacitor. The charge pump circuit was connected to an ultra-low-power microcontroller. The BioCapacitor-based circuit thus prepared was able to operate an ultra-low-power microcontroller continuously, running a program for 17 h that turned on an LED every 60 s. Our success in operating a microcontroller using glucose as the sole energy source indicates the possibility of realizing implantable self-powered, autonomously operated artificial organs, such as an artificial pancreas. Copyright © 2016 Elsevier B.V. All rights reserved.
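Editorial back-of-the-envelope check, using only the figures quoted in the abstract: the energy stored per full charge of the 100 μF capacitor at 3.1 V follows from E = ½CV².

    # Energy stored in the 100 uF capacitor charged to 3.1 V by the charge pump.
    C = 100e-6            # farads
    V = 3.1               # volts
    E = 0.5 * C * V**2    # joules
    print(f"{E * 1e6:.0f} microjoules per full charge")   # roughly 480 uJ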
Operational advanced materials control and accountability system
Malanify, J.J.; Bearse, R.C.; Christensen, E.L.
An accountancy system based on the Dynamic Materials Accountability (DYMAC) System has been in operation at the Plutonium Processing Facility at the Los Alamos Scientific Laboratory (LASL) since January 1978. This system, now designated the Plutonium Facility/Los Alamos Safeguards System (PF/LASS), has enhanced nuclear material accountability and process control at the LASL facility. The nondestructive assay instruments and the central computer system are operating accurately and reliably. As anticipated, several uses of the system have developed in addition to safeguards, notably scrap control and quality control. The successes of this experiment strongly suggest that implementation of DYMAC-based systems should be attempted at other facilities. 20 refs
Planning in the Continuous Operations Environment of the International Space Station
Maxwell, Theresa; Hagopian, Jeff
The continuous operation planning approach developed for the operations planning of the International Space Station (ISS) is reported on. The approach was designed to be a robust and cost-effective method. It separates ISS planning into two planning functions: long-range planning for a fixed length planning horizon which continually moves forward as ISS operations progress, and short-range planning which takes a small segment of the long-range plan and develops a detailed operations schedule. The continuous approach is compared with the incremental approach, the short and long-range planning functions are described, and the benefits and challenges of implementing a continuous operations planning approach for the ISS are summarized.
The continuation training of operators and feedback of operational experience in the Royal Navy's nuclear submarine programme
Manson, R.P.
Naval continuation training has relied heavily on the use of realistic simulators for over ten years, and this has been proved to be a cost-effective and efficient method of training. The type of simulator used, the selection and qualification of simulator instructors, and the method of training experienced operators is described. Also, the assessment of operator performance, the use of simulators during the final stages of operator qualification, and their use for training operators on plant operation whilst shut-down are covered. The Navy also pays great attention to the feedback of operating experience from sea into both continuation and basic training. This is accomplished using Incident Reports, which are rendered whenever the plant is operated outside the approved Operating Documentation, or when any other unusual circumstance arises. Each Report is individually assessed and replied to by a qualified operator, and those incidents of more general interest are published in a wider circulation document available to all plant operators. In addition, each crew is given an annual lecture on recent operating experiences. Important lessons are fed forward into new plant design, and the incident reports are also used as a source of information for plant reliability data. (author)
Variational reconstruction using subdivision surfaces with continuous sharpness control
Xiaoqun Wu; Jianmin Zheng; Yiyu Cai; Haisheng Li
We present a variational method for subdivision surface reconstruction from a noisy dense mesh. A new set of subdivision rules with continuous sharpness control is introduced into Loop subdivision for better modeling subdivision surface features such as semi-sharp creases, creases, and corners. The key idea is to assign a sharpness value to each edge of the control mesh to continuously control the surface features. Based on the new subdivision rules, a variational model with L1 norm is formulated to find the control mesh and the corresponding sharpness values of the subdivision surface that best fits the input mesh. An iterative solver based on the augmented Lagrangian method and particle swarm optimization is used to solve the resulting non-linear, non-differentiable optimization problem. Our experimental results show that our method can handle meshes well with sharp/semi-sharp features and noise.
Genetic Algorithm Based PID Controller Tuning Approach for Continuous Stirred Tank Reactor
A. Jayachitra; R. Vinodha
A genetic algorithm (GA) based PID (proportional-integral-derivative) controller has been proposed for tuning optimized PID parameters in a continuous stirred tank reactor (CSTR) process using a weighted combination of objective functions, namely integral square error (ISE), integral absolute error (IAE), and integrated time absolute error (ITAE). Optimization of PID controller parameters is the key goal in chemical and biochemical industries. PID controllers have narrowed down the operating r...
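Editorial sketch (the plant model, weights and GA settings are placeholders, not the authors'): the weighted objective described above can be evaluated for a candidate gain set as follows; a GA would then minimize this cost (or maximize its reciprocal) over (Kp, Ki, Kd).

    import numpy as np

    def weighted_error_index(t, error, w_ise=1.0, w_iae=1.0, w_itae=1.0):
        # Weighted combination of ISE, IAE and ITAE for a sampled error trajectory.
        dt = np.diff(t, prepend=t[0])
        ise = np.sum(error**2 * dt)
        iae = np.sum(np.abs(error) * dt)
        itae = np.sum(t * np.abs(error) * dt)
        return w_ise * ise + w_iae * iae + w_itae * itae

    def fitness(gains, simulate_cstr):
        # simulate_cstr is a user-supplied closed-loop CSTR simulator returning (t, error).
        kp, ki, kd = gains
        t, error = simulate_cstr(kp, ki, kd)
        return 1.0 / (1.0 + weighted_error_index(t, error))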
DYNAMIC SIMULATION AND FUZZY CONTROL OF A CONTINUOUS DISTILLATION COLUMN
Arbildo López, A.; Lombira Echevarría, J.; Osario López, l.
The objective of this work is the study of the dynamic simulation and fuzzy control of a multicomponent continuous distillation column. In this work, the mathematical model of the distillation column and the computing program for the simulation are described. Also, the structure and implementation of the fuzzy controller are presented. Finally, the results obtained using this program are compared with those reported in the scientific literature for different mixtures. The objective of our...
Force characteristics in continuous path controlled crankpin grinding
Zhang, Manchao; Yao, Zhenqiang
Recent research on the grinding force involved in cylindrical plunge grinding has focused mainly on steady-state conditions. Unlike in conventional external cylindrical plunge grinding, the conditions between the grinding wheel and the crankpin change periodically in path controlled grinding because of the eccentricity of the crankpin and the constant rotational speed of the crankshaft. The objective of this study is to investigate the effects of various grinding conditions on the characteristics of the grinding force during continuous path controlled grinding. Path controlled plunge grinding is conducted at a constant rotational speed using a cubic boron nitride (CBN) wheel. The grinding force is determined by measuring the torque. The experimental results show that the force and torque vary sinusoidally during dry grinding and load grinding. The variations in the results reveal that the resultant grinding force and torque decrease with higher grinding speeds and increase with higher peripheral speeds of the pin and higher grinding depths. In path controlled grinding, unlike in conventional external cylindrical plunge grinding, the axial grinding force cannot be disregarded. The speeds and speed ratios of the workpiece and wheel are also analyzed, and the analysis results show that up-grinding and down-grinding occur during the grinding process. This paper proposes a method for describing the force behavior under varied process conditions during continuous path controlled grinding, which provides a beneficial reference for describing the material removal mechanism and for optimizing continuous controlled crankpin grinding.
Analysis on electronic control unit of continuously variable transmission
Cao, Shuanggui
A continuously variable transmission (CVT) system can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission allows the vehicle speed to change more smoothly and improves ride comfort. Although CVT technology has developed greatly, it still has many shortcomings. The CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, low transmitted power, less-than-ideal control, high cost and other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission system with electronic control can achieve automatic control of power transmission and give full play to the characteristics of the engine to achieve optimal control of the powertrain, so that the vehicle always travels near its best operating condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes many signals sent by sensors, such as throttle angle, brake signals, the engine speed signal, the speed signals of the input and output shafts of the transmission, manual shift signals, mode selection signals, the gear position signal and the speed ratio signal, so as to provide the corresponding processing for the controller core.
Continuous control systems for non-contact ECG
Kodkin, Vladimir L.; Yakovleva, Galina V.; Smirnov, Alexey S.
South Ural State University is continuing research work dedicated to innovations in biomedicine. The development of a system for continuous control and diagnosis of the functional state in large groups of people is based on studies of non-contact ECG recording reported by the authors at the SPIE conference in 2016. The next stage of the studies has been performed this year.
Anger in School Managers: Continuity, Direction, Control and Style
Koc, Mustafa; Iskender, Murat; Cardak, Mehmet; Dusunceli, Betul
School managers undertake an important duty in structuring of education institutions. In the study carried out in this context; anger conditions, continuity, and direction of anger, anger control levels and anger styles of school managers who are the decision makers in schools were examined according to the ages, working periods, duty types, ways…
Operational experience with the CEBAF control system
Hovater, C.; Chowdhary, M.; Karn, J.; Tiefenback, M.; Zeijts, J. van; Watson, W.
The CEBAF accelerator at Thomas Jefferson National Accelerator Facility (Jefferson Lab) successfully began its experimental nuclear physics program in November of 1995 and has since surpassed predicted machine availability. Part of this success can be attributed to using the EPICS (Experimental Physics and Industrial Control System) control system toolkit. The CEBAF control system is one of the largest accelerator control systems now operating. It controls approximately 338 SRF cavities, 2,300 magnets, 500 beam position monitors and other accelerator devices, such as gun hardware and other beam monitoring devices. All told, the system must be able to access over 125,000 database records. The system has been well received by both operators and the hardware designers. The EPICS utilities have made the task of troubleshooting systems easier. The graphical and text-based creation tools have allowed operators to custom-build control screens. In addition, the ability to integrate EPICS with other software packages, such as Tcl/Tk, has allowed physicists to quickly prototype high-level application programs, and to provide GUI front ends for command-line driven tools. Specific examples of the control system applications are presented in the areas of energy and orbit control, cavity tuning and accelerator tune-up diagnostics
Process control for a continuous uranyl nitrate evaporator
Peterson, S.F.; MacIntyre, L.P.
A continuous uranyl nitrate evaporator at the Savannah River Plant (SRP) in Aiken, South Carolina was the subject of this work. A rigorous mathematical model of the evaporator was developed. A difference equation form of the model was then constructed and used for control studies. Relative gain analysis was done on the system in order to identify any promising multivariable control schemes. Several alternate control schemes were modeled, tuned, and compared against the scheme presently in use at SRP. As the pneumatic specific gravity instrumentation at SRP is very noisy, the noise was simulated and used in the second phase of the control study. In this phase, alternate tuning methods and filters were investigated and compared. The control studies showed that the control algorithm now in use at SRP is the simplest and best available. 10 references, 53 figures, 22 tables
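Editorial note on the relative gain analysis mentioned above: for a square steady-state gain matrix G, the relative gain array is Λ = G ∘ (G⁻¹)ᵀ (elementwise product). The snippet below computes it for an arbitrary illustrative 2×2 matrix, not for evaporator data.

    import numpy as np

    def relative_gain_array(G):
        # RGA = G (elementwise product) transpose of inverse(G); rows and columns each sum to 1.
        G = np.asarray(G, dtype=float)
        return G * np.linalg.inv(G).T

    G = np.array([[2.0, 0.5],
                  [0.8, 1.5]])       # illustrative pairing study, not SRP data
    print(relative_gain_array(G))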
Evaluation of new control rooms by operator performance analysis
Mori, M; Tomizawa, T.; Tai, I.; Monta, K.; Yoshimura, S.; Hattori, Y.
An advanced supervisory and control system called PODIA TM (Plant Operation by Displayed Information and Automation) was developed by Toshiba. Since this system utilizes computer driven CRTs as a main device for information transfer to operators, thorough system integration tests were performed at the factory and evaluations were made of operators' assessment from the initial experience of the system. The PODIA system is currently installed at two BWR power plants. Based on the experiences from the development of PODIA, a more advanced man-machine interface, Advanced-PODIA (A-PODIA), is developed. A-PODIA enhances the capabilities of PODIA in automation, diagnosis, operational guidance and information display. A-PODIA has been validated by carrying out systematic experiments with a full-scope simulator developed for the validation. The results of the experiments have been analyzed by the method of operator performance analysis and applied to further improvement of the A-PODIA system. As a feedback from actual operational experience, operator performance data in simulator training is an important source of information to evaluate human factors of a control room. To facilitate analysis of operator performance, a performance evaluation system has been developed by applying AI techniques. The knowledge contained in the performance evaluation system was elicited from operator training experts and represented as rules. The rules were implemented by employing an object-oriented paradigm to facilitate knowledge management. In conclusion, it is stated that the feedback from new control room operation can be obtained at an early stage by validation tests and also continuously by comprehensive evaluation (with the help of automated tools) of operator performance in simulator training. The results of operator performance analysis can be utilized for improvement of system design as well as operator training. (author)
Planning Risk-Based SQC Schedules for Bracketed Operation of Continuous Production Analyzers.
Westgard, James O; Bayat, Hassan; Westgard, Sten A
To minimize patient risk, "bracketed" statistical quality control (SQC) is recommended in the new CLSI guidelines for SQC (C24-Ed4). Bracketed SQC requires that a QC event both precedes and follows (brackets) a group of patient samples. In optimizing a QC schedule, the frequency of QC or run size becomes an important planning consideration to maintain quality and also facilitate responsive reporting of results from continuous operation of high production analytic systems. Different plans for optimizing a bracketed SQC schedule were investigated on the basis of Parvin's model for patient risk and CLSI C24-Ed4's recommendations for establishing QC schedules. A Sigma-metric run size nomogram was used to evaluate different QC schedules for processes of different sigma performance. For high Sigma performance, an effective SQC approach is to employ a multistage QC procedure utilizing a "startup" design at the beginning of production and a "monitor" design periodically throughout production. Example QC schedules are illustrated for applications with measurement procedures having 6-σ, 5-σ, and 4-σ performance. Continuous production analyzers that demonstrate high σ performance can be effectively controlled with multistage SQC designs that employ a startup QC event followed by periodic monitoring or bracketing QC events. Such designs can be optimized to minimize the risk of harm to patients. © 2017 American Association for Clinical Chemistry.
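Editorial illustration of the Sigma metric underlying the run-size nomogram (the formula is standard in laboratory QC; the numbers are invented, not taken from the paper):

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        # Sigma = (allowable total error - |bias|) / CV, all expressed in percent.
        return (tea_pct - abs(bias_pct)) / cv_pct

    print(sigma_metric(10.0, 1.0, 1.5))   # invented example: (10 - 1) / 1.5 = 6.0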
The operation of the GANIL control system
Prome, M.; David, L.; Lecorche, E.
When the first GANIL beams were obtained, the control system was operating in an elementary way; the system behaved almost like a huge multiplexer. Now a large number of programs have been written; they make it possible to exploit the full power of the computers and they help the operators in starting, tuning and monitoring the accelerator. The paper gives a general description of these programs, which are executed on the central computer: it shows how the accelerator is controlled either directly or via dedicated microprocessors. Information is also given on the alarm system.
Study of advanced control of ethanol production through continuous fermentation
AbdelHamid Ajbar
This paper investigates the control of an experimentally validated model of bioethanol production. The analysis of the open-loop system revealed that the maximum productivity occurred at a periodic point. A robust control was needed to avoid instabilities that may occur when disturbances are injected into the process and may drive it toward or through the unstable points. A nonlinear model predictive controller (NLMPC) was used to control the process. Simulation tests were carried out using three controlled variables: the ethanol concentration, the productivity and the inverse of the productivity. In the third configuration, the controller was required to seek the maximum operating point through the optimization capability built into the NLMPC algorithm. Simulation tests presented overall satisfactory closed-loop performance for both nominal servo and regulatory control problems as well as in the presence of modeling errors. The third control configuration managed to steer the process toward the existing maximum productivity even when the process operation or its parameters changed. For comparison purposes, a standard PI controller was also designed for the same control objectives. The PI controller yielded satisfactory performance when the ethanol concentration was chosen as the controlled variable. When, on the other hand, the productivity was chosen as the controlled output, the PI controller did not work properly and needed to be adjusted using gain scheduling. In all cases, it was observed that the closed-loop response suffered from slow dynamics, and any attempt to speed up the feedback response via tuning may result in unstable behavior.
Operation and control of ITER plasmas
Features incorporated in the design of the International Thermonuclear Experimental Reactor (ITER) tokamak and its ancillary and plasma diagnostic systems that will facilitate operation and control of ignited and/or high-Q DT plasmas are presented. Control methods based upon straight-forward extrapolation of techniques employed in the present generation of tokamaks are found to be adequate and effective for DT plasma control with burn durations of ≥1000 s. Examples of simulations of key plasma control functions including magnetic configuration control and fusion burn (power) control are given. The prospects for the creation and control of steady-state plasmas sustained by non-inductive current drive are also discussed. (author)
Continuity of Operations (COOP) in the Executive Branch: Background and Issues for Congress
Petersen, R. E
... to continuity of operations (COOP) issues. COOP planning is a segment of federal government contingency planning that refers to the internal effort of an organization, such as a branch of government, department, or office, to assure...
Conversion of continuous-direct-current TIG welder to pulse-arc operation
Lien, D. R.
Electronics package converts a continuous-dc tungsten-inert gas welder for pulse-arc operation. Package allows presetting of the pulse rate, duty cycle, and current value, and enables welding of various alloys and thicknesses of materials.
Control system and method for a power delivery system having a continuously variable ratio transmission
Frank, Andrew A.
A control system and method for a power delivery system, such as in an automotive vehicle, having an engine coupled to a continuously variable ratio transmission (CVT). Totally independent control of engine and transmission enables the engine to precisely follow a desired operating characteristic, such as the ideal operating line for minimum fuel consumption. CVT ratio is controlled as a function of commanded power or torque and measured load, while engine fuel requirements (e.g., throttle position) are strictly a function of measured engine speed. Fuel requirements are therefore precisely adjusted in accordance with the ideal characteristic for any load placed on the engine.
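Editorial sketch of the split control structure described in the patent abstract (the ideal-operating-line table and the ratio-scheduling law are invented placeholders): throttle is scheduled purely from measured engine speed, while CVT ratio is scheduled from commanded power and measured load.

    import numpy as np

    # Invented ideal operating line: engine speed (rpm) -> throttle position (0..1).
    _RPM = np.array([1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
    _THROTTLE = np.array([0.15, 0.35, 0.55, 0.75, 0.95])

    def throttle_command(engine_rpm):
        # Per the abstract, fuel/throttle is strictly a function of measured engine speed.
        return float(np.interp(engine_rpm, _RPM, _THROTTLE))

    def cvt_ratio_command(commanded_power_kw, measured_load_kw,
                          ratio_min=0.5, ratio_max=2.5):
        # Placeholder scheduling law: the abstract only states that the ratio depends on
        # commanded power (or torque) and measured load, not the functional form.
        demand = commanded_power_kw / max(measured_load_kw, 1e-3)
        return float(np.clip(ratio_min + (ratio_max - ratio_min) * np.tanh(demand),
                             ratio_min, ratio_max))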
Performance assessment for continuing and future operations at Solid Waste Storage Area 6
This radiological performance assessment for the continued disposal operations at Solid Waste Storage Area 6 (SWSA 6) on the Oak Ridge Reservation (ORR) has been prepared to demonstrate compliance with the requirements of the US DOE. The analysis of SWSA 6 required the use of assumptions to supplement the available site data when the available data were incomplete for the purpose of analysis. Results indicate that SWSA 6 does not presently meet the performance objectives of DOE Order 5820.2A. Changes in operations and continued work on the performance assessment are expected to demonstrate compliance with the performance objectives for continuing operations at the Interim Waste Management Facility (IWMF). All other disposal operations in SWSA 6 are to be discontinued as of January 1, 1994. The disposal units at which disposal operations are discontinued will be subject to CERCLA remediation, which will result in acceptable protection of the public health and safety.
Stress, performance, and control room operations
Fontaine, C.W.
The notion of control room operator performance being detrimentally affected by stress has long been the focus of considerable conjecture. It is important to gain a better understanding of the validity of this concern for the development of effective severe-accident management approaches. This paper illustrates the undeniable negative impact of stress on a wide variety of tasks. A computer-controlled simulated work environment was designed in which both male and female operators were closely monitored during the course of the study for both stress level (using the excretion of the urine catecholamines epinephrine and norepinephrine as an index) and job performance. The experimental parameters employed by the study when coupled with the subsequent statistical analyses of the results allow one to make some rather striking comments with respect to how a given operator might respond to a situation that he or she perceives to be psychologically stressful (whether the stress be externally or internally generated). The findings of this study clearly indicated that stress does impact operator performance on tasks similar in nature to those conducted by control room operators and hence should be seriously considered in the development of severe-accident management strategies
Nuclear power plant control room operator control and monitoring tasks
Bovell, C.R.; Beck, M.G.; Carter, R.J.
Oak Ridge National Laboratory is conducting a research project the purpose of which is to develop the technical bases for regulatory review criteria for use in evaluating the safety implications of human factors associated with the use of artificial intelligence and expert systems, and with advanced instrumentation and control (I and C) systems in nuclear power plants (NPP). This report documents the results from Task 8 of that project. The primary objective of the task was to identify the scope and type of control and monitoring tasks now performed by control-room operators. Another purpose was to address the types of controls and safety systems needed to operate the nuclear plant. The final objective of Task 8 was to identify and categorize the type of information and displays/indicators required to monitor the performance of the control and safety systems. This report also discusses state-of-the-art controls and advanced display devices which will be available for use in control-room retrofits and in control rooms of future plants. The fundamental types of control and monitoring tasks currently conducted by operators can be divided into four classifications: function monitoring tasks, control manipulation tasks, fault diagnostic tasks, and administrative tasks. There are three general types of controls used in today's NPPs: switches, pushbuttons, and analog controllers. Plant I and C systems include components to achieve a number of safety-related functions: measuring critical plant parameters, controlling critical plant parameters within safety limits, and automatically actuating protective devices if safe limits are exceeded. The types of information monitored by the control-room operators consist of the following parameters: pressure, fluid flow and level, neutron flux, temperature, component status, water chemistry, electrical, and process and area radiation. The basic types of monitoring devices common to nearly all NPP control rooms include: analog meters
Initial operation of NSTX with plasma control
Gates, D.; Bell, M.; Ferron, J.; Kaye, S.; Menard, J.; Mueller, D.; Neumeyer, C.; Sabbagh, S.
First plasma, with a maximum current of 300kA, was achieved on NSTX in February 1999. These results were obtained using preprogrammed coil currents. The first controlled plasmas on NSTX were made starting in August 1999 with the full 1MA plasma current achieved in December 1999. The controlled quantities were plasma position (R, Z) and current (Ip). Variations in the plasma shape are achieved by adding preprogrammed currents to those determined by the control parameters. The control system is fully digital, with plasma position and current control, data acquisition, and power supply control all occurring in the same four-processor real time computer. The system uses the PCS (Plasma Control Software) system designed at General Atomics. Modular control algorithms, specific to NSTX, were written and incorporated into the PCS. The application algorithms do the actual control calculations, with the PCS handling data passing. The control system, including planned upgrades, will be described, along with results of the initial controlled plasma operations. Analysis of the performance of the control system will also be presented
Operator approach to linear control systems
Cheremensky, A
Within the framework of the optimization problem for linear control systems with quadratic performance index (LQP), the operator approach allows the construction of a systems theory including a number of particular infinite-dimensional optimization problems with hardly visible concreteness. This approach yields interesting interpretations of these problems and more effective feedback design methods. This book is unique in its emphasis on developing methods for solving a sufficiently general LQP. Although this is complex material, the theory developed here is built on transparent and relatively simple principles, and readers with less experience in the field of operator theory will find enough material to give them a good overview of the current state of LQP theory and its applications. Audience: Graduate students and researchers in the fields of mathematical systems theory, operator theory, cybernetics, and control systems.
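As a concrete finite-dimensional instance of the LQ problem treated in operator form above, the sketch below solves a standard continuous-time LQR via the algebraic Riccati equation; the system matrices are illustrative, not taken from the book.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant: x1' = x2, x2' = u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting in the quadratic performance index
R = np.array([[0.1]])      # control weighting

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 and form the optimal feedback u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print("Riccati solution P:\n", P)
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```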
Control procedure for well drilling operations
Bourdon, J C
A control procedure for rotary drilling operations is proposed. It uses the drill-off test. The drill-off test makes it possible to determine the variation of the rock drilling speed as a function of the weight applied on the top of the pipe. From this, a rock drill wear parameter can be deduced. The method makes it possible to prevent a rupture and its serious economic consequences.
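A minimal sketch of how drill-off test data might be reduced: regress penetration rate against weight on bit and track the slope as a drillability figure whose decline can flag bit wear. The data and names are synthetic and illustrative, not from the paper.

```python
import numpy as np

# Synthetic drill-off test records: weight on bit (tonnes) and the penetration
# rate (m/h) observed at each weight as the string is allowed to drill off.
wob      = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
rop_new  = np.array([2.1, 3.2, 4.0, 5.1, 6.0, 7.2])   # sharp bit
rop_worn = np.array([1.4, 2.0, 2.6, 3.1, 3.8, 4.3])   # same bit, later in the run

def drillability(wob, rop):
    """Slope of ROP vs WOB from a least-squares line fit (m/h per tonne)."""
    slope, _intercept = np.polyfit(wob, rop, 1)
    return slope

k_new, k_worn = drillability(wob, rop_new), drillability(wob, rop_worn)
wear_index = 1.0 - k_worn / k_new   # crude wear parameter: relative loss of drillability
print(f"drillability new={k_new:.2f}, worn={k_worn:.2f}, wear index={wear_index:.2f}")
```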
Quantum circuits cannot control unknown operations
Araújo, Mateus; Feix, Adrien; Costa, Fabio; Brukner, Časlav
One of the essential building blocks of classical computer programs is the 'if' clause, which executes a subroutine depending on the value of a control variable. Similarly, several quantum algorithms rely on applying a unitary operation conditioned on the state of a control system. Here we show that this control cannot be performed by a quantum circuit if the unitary is completely unknown. The task remains impossible even if we allow the control to be done modulo a global phase. However, this no-go theorem does not prevent implementing quantum control of unknown unitaries in practice, as any physical implementation of an unknown unitary provides additional information that makes the control possible. We then argue that one should extend the quantum circuit formalism to capture this possibility in a straightforward way. This is done by allowing unknown unitaries to be applied to subspaces and not only to subsystems. (paper)
Advanced Control Test Operation (ACTO) facility
Ball, S.J.
The Advanced Control Test Operation (ACTO) project, sponsored by the US Department of Energy (DOE), is being developed to enable the latest modern technology, automation, and advanced control methods to be incorporated into nuclear power plants. The facility is proposed as a national multi-user center for advanced control development and testing to be completed in 1991. The facility will support a wide variety of reactor concepts, and will be used by researchers from Oak Ridge National Laboratory (ORNL), plus scientists and engineers from industry, other national laboratories, universities, and utilities. ACTO will also include telecommunication facilities for remote users
Experimental Bifurcation Analysis Using Control-Based Continuation
Bureau, Emil; Starke, Jens
The focus of this thesis is developing and implementing techniques for performing experimental bifurcation analysis on nonlinear mechanical systems. The research centers around the newly developed control-based continuation method, which allows to systematically track branches of stable and unstable equilibria under variation of parameters. As a test case we demonstrate that it is possible to track the complete frequency response, including the unstable branches, for a harmonically forced impact oscillator with hardening spring nonlinearity, controlled by electromagnetic actuators. To characterize the resulting behavior, we propose and test three different methods for assessing stability of equilibrium states during experimental continuation. We show that it is possible to determine the stability without allowing unbounded divergence, and that it is under certain circumstances possible to quantify...
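A minimal numerical sketch of the control-based continuation idea (not the thesis' experimental setup): a stabilizing proportional feedback makes an otherwise unstable equilibrium reachable, and the reference is iterated until the feedback becomes non-invasive (zero at steady state), so the located point is an equilibrium of the uncontrolled system. The toy model, gain, and solver settings are illustrative.

```python
MU = 1.0   # parameter of the toy system x' = MU*x - x**3 (x = 0 is open-loop unstable)
K  = 5.0   # proportional feedback gain, large enough to stabilize x = 0

def settle(x_ref, x0=0.4, dt=1e-3, steps=20000):
    """Integrate x' = MU*x - x**3 + K*(x_ref - x) to steady state (forward Euler)."""
    x = x0
    for _ in range(steps):
        x += dt * (MU * x - x**3 + K * (x_ref - x))
    return x

def cbc_equilibrium(xr0=-0.3, xr1=0.4, iters=25, tol=1e-10):
    """Secant iteration on the non-invasiveness residual g(x_ref) = x_ss(x_ref) - x_ref."""
    g0, g1 = settle(xr0) - xr0, settle(xr1) - xr1
    for _ in range(iters):
        if abs(g1 - g0) < 1e-15:
            break
        xr2 = xr1 - g1 * (xr1 - xr0) / (g1 - g0)
        xr0, g0 = xr1, g1
        xr1, g1 = xr2, settle(xr2) - xr2
        if abs(g1) < tol:
            break
    return xr1

x_star = cbc_equilibrium()
print("located equilibrium:", x_star)                              # ~0: the unstable branch
print("residual feedback there:", K * (settle(x_star) - x_star))   # ~0, i.e. non-invasive
```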
Continuous fractional-order Zero Phase Error Tracking Control.
Liu, Lu; Tian, Siyuan; Xue, Dingyu; Zhang, Tao; Chen, YangQuan
A continuous-time fractional-order feedforward control algorithm for tracking desired time-varying input signals is proposed in this paper. The presented controller cancels the phase shift caused by the zeros and poles of the controlled closed-loop fractional-order system, so it is called the Fractional-Order Zero Phase Error Tracking Controller (FZPETC). The controlled systems are divided into two categories, i.e. with and without non-cancellable (non-minimum-phase) zeros, which lie in the unstable region or on the stability boundary. Each kind of system has a targeted FZPETC design strategy. The improved tracking performance has been evaluated successfully by applying the proposed controller to three different kinds of fractional-order controlled systems. Besides, a modified quasi-perfect tracking scheme is presented for those systems which may not have future tracking trajectory information available or which have problems with high-frequency disturbance rejection if the perfect tracking algorithm is applied. A simulation comparison and a hardware-in-the-loop thermal Peltier platform are shown to validate the practicality of the proposed quasi-perfect control algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
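A minimal sketch of the integer-order zero-phase idea that FZPETC generalizes (a Tomizuka-style ZPETC, not the fractional-order controller of the paper): the feedforward inverts the cancellable closed-loop dynamics and compensates the phase of the non-minimum-phase zero using future reference samples. The model coefficients are illustrative.

```python
import numpy as np

# Illustrative closed-loop discrete model:
#   y(k) = -a1*y(k-1) - a2*y(k-2) + b0*u(k-1) + b1*u(k-2)
# with a non-minimum-phase zero at z = -b1/b0 = -1.5 (no stable causal inverse).
a1, a2 = -1.4, 0.53        # stable poles at 0.7 +/- 0.2j
b0, b1 = 1.0, 1.5

def zpetc_feedforward(r):
    """u(k) = A(q^-1) (b0 + b1*q) r(k+1) / (b0 + b1)^2: zero phase error, unit DC gain."""
    n = len(r)
    rp = np.concatenate([[r[0]], r, [r[-1], r[-1]]])   # pad past/future reference samples
    u, s = np.zeros(n), (b0 + b1) ** 2
    for k in range(n):
        i = k + 1                                      # index of r(k) inside rp
        br  = b0 * rp[i + 1] + b1 * rp[i + 2]          # (b0 + b1*q) r(k+1)
        br1 = b0 * rp[i]     + b1 * rp[i + 1]
        br2 = b0 * rp[i - 1] + b1 * rp[i]
        u[k] = (br + a1 * br1 + a2 * br2) / s
    return u

def simulate(u):
    """Run the closed-loop difference equation driven by the feedforward input."""
    y = np.zeros(len(u))
    for k in range(len(u)):
        y[k] = (-a1 * (y[k-1] if k >= 1 else 0.0) - a2 * (y[k-2] if k >= 2 else 0.0)
                + b0 * (u[k-1] if k >= 1 else 0.0) + b1 * (u[k-2] if k >= 2 else 0.0))
    return y

if __name__ == "__main__":
    k = np.arange(400)
    r = np.sin(2 * np.pi * k / 80.0)          # slow sinusoidal reference
    y = simulate(zpetc_feedforward(r))
    print("max |y - r| after transient:", np.abs(y[50:] - r[50:]).max())
```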
Continuous control of spin polarization using a magnetic field
Gifford, J. A.; Zhao, G. J.; Li, B. C.; Tracy, Brian D.; Zhang, J.; Kim, D. R.; Smith, David J.; Chen, T. Y.
The giant magnetoresistance (GMR) of a point contact between a Co/Cu multilayer and a superconductor tip varies for different bias voltage. Direct measurement of spin polarization by Andreev reflection spectroscopy reveals that the GMR change is due to a change in spin polarization. This work demonstrates that the GMR structure can be utilized as a spin source and that the spin polarization can be continuously controlled by using an external magnetic field.
Continuous quality control of mined hard and soft coals
Fertl, W.H.; Gant, P.L.
A method is provided for determining the shale content of mined coal by monitoring the thorium content of the coal. Thorium content and ash content are shown to be related whereby a direct reading of the thorium will be indicative of the shale content of the coal and the ash content of the coal. The method utilizes the natural radiation of thorium to provide the continuous or selective control of mined coals
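A minimal sketch of the calibration step such a gauge relies on: fit a linear relation between measured thorium count rate and laboratory ash content, then use it to estimate ash (and hence shale) in the product stream. The data, threshold, and names are illustrative, not from the patent.

```python
import numpy as np

# Illustrative calibration samples: natural-thorium gamma count rate (cps)
# versus laboratory-determined ash content (wt%) of the same coal samples.
th_counts = np.array([12.0, 18.0, 25.0, 33.0, 41.0, 52.0])
ash_pct   = np.array([ 6.0,  9.5, 13.0, 17.5, 22.0, 28.0])

# Linear calibration ash = a*counts + b (least squares).
a, b = np.polyfit(th_counts, ash_pct, 1)

def estimate_ash(counts: float) -> float:
    """Estimate ash content (wt%) from an on-line thorium count rate."""
    return a * counts + b

# Example of continuous monitoring with an illustrative reject threshold.
ASH_LIMIT = 20.0
for c in (15.0, 30.0, 48.0):
    ash = estimate_ash(c)
    print(f"counts={c:5.1f} cps -> ash={ash:4.1f}% ->", "reject" if ash > ASH_LIMIT else "accept")
```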
Trajectory control with continuous thrust applied to a rendezvous maneuver
Santos, W G; Rocco, E M
A rendezvous mission can be divided into the following phases: launch, phasing, far range rendezvous, close range rendezvous and mating (docking or berthing). This paper presents a close range rendezvous with a closed-loop controlled straight-line trajectory. The approach is executed along the V-bar axis. A PID controller and continuous thrust are used to eliminate the residual errors in the trajectory. A comparative study of the linear and nonlinear dynamics is performed, and the results show that the linear equations become inaccurate as the chaser moves away from the target
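A minimal sketch of a PID-controlled V-bar closing maneuver on a simplified double-integrator relative-motion model (the paper's full relative dynamics are not reproduced); mass, gains, thrust limit, and initial conditions are illustrative.

```python
import numpy as np

DT = 0.5             # control period (s)
MASS = 150.0         # chaser mass (kg), illustrative
U_MAX = 5.0          # thrust limit (N), illustrative
KP, KI, KD = 0.05, 0.001, 4.0   # illustrative PID gains

def run(x0=-100.0, v0=0.0, t_end=1500.0):
    """Drive the along-track (V-bar) relative position x to 0 with saturated PID thrust."""
    x, v, integ, prev_err = x0, v0, 0.0, None
    for _ in range(int(t_end / DT)):
        err = 0.0 - x
        deriv = 0.0 if prev_err is None else (err - prev_err) / DT
        u_unsat = KP * err + KI * integ + KD * deriv
        u = float(np.clip(u_unsat, -U_MAX, U_MAX))
        if u == u_unsat:                 # simple anti-windup: integrate only when unsaturated
            integ += err * DT
        prev_err = err
        # simplified relative dynamics: pure double integrator along V-bar
        a = u / MASS
        x += v * DT + 0.5 * a * DT**2
        v += a * DT
    return x, v

x_f, v_f = run()
print(f"final position error {x_f:.4f} m, final residual velocity {v_f:.5f} m/s")
```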
Operator interface to the ORIC control system
Ludemann, C.A.; Casstevens, B.J.
The Oak Ridge Isochronous Cyclotron (ORIC) was built in the early 1960s with a hard-wired manual control system. Presently, it serves as a variable-energy heavy-ion cyclotron with an internal ion source, or as an energy booster for the new 25 MV tandem electrostatic accelerator of the Holifield Heavy Ion Facility. One factor which has kept the cyclotron the productive research tool it is today is the gradual transfer of its control functions to a computer-based system beginning in the 1970s. This particular placement of a computer between an accelerator and its operators afforded some unique challenges and opportunities that would not be encountered today. Historically, the transformation began at a time when computers were just beginning to gain acceptance as reliable operational tools. Veteran operators with tens of years of accelerator experience justifiably expressed skepticism that this improvement would aid them, particularly if they had to re-learn how to operate the machine. The confidence of the operators was gained when they realized that one of the primary principles of ergonomics was being upheld. The computer software and hardware was being designed to serve them and not the computer
Preparation status for continuous operation of Kori unit 1 NPP in Korea
Choi, C.H. E-mail: [email protected]
Kori unit 1 Nuclear Power Plant is the first commercially operated plant in Korea. In Korea, life extension of NPPs beyond the design lifetime has reached the stage of practical application. Regarding the preparation status for continuous operation of Kori unit 1, many studies have demonstrated that life extension beyond the design lifetime is possible in terms of technology. This paper introduces and shares the continuous operation preparation status and schedule for the Kori unit 1 license renewal process, which allows additional 10-year terms of operation beyond the 30-year design life. (author)
Control of water chemistry in operating reactors
Riess, R. [Siemens AG Unternehmensbereich KWU, Erlangen (Germany)]
Water chemistry plays a major role in fuel cladding corrosion and hydriding. Although a full understanding of all mechanisms involved in cladding corrosion does not exist, controlling the water chemistry has achieved quite some progress in recent years. As an example, in PWRs the activity transport is controlled by operating the coolant under higher pH-values (i.e. the "modified" B/Li-Chemistry). On the other hand, the lithium concentration is limited to a maximum value of 2 ppm in order to avoid an acceleration of the fuel cladding corrosion. In BWR plants, for example, the industry has learned on how to limit the copper concentration in the feedwater in order to limit CILC (Copper Induced Localized Corrosion) on the fuel cladding. However, economic pressures are leading to more rigorous operating conditions in power reactors. Fuel burnups are to be increased, higher efficiencies are to be achieved, by running at higher temperatures, plant lifetimes are to be extended. In summary, this paper will describe the state of the art in controlling water chemistry in operating reactors and it will give an outlook on potential problems that will arise when going to more severe operating conditions. (author). 3 figs, 6 tabs
Effect of Flow Rate Controller on Liquid Steel Flow in Continuous Casting Mold using Numerical Modeling
Gursoy, Kadir Ali; Yavuz, Mehmet Metin
In continuous casting operation of steel, the flow through tundish to the mold can be controlled by different flow rate control systems including stopper rod and slide-gate. Ladle changes in continuous casting machines result in liquid steel level changes in tundishes. During this transient event of production, the flow rate controller opening is increased to reduce the pressure drop across the opening which helps to keep the mass flow rate at the desired level for the reduced liquid steel level in tundish. In the present study, computational fluid dynamic (CFD) models are developed to investigate the effect of flow rate controller on mold flow structure, and particularly to understand the effect of flow controller opening on meniscus flow. First, a detailed validation of the CFD models is conducted using available experimental data and the performances of different turbulence models are compared. Then, the constant throughput casting operations for different flow rate controller openings are simulated to quantify the opening effect on meniscus region. The results indicate that the meniscus velocities are significantly affected by the flow rate controller and its opening level. The steady state operations, specified as constant throughput casting, do not provide the same mold flow if the controller opening is altered. Thus, for quality and castability purposes, adjusting the flow controller opening to obtain the fixed mold flow structure is proposed. Supported by Middle East Technical University (METU) BAP (Scientific Research Projects) Coordination.
High performance continuously variable transmission control through robust-control-relevant model validation
Oomen, T.A.E.; Meulen, van der S.H.
Optimal operation of continuously variable transmissions (CVTs) is essential to meet tightening emission and fuel consumption requirements. This is achieved by accurately tracking a prescribed transmission ratio reference and simultaneously optimizing the internal efficiency of the CVT. To reduce
A nuclear on-line sensor for continuous control of vanadium content in oil pipelines
Rizk, R.A.M.
Trace amounts of vanadium in crude oil and in heavy distillate fuels are very harmful due to their corrosive action. Thus the necessity arises for continuous control of the vanadium content in oil pipelines. Moreover, the development of a nuclear on-line sensor that can continuously analyze the vanadium content in oil pipelines may lead to a better control of processing operations. In this paper a feasibility study for on-line analysis of vanadium in crude oil by means of neutron activation analysis is presented. (author)
Microorganism selection and biosurfactant production in a continuously and periodically operated bioslurry reactor.
Cassidy, D P; Hudak, A J
A continuous-flow reactor (CSTR) and a soil slurry-sequencing batch reactor (SS-SBR) were maintained in 8l vessels for 180 days to treat a soil contaminated with diesel fuel (DF). Concentrations of Candida tropicalis, Brevibacterium casei, Flavobacterium aquatile, Pseudomonas aeruginosa, and Pseudomonas fluorescens were determined using fatty acid methyl ester (FAME) analysis. DF removal (biological and volatile) and biosurfactant concentrations were measured. The SS-SBR encouraged the growth of biosurfactant-producing species relative to the CSTR. Counts of biosurfactant-producing species (C. tropicalis, P. aeruginosa, P. fluorescens) relative to total microbial counts were 88% in the SS-SBR and 23% in the CSTR. Biosurfactants were produced in the SS-SBR to levels of nearly 70 times the critical micelle concentration (CMC) early in the cycle, but were completely degraded by the end of each cycle. No biosurfactant production was observed in the CSTR. DF biodegradation rates were over 40% greater and DF stripping was over five times lower in the SS-SBR than the CSTR. However, considerable foaming occurred in the SS-SBR. Reversing the mode of operation in the reactors on day 80 caused a complete reversal in microbial consortia and reactor performance by day 120. These results show that bioslurry reactor operation can be manipulated to control overall reactor performance.
Developmental continuity in reward-related enhancement of cognitive control.
Strang, Nicole M; Pollak, Seth D
Adolescents engage in more risky behavior than children or adults. The most prominent hypothesis for this phenomenon is that brain systems governing reward sensitivity and brain systems governing self-regulation mature at different rates. Those systems governing reward sensitivity mature in advance of those governing self-control. This hypothesis has substantial empirical support, however, the evidence supporting this theory has been exclusively derived from contexts where self-control systems are required to regulate reward sensitivity in order to promote adaptive behavior. In adults, reward promotes a shift to a proactive control strategy and better cognitive control performance. It is unclear whether children and adolescents will respond to reward in the same way. Using fMRI methodology, we explored whether children and adolescents would demonstrate a shift to proactive control in the context of reward. We tested 22 children, 20 adolescents, and 23 adults. In contrast to our hypothesis, children, adolescents, and adults all demonstrated a shift to proactive cognitive control in the context of reward. In light of the results, current neurobiological theories of adolescent behavior need to be refined to reflect that in certain contexts there is continuity in the manner reward and cognitive control systems interact across development. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
Intelligent systems supporting the control room operators
Berger, E.
The operational experience obtained with the various applications of the systems discussed in this paper shows that more consequent use of the systems will make detection and management of disturbances still more efficient and faster. This holds true both for a low level of process automation and for power plants with a high level of automation. As for conventional power plants, the trend clearly is towards higher degrees of automation and consequent application of supporting systems. Thus, higher availability and rapid failure management are achieved, at low effects on normal operation. These systems are monitoring and process control systems, expert systems, and systems for optimal use of the equipment, or systems for post-incident analyses and computer-assisted on-shift protocols, or operating manuals. (orig./CB)
[Controlling systems for operating room managers].
Schüpfer, G; Bauer, M; Scherzinger, B; Schleppers, A
Management means developing, shaping and controlling of complex, productive and social systems. Therefore, operating room managers also need to develop basic skills in financial and managerial accounting as a basis for operative and strategic controlling which is an essential part of their work. A good measurement system should include financial and strategic concepts for market position, innovation performance, productivity, attractiveness, liquidity/cash flow and profitability. Since hospitals need to implement a strategy to reach their business objectives, the performance measurement system has to be individually adapted to the strategy of the hospital. In this respect the navigation system developed by Gälweiler is compared to the "balanced score card" system of Kaplan and Norton.
Visual operations control in administrative environments
Carson, M.L.; Levine, L.O.
When asked what comes to mind when they think of "controlling work" in the office, people may respond with "overbearing boss," "no autonomy," or "Theory X management." The idea of controlling work in white collar or administrative environments can have a negative connotation. However, office life is often chaotic and miserable precisely because the work processes are out of control, and managers must spend their time looking over people's shoulders and fighting fires. While management styles and structures vary, the need for control of work processes does not. Workers in many environments are being reorganized into self-managed work teams. These teams are expected to manage their own work through increased autonomy and empowerment. However, even empowered work teams must manage their work processes because of process variation. The amount of incoming jobs varies with both expected (seasonal) and unexpected demand. The mixture of job types varies over time, changing the need for certain skills or knowledge. And illness and turnover affect the availability of workers with needed skills and knowledge. Clearly, there is still a need to control work, whether the authority for controlling work is vested in one person or many. Visual control concepts provide simple, inexpensive, and flexible mechanisms for managing processes in work teams and continuous improvement administrative environments.
Skill retention and control room operator competency
Stammers, R.B.
The problem of skill retention in relation to the competency of control room operators is addressed. Although there are a number of related reviews of the literature, this particular topic has not been examined in detail before. The findings of these reviews are summarised and their implications for the area discussed. The limited research on skill retention in connection with process control is also reviewed. Some topics from cognitive and instructional psychology are also raised. In particular overlearning is tackled and the potential value of learning strategies is assessed. In conclusion the important topic of measurement of performance is introduced and a number of potentially valuable training approaches are outlined. (author)
Space Telescope Control System science user operations
Dougherty, H. J.; Rossini, R.; Simcox, D.; Bennett, N.
The Space Telescope science users will have a flexible and efficient means of accessing the capabilities provided by the ST Pointing Control System, particularly with respect to managing the overall acquisition and pointing functions. To permit user control of these system functions - such as vehicle scanning, tracking, offset pointing, high gain antenna pointing, solar array pointing and momentum management - a set of special instructions called 'constructs' is used in conjunction with command data packets. This paper discusses the user-vehicle interface and introduces typical operational scenarios.
Bisimulations for Delimited-Control Operators
Biernacki, Dariusz; Lenglet, Sergueï; Polesiuk, Piotr
We propose a survey of the behavioral theory of an untyped lambda-calculus extended with the delimited-control operators shift and reset. We define a contextual equivalence for this calculus, that we then aim to characterize with coinductively defined relations, called bisimilarities. We study different styles of bisimilarities (namely applicative, normal form, and environmental), and we give several examples to illustrate their respective strengths and weaknesses. We also discuss how to exte...
Integrated continuous bioprocessing: Economic, operational, and environmental feasibility for clinical and commercial antibody manufacture.
Pollock, James; Coffman, Jon; Ho, Sa V; Farid, Suzanne S
This paper presents a systems approach to evaluating the potential of integrated continuous bioprocessing for monoclonal antibody (mAb) manufacture across a product's lifecycle from preclinical to commercial manufacture. The economic, operational, and environmental feasibility of alternative continuous manufacturing strategies were evaluated holistically using a prototype UCL decisional tool that integrated process economics, discrete-event simulation, environmental impact analysis, operational risk analysis, and multiattribute decision-making. The case study focused on comparing whole bioprocesses that used either batch, continuous or a hybrid combination of batch and continuous technologies for cell culture, capture chromatography, and polishing chromatography steps. The cost of goods per gram (COG/g), E-factor, and operational risk scores of each strategy were established across a matrix of scenarios with differing combinations of clinical development phase and company portfolio size. The tool outputs predict that the optimal strategy for early phase production and small/medium-sized companies is the integrated continuous strategy (alternating tangential flow filtration (ATF) perfusion, continuous capture, continuous polishing). However, the top ranking strategy changes for commercial production and companies with large portfolios to the hybrid strategy with fed-batch culture, continuous capture and batch polishing from a COG/g perspective. The multiattribute decision-making analysis highlighted that if the operational feasibility was considered more important than the economic benefits, the hybrid strategy would be preferred for all company scales. Further considerations outside the scope of this work include the process development costs required to adopt continuous processing. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 33:854-866, 2017. © 2017 The
Controlled synthesis of poly(3-hexylthiophene) in continuous flow
Helga Seyler
There is an increasing demand for organic semiconducting materials with the emergence of organic electronic devices. In particular, large-area devices such as organic thin-film photovoltaics will require significant quantities of materials for device optimization, lifetime testing and commercialization. Sourcing large quantities of materials required for the optimization of large area devices is costly and often impossible to achieve. Continuous-flow synthesis enables straight-forward scale-up of materials compared to conventional batch reactions. In this study, poly(3-hexylthiophene) (P3HT) was synthesized in a bench-top continuous-flow reactor. Precise control of the molecular weight was demonstrated for the first time in flow for conjugated polymers by accurate addition of catalyst to the monomer solution. The P3HT samples synthesized in flow showed comparable performance to commercial P3HT samples in bulk heterojunction solar cell devices.
Experiment on continuous operation of the Brazilian IEA-R1 research reactor
Freitas Pintaud, M. de
In order to increase radioisotope production in the IEA-R1 research reactor at IPEN/CNEN-SP, a change in its operation regime from 8 hours per day, 5 days per week, to a continuous 48 hours per week has been proposed. The necessary reactor parameters for this new operation regime were obtained through an experiment in which the reactor was operated in the new regime for the first time. This work presents the principal results from this experiment: xenon reactivity, new shutdown margins, and reactivity loss due to fuel burnup in the new operation regime. (author)
21 CFR 111.127 - What quality control operations are required for packaging and labeling operations?
... 21 Food and Drugs ... and Process Control System: Requirements for Quality Control § 111.127 What quality control operations are required for packaging and labeling operations? Quality control operations for packaging and...
Continuity of operations planning in college athletic programs: The case for incorporating Federal Emergency Management Guidelines.
Hall, Stacey A; Allen, Brandon L; Phillips, Dennis
College athletic departments have a responsibility to provide a safe environment for student-athletes; however, most colleges do not have a crisis management plan that includes procedures for displaced student-athletes or alternate facilities to perform athletic events. Continuity of operations planning ensures athletic programs are equipped to maintain essential functions during, or shortly after, a disruption of operations due to possible hazards. Previous studies have identified a lack of emergency preparedness and continuity planning in college athletic departments. The purpose of this article is to illustrate in detail one approach to disaster planning for college athletic departments, namely the Federal Emergency Management Agency (FEMA) continuity of operations framework. By adhering to FEMA guidelines and promoting a best practices model, athletic programs can effectively plan to address potential hazards, as well as protect the organization's brand, image, and financial sustainability after a crisis event.
CARMENES instrument control system and operational scheduler
Garcia-Piquer, Alvaro; Guàrdia, Josep; Colomé, Josep; Ribas, Ignasi; Gesa, Lluis; Morales, Juan Carlos; Pérez-Calpena, Ana; Seifert, Walter; Quirrenbach, Andreas; Amado, Pedro J.; Caballero, José A.; Reiners, Ansgar
The main goal of the CARMENES instrument is to perform high-accuracy measurements of stellar radial velocities (1m/s) with long-term stability. CARMENES will be installed in 2015 at the 3.5 m telescope in the Calar Alto Observatory (Spain) and it will be equipped with two spectrographs covering from the visible to the near-infrared. It will make use of its near-IR capabilities to observe late-type stars, whose peak of the spectral energy distribution falls in the relevant wavelength interval. The technology needed to develop this instrument represents a challenge at all levels. We present two software packages that play a key role in the control layer for an efficient operation of the instrument: the Instrument Control System (ICS) and the Operational Scheduler. The coordination and management of CARMENES is handled by the ICS, which is responsible for carrying out the operations of the different subsystems providing a tool to operate the instrument in an integrated manner from low to high user interaction level. The ICS interacts with the following subsystems: the near-IR and visible channels, composed by the detectors and exposure meters; the calibration units; the environment sensors; the front-end electronics; the acquisition and guiding module; the interfaces with telescope and dome; and, finally, the software subsystems for operational scheduling of tasks, data processing, and data archiving. We describe the ICS software design, which implements the CARMENES operational design and is planned to be integrated in the instrument by the end of 2014. The CARMENES operational scheduler is the second key element in the control layer described in this contribution. It is the main actor in the translation of the survey strategy into a detailed schedule for the achievement of the optimization goals. The scheduler is based on Artificial Intelligence techniques and computes the survey planning by combining the static constraints that are known a priori (i.e., target
The Quality Control of the LHC Continuous Cryostat Interconnections
Bertinelli, F; Bozzini, D; Cruikshank, P; Fessia, P; Grimaud, A; Kotarba, A; Maan, W; Olek, S; Poncet, A; Russenschuck, Stephan; Savary, F; Sulek, Z; Tock, J P; Tommasini, D; Vaudaux, L; Williams, L
The interconnections between the Large Hadron Collider (LHC) magnets have required some 40 000 TIG welded joints and 65 000 electrical splices. At the level of single joints and splices, non-destructive techniques find limited application: quality control is based on the qualification of the process and of operators, on the recording of production parameters and on production samples. Visual inspection and process audits were the main techniques used. At the level of an extended chain of joints and splices - from a 53.5 m half-cell to a complete 2.7 km arc sector - quality control is based on vacuum leak tests, electrical tests and RF microwave reflectometry that progressively validated the work performed. Subsequent pressure tests, cryogenic circuits flushing with high pressure helium and cool-downs revealed a few unseen or new defects. This paper presents an overview of the quality control techniques used, seeking lessons applicable to similar large, complex projects.
Self operation type reactor control device
Saito, Makoto; Gunji, Minoru.
A boiling-liquefaction chamber containing transport materials with a boiling point somewhat higher than the usual reactor operation temperature, and liquid neutron absorbers with a boiling point sufficiently higher than that of the transport materials, is disposed near the coolant exit of a fuel assembly and connected with a tubular chamber in the reactor core by a moving pipe at the bottom. Since the transport materials in the boiling-liquefaction chamber boil and expand on heating, the liquid neutron absorbers are introduced through the moving pipe into the cylindrical chamber to control the nuclear reactions. When the temperature is lowered by this control, the transport materials liquefy and contract in volume, and the liquid neutron absorbers in the cylindrical chamber return through the moving pipe into the boiling-liquefaction chamber, making the nuclear reaction vigorous again. Thus, self-operating power control and power shutdown are enabled without control rods and without requiring external control, preventing scram failure or misoperation. (N.H.)
Developments in operator assistance techniques for nuclear power plant control and operation
Poujol, A.; Papin, B.; Beltranda, G.; Soldermann, R.
This paper describes an approach which has been developed in order to improve nuclear power plant control and monitoring in normal and abnormal situations. These developments take full advantage of the trend towards the computerization of control rooms in industrial continuous processes. This research program consists of a thorough exploration of different information processing techniques, ranging from the rather simple visual synthesis of information on graphic displays to sophisticated Artificial Intelligence (AI) techniques. These techniques are applied to solving man-machine interface problems in the different domains of plant operation
Design and operation of a continuous integrated monoclonal antibody production process.
Steinebach, Fabian; Ulmer, Nicole; Wolf, Moritz; Decker, Lara; Schneider, Veronika; Wälchli, Ruben; Karst, Daniel; Souquet, Jonathan; Morbidelli, Massimo
The realization of an end-to-end integrated continuous lab-scale process for monoclonal antibody manufacturing is described. For this, a continuous cultivation with filter-based cell-retention, a continuous two column capture process, a virus inactivation step, a semi-continuous polishing step (twin-column MCSGP), and a batch-wise flow-through polishing step were integrated and operated together. In each unit, the implementation of internal recycle loops allows to improve the performance: (a) in the bioreactor, to simultaneously increase the cell density and volumetric productivity, (b) in the capture process, to achieve improved capacity utilization at high productivity and yield, and (c) in the MCSGP process, to overcome the purity-yield trade-off of classical batch-wise bind-elute polishing steps. Furthermore, the design principles, which allow the direct connection of these steps, some at steady state and some at cyclic steady state, as well as straight-through processing, are discussed. The setup was operated for the continuous production of a commercial monoclonal antibody, resulting in stable operation and uniform product quality over the 17 cycles of the end-to-end integration. The steady-state operation was fully characterized by analyzing at the outlet of each unit at steady state the product titer as well as the process (HCP, DNA, leached Protein A) and product (aggregates, fragments) related impurities. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1303-1313, 2017. © 2017 American Institute of Chemical Engineers.
Continuous spins in 2D gravity: Chiral vertex operators and local fields
Gervais, Jean-Loup; Schnittger, Jens
We construct the exponentials of the Liouville field with continuous powers within the operator approach. Their chiral decomposition is realized using the explicit Coulomb-gas operators we introduced earlier. From the quantum group viewpoint, they are related to semi-infinite highest- or lowest-weight representations with continuous spins. The Liouville field itself is defined, and the canonical commutation relations are verified, as well as the validity of the quantum Liouville field equations. In a second part, both screening charges are considered. The braiding of the chiral components is derived and shown to agree with an ansatz of a parallel paper of Gervais and Roussel. ((orig.))
Chapter 8: Plasma operation and control
ITER Physics Expert Group on Disruptions, Plasma Control, and MHD; ITER Physics Expert Group on Energetic Particles, Heating and Current Drive; ITER Physics Expert Group on Diagnostics; ITER Physics Basis Editors
Wall conditioning of fusion devices involves removal of desorbable hydrogen isotopes and impurities from interior device surfaces to permit reliable plasma operation. Techniques used in present devices include baking, metal film gettering, deposition of thin films of low-Z material, pulse discharge cleaning, glow discharge cleaning, radio frequency discharge cleaning, and in situ limiter and divertor pumping. Although wall conditioning techniques have become increasingly sophisticated, a reactor scale facility will involve significant new challenges, including the development of techniques applicable in the presence of a magnetic field and of methods for efficient removal of tritium incorporated into co-deposited layers on plasma facing components and their support structures. The current status of various approaches is reviewed, and the implications for reactor scale devices are summarized. Creation and magnetic control of shaped and vertically unstable elongated plasmas have been mastered in many present tokamaks. The physics of equilibrium control for reactor scale plasmas will rely on the same principles, but will face additional challenges, exemplified by the ITER/FDR design. The absolute positioning of outermost flux surface and divertor strike points will have to be precise and reliable in view of the high heat fluxes at the separatrix. Long pulses will require minimal control actions, to reduce accumulation of AC losses in superconducting PF and TF coils. To this end, more complex feedback controllers are envisaged, and the experimental validation of the plasma equilibrium response models on which such controllers are designed is encouraging. Present simulation codes provide an adequate platform on which equilibrium response techniques can be validated. Burning plasmas require kinetic control in addition to traditional magnetic shape and position control. Kinetic control refers to measures controlling density, rotation and temperature in the plasma core as
Resonance control for a cw [continuous wave] accelerator
Young, L.M.; Biddle, R.S.
A resonance-control technique is described that has been successfully applied to several cw accelerating structures built by the Los Alamos National Laboratory for the National Bureau of Standards and for the University of Illinois. The technique involves sensing the rf fields in an accelerating structure as well as the rf power feeding into the cavity and, then, using the measurement to control the resonant frequency of the structure by altering the temperature of the structure. The temperature of the structure is altered by adjusting the temperature of the circulating cooling water. The technique has been applied to continuous wave (cw) side-coupled cavities only but should have applications with most high-average-power accelerator structures. Some additional effort would be required for pulsed systems
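A minimal sketch of the temperature-based tuning loop described above: a measured frequency error (inferred in practice from the cavity field/forward-power phase) is driven to zero by slowly trimming the cooling-water temperature set point through a PI controller acting on a first-order thermal model. All coefficients are illustrative, not values from the Los Alamos structures.

```python
# Illustrative constants (not actual cavity data)
DF_PER_DEGC = -3.0e3     # cavity detuning sensitivity, Hz per degC of structure temperature
TAU_THERMAL = 120.0      # structure thermal time constant (s)
KP, KI = 2.0e-4, 2.0e-5  # PI gains mapping frequency error (Hz) to water set point change (degC)
DT = 1.0                 # controller period (s)

def simulate(rf_heating_detune_hz=8.0e3, t_end=1800.0):
    """Drive the measured detuning to zero by trimming the cooling-water set point."""
    struct_temp = 30.0          # structure temperature (degC)
    water_setpoint = 30.0       # cooling-water temperature set point (degC)
    integ, detune = 0.0, 0.0
    for _ in range(int(t_end / DT)):
        # measured detuning: RF-heating term plus temperature-induced term
        detune = rf_heating_detune_hz + DF_PER_DEGC * (struct_temp - 30.0)
        # PI correction of the water set point (warming the structure lowers the frequency)
        integ += detune * DT
        water_setpoint = 30.0 + KP * detune + KI * integ
        # first-order thermal response of the structure toward the water set point
        struct_temp += DT * (water_setpoint - struct_temp) / TAU_THERMAL
    return detune, struct_temp, water_setpoint

print("final detuning (Hz), structure temp, water set point:", simulate())
```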
Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier
Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist
The Viking gasification plant at the Technical University of Denmark was built to demonstrate continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-day measurement campaign was performed. The campaign verified a stable operation of the plant, and the energy balance resulted in an overall fuel to gas efficiency of 93% and a wood to electricity efficiency of 25%. Very low tar content in the producer gas was observed: only 0.1 mg/Nm3 naphthalene could be measured in raw gas. A stable engine operation on the producer gas was observed, and very low emissions of aldehydes, N2O, and polycyclic aromatic hydrocarbons were measured.
Operating large controlled thermonuclear fusion research facilities
Gaudreau, M.P.J.; Tarrh, J.M.; Post, R.S.; Thomas, P.
The MIT Tara Tandem Mirror is a large, state of the art controlled thermonuclear fusion research facility. Over the six years of its design, implementation, and operation, every effort was made to minimize cost and maximize performance by using the best and latest hardware, software, and scientific and operational techniques. After reviewing all major DOE fusion facilities, an independent DOE review committee concluded that the Tara operation was the most automated and efficient of all DOE facilities. This paper includes a review of the key elements of the Tara design, construction, operation, management, physics milestones, and funding that led to this success. The authors emphasize a chronological description of how the system evolved from the proposal stage to a mature device with an emphasis on the basic philosophies behind the implementation process. This description can serve both as a qualitative and quantitative database for future large experiment planning. It includes actual final costs and manpower spent as well as actual run and maintenance schedules, number of data shots, major system failures, etc. The paper concludes with recommendations for the next generation of facilities
Human reliability analysis of control room operators
Santos, Isaac J.A.L.; Carvalho, Paulo Victor R.; Grecco, Claudio H.S. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)]
Human reliability is the probability that a person correctly performs a system-required action in a required time period and performs no extraneous action that can degrade the system. Human reliability analysis (HRA) is the analysis, prediction and evaluation of work-oriented human performance using indices such as human error likelihood and probability of task accomplishment. Significant progress has been made in the HRA field during the last years, mainly in the nuclear area. Some first-generation HRA methods were developed, such as THERP (Technique for Human Error Rate Prediction). Now, an array of so-called second-generation methods is emerging as alternatives, for instance ATHEANA (A Technique for Human Event Analysis). The ergonomics approach has as its tool the ergonomic work analysis. It focuses on the study of operators' activities in physical and mental form, considering at the same time the observed characteristics of the operators and the elements of the work environment as they are presented to and perceived by the operators. The aim of this paper is to propose a methodology to analyze the human reliability of the operators of an industrial plant control room, using a framework that includes the approach used by ATHEANA, THERP and the ergonomic work analysis. (author)
Spectra of random operators with absolutely continuous integrated density of states
Rio, Rafael del, E-mail: [email protected], E-mail: [email protected] [Departamento de Fisica Matematica, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, C.P. 04510, México D.F. (Mexico)]
The structure of the spectrum of random operators is studied. It is shown that if the density of states measure of some subsets of the spectrum is zero, then these subsets are empty. In particular, it follows that absolute continuity of the integrated density of states implies that the singular spectrum of ergodic operators is either empty or of positive measure. Our results apply to Anderson and alloy type models, perturbed Landau Hamiltonians, almost periodic potentials, and models which are not ergodic.
This revised performance assessment (PA) for the continued disposal operations at Solid Waste Storage Area (SWSA) 6 on the Oak Ridge Reservation (ORR) has been prepared to demonstrate compliance with the performance objectives for low-level radioactive waste (LLW) disposal contained in the US Department of Energy (DOE) Order 5820.2A. This revised PA considers disposal operations conducted from September 26, 1988, through the projected lifetime of the disposal facility.
Study of optimal operation for producing onion vinegar using two continuously stirred tank reactors
KOBAYASHI, Hideaki; YAMAGUCHI, Kazaru; TOMITA, Koki; KANNO, Tohru; KOBAYASHI, Masayoshi
Onion vinegar was produced using a 2-stage continuously stirred tank reactor. Regarding the alcohol fermentation and the acetic acid fermentation examined in this study, the immobilized cells on porous ceramics offered stable production of alcohol and acetic acid for long periods of 300 and 700 days, respectively. Compared with the steady-state operation method, the temperature-change forced-cyclic operation method increased ethanol yield of alcohol fermentation by a maximum of 15%. Acetic a...
40 CFR 63.5820 - What are my options for meeting the standards for continuous lamination/casting operations?
... standards for continuous lamination/casting operations? 63.5820 Section 63.5820 Protection of Environment... meeting the standards for continuous lamination/casting operations? You must use one or more of the... continuous lamination line and each continuous casting line complies with the applicable standard. (b...
"DMS-R, the Brain of the ISS", 10 Years of Continuous Successful Operation in Space
Wolff, Bernd; Scheffers, Peter
The experience with the DMS-R equipment for the ISS related to availability and reliability is reported in paragraph 1.2, describing a serious incident. The DMS-R architecture, consisting of two fault-tolerant computers, their interconnection via the MIL-STD-1553 bus, and the Control Post Computer (CPC) as man-machine interface, is given in figure 1. The main data transfer within the ISS, and therefore also the Russian segment, is managed by the MIL-STD-1553 bus. The focus of this script is neither the operational concept nor the fault-tolerant design according to the Byzantine theorem, but the architectural embedment. One fault-tolerant computer (FTC) consists of up to four fault containment regions (FCR), comparing input and output data and deciding by majority voting whether a faulty FCR has to be isolated. For this purpose all data have to pass the so-called fault management element and are distributed to the other participants in the computer pool (FTC). Each fault containment region is connected to the avionics busses of the vehicle avionics system. In case of a faulty FCR (a wrong calculation result detected by the other FCRs or by built-in self-detection), the affected FCR will reset itself or will be reset by the others. The bus controller functions of the isolated FCR will be taken over by another FCR according to a specific deterministic scheme. The FTC data throughput will be maintained, and the FTC operation will continue without interruption. Each FCR consists of an application CPU board (ALB), the fault management layer (FML), the avionics bus interface board (AVI) and a power supply (PSU), sharing a VME data bus. The FML is fully transparent, in terms of I/O accessibility, to the application S/W and autonomously votes the data received from the avionics busses and transmitted from the application.
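A minimal sketch of the majority-voting idea described above (not the actual DMS-R fault management layer): outputs from up to four fault containment regions are compared, the majority value is forwarded, and a dissenting FCR is flagged for isolation. Names, tolerance, and values are illustrative.

```python
from collections import Counter

def vote(outputs, tol=1e-6):
    """Majority-vote numeric outputs from the active FCRs.

    Returns (voted_value, faulty_indices). Values are bucketed with a tolerance
    so that harmless rounding differences do not count as disagreement.
    """
    buckets = [round(v / tol) for v in outputs]
    majority_bucket, count = Counter(buckets).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among FCR outputs")
    voted = outputs[buckets.index(majority_bucket)]
    faulty = [i for i, b in enumerate(buckets) if b != majority_bucket]
    return voted, faulty

# Example: FCR 2 produces a wrong calculation result and gets flagged for isolation.
active_fcrs = [0, 1, 2, 3]
outputs = [42.0000001, 42.0, 40.7, 42.0]
value, faulty = vote(outputs)
active_fcrs = [i for i in active_fcrs if i not in faulty]   # isolate the dissenter
print("voted value:", value, "| isolated FCRs:", faulty, "| remaining:", active_fcrs)
```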
Feasibility of Construction of the Continuously Operating Geodetic GPS Network of Sinaloa, Mexico
Vazquez, G. E.; Jacobo, C.
This research studies and analyzes the feasibility of constructing the continuously operating GPS geodetic network of Sinaloa, hereafter called RGOCSIN. A continuously operating GPS network is a structure materialized physically through permanent monuments at which Global Positioning System (GPS) measurements are performed continuously throughout a region. The GPS measurements in this network meet international accuracy standards for defining its coordinates, and the network thus constitutes the basic geodetic referencing structure for a country. In this context, the RGOCSIN will in the near future constitute the state's only accurate and reliable real-time georeferencing system (continuous and permanent operation) and will be used for different purposes; in addition to being a fundamental basis for any topographic or geodetic survey, these include: (1) construction processes (control and monitoring of engineering works); (2) studies of deformation of the Earth's crust (before and after a seismic event); (3) GPS meteorology (weather forecasting); (4) demarcation projects (natural and political); (5) establishment of bases to generate mapping (necessary for the economic and social development of the state); (6) precision agriculture (optimization of economic resources for the various crops); (7) geographic information systems (organization and planning of activities associated with the design and construction of public services); (8) urban growth (guiding settlements appropriately while taking care of the environmental aspect), among others. However, there are criteria and regulations of INEGI (Instituto Nacional de Estadística y Geografía, http://www.inegi.org.mx/) that must be met, even for this construction feasibility stage, which treats the project as a first phase. The fundamental criterion to be taken into account according to INEGI is a
14 CFR 91.1013 - Operational control briefing and acknowledgment.
... Ownership Operations Operational Control § 91.1013 Operational control briefing and acknowledgment. (a) Upon...) Liability risk in the event of a flight-related occurrence that causes personal injury or property damage...
Stochastic Modelling and Self Tuning Control of a Continuous Cement Raw Material Mixing System
Hannu T. Toivonen
The control of a continuously operating system for cement raw material mixing is studied. The purpose of the mixing system is to maintain a constant composition of the cement raw meal for the kiln despite variations of the raw material compositions. Experimental knowledge of the process dynamics and the characteristics of the various disturbances is used for deriving a stochastic model of the system. The optimal control strategy is then obtained as a minimum variance strategy. The control problem is finally solved using a self-tuning minimum variance regulator, and results from a successful implementation of the regulator are given.
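As a rough illustration of the self-tuning minimum variance idea summarised above, the sketch below combines recursive least squares (RLS) estimation of a first-order ARX model with a certainty-equivalence minimum variance control law. The plant parameters, forgetting factor and gains are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" plant (unknown to the regulator): y[t+1] = -a*y[t] + b*u[t] + e[t+1]
a_true, b_true, noise_std = -0.7, 0.5, 0.05
setpoint = 1.0

# RLS state for the parameter vector theta = [a, b]
theta = np.array([0.0, 1.0])   # prior guess (b near 1 avoids an initial division blow-up)
P = np.eye(2) * 100.0          # large initial covariance
lam = 0.99                     # forgetting factor

y, u = 0.0, 0.0
for t in range(200):
    # --- plant step ---
    y_new = -a_true * y + b_true * u + noise_std * rng.standard_normal()

    # --- RLS update with regressor phi = [-y[t], u[t]] ---
    phi = np.array([-y, u])
    k = P @ phi / (lam + phi @ P @ phi)
    theta = theta + k * (y_new - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam

    # --- certainty-equivalence minimum variance control law ---
    a_hat, b_hat = theta
    b_safe = b_hat if abs(b_hat) > 1e-3 else 1e-3
    y, u = y_new, (setpoint + a_hat * y_new) / b_safe

print("estimated theta [a, b]:", np.round(theta, 3), " last output:", round(y, 3))
```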
Operating System For Numerically Controlled Milling Machine
Ray, R. B.
OPMILL program is operating system for Kearney and Trecker milling machine providing fast easy way to program manufacture of machine parts with IBM-compatible personal computer. Gives machinist "equation plotter" feature, which plots equations that define movements and converts equations to milling-machine-controlling program moving cutter along defined path. System includes tool-manager software handling up to 25 tools and automatically adjusts to account for each tool. Developed on IBM PS/2 computer running DOS 3.3 with 1 MB of random-access memory.
Continued advancement of the programming language HAL to an operational status
The continued advancement of the programming language HAL to operational status is reported. It is demonstrated that the compiler itself can be written in HAL. A HAL-in-HAL experiment proves conclusively that HAL can be used successfully as a compiler implementation tool.
Laser vision seam tracking system based on image processing and continuous convolution operator tracker
Zou, Yanbiao; Chen, Tao
To address the problem of low welding precision caused by the poor real-time tracking performance of common welding robots, a novel seam tracking system with excellent real-time tracking performance and high accuracy is designed based on the morphological image processing method and the continuous convolution operator tracker (CCOT) object tracking algorithm. The system consists of a six-axis welding robot, a line laser sensor, and an industrial computer. This work also studies the measurement principle involved in the designed system. Through the CCOT algorithm, the weld feature points are determined in real time from the noisy image during the welding process, and the 3D coordinate values of these points are obtained according to the measurement principle to control the movement of the robot and the torch in real time. Experimental results show that the sensor has a frequency of 50 Hz. The welding torch runs smoothly even under strong arc light and spatter interference. The tracking error is within ±0.2 mm, and the minimum distance between the laser stripe and the weld molten pool can be reduced to 15 mm, which satisfies actual welding requirements.
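The "measurement principle" step (turning a detected weld feature pixel into a 3D coordinate) can be sketched as a ray/laser-plane intersection, which is the usual model for a line laser sensor. The intrinsic matrix and laser-plane parameters below are made-up calibration values, not those of the sensor described in the abstract.

```python
import numpy as np

def pixel_to_3d(u, v, K, plane_n, plane_d):
    """Back-project a weld feature pixel (u, v) onto the laser plane.

    K is the 3x3 camera intrinsic matrix; the laser plane satisfies
    plane_n . X = plane_d in the camera frame (units: mm).
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])   # viewing ray direction
    t = plane_d / (plane_n @ ray)                          # ray-plane intersection
    return t * ray                                         # 3D point in the camera frame

K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])
plane_n = np.array([0.0, 0.5, 0.866])   # unit normal of the laser plane (assumed)
plane_d = 150.0                          # mm (assumed)

print(pixel_to_3d(700.0, 500.0, K, plane_n, plane_d))
```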
Development of a completely decentralized control system for modular continuous conveyors
Mayer, Stephan H.
To increase the flexibility of application of continuous conveyor systems, a completely decentralized control system for a modular conveyor system is introduced in the paper. This system is able to carry conveyor units without any centralized infrastructure. Based on existing methods of decentralized data transfer in IT networks, single modules operate autonomously and, after being positioned into the required topology, independently connect together to become a functioning conveyor system.
New operator assistance features in the CMS Run Control System
Andre, Jean-Marc Olivier; Branson, James; Brummer, Philipp Maximilian; Chaze, Olivier; Cittolin, Sergio; Contescu, Cristian; Craigs, Benjamin Gordon; Darlea, Georgiana Lavinia; Deldicque, Christian; Demiragli, Zeynep; Dobson, Marc; Doualot, Nicolas; Erhan, Samim; Fulcher, Jonathan F; Gigi, Dominique; Michail Gładki; Glege, Frank; Gomez Ceballos, Guillelmo; Hegeman, Jeroen Guido; Holzner, Andre Georg; Janulis, Mindaugas; Jimenez Estupinan, Raul; Masetti, Lorenzo; Meijers, Franciscus; Meschi, Emilio; Mommsen, Remigius; Morovic, Srecko; O'Dell, Vivian; Orsini, Luciano; Paus, Christoph Maria Ernst; Petrova, Petia; Pieri, Marco; Racz, Attila; Reis, Thomas; Sakulin, Hannes; Schwick, Christoph; Simelevicius, Dainius; Zejdl, Petr; Vougioukas, M.
The Run Control System of the Compact Muon Solenoid (CMS) experiment at CERN is a distributed Java web application running on Apache Tomcat servers. During Run-1 of the LHC, many operational procedures have been automated. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following t...
Control rod guide tube wear in operating reactors; operating experience report. Technical report December 1977-December 1979
Riggs, R.
Evidence of control rod guide tube wear has been observed in operating pressurized water reactors. The cause of this wear is identified as flow-induced vibration of the control rods. This report describes the measures being taken by both the industry and the NRC to deal with this matter. The staff also presents its technical positions and requirements to support continued operation of the plants as of December 1979 pending completion of this generic effort
Compositional control of continuously graded anode functional layer
McCoppin, J.; Barney, I.; Mukhopadhyay, S.; Miller, R.; Reitz, T.; Young, D.
In this work, solid oxide fuel cells (SOFCs) are fabricated with linear-compositionally graded anode functional layers (CGAFL) using a computer-controlled compound aerosol deposition (CCAD) system. Cells with different CGAFL thicknesses (30 μm and 50 μm) are prepared with a continuous compositionally graded interface deposited between the electrolyte and anode support current collecting regions. The compositional profile was characterized using energy dispersive X-ray spectroscopic mapping. An analytical model of the compound aerosol deposition was developed. The model predicted compositional profiles for both samples that closely matched the measured profiles, suggesting that aerosol-based deposition methods are capable of creating functional gradation on length scales suitable for solid oxide fuel cell structures. The electrochemical performances of the two cells are analyzed using electrochemical impedance spectroscopy (EIS).
Andre, J.-M.; Behrens, U.; Branson, J.; Brummer, P.; Chaze, O.; Cittolin, S.; Contescu, C.; Craigs, B. G.; Darlea, G.-L.; Deldicque, C.; Demiragli, Z.; Dobson, M.; Doualot, N.; Erhan, S.; Fulcher, J. R.; Gigi, D.; Gładki, M.; Glege, F.; Gomez-Ceballos, G.; Hegeman, J.; Holzner, A.; Janulis, M.; Jimenez-Estupiñán, R.; Masetti, L.; Meijers, F.; Meschi, E.; Mommsen, R. K.; Morovic, S.; O'Dell, V.; Orsini, L.; Paus, C.; Petrova, P.; Pieri, M.; Racz, A.; Reis, T.; Sakulin, H.; Schwick, C.; Simelevicius, D.; Vougioukas, M.; Zejdl, P.
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors such as errors caused by single-event upsets may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
14 CFR Special Federal Aviation... - Air Traffic Control System Emergency Operation
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Air Traffic Control System Emergency Operation Federal Special Federal Aviation Regulation No. 60 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIR TRAFFIC AND GENERAL OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, SFAR No. 60...
High temperature continuous operation in the HTTR (HP-11). Summary of the test results in the high temperature operation mode
Takamatsu, Kuniyoshi; Ueta, Shohei; Sumita, Junya; Goto, Minoru; Nakagawa, Shigeaki; Hamamoto, Shimpei; Tochio, Daisuke
A high temperature (950 degrees C) continuous operation was performed for 50 days on the HTTR from January to March 2010, and the potential to supply stable high-temperature heat for hydrogen production over a long period was demonstrated for the first time in the world. JAEA has evaluated the experimental data obtained from this operation and from the past rated continuous operation, and built the database necessary for commercial HTGRs. According to the results, the concentration of FP released from the fuels in the HTTR was one to three orders of magnitude lower than that in foreign HTGRs. It became apparent that the fuels used in the HTTR are of the best quality in the world. This successful operation could establish the technological basis of HTGRs and demonstrate the potential of nuclear energy as a heat source for innovative thermochemical hydrogen production without emitting greenhouse gases, on a 'low-carbon path', for the first time in the world. We plan to advance R&D for practical use of hydrogen production systems with HTGRs in the future. (author)
Process control upgrades yield huge operational improvements
Fitzgerald, W.V.
Most nuclear plants in North America were designed and built in the late 1960s and 1970s. The regulatory nature of this industry over the years has made design changes at the plant level difficult, if not impossible, to implement. As a result, many plants in this world region have been getting by on technology that is over 40 years behind the times. What this translates into is that the plants have not been able to take advantage of the huge technology gains that have been made in process control during this period. As a result, most of these plants are much less efficient and productive than they could be. One particular area of the plant that is receiving a lot of attention is the feedwater heaters. These systems were put in place to improve efficiency, but most are not operating correctly. This paper will present a case study where one progressive mid-western utility decided that enough was enough and implemented a process control audit of their heater systems. The audit clearly pointed out the existing problems with the current process control system. It resulted in a proposal for the implementation of a state-of-the-art digital distributed process control system for the heaters, along with a complete upgrade of the level controls and field devices, that will stabilize heater levels, resulting in significant efficiency gains and lower maintenance bills. Overall the payback period for this investment should be less than 6 months and the plant is now looking for more opportunities that can provide even bigger gains. (author)
49 CFR 238.447 - Train operator's controls and power car cab layout.
... 49 Transportation 4 2010-10-01 2010-10-01 false Train operator's controls and power car cab layout. 238.447 Section 238.447 Transportation Other Regulations Relating to Transportation (Continued... layout. (a) Train operator controls in the power car cab shall be arranged so as to minimize the chance...
Urban wastewater photobiotreatment with microalgae in a continuously operated photobioreactor: growth, nutrient removal kinetics and biomass coagulation-flocculation.
Mennaa, Fatima Zahra; Arbib, Zouhayr; Perales, José Antonio
The aim of this study was to investigate the growth, nutrient removal and harvesting of a natural microalgae bloom cultivated in urban wastewater in a bubble column photobioreactor. Batch and continuous mode experiments were carried out with and without pH control by means of CO2 dosage. Four coagulants (aluminium sulphate, ferric sulphate, ferric chloride and polyaluminium chloride (PAC)) and five flocculants (Chemifloc CM/25, FO 4498SH, cationic polymers Zetag (Z8165, Z7550 and Z8160)) were tested to determine the optimal dosage to reach 90% of biomass recovery. The maximum volumetric productivity obtained was 0.11 g SS L⁻¹ d⁻¹ during the continuous mode. Results indicated that the removal of total dissolved nitrogen and total dissolved phosphorous under continuous operation were greater than 99%. PAC, Fe2(SO4)3 and Al2(SO4)3 were the best options from an economical point of view for microalgae harvesting.
The control of operational risk in nuclear power plant operations - Some cross-cultural perspectives
Suchard, A.; Rochlin, G.
The operation of nuclear power plants requires the management of a complex technology under exacting performance and safety criteria. Organizations operating nuclear power plants are faced with the challenge of simultaneously meeting technical, organizational, and social demands, striving toward perfection in a situation where learning by trial and error can be too costly. In this process, they interact with regulatory bodies who seek to help minimize operational risk by imposing and upholding safety standards. The character of this interaction differs in various countries, as does the larger cultural setting. The study generally pursued the question of how organizations operating complex and demanding technologies adapt to such requirements and circumstances, and how they can succeed in delivering nearly error-free performance. One aspect of this study includes the comparison of organizational and cultural environments for nuclear power plant operations in the US, France, Germany, Sweden, and Switzerland. The research involved in-depth, continuous observations on location and interviews with plant personnel, especially control operators, at one plant in each country
Simulations of Continuous Descent Operations with Arrival-management Automation and Mixed Flight-deck Interval Management Equipage
Callantine, Todd J.; Kupfer, Michael; Martin, Lynne Hazel; Prevot, Thomas
Air traffic management simulations conducted in the Airspace Operations Laboratory at NASA Ames Research Center have addressed the integration of trajectory-based arrival-management automation, controller tools, and Flight-Deck Interval Management avionics to enable Continuous Descent Operations (CDOs) during periods of sustained high traffic demand. The simulations are devoted to maturing the integrated system for field demonstration, and refining the controller tools, clearance phraseology, and procedures specified in the associated concept of operations. The results indicate a variety of factors impact the concept's safety and viability from a controller's perspective, including en-route preconditioning of arrival flows, useable clearance phraseology, and the characteristics of airspace, routes, and traffic-management methods in use at a particular site. Clear understanding of automation behavior and required shifts in roles and responsibilities is important for controller acceptance and realizing potential benefits. This paper discusses the simulations, drawing parallels with results from related European efforts. The most recent study found en-route controllers can effectively precondition arrival flows, which significantly improved route conformance during CDOs. Controllers found the tools acceptable, in line with previous studies.
Different operational meanings of continuous variable Gaussian entanglement criteria and Bell inequalities
Buono, D.; Nocerino, G.; Solimeno, S.; Porzio, A.
Entanglement, one of the most intriguing aspects of quantum mechanics, marks itself into different features of quantum states. For this reason different criteria can be used for verifying entanglement. In this paper we review some of the entanglement criteria cast for continuous variable states and link them to peculiar aspects of the original debate on the famous Einstein-Podolsky-Rosen (EPR) paradox. We also provide a useful expression for evaluating Bell-type non-locality on Gaussian states. We also present the experimental measurement of a particular realization of the Bell operator over continuous variable entangled states produced by sub-threshold type-II optical parametric oscillators (OPOs).
Technical Note: Continuity of MIPAS-ENVISAT operational ozone data quality from full- to reduced-spectral-resolution operation mode
S. Ceccherini
MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) has been operating on the ENVIronmental SATellite (ENVISAT) since March 2002. After two years of nearly continuous limb scanning measurements, at the end of March 2004, the instrument was stopped due to problems with the mirror drive of the interferometer. Operations with reduced maximum path difference, corresponding to both a reduced spectral resolution and a shorter measurement time, were resumed in January 2005. In order to exploit the reduction in measurement time, the measurement scenario was changed by adopting a finer vertical limb scanning. The change of spectral resolution and of measurement scenario entailed an update of the data processing strategy. The aim of this paper is the assessment of the differences in the quality of the MIPAS ozone data acquired before and after the stop of the operations. Two sets of MIPAS ozone profiles acquired in 2003–2004 (full-resolution measurements) and in 2005–2006 (reduced-resolution measurements) are compared with collocated ozone profiles obtained by GOMOS (Global Ozone Monitoring by Occultation of Stars), itself also onboard ENVISAT. The continuity of the GOMOS data quality makes it possible to assess a possible discontinuity of the MIPAS performance. The relative bias and precision of MIPAS ozone profiles with respect to the GOMOS ones have been compared for the measurements acquired before and after the stop of the MIPAS operations. The results of the comparison show that, in general, the quality of the MIPAS ozone profiles retrieved from reduced-resolution measurements is comparable to or better than that obtained from the full-resolution dataset. The only significant change in MIPAS performance is observed at pressures around 2 hPa, where the relative bias of the instruments increases by a factor of 2 from the 2003–2004 to the 2005–2006 measurements.
46 CFR 196.85-1 - Magazine operation and control.
... 46 Shipping 7 2010-10-01 2010-10-01 false Magazine operation and control. 196.85-1 Section 196.85... OPERATIONS Magazine Control § 196.85-1 Magazine operation and control. (a) Keys to magazine spaces and magazine chests shall be kept in the sole control or custody of the Master or one delegated qualified...
40 CFR 60.1240 - How do I make sure my continuous emission monitoring systems are operating correctly?
... emission monitoring systems are operating correctly? 60.1240 Section 60.1240 Protection of Environment... Continuous Emission Monitoring § 60.1240 How do I make sure my continuous emission monitoring systems are operating correctly? (a) Conduct initial, daily, quarterly, and annual evaluations of your continuous...
Variable Camber Continuous Aerodynamic Control Surfaces and Methods for Active Wing Shaping Control
Nguyen, Nhan T. (Inventor)
An aerodynamic control apparatus for an air vehicle improves various aerodynamic performance metrics by employing multiple spanwise flap segments that jointly form a continuous or a piecewise continuous trailing edge to minimize drag induced by lift or vortices. At least one of the multiple spanwise flap segments includes a variable camber flap subsystem having multiple chordwise flap segments that may be independently actuated. Some embodiments also employ a continuous leading edge slat system that includes multiple spanwise slat segments, each of which has one or more chordwise slat segment. A method and an apparatus for implementing active control of a wing shape are also described and include the determination of desired lift distribution to determine the improved aerodynamic deflection of the wings. Flap deflections are determined and control signals are generated to actively control the wing shape to approximate the desired deflection.
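One hedged way to read the "determine flap deflections from a desired lift distribution" step is as a small allocation problem. The sketch below assumes a linear influence-matrix model relating spanwise flap deflections to the span lift distribution and solves it by least squares; the matrix, limits and target distribution are illustrative and are not the patent's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stations, n_flaps = 24, 6
# Assumed linear model: span lift distribution  L = L0 + A @ delta,
# where A[i, j] is the lift influence of flap j at span station i.
A = np.abs(rng.normal(0.5, 0.15, size=(n_stations, n_flaps)))
L0 = np.linspace(1.0, 0.2, n_stations)                               # baseline (clean-wing) lift
L_target = 1.1 * np.sin(np.linspace(0.1, np.pi - 0.1, n_stations))   # e.g. near-elliptic target

# Least-squares flap schedule that best approximates the desired distribution
delta, *_ = np.linalg.lstsq(A, L_target - L0, rcond=None)
delta = np.clip(delta, -15.0, 15.0)                                  # respect actuator limits (deg)

print("flap deflections (deg):", np.round(delta, 2))
```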
The continuous spectrum and the effect of parametric resonance. The case of bounded operators
Skazka, V V
The paper is concerned with the Mathieu-type differential equation $u'' = -A^2 u + \varepsilon B(t)u$ in a Hilbert space H. It is assumed that A is a bounded self-adjoint operator which only has an absolutely continuous spectrum and that B(t) is an almost periodic operator-valued function. Sufficient conditions are obtained under which the Cauchy problem for this equation is stable for small ε and hence free of parametric resonance. Bibliography: 10 titles
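A scalar toy version of the equation makes the parametric resonance question concrete: for a single frequency a, the equation reduces to a Mathieu-type ODE whose solutions grow near ω ≈ 2a and stay bounded away from resonance. The sketch below (with assumed values of a, ε and ω) simply integrates that scalar stand-in numerically; it is an illustration, not part of the paper's operator-theoretic argument.

```python
import numpy as np
from scipy.integrate import solve_ivp

def energy_growth(a, eps, omega, t_end=200.0):
    """Integrate the scalar toy model u'' = -a**2*u + eps*cos(omega*t)*u
    and return the ratio of the final to the initial energy envelope."""
    def rhs(t, y):
        u, v = y
        return [v, (-a**2 + eps * np.cos(omega * t)) * u]
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], rtol=1e-8, atol=1e-10)
    u, v = sol.y
    energy = v**2 + a**2 * u**2
    return energy[-1] / energy[0]

a, eps = 1.0, 0.1
print("near resonance (omega = 2a):", energy_growth(a, eps, 2.0 * a))
print("off resonance  (omega = 3a):", energy_growth(a, eps, 3.0 * a))
```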
An adaptive neuro-fuzzy controller for mold level control in continuous casting
Zolghadri Jahromi, M.; Abolhassan Tash, F.
Mold level variations in continuous casting are believed to be the main cause of surface defects in the final product. Although a PID controller is well capable of controlling the level under normal conditions, it cannot prevent large variations of mold level when a disturbance occurs in the form of nozzle unclogging. In this paper, a dual-controller architecture is presented: a PID controller is used as the main controller of the plant, and an adaptive neuro-fuzzy controller is used as an auxiliary controller to help the PID during disturbed phases. The control is passed back to the PID controller after the disturbance has been dealt with. Simulation results prove the effectiveness of this control strategy in reducing mold level variations during the unclogging period
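A minimal sketch of the dual-controller idea: a PID loop regulates an integrator-type mold level model, and an auxiliary corrective term takes over when the level error grows large after a simulated unclogging disturbance. The auxiliary term here is a simple error-triggered stand-in for the adaptive neuro-fuzzy controller, and all gains and the plant model are invented.

```python
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i, self.prev_e = 0.0, 0.0
    def update(self, e):
        self.i += e * self.dt
        d = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.kp * e + self.ki * self.i + self.kd * d

dt, level, setpoint, area = 0.1, 0.0, 0.0, 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.2, dt=dt)

for k in range(600):
    t = k * dt
    # Disturbance: step change in outflow imitating nozzle unclogging at t = 20 s
    outflow = 1.0 + (0.4 if t >= 20.0 else 0.0)
    e = setpoint - level
    u = pid.update(e)            # main PID inflow command
    if abs(e) > 0.05:            # auxiliary controller kicks in on large deviations
        u += 3.0 * e             # (stand-in for the neuro-fuzzy correction)
    inflow = np.clip(1.0 + u, 0.0, 3.0)
    level += (inflow - outflow) / area * dt   # integrator mold-level dynamics

print("final level error:", setpoint - level)
```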
Development of Regulatory Audit Programs for Wolsong Unit 1 Continued Operation
Kim, Hong Key; Nho, Seung Hwan; Song, Myung Ho [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
The design life of Wolsong Unit 1 (a PHWR-type plant) expires on November 20, 2010. In this regard, KHNP submitted its application for approval to the MEST on December 30, 2009, and KINS is carrying out a review to confirm the appropriateness of continued operation. For the comprehensive review of Wolsong Unit 1 continued operation, KINS has developed review guidelines for PHWR-type reactors, including a total of 39 aging management program (AMP) items and 7 time-limited aging analysis (TLAA) items. Evaluations or calculations to verify the integrity of nuclear components are required for plant-specific AMP and TLAA items as well as those specified in the guidelines. In this paper, audit calculation programs developed for use by KINS staff in reviewing the applicant's submitted evaluation results are presented
Optimising the design and operation of semi-continuous affinity chromatography for clinical and commercial manufacture.
Pollock, James; Bolton, Glen; Coffman, Jon; Ho, Sa V; Bracewell, Daniel G; Farid, Suzanne S
This paper presents an integrated experimental and modelling approach to evaluate the potential of semi-continuous chromatography for the capture of monoclonal antibodies (mAb) in clinical and commercial manufacture. Small-scale single-column experimental breakthrough studies were used to derive design equations for the semi-continuous affinity chromatography system. Verification runs with the semi-continuous 3-column and 4-column periodic counter current (PCC) chromatography system indicated the robustness of the design approach. The product quality profiles and step yields (after wash step optimisation) achieved were comparable to the standard batch process. The experimentally-derived design equations were incorporated into a decisional tool comprising dynamic simulation, process economics and sizing optimisation. The decisional tool was used to evaluate the economic and operational feasibility of whole mAb bioprocesses employing PCC affinity capture chromatography versus standard batch chromatography across a product's lifecycle from clinical to commercial manufacture. The tool predicted that PCC capture chromatography would offer more significant savings in direct costs for early-stage clinical manufacture (proof-of-concept) (∼30%) than for late-stage clinical (∼10-15%) or commercial (∼5%) manufacture. The evaluation also highlighted the potential facility fit issues that could arise with a capture resin (MabSelect) that experiences losses in binding capacity when operated in continuous mode over lengthy commercial campaigns. Consequently, the analysis explored the scenario of adopting the PCC system for clinical manufacture and switching to the standard batch process following product launch. The tool determined the PCC system design required to operate at commercial scale without facility fit issues and with similar costs to the standard batch process whilst pursuing a process change application. A retrofitting analysis established that the direct cost
Continuous primary fermentation of beer with yeast immobilized on spent grains : the effect of operational conditions
Brányik, Tomáš; Vicente, A. A.; Cruz, José Machado; Teixeira, J. A.
A one-stage continuous primary beer fermentation with immobilized brewing yeast was studied. The objective of the work was to optimize the operational conditions (aeration and temperature) in terms of volumetric productivity and organoleptic quality of green beer. The system consisted of an internal-loop airlift reactor and a carrier material prepared from spent grains (a brewing by-product). An industrial wort and yeast strain were used. The immobilized biomass (in amounts from two to sevenf...
A new continuous-time formulation for scheduling crude oil operations
Reddy, P. Chandra Prakash; Karimi, I.A.; Srinivasan, R.
In today's competitive business climate characterized by uncertain oil markets, responding effectively and speedily to market forces, while maintaining reliable operations, is crucial to a refinery's bottom line. Optimal crude oil scheduling enables cost reduction by using cheaper crudes intelligently, minimizing crude changeovers, and avoiding ship demurrage. So far, only discrete-time formulations have stood up to the challenge of this important, nonlinear problem. A continuous-time formulation would offer numerous advantages; however, existing work in this area has only just begun to scratch the surface. In this paper, we present the first complete continuous-time mixed integer linear programming (MILP) formulation for the short-term scheduling of operations in a refinery that receives crude from very large crude carriers via a high-volume single buoy mooring pipeline. This novel formulation accounts for real-world operational practices. We use an iterative algorithm to eliminate the crude composition discrepancy that has proven to be the Achilles heel for existing formulations. While it does not guarantee global optimality, the algorithm needs only MILP solutions and obtains excellent maximum-profit schedules for industrial problems with up to 7 days of scheduling horizon. We also report the first comparison of discrete- vs. continuous-time formulations for this complex problem. (Author)
Absolute continuity for operator valued completely positive maps on C∗-algebras
Gheondea, Aurelian; Kavruk, Ali Şamil
Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.
Continuously-stirred anaerobic digester to convert organic wastes into biogas: system setup and basic operation.
Usack, Joseph G; Spirito, Catherine M; Angenent, Largus T
Anaerobic digestion (AD) is a bioprocess that is commonly used to convert complex organic wastes into a useful biogas with methane as the energy carrier. Increasingly, AD is being used in industrial, agricultural, and municipal waste(water) treatment applications. The use of AD technology allows plant operators to reduce waste disposal costs and offset energy utility expenses. In addition to treating organic wastes, energy crops are being converted into the energy carrier methane. As the application of AD technology broadens for the treatment of new substrates and co-substrate mixtures, so does the demand for a reliable testing methodology at the pilot- and laboratory-scale. Anaerobic digestion systems have a variety of configurations, including the continuously stirred tank reactor (CSTR), plug flow (PF), and anaerobic sequencing batch reactor (ASBR) configurations. The CSTR is frequently used in research due to its simplicity in design and operation, but also for its advantages in experimentation. Compared to other configurations, the CSTR provides greater uniformity of system parameters, such as temperature, mixing, chemical concentration, and substrate concentration. Ultimately, when designing a full-scale reactor, the optimum reactor configuration will depend on the character of a given substrate among many other nontechnical considerations. However, all configurations share fundamental design features and operating parameters that render the CSTR appropriate for most preliminary assessments. If researchers and engineers use an influent stream with relatively high concentrations of solids, then lab-scale bioreactor configurations cannot be fed continuously due to plugging problems of lab-scale pumps with solids or settling of solids in tubing. For that scenario with continuous mixing requirements, lab-scale bioreactors are fed periodically and we refer to such configurations as continuously stirred anaerobic digesters (CSADs). This article presents a general
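Two of the operating parameters that such a CSTR/CSAD protocol revolves around, the hydraulic retention time and the organic loading rate, follow directly from reactor volume and feeding, as in the short sketch below (the numbers are illustrative, not values from the article).

```python
def digester_operating_point(volume_l, feed_l_per_day, feed_cod_g_per_l):
    """Basic CSTR/CSAD operating parameters.

    Returns the hydraulic retention time (days) and the organic loading
    rate (g COD per litre of reactor per day).
    """
    hrt_days = volume_l / feed_l_per_day
    olr = feed_l_per_day * feed_cod_g_per_l / volume_l
    return hrt_days, olr

hrt, olr = digester_operating_point(volume_l=4.5, feed_l_per_day=0.3, feed_cod_g_per_l=45.0)
print(f"HRT = {hrt:.1f} d, OLR = {olr:.1f} g COD/(L*d)")
```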
Using CONFIG for Simulation of Operation of Water Recovery Subsystems for Advanced Control Software Evaluation
Malin, Jane T.; Flores, Luis; Fleming, Land; Throop, Daiv
A hybrid discrete/continuous simulation tool, CONFIG, has been developed to support evaluation of the operability of life support systems. CONFIG simulates operations scenarios in which flows and pressures change continuously while system reconfigurations occur as discrete events. In simulations, intelligent control software can interact dynamically with hardware system models. CONFIG simulations have been used to evaluate control software and intelligent agents for automating life support systems operations. A CONFIG model of an advanced biological water recovery system has been developed to interact with intelligent control software that is being used in a water system test at NASA Johnson Space Center
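The hybrid discrete/continuous pattern attributed to CONFIG can be sketched in a few lines: a continuous state is integrated with a fixed step while a discrete "control software" layer fires reconfiguration events based on that state. The tank/valve model below is invented for illustration and is unrelated to the actual water recovery system model.

```python
# Continuous tank dynamics integrated with a fixed step; discrete events
# (valve reconfigurations decided by a simple controller) change the flow
# topology between steps.
dt, t_end = 0.5, 120.0
level, inflow, drain_open = 50.0, 0.8, False

t = 0.0
while t < t_end:
    # --- discrete layer: control logic reacts to the continuous state ---
    if level > 80.0 and not drain_open:
        drain_open = True
        print(f"t={t:5.1f}s  event: open drain (level {level:.1f})")
    elif level < 55.0 and drain_open:
        drain_open = False
        print(f"t={t:5.1f}s  event: close drain (level {level:.1f})")

    # --- continuous layer: flows and level evolve between events ---
    outflow = 1.5 if drain_open else 0.0
    level += (inflow - outflow) * dt
    t += dt

print("final level:", round(level, 1))
```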
Operation Aspect of the Main Control Room of NPP
Sahala M Lumbanraja
The main control room of a Nuclear Power Plant (NPP) is the operational centre from which all operating activities of the NPP are controlled. An NPP must be operated carefully and safely. Many aspects contribute to the operation of an NPP, such as the personnel who operate it, the type of technology used, the ergonomics of the main control room, and operational management. Disturbances of communication in the control room must be anticipated so that high availability of the NPP can be achieved. The ergonomics of the NPP control room that will be used in Indonesia must be designed to suit the anthropometrics of Indonesian society. (author)
Development of the automatic control rod operation system for JOYO. Verification of automatic control rod operation guide system
Terakado, Tsuguo; Suzuki, Shinya; Kawai, Masashi; Aoki, Hiroshi; Ohkubo, Toshiyuki
The automatic control rod operation system was developed to control the JOYO reactor power automatically in all operation modes (critical approach, cooling system heat-up, power ascent, power descent); development began in 1989. Prior to applying the system, verification tests of the automatic control rod operation guide system were conducted during the 32nd duty cycle of JOYO from Dec. 1997 to Feb. 1998. The automatic control rod operation guide system consists of the control rod operation guide function and the plant operation guide function. The control rod operation guide function provides information on control rod movement and position, while the plant operation guide function provides guidance for plant operations corresponding to reactor power changes (power ascent or power descent). Control rod insertion and withdrawal are predicted by fuzzy algorithms. (J.P.N.)
Investigation of Continuous Gas Engine CHP Operation on Biomass Producer Gas
More than 2000 hours of gas engine operation with producer gas from biomass as fuel has been conducted on the gasification CHP demonstration and research plant, named "Viking", at the Technical University of Denmark. The gas engine is an integrated part of the entire gasification plant. The excess...... operates with varying excess of air due to variation in gas composition and thus stoichiometry, and a second where the excess of air in the exhaust gas is fixed and the flow rate of produced gas from the gasifier is varying. The interaction between the gas engine and the gasification system has been...... investigated. The engine and the plant are equipped with continuous data acquisition that monitors the operation, including the composition of the producer gas and the flow. Producer gas properties and contaminations have been investigated. No detectable tar or particle content was observed...
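The varying excess of air mentioned above comes straight from the combustion stoichiometry of the producer gas. The sketch below computes the stoichiometric air demand and the excess-air ratio (lambda) for an assumed, roughly typical dry producer-gas composition; it is not the measured composition from the Viking plant.

```python
def stoich_air_per_mol_gas(x_co, x_h2, x_ch4):
    """Moles of air needed to burn 1 mol of producer gas stoichiometrically.

    O2 demand: CO + 0.5 O2 -> CO2,  H2 + 0.5 O2 -> H2O,  CH4 + 2 O2 -> CO2 + 2 H2O.
    Air is taken as 21 % O2 by mole.
    """
    o2 = 0.5 * x_co + 0.5 * x_h2 + 2.0 * x_ch4
    return o2 / 0.21

# Assumed dry producer-gas composition (mole fractions, balance N2/CO2):
x_co, x_h2, x_ch4 = 0.19, 0.17, 0.015
air_stoich = stoich_air_per_mol_gas(x_co, x_h2, x_ch4)

air_actual = 1.35 * air_stoich                 # engine running lean (assumed)
lambda_excess_air = air_actual / air_stoich
print(f"stoichiometric air: {air_stoich:.2f} mol/mol gas, lambda = {lambda_excess_air:.2f}")
```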
Distributed Autonomous Control of Multiple Spacecraft During Close Proximity Operations
McCamish, Shawn B
This research contributes to multiple spacecraft control by developing an autonomous distributed control algorithm for close proximity operations of multiple spacecraft systems, including rendezvous...
Recommended radiological controls for tritium operations
Mansfield, G.
This informal report presents recommendations for an adequate radiological protection program for tritium operations. Topics include hazards analysis, facility design, personnel protection equipment, training, operational procedures, radiation monitoring, to include surface and airborne tritium contamination, and program management
Control room human engineering influences on operator performance
Finlayson, F.C.
Three general groups of factors influence operator performance in fulfilling their responsibilities in the control room: (1) control room and control system design, informational data displays (operator inputs) as well as control board design (for operator output); (2) operator characteristics, including those skills, mental, physical, and emotional qualities which are functions of operator selection, training, and motivation; (3) job performance guides, the prescribed operating procedures for normal and emergency operations. This paper presents some of the major results of an evaluation of the effect of human engineering on operator performance in the control room. Primary attention is given to discussion of control room and control system design influence on the operator. Brief observations on the influences of operator characteristics and job performance guides (operating procedures) on performance in the control room are also given. Under the objectives of the study, special emphasis was placed on the evaluation of the control room-operator relationships for severe emergency conditions in the power plant. Consequently, this presentation is restricted largely to material related to emergency conditions in the control room, though it is recognized that human engineering of control systems is of equal (or greater) importance for many other aspects of plant operation
Operation and control software for APNEA
McClelland, J.H.; Storm, B.H. Jr.; Ahearn, J. [Lockheed-Martin Specialty Components, Largo, FL (United States)] [and others]
The human interface software for the Lockheed Martin Specialty Components (LMSC) Active/Passive Neutron Examination & Analysis System (APNEA) provides a user friendly operating environment for the movement and analysis of waste drums. It is written in Microsoft Visual C++ on a Windows NT platform. Object oriented and multitasking techniques are used extensively to maximize the capability of the system. A waste drum is placed on a loading platform with a fork lift and then automatically moved into the APNEA chamber in preparation for analysis. A series of measurements is performed, controlled by menu commands to hardware components attached as peripheral devices, in order to create data files for analysis. The analysis routines use the files to identify the pertinent radioactive characteristics of the drum, including the type, location, and quantity of fissionable material. At the completion of the measurement process, the drum is automatically unloaded and the data are archived in preparation for storage as part of the drum's data signature. 3 figs.
An operating environment for control systems on transputer networks
Tillema, H.G.; Schoute, Albert L.; Wijbrans, K.C.J.; Wijbrans, K.C.J.
The article describes an operating environment for control systems. The environment contains the basic layers of a distributed operating system. The design of this operating environment is based on the requirements demanded by controllers which can be found in complex control systems. Due to the
Environmental impact report addendum for the continued operation of Lawrence Livermore National Laboratory
Weston, R. F.
An environmental impact statement/environmental impact report (ES/EIR) for the continued operation and management of Lawrence Livermore National Laboratory (LLNL) was prepared jointly by the U.S. Department of Energy (DOE) and the University of California (UC). The scope of the document included near-term (within 5-10 years) proposed projects. The UC Board of Regents, as state lead agency under the California Environmental Quality Act (CEQA), certified and adopted the EIR by issuing a Notice of Determination on November 20, 1992. The DOE, as the lead federal agency under the National Environmental Policy Act (NEPA), adopted a Record of Decision for the ES on January 27, 1993 (58 Federal Register [FR] 6268). The DOE proposed action was to continue operation of the facility, including near-term proposed projects. The specific project evaluated by UC was extension of the contract between UC and DOE for UC's continued operation and management of LLNL (both sites) from October 1, 1992, through September 30, 1997. The 1992 ES/EIR analyzed impacts through the year 2002. The 1992 ES/EIR comprehensively evaluated the potential environmental impacts of operation and management of LLNL within the near-term future. Activities evaluated included programmatic enhancements and modifications of facilities and programs at the LLNL Livermore site and at LLNL's Experimental Test Site (Site 300) in support of research and development missions established for LLNL by Congress and the President. The evaluation also considered the impacts of infrastructure and building maintenance, minor modifications to buildings, general landscaping, road maintenance, and similar routine support activities
Oak Ridge Toxic Substances Control Act (TSCA) Incinerator test bed for continuous emissions monitoring systems (CEMS)
Gibson, L.V. Jr.
The Toxic Substances Control Act (TSCA) Incinerator, located on the K-25 Site at Oak Ridge, Tennessee, continues to be the only operational incinerator in the country that can process hazardous and radioactively contaminated polychlorinated biphenyl (PCB) waste. During 1996, the US Department of Energy (DOE) Environmental Management Office of Science and Technology (EM-50) and Lockheed Martin Energy Systems established a continuous emissions monitoring systems (CEMS) test bed and began conducting evaluations of CEMS under development to measure contaminants from waste combustion and thermal treatment stacks. The program was envisioned to promote CEMS technologies meeting requirements of the recently issued Proposed Standards for Hazardous Waste Combustors as well as monitoring technologies that will allay public concerns about mixed waste thermal treatment and accelerate the development of innovative treatment technologies. Fully developed CEMS, as well as innovative continuous or semi-continuous sampling systems not yet interfaced with a pollutant analyzer, were considered as candidates for testing and evaluation. Complementary to other Environmental Protection Agency and DOE sponsored CEMS testing and within compliant operating conditions of the TSCA Incinerator, prioritization was given to multiple metals monitors also having potential to measure radionuclides associated with particulate emissions. In August 1996, developers of two multiple metals monitors participated in field activities at the incinerator and a commercially available radionuclide particulate monitor was acquired for modification and testing planned in 1997. This paper describes the CEMS test bed infrastructure and summarizes completed and planned activities
Application of fuzzy logic operation and control to BWRs
Junichi Tanji; Mitsuo Kinoshita; Takaharu Fukuzaki; Yasuhiro Kobayashi
Fuzzy logic control schemes employing linguistic decision rules for flexible operator control strategies have undergone application tests in dynamic systems. The advantages claimed for fuzzy logic control are its abilities: (a) to facilitate direct use of skillful operator know-how for automatic operation and control of the systems and (b) to provide robust multivariable control for complex plants. The authors have also studied applications of fuzzy logic control to automatic startup operations and load-following control in boiling water reactors, pursuing these same advantages
A Hierarchical structure of key performance indicators for operation management and continuous improvement in production systems.
Kang, Ningxuan; Zhao, Cong; Li, Jingshan; Horst, John A
Key performance indicators (KPIs) are critical for manufacturing operation management and continuous improvement (CI). In modern manufacturing systems, KPIs are defined as a set of metrics to reflect operation performance, such as efficiency, throughput, availability, from productivity, quality and maintenance perspectives. Through continuous monitoring and measurement of KPIs, meaningful quantification and identification of different aspects of operation activities can be obtained, which enable and direct CI efforts. A set of 34 KPIs has been introduced in ISO 22400. However, the KPIs in a manufacturing system are not independent, and they may have intrinsic mutual relationships. The goal of this paper is to introduce a multi-level structure for identification and analysis of KPIs and their intrinsic relationships in production systems. Specifically, through such a hierarchical structure, we define and layer KPIs into levels of basic KPIs, comprehensive KPIs and their supporting metrics, and use it to investigate the relationships and dependencies between KPIs. Such a study can provide a useful tool for manufacturing engineers and managers to measure and utilize KPIs for CI.
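A hedged sketch of the layering idea: basic KPIs (times and quantities) are rolled up into comprehensive KPIs such as availability, effectiveness, quality ratio and OEE. The formulas below follow the commonly used ISO 22400-style definitions, but the exact KPI set and relationships analysed in the paper may differ.

```python
from dataclasses import dataclass

@dataclass
class BasicKpis:
    planned_busy_time: float         # h
    actual_production_time: float    # h
    produced_quantity: float         # units
    good_quantity: float             # units
    planned_runtime_per_item: float  # h/unit

def comprehensive_kpis(k: BasicKpis):
    """Roll basic KPIs up into comprehensive KPIs (ISO 22400-style definitions)."""
    availability = k.actual_production_time / k.planned_busy_time
    effectiveness = (k.planned_runtime_per_item * k.produced_quantity) / k.actual_production_time
    quality_ratio = k.good_quantity / k.produced_quantity
    oee = availability * effectiveness * quality_ratio
    return {"availability": availability, "effectiveness": effectiveness,
            "quality_ratio": quality_ratio, "OEE": oee}

print(comprehensive_kpis(BasicKpis(16.0, 14.0, 1300, 1240, 0.01)))
```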
Design of 95 GHz gyrotron based on continuous operation copper solenoid with water cooling
Borodin, Dmitri; Ben-Moshe, Roey; Einat, Moshe
The design work for a 2nd harmonic 95 GHz, 50 kW gyrotron based on a continuous operation copper solenoid is presented. Thermionic magnetron injection gun specifications were calculated according to the linear trade-off equation and simulated with the CST program. A numerical code is used for cavity design using the non-uniform string equation as well as particle motion in the "cold" cavity field. The mode TE02 with low Ohmic losses in the cavity walls was chosen as the operating mode. The solenoid is designed to induce a magnetic field of 1.8 T over a length of 40 mm in the interaction region with a homogeneity of ±0.34%. The solenoid has six concentric cylindrical segments (and two correction segments) of copper foil windings separated by water channels for cooling. The predicted temperature in continuous operation is below 93 °C. The parameters of the design together with simulation results of the electromagnetic cavity field, magnetic field, electron trajectories, and thermal analyses are presented
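The quoted field level and homogeneity can be sanity-checked with the standard on-axis formula for a finite solenoid (treating the winding as a single current sheet). The turn density, current and geometry below are assumed round numbers chosen to land near 1.8 T, not the actual magnet data.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def solenoid_axis_field(z, n_per_m, current, length, radius):
    """On-axis field of a finite solenoid (single equivalent current sheet):
    B(z) = (mu0*n*I/2) * [ (z+L/2)/sqrt((z+L/2)^2+R^2) - (z-L/2)/sqrt((z-L/2)^2+R^2) ]."""
    zp, zm = z + length / 2.0, z - length / 2.0
    return 0.5 * MU0 * n_per_m * current * (
        zp / np.hypot(zp, radius) - zm / np.hypot(zm, radius))

# Homogeneity over a 40 mm interaction region for an assumed geometry
z = np.linspace(-0.02, 0.02, 201)
B = solenoid_axis_field(z, n_per_m=2.0e4, current=72.0, length=0.30, radius=0.05)
ripple = (B.max() - B.min()) / (2 * B.mean()) * 100
print(f"B(0) = {B[z.size // 2]:.3f} T, ripple over +/-20 mm = +/-{ripple:.2f} %")
```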
Consideration of early closure or continued operation of a nuclear power plant
This publication provides information to management and executives of electrical utilities responsible for the operation of nuclear power plants who are tasked with decision making related to early closures or continued operation. This information is based on the experiences of a number of countries in addressing a spectrum of issues broader than only the economics of the operation of the plant itself. Any major decision involving changes in direction for a major investment such as a nuclear power plant has the potential to incur considerable additional costs for stakeholders. Major economic risks can be unexpectedly encountered when decisions based on a simplified economic understanding of energy options are successfully challenged on the grounds that choices and decisions have been made without accounting for some environmental, social or economic issues which are considered of prime significance to important stakeholders. Such risks include not only changes in project scope and delays in project implementation due to re-evaluations necessitated by such challenges, but risks related to the effectiveness, efficiency and safety of ongoing operations or shutdown maintenance of the nuclear power plant. Additional risks encountered at this stage are the adequacy of the decommissioning fund and the need to establish a process whereby the availability of adequate funds will be assured at the time of the final plant shutdown. This publication provides information on several of these additional issues important to key stakeholders, and on methods that allow for their assessment and consideration when developing recommendations related to early closures or continued operations of a NPP. This publication consists of two parts: Part I: Includes a discussion of the main issues for consideration, with emphasis on issues important to stakeholders in addition to plant owners. Part II: Provides an example of a basic analytical approach to the assessment of plant life cycle
Recollection is a continuous process: Evidence from plurality memory receiver operating characteristics.
Slotnick, Scott D; Jeye, Brittany M; Dodson, Chad S
Is recollection a continuous/graded process or a threshold/all-or-none process? Receiver operating characteristic (ROC) analysis can answer this question as the continuous model and the threshold model predict curved and linear recollection ROCs, respectively. As memory for plurality, an item's previous singular or plural form, is assumed to rely on recollection, the nature of recollection can be investigated by evaluating plurality memory ROCs. The present study consisted of four experiments. During encoding, words (singular or plural) or objects (single/singular or duplicate/plural) were presented. During retrieval, old items with the same plurality or different plurality were presented. For each item, participants made a confidence rating ranging from "very sure old", which was correct for same plurality items, to "very sure new", which was correct for different plurality items. Each plurality memory ROC was the proportion of same versus different plurality items classified as "old" (i.e., hits versus false alarms). Chi-squared analysis revealed that all of the plurality memory ROCs were adequately fit by the continuous unequal variance model, whereas none of the ROCs were adequately fit by the two-high threshold model. These plurality memory ROC results indicate recollection is a continuous process, which complements previous source memory and associative memory ROC findings.
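The model comparison described above hinges on the shape of the predicted ROC: an unequal-variance signal-detection (continuous) model predicts a curved ROC, while the two-high-threshold model predicts a linear one. The sketch below generates both predictions for assumed parameter values; it is an illustration of the competing models, not a re-analysis of the reported data.

```python
import numpy as np
from scipy.stats import norm

criteria = np.linspace(-3, 3, 201)

# Continuous (unequal-variance signal detection) model: curved ROC
d_prime, sigma_old = 1.2, 1.3
hits_uvsd = norm.sf(criteria, loc=d_prime, scale=sigma_old)
fas_uvsd = norm.sf(criteria, loc=0.0, scale=1.0)

# Two-high-threshold model: linear ROC (Ro, Rn are assumed detect probabilities)
Ro, Rn = 0.55, 0.55
fas_2ht = np.linspace(0.0, 1.0, 201)
hits_2ht = np.clip(Ro + (1.0 - Ro) * fas_2ht / (1.0 - Rn), 0.0, 1.0)

print("UVSD ROC points:", list(zip(np.round(fas_uvsd[::50], 2), np.round(hits_uvsd[::50], 2))))
print("2HT  ROC points:", list(zip(np.round(fas_2ht[::50], 2), np.round(hits_2ht[::50], 2))))
```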
Control of particle size distribution and agglomeration in continuous precipitators
Burkhart, L.; Hoyt, R.C.; Oolman, T.
Progress concerning a program to develop a scientific basis for preparing ceramic powders with reproducible properties which can be predicted from process operating conditions and which can be varied in a systematic fashion to facilitate research in sintering operations is reported
Thermodynamic analysis and theoretical study of a continuous operation solar-powered adsorption refrigeration system
Hassan, H.Z.; Mohamad, A.A.
Due to the intermittent nature of the solar radiation, the day-long continuous production of cold is a challenge for solar-driven adsorption cooling systems. In the present study, a newly developed solar-powered adsorption cooling system is introduced. The proposed system is able to produce cold continuously throughout the 24 h of the day. The theoretical thermodynamic operating cycle of the system is based on adsorption at constant temperature. Both the cooling system operating procedure and the theoretical thermodynamic cycle are described and explained. Moreover, a steady state differential thermodynamic analysis is performed for all components and processes of the introduced system. The analysis is based on the energy conservation principle and the equilibrium dynamics of the adsorption and desorption processes. The Dubinin–Astakhov adsorption equilibrium equation is used in this analysis. Furthermore, the thermodynamic properties of the refrigerant are calculated from its equation of state. The case studied represents a water chiller which uses activated carbon–methanol as the working pair. The chiller is found to produce a daily mass of 2.63 kg of cold water at 0 °C from water at 25 °C per kg of adsorbent. Moreover, the proposed system attains a cooling coefficient of performance of 0.66. - Highlights: • A new continuous operation solar-driven adsorption refrigeration system is introduced. • The theoretical thermodynamic cycle is presented and explained. • A complete thermodynamic analysis is performed for all components and processes of the system. • Activated carbon–methanol is used as the working pair in the case study
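The Dubinin–Astakhov equation named in the abstract can be evaluated directly to see the uptake swing between adsorption and desorption conditions. The characteristic parameters (W0, E, n) and the methanol saturation-pressure correlation below are illustrative textbook-style values, not the ones used in the paper's analysis.

```python
import numpy as np

R = 8.314  # J/(mol K)

def p_sat_methanol(T):
    """Rough Antoine-type saturation pressure of methanol in Pa (assumed correlation)."""
    # log10(P[mmHg]) = A - B / (C + T[degC]); textbook-style coefficients.
    A, B, C = 8.08097, 1582.271, 239.726
    p_mmhg = 10.0 ** (A - B / (C + (T - 273.15)))
    return p_mmhg * 133.322

def da_uptake(T, p, W0=0.3, E=5200.0, n=1.8):
    """Dubinin-Astakhov equilibrium uptake (kg methanol per kg carbon).
    W = W0 * exp(-(A/E)^n),  A = R*T*ln(p_sat/p)."""
    A = R * T * np.log(p_sat_methanol(T) / p)
    return W0 * np.exp(-(A / E) ** n)

p_evap = p_sat_methanol(273.15)       # evaporator at 0 degC
print("uptake at 25 degC adsorber:", round(da_uptake(298.15, p_evap), 3), "kg/kg")
print("uptake at 95 degC desorber:", round(da_uptake(368.15, p_evap), 3), "kg/kg")
```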
Fixed points for some non-obviously contractive operators defined in a space of continuous functions
C. Avramescu; Cristian Vladimirescu
Let $X$ be an arbitrary (real or complex) Banach space, endowed with the norm $\left| \cdot \right|$. Consider the space of continuous functions $C\left( \left[ 0,T\right], X\right)$ $\left( T>0\right)$, endowed with the usual topology, and let $M$ be a closed subset of it. One proves that each operator $A:M\rightarrow M$ fulfilling for all $x,y\in M$ and for all $t\in \left[ 0,T\right]$ the condition \begin{eqnarray*} \left| \left( Ax\right) \left( t\right) -\left( Ay\right) \l...
Deterministic Quantum Secure Direct Communication with Dense Coding and Continuous Variable Operations
Han Lianfang; Chen Yueming; Yuan Hao
We propose a deterministic quantum secure direct communication protocol using dense coding. Two check photon sequences are used to check the security of the channels between the message sender and the receiver. Continuous variable operations, instead of the usual discrete unitary operations, are performed on the travel photons so that the security of the present protocol can be enhanced. Therefore some specific attacks, such as the denial-of-service attack, the intercept-measure-resend attack and the invisible photon attack, can be prevented in an ideal quantum channel. In addition, the scheme is still secure in a noisy channel. Furthermore, this protocol has the advantage of high capacity and can be realized in experiment. (general)
Continuous thickness control of extruded pipes with assistance of microcomputers
Breil, J.
For economic and quality-assurance reasons, a constant wall thickness of extruded pipes in both the circumferential and extrusion directions is an important production aim. Therefore a microcomputer-controlled system was developed which controls die centering with electric motors. The control of the wall thickness distribution was realized with two concepts: a dead-time-subjected control with a rotating on-line wall thickness measuring instrument, and an adaptive control with sensors in the pipe die. With a PI algorithm, eccentricities of 30% of the wall thickness could be brought below a trigger level of 2% within three dead times.
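Since the cited work controls wall thickness through a PI algorithm acting across a transport dead time, a hedged sketch of such a loop is given below; the gains, dead time and process gain are illustrative assumptions rather than the reported tuning.

```python
from collections import deque

# Minimal sketch of a dead-time-subjected PI loop: the rotating wall-thickness
# gauge sits downstream of the die, so it sees a centring correction only after
# a transport dead time. All numbers are assumptions for illustration.
Kp, Ki = 0.3, 0.08        # PI gains (assumed)
dt = 1.0                  # sampling period, s
L = 8                     # dead time, in samples

offset = 0.30             # uncorrected die eccentricity, mm (e.g. 30 % of the wall)
integral = 0.0
applied = deque([0.0] * L, maxlen=L)   # corrections still travelling towards the gauge

for k in range(80):
    measured = offset + applied[0]     # eccentricity measured at the gauge
    error = -measured                  # setpoint is zero eccentricity
    integral += Ki * error * dt
    correction = Kp * error + integral # electric-motor die-centring command
    applied.append(correction)
    if k % 16 == 0:
        print(f"t = {k*dt:4.0f} s   measured eccentricity = {measured:+.3f} mm")
```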
Use of the TACL [Thaumaturgic Automated Control Logic] system at CEBAF [Continuous Electron Beam Accelerator Facility] for control of the Cryogenic Test Facility
Navarro, E.; Keesee, M.; Bork, R.; Grubb, C.; Lahti, G.; Sage, J.
A logic-based control software system, called Thaumaturgic Automated Control Logic (TACL), is under development at the Continuous Electron Beam Accelerator Facility in Newport News, VA. The first version of the software was placed in service in November, 1987 for control of cryogenics during the first superconducting RF cavity tests at CEBAF. In August, 1988 the control system was installed at the Cryogenic Test Facility (CTF) at CEBAF. CTF generated liquid helium in September, 1988 and is now in full operation for the current round of cavity tests. TACL is providing a powerful and flexible controls environment for the operation of CTF. 3 refs
... emission monitoring systems are operating correctly? 60.1730 Section 60.1730 Protection of Environment... continuous emission monitoring systems are operating correctly? (a) Conduct initial, daily, quarterly, and annual evaluations of your continuous emission monitoring systems that measure oxygen (or carbon dioxide...
40 CFR 62.15185 - How do I make sure my continuous emission monitoring systems are operating correctly?
... emission monitoring systems are operating correctly? 62.15185 Section 62.15185 Protection of Environment... make sure my continuous emission monitoring systems are operating correctly? (a) Conduct initial, daily, quarterly, and annual evaluations of your continuous emission monitoring systems that measure oxygen (or...
40 CFR Table 9 to Subpart Eeee of... - Continuous Compliance With Operating Limits-High Throughput Transfer Racks
... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Continuous Compliance With Operating Limits-High Throughput Transfer Racks 9 Table 9 to Subpart EEEE of Part 63 Protection of Environment...—Continuous Compliance With Operating Limits—High Throughput Transfer Racks As stated in §§ 63.2378(a) and (b...
47 CFR 90.473 - Operation of internal transmitter control systems through licensed fixed control points.
... 47 Telecommunication 5 2010-10-01 2010-10-01 false Operation of internal transmitter control... Transmitter Control Internal Transmitter Control Systems § 90.473 Operation of internal transmitter control systems through licensed fixed control points. An internal transmitter control system may be operated...
Continuous use of an adaptive lung ventilation controller in critically ...
May 5, 1995 ... Adaptive lung ventilation (ALV) refers to closed-loop mechanical ventilation designed to work ... optimise the controller performance, the volume controller .... PawEE), vital capacity IYC), an index of airway resistance relative to ...
Control and Operation of Islanded Distribution System
Mahat, Pukar
... operational challenges. But, on the other hand, it has also opened up some opportunities. One opportunity/challenge is an islanded operation of a distribution system with DG unit(s). Islanding is a situation in which a distribution system becomes electrically isolated from the remainder of the power system ... deviation and real power shift. When a distribution system, with all its generators operating at maximum power, is islanded, the frequency will go down if the total load is more than the total generation. An under-frequency load shedding procedure for islanded distribution systems with DG unit(s) based ... states. Short circuit power also changes when some of the generators in the distribution system are disconnected. This may result in elongation of fault clearing time and hence disconnection of equipment (including generators) in the distribution system or unnecessary operation of protective devices ...
Improvements of PKU PMECRIS for continuous hundred hours CW proton beam operation
Peng, S. X.; Ren, H. T.; Zhang, T.; Zhang, J. F.; Xu, Y.; Guo, Z. Y.; Zhang, A. L.; Chen, J. E.
In order to improve the source stability, a long term continuous wave (CW) proton beam experiment has been carried out with the Peking University compact permanent magnet 2.45 GHz ECR ion source (PKU PMECRIS). Before such an experiment a lot of improvements and modifications were completed on the source body, the Faraday cup and the PKU ion source test bench. At the beginning of 2015, a continuous operation of PKU PMECRIS for 306 h with more than 50 mA CW beam was carried out after the success of many short term tests. No plasma generator failure or high voltage breakdown was observed during that running period and the proton source reliability is near 100%. Total beam availability, which is defined as 35-keV beam-on time divided by elapsed time, was higher than 99% [S. X. Peng et al., Chin. Phys. B 24(7), 075203 (2015)]. A re-inspection was performed after another additional 100 h of operation (counting time) and no obvious sign of component failure was observed. Counting the previous source testing time as well, the PMECRIS longevity is now demonstrated to be greater than 460 h. This paper mainly concentrates on the improvements made for this long term experiment.
Remote control scanning electron microscope with Web operation
Yamada, A.; Hirahara, O.; Date, M.; Lozbin, V.; Tsuchida, T.; Sugano, N.
Part of the SEM control allows the observation conditions, such as the usual acceleration voltage and magnification, to be set up through button choices on the observation screen. Stage movement is also needed during remote SEM control for image observation. However, when the stage is moved via a LAN, joystick operation is difficult for continuous movement because the image refresh rate is delayed by the LAN traffic. Therefore continuous movement is not used; instead a stage-centering method is applied, in which the operator clicks the position of interest on the image and the stage automatically moves it to the centre of the image. This method is independent of network traffic and reliably brings the target location to the centre of the observation screen. The remote control function of the SEM was implemented and the SEM image is displayed in a web browser (Internet Explorer). In this format no special software is necessary and the SEM can be operated from a general web browser, so no dedicated client PC is required. Copyright (2002) Australian Society for Electron Microscopy Inc
High-performance control of continuously variable transmissions
Meulen, van der S.H.
Nowadays, developments with respect to the pushbelt continuously variable transmission (CVT) are mainly directed towards a reduction of the fuel consumption of a vehicle. The fuel consumption of a vehicle is affected by the variator of the CVT, which transfers the torque and varies the transmission
Measuring the exhaust gas dew point of continuously operated combustion plants
Fehler, D.
Low waste-gas temperatures represent one means of minimizing the energy consumption of combustion facilities. However, condensation in the waste gas should be prevented, since it could result in the destruction of plant components. Measuring the waste-gas dew point allows the combustion parameters to be controlled in such a way that the plant can operate at low temperatures without danger of condensation. Dew point sensors thus provide an important signal for optimizing combustion facilities.
JWST Wavefront Sensing and Control: Operations Plans, Demonstrations, and Status
Perrin, Marshall; Acton, D. Scott; Lajoie, Charles-Philippe; Knight, J. Scott; Myers, Carey; Stark, Chris; JWST Wavefront Sensing & Control Team
After JWST launches and unfolds in space, its telescope optics will be aligned through a complex series of wavefront sensing and control (WFSC) steps to achieve diffraction-limited performance. This iterative process will comprise about half of the observatory commissioning time (~ 3 out of 6 months). We summarize the JWST WFSC process, schedule, and expectations for achieved performance, and discuss our team's activities to prepare for an effective & efficient telescope commissioning. During the recently-completed OTIS cryo test at NASA JSC, WFSC demonstrations showed the flight-like operation of the entire JWST active optics and WFSC system from end to end, including all hardware and software components. In parallel, the same test data were processed through the JWST Mission Operations Center at STScI to demonstrate the readiness of ground system components there (such as the flight operations system, data pipelines, archives, etc). Moreover, using the Astronomer's Proposal Tool (APT), the entire telescope commissioning program has been implemented, reviewed, and is ready for execution. Between now and launch our teams will continue preparations for JWST commissioning, including further rehearsals and testing, to ensure a successful alignment of JWST's telescope optics.
Control of three different continuous pharmaceutical manufacturing processes: Use of soft sensors.
Rehrl, Jakob; Karttunen, Anssi-Pekka; Nicolaï, Niels; Hörmann, Theresa; Horn, Martin; Korhonen, Ossi; Nopens, Ingmar; De Beer, Thomas; Khinast, Johannes G
One major advantage of continuous pharmaceutical manufacturing over traditional batch manufacturing is the possibility of enhanced in-process control, reducing out-of-specification and waste material by appropriate discharge strategies. The decision on material discharge can be based on the measurement of active pharmaceutical ingredient (API) concentration at specific locations in the production line via process analytic technology (PAT), e.g. near-infrared (NIR) spectrometers. The implementation of the PAT instruments is associated with monetary investment and the long term operation requires techniques avoiding sensor drifts. Therefore, our paper proposes a soft sensor approach for predicting the API concentration from the feeder data. In addition, this information can be used to detect sensor drift, or serve as a replacement/supplement of specific PAT equipment. The paper presents the experimental determination of the residence time distribution of selected unit operations in three different continuous processing lines (hot melt extrusion, direct compaction, wet granulation). The mathematical models describing the soft sensor are developed and parameterized. Finally, the suggested soft sensor approach is validated on the three mentioned, different continuous processing lines, demonstrating its versatility. Copyright © 2018 Elsevier B.V. All rights reserved.
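A soft sensor of the kind described above amounts to propagating the feeder signal through a residence time distribution (RTD) model of the downstream unit. A minimal sketch, assuming a tanks-in-series RTD and made-up feeder data rather than the experimentally determined RTDs of the three lines:

```python
import numpy as np
from math import gamma

dt = 1.0                        # s
t = np.arange(0.0, 600.0, dt)   # 10 min horizon

def tanks_in_series_rtd(t, tau=120.0, n=5):
    """E(t) for n ideal tanks in series with total mean residence time tau."""
    return t ** (n - 1) / (gamma(n) * (tau / n) ** n) * np.exp(-n * t / tau)

E = tanks_in_series_rtd(t)
E /= np.trapz(E, t)             # normalise so the RTD integrates to one

# Feeder data: API mass fraction of the blend entering the unit, with a step
# change at t = 200 s (e.g. after a feeder refill / disturbance).
x_in = np.where(t < 200.0, 0.10, 0.12)

# Predicted outlet API fraction = RTD-weighted history of the inlet signal.
x_out = np.convolve(x_in, E)[: len(t)] * dt

print(f"predicted outlet API fraction at t = 300 s: {x_out[int(300 / dt)]:.4f}")
```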
The Use of Management Control Systems and Operations Management Techniques
Edelcio Koitiro Nisiyama
It is well known that both management control systems (MCSs) and operations management (OM) are related to firm performance; however, an integrated study that involves MCS and OM within the context of firm performance is still lacking. This research aimed to examine the relationships among the use of MCSs and OM techniques and firm performance in the Brazilian auto parts industry. Simons' levers of control framework was used to characterise the uses of MCSs, and OM techniques, such as total quality management (TQM) and continuous improvement programmes, were adopted. The results obtained through the structural equation modelling indicated that the diagnostic use of MCSs is positively associated with the goals of cost reduction. In addition, the interactive use of MCSs is positively associated with the objectives of introducing new products, which is consistent with previous research. Additionally, OM techniques are positively related to cost reduction but have no direct relationship with the introduction of new products.
An electronic image processing device featuring continuously selectable two-dimensional bipolar filter functions and real-time operation
Charleston, B.D.; Beckman, F.H.; Franco, M.J.; Charleston, D.B.
A versatile electronic-analogue image processing system has been developed for use in improving the quality of various types of images with emphasis on those encountered in experimental and diagnostic medicine. The operational principle utilizes spatial filtering which selectively controls the contrast of an image according to the spatial frequency content of relevant and non-relevant features of the image. Noise can be reduced or eliminated by selectively lowering the contrast of information in the high spatial frequency range. Edge sharpness can be enhanced by accentuating the upper midrange spatial frequencies. Both methods of spatial frequency control may be adjusted continuously in the same image to obtain maximum visibility of the features of interest. A precision video camera is used to view medical diagnostic images, either prints, transparencies or CRT displays. The output of the camera provides the analogue input signal for both the electronic processing system and the video display of the unprocessed image. The video signal input to the electronic processing system is processed by a two-dimensional spatial convolution operation. The system employs charge-coupled devices (CCDs), both tapped analogue delay lines (TADs) and serial analogue delay lines (SADs), to store information in the form of analogue potentials which are constantly being updated as new sampled analogue data arrive at the input. This information is convolved with a programmed bipolar radially symmetrical hexagonal function which may be controlled and varied at each radius by the operator in real-time by adjusting a set of front panel controls or by a programmed microprocessor control. Two TV monitors are used, one for processed image display and the other for constant reference to the original image. The working prototype has a full-screen display matrix size of 200 picture elements per horizontal line by 240 lines. The matrix can be expanded vertically and horizontally for the
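The core operation described above is a two-dimensional convolution with a radially symmetric bipolar function. A minimal digital sketch of that idea follows; the Gaussian centre/surround kernel and the square (rather than hexagonal) sampling are assumptions for illustration, not the CCD implementation of the prototype.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def bipolar_kernel(radius=7, sigma_centre=1.0, sigma_surround=3.0, balance=0.9):
    """Positive centre / negative surround kernel: the balance parameter trades
    low-frequency contrast against mid/high-frequency accentuation."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x ** 2 + y ** 2
    centre = np.exp(-r2 / (2 * sigma_centre ** 2))
    surround = np.exp(-r2 / (2 * sigma_surround ** 2))
    return centre / centre.sum() - balance * surround / surround.sum()

def spatial_filter(image, kernel):
    """Direct 2-D convolution (same output size, edge padding)."""
    r = kernel.shape[0] // 2
    padded = np.pad(image, r, mode="edge")
    windows = sliding_window_view(padded, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                                   # a vertical edge test image
enhanced = img + spatial_filter(img, bipolar_kernel())
print(enhanced[32, 28:36].round(2))                 # overshoot/undershoot at the edge
```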
On the continuous spectral component of the Floquet operator for a periodically kicked quantum system
McCaw, James; McKellar, B.H.J.
By a straightforward generalization, we extend the work of Combescure [J. Stat. Phys. 59, 679 (1990)] from rank-1 to rank-N perturbations. The requirement for the Floquet operator to be pure point is established and compared to that in Combescure. The result matches that in McCaw and McKellar [J. Math. Phys. 46, 032108 (2005)]. The method here is an alternative to that work. We show that if the condition for the Floquet operator to be pure point is relaxed, then in the case of the δ-kicked harmonic oscillator, a singularly continuous component of the Floquet operator spectrum exists. We also provide an in-depth discussion of the conjecture presented in the work of Combescure for the case where the unperturbed Hamiltonian is more general. We link the physics conjecture directly to a number-theoretic conjecture of Vinogradov [The Method of Trigonometrical Sums in the Theory of Numbers (Interscience, London, 1954)] and show that a solution of Vinogradov's conjecture solves the physics conjecture. The result is extended to the rank-N case. The relationship between our work and the work of Bourget [J. Math. Anal. Appl. 276, 28 (2002); 301, 65 (2005)] on the physics conjecture is discussed.
Principles of control automation of soil compacting machine operating mechanism
Anatoly Fedorovich, Tikhonov; Drozdov, Anatoly
The relevance of high-quality compaction of soil bases in the erection of embankments and foundations for building and structure construction is presented. The quality of compaction of the gravel and sandy soils provides the bearing capacity and, accordingly, the strength and durability of the constructed buildings. It has been established that the compaction quality depends on many external factors, such as surface roughness and soil moisture, and on the granulometry, chemical composition and degree of elasticity of the originally filled soil. An analysis of the technological processes of soil base compaction in foreign and domestic information sources showed that such an important problem as continuous monitoring of the actual degree of soil compaction during machine operation can be solved only with the use of modern means of automation. An effective vibrodynamic method of compacting gravel and sand material for building structure foundations of various applications is justified and proposed. A method of continuous monitoring of soil compaction by measuring the amplitudes and frequencies of harmonic oscillations on the compacted surface is defined, which makes it possible to determine the basic elements of the monitoring system for the operating and other mechanisms of the soil compacting machine: an accelerometer, a band-pass filter, a vibro-harmonics unit and an on-board microcontroller. Adjustable parameters have been established to improve the degree of soil compaction and the performance of the soil compacting machine, and the dependences of the adjustable parameters on the overall index, the degree of soil compaction, have been determined experimentally. A structural scheme of automatic control of the soil compacting machine operating mechanism and its operation algorithm have been developed.
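Monitoring by "measurement of the amplitudes and frequencies of harmonic oscillations" is commonly realised as a harmonic-ratio indicator computed from the accelerometer spectrum. A hedged sketch of that computation follows; the synthetic signal model and the indicator definition are assumptions, not the cited system's algorithm.

```python
import numpy as np

fs = 2000.0                          # accelerometer sampling rate, Hz (assumed)
f0 = 30.0                            # vibrator excitation frequency, Hz (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)

def accel_signal(distortion):
    """Synthetic acceleration: fundamental plus a 2nd harmonic whose amplitude
    grows as the soil stiffens (the drum starts to bounce on the compacted layer)."""
    return np.sin(2 * np.pi * f0 * t) + distortion * np.sin(2 * np.pi * 2 * f0 * t)

def harmonic_ratio(a):
    spec = np.abs(np.fft.rfft(a * np.hanning(len(a))))
    freqs = np.fft.rfftfreq(len(a), 1.0 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]
    return amp(2 * f0) / amp(f0)     # CMV-like compaction indicator (assumed form)

for pass_no, d in enumerate([0.05, 0.15, 0.35], start=1):
    print(f"pass {pass_no}: harmonic ratio = {harmonic_ratio(accel_signal(d)):.2f}")
```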
Frequency Control for Island Operation of Bornholm Power System
Cha, Seung-Tae; Wu, Qiuwei; Zhao, Haoran
This paper presents a coordinated control strategy of a battery energy storage system (BESS) and distributed generation (DG) units for the island operation of the Danish island of Bornholm. The Bornholm power system is able to transit from the grid connected operation with the Nordic power system to the isolated island operation. In order to ensure the secure island operation, the coordinated control of the BESS and the DG has been proposed to stabilize the frequency of the system after the transition to the island operation. In the proposed coordinate control scheme, the BESS is used to provide the primary frequency control and the DG units are used to provide the secondary frequency control. As such, the proposed control scheme can strike a balance of the frequency control speed and the energy used from the BESS for the frequency control support. The real-time model of the Bornholm power system ...
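A minimal numerical sketch of the coordination idea (fast BESS droop for primary control, slower DG integral action for secondary control) is given below; the inertia, droop and gain values are illustrative assumptions, not Bornholm data.

```python
# Toy single-bus frequency model: BESS droop reacts instantly, DG secondary
# control restores frequency and relieves the BESS. All constants are assumed.
H, D = 4.0, 1.0          # inertia constant (s) and load damping (p.u.)
R_bess = 0.05            # BESS droop (p.u. frequency / p.u. power)
Ki_sec = 0.3             # DG secondary (integral) gain
T_dg = 5.0               # DG governor time constant, s

dt, T_end = 0.01, 60.0
f_dev, p_bess, p_dg, agc = 0.0, 0.0, 0.0, 0.0
dP_load = 0.1            # 10 % load step at the moment of islanding

for k in range(int(T_end / dt)):
    p_bess = -f_dev / R_bess              # primary control (BESS droop)
    agc += -Ki_sec * f_dev * dt           # secondary control signal to the DG
    p_dg += (agc - p_dg) / T_dg * dt      # DG governor lag
    f_dev += (p_bess + p_dg - dP_load - D * f_dev) / (2 * H) * dt  # swing equation

print(f"frequency deviation after {T_end:.0f} s: {f_dev*50:.3f} Hz "
      f"(BESS {p_bess:.3f} p.u., DG {p_dg:.3f} p.u.)")
```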
Elements of an advanced integrated operator control station
Clarke, M.M.; Kreifeldt, J.G.
One of the critical determinants of performance for any remotely operated maintenance system is the compatibility achieved between elements of the man/machine interface (e.g., master manipulator controller, controls, displays) and the human operator. In the remote control engineering task of the Consolidated Fuel Reprocessing Program, considerable attention has been devoted to optimizing the man/machine interface of the operator control station. This system must be considered an integral element of the overall maintenance work system which includes transporters, manipulators, remote viewing, and other parts. The control station must reflect the integration of the operator team, control/display panels, manipulator master controllers, and remote viewing monitors. Human factors principles and experimentation have been used in the development of an advanced integrated operator control station designed for the advance servomanipulator. Key features of this next-generation design are summarized in this presentation. 7 references, 4 figures
Contamination control plan for prelaunch operations
Austin, J. D.
A unified, systematic plan is presented for contamination control for space flight systems. Allowable contaminant quantities, or contamination budgets, are determined based on system performance margins and system-level allowable degradations. These contamination budgets are compared to contamination rates in ground environments to establish the controls required in each ground environment. The use of feedback from contamination monitoring and some contamination control procedures are discussed.
Operational Strategy of CBPs for load balancing of Operators in Advanced Main Control Room
Kim, Seunghwan; Kim, Yochan; Jung, Wondea
With the use of a computer-based control room in the APR1400 (Advanced Pressurized Reactor-1400), the operators' behaviors in the main control room have changed. Although the working environment of the operators has changed a great deal, digitalized interfaces can also change the cognitive tasks or activities of the operators. First, a shift supervisor (SS) can check the conduct of the procedures and the execution of actions by board operators (BOs) while directly confirming the operating variables, without relying on the BOs. Second, all operators have added the use of a new CBP and soft controls to their work, increasing their procedural workload. New operational strategies for CBPs are therefore necessary to balance the operators' task load in the APR1400. In this paper, we compared the workloads of operators in an APR1400 who work with two different usages of the CBP: an SS-oriented usage and an SS-BO collaborative usage. The workloads of operators in an advanced main control room were evaluated with the COCOA method. Two types of CBP usage were defined and their effects on the workloads were investigated. The results showed that the workloads of the operators in a control room can be balanced according to the CBP usage by assigning control authority to the operators.
Continuous residual reinforcement learning for traffic signal control optimization
Aslani, Mohammad; Seipel, Stefan; Wiering, Marco
Traffic signal control can be naturally regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems owing to its large state space. A straightforward approach to address this challenge is to control traffic signals based on
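The abstract is truncated, but the titular method combines residual reinforcement learning with a continuous traffic state. A hedged sketch of the residual-gradient TD update on a toy queue model follows; the environment, features and gains are assumptions for illustration and do not reproduce the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, gamma, alpha = 8, 0.95, 0.01
w = np.zeros(n_features)

def features(queues):
    """Simple polynomial features of normalised queue lengths (assumption)."""
    q = np.clip(queues / 20.0, 0.0, 1.0)
    return np.concatenate([q, q ** 2])

def step(queues):
    """Toy queue dynamics: Poisson arrivals minus served vehicles; reward = -total queue."""
    arrivals = rng.poisson(2.0, size=4)
    served = np.minimum(queues, 3)
    new_q = queues + arrivals - served
    return new_q, -float(new_q.sum())

queues = np.zeros(4)
for t in range(5000):
    phi = features(queues)
    queues_next, reward = step(queues)
    phi_next = features(queues_next)
    delta = reward + gamma * w @ phi_next - w @ phi
    # Residual-gradient update: descend the squared Bellman error with respect to
    # BOTH the current and the successor features (unlike semi-gradient TD).
    w -= alpha * delta * (gamma * phi_next - phi)
    queues = queues_next

print("learned value of a congested intersection:", w @ features(np.full(4, 10.0)))
```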
MODELLING AND CONTROL OF CONTINUOUS STIRRED TANK REACTOR WITH PID CONTROLLER
Artur Wodołażski
This paper presents a model of dynamics control for a continuous stirred tank reactor (CSTR) for methanol synthesis in a three-phase system. The reactor simulation was carried out for steady and transient states. The efficiency ratio required to achieve maximum product output per unit of reactor volume was calculated. Closed-loop simulation of the reactor dynamics provided data for tuning the PID (proportional-integral-derivative) controller. The results of the regulation process provide data for optimum reactor production capacity, together with the elimination of local hot spots and temperature runaway.
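To make the closed-loop idea concrete, a hedged sketch of PI-type temperature control of an exothermic CSTR is given below. The reactor is the generic cooled first-order A→B model used in control textbooks, not the three-phase methanol-synthesis model of the paper, and all parameter values and gains are assumptions.

```python
import numpy as np

dt = 0.05                                   # min
Kp, Ki = 4.0, 0.5                           # controller gains (assumed)

q, V = 100.0, 100.0                         # feed flow (L/min), volume (L)
Ca_in, T_in = 1.0, 350.0                    # feed concentration (mol/L), feed T (K)
k0, EaR = 7.2e10, 8750.0                    # Arrhenius factor (1/min), Ea/R (K)
dHr, rhoCp, UA = -5.0e4, 239.0, 5.0e4       # J/mol, J/(L K), J/(min K)

Ca, T = 0.55, 348.0                         # perturbed initial state
T_sp, Tc_nom = 350.0, 300.0                 # temperature setpoint, nominal coolant T
integ = 0.0

for k in range(int(30.0 / dt)):             # 30 min of operation
    err = T_sp - T
    integ += err * dt
    Tc = Tc_nom + Kp * err + Ki * integ     # manipulated variable: coolant temperature

    r = k0 * np.exp(-EaR / T) * Ca          # reaction rate, mol/(L min)
    dCa = q / V * (Ca_in - Ca) - r
    dT = q / V * (T_in - T) - dHr * r / rhoCp + UA / (V * rhoCp) * (Tc - T)
    Ca += dCa * dt
    T += dT * dt

print(f"after 30 min: T = {T:.2f} K (setpoint {T_sp}), Ca = {Ca:.3f} mol/L, Tc = {Tc:.1f} K")
```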
The Effect of Operations Control on Reliability
Van Oort, N.; Van Nes, R.
Zoetermeer in The Netherlands. During peak hours the frequency on some trajectories is about 24 vehicles an hour. Dealing with these high frequencies and offering travelers a high quality product, according to waiting times as well as the probability of getting a seat, the operator designed a three
Enhanced Engine Control for Emergency Operation
Litt, Jonathan S.
C-MAPSS40k engine simulation has been developed and is available to the public. The authenticity of the engine performance and controller enabled the development of realistic enhanced control modes through controller modification alone. Use of enhanced control modes improved stability and control of an impaired aircraft. - Fast Response is useful for manual manipulation of the throttles - Use of Fast Response improved stability as part of a yaw rate feedback system. - Use of Overthrust shortened takeoff distance, but was generally useful in flight, too. Initial lack of pilot familiarity resulted in discomfort, especially with yaw rate feedback, but that was the only drawback, overall the pilot found the enhanced modes very helpful.
Adaptive control of chaotic continuous-time systems with delay
Tian, Yu-Chu; Gao, Furong
A simple delay system governed by a first-order differential-delay equation may behave chaotically, but the conditions for the system to have such behaviors have not been well recognized. In this paper, a set of rules is postulated first for the conditions for the delay system to display chaos. A model-reference adaptive control scheme is then proposed to control the chaotic system state to converge to an arbitrarily given reference trajectory with certain and uncertain system parameters. Numerical examples are given to analyze the chaotic behaviors of the delay system and to demonstrate the effectiveness of the proposed adaptive control scheme.
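The claim that a first-order differential-delay equation can behave chaotically is easy to reproduce numerically. A minimal sketch using the Mackey–Glass equation as a stand-in (the paper's specific system and its model-reference adaptive control law are not reproduced here):

```python
import numpy as np

tau, beta, gamma_, n = 17.0, 0.2, 0.1, 10.0   # classic chaotic parameter set
dt, T_end = 0.1, 300.0
steps, delay = int(T_end / dt), int(tau / dt)

x = np.empty(steps + delay)
x[:delay] = 0.9                               # constant history on [-tau, 0]

for k in range(delay, steps + delay - 1):
    x_d = x[k - delay]                        # delayed state x(t - tau)
    dx = beta * x_d / (1.0 + x_d ** n) - gamma_ * x[k]
    x[k + 1] = x[k] + dt * dx                 # explicit Euler step

print("min/max of the simulated trajectory:",
      round(float(x[delay:].min()), 3), round(float(x[delay:].max()), 3))
```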
Continuation calculus
Geron, B.; Geuvers, J.H.; de'Liguoro, U.; Saurin, A.
Programs with control are usually modeled using lambda calculus extended with control operators. Instead of modifying lambda calculus, we consider a different model of computation. We introduce continuation calculus, or CC, a deterministic model of computation that is evaluated using only head
Predicting core losses and efficiency of SRM in continuous current mode of operation using improved analytical technique
Parsapour, Amir; Dehkordi, Behzad Mirzaeian; Moallem, Mehdi
In applications in which the high torque per ampere at low speed and rated power at high speed are required, the continuous current method is the best solution. However, there is no report on calculating the core loss of SRM in continuous current mode of operation. Efficiency and iron loss calculation which are complex tasks in case of conventional mode of operation is even more involved in continuous current mode of operation. In this paper, the Switched Reluctance Motor (SRM) is modeled using finite element method and core loss and copper loss of SRM in discontinuous and continuous current modes of operation are calculated using improved analytical techniques to include the minor loop losses in continuous current mode of operation. Motor efficiency versus speed in both operation modes is obtained and compared. - Highlights: • Continuous current method for Switched Reluctance Motor (SRM) is explained. • An improved analytical technique is presented for SRM core loss calculation. • SRM losses in discontinuous and continuous current operation modes are presented. • Effect of mutual inductances on SRM performance is investigated
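The "minor loop" contribution mentioned above can be illustrated with Steinmetz-type loss bookkeeping in which each minor B-H loop adds its own hysteresis term on top of the major loop. The sketch below is only indicative; the coefficients, frequency and flux-density excursions are assumptions, and the paper's finite-element-based improved analytical technique is not reproduced.

```python
# Steinmetz-style core-loss estimate with a minor-loop correction (all values assumed).
k_h, alpha, beta = 0.018, 1.0, 2.0     # hysteresis coefficient and exponents
k_e = 8.0e-5                           # eddy-current coefficient
f_elec = 400.0                         # electrical frequency of one stator pole, Hz

B_major = 1.6                          # peak flux density of the major loop, T
minor_loops = [0.25, 0.18, 0.10]       # peak-to-peak excursions of minor loops, T

# hysteresis: major loop plus one contribution per minor loop (centred-loop approximation)
P_hyst = k_h * f_elec ** alpha * (B_major ** beta
                                  + sum((dB / 2.0) ** beta for dB in minor_loops))
# classical eddy-current term based on the dominant harmonic of dB/dt
P_eddy = k_e * (f_elec * B_major) ** 2

print(f"specific core loss ~ {P_hyst + P_eddy:.1f} W/kg "
      f"(hysteresis {P_hyst:.1f}, eddy {P_eddy:.1f})")
```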
Removal of heavy metals using a microbial active, continuously operated sand filter
Ebner, C.
Heavy metals play an important role within the spectrum of the various pollutants, emitted into the environment via human activities. In contrast to most organic pollutants, heavy metal can not be degraded. Many soils, lakes and rivers show a high contamination with heavy metals due to the enrichment of these pollutants. In addition to existing chemical-physical and biological technologies for the treatment of heavy metal containing waste waters a demand for new, efficient and low-cost cleaning technologies exists, particularly for high volumes of weakly contaminated waters. Such a technology was developed within the framework of a scientific project of the European Union. The approach makes use of a continuously operated, moving-bed Astrasand filter, which has been operated as a continuous biofilm reactor. By inoculation of the reactor with bacteria providing different, defined mechanisms of metal immobilization, and by continuous supply of suitable nutrients, a metal-immobilizing biofilm is built up and regenerated continuously. Metal-enriched biomass is removed continuously from the system, and the contained metals can be recycled by pyrometallurgical treatment of the biomass. The subjects of the present work were the optimization of the nutrient supply for the process of metal removal, the investigation of the toxicity of different waste waters, the optimization of inoculation and biofilm formation, set-up and operation of a lab scale sand filter and the operation of a pilot scale sand filter treating rinsing water of a chemical nickel plating plant. First, basic parameters like toxicity of heavy metal-containing waste waters and the influence of the nutrition of bacteria on biosorption and total metal removal were examined, using freely suspended bacteria in batch culture. Concerning toxicity great differences could be found within the spectrum of heavy metal-containing waste waters tested. Some waters completely inhibited growth, while others did not
Development of control system in abdominal operating ROV
ZHANG Weikang; WANG Guanxue; XU Guohua; LIU Chang; SHEN Xiong
In order to satisfy all the requirements of Unmanned Underwater Vehicle(UUV)recovery tasks, a new type of abdominal operating Remote Operated Vehicle(ROV) was developed. The abdominal operating ROV is different from the general ROV which works by a manipulator, as it completes the docking and recovery tasks of UUVs with its abdominal operating mechanism. In this paper, the system composition and principles of the abdominal operating ROV are presented. We then propose a framework for a control...
Active control of continuous air jet with bifurcated synthetic jets
Dančová Petra
Synthetic jets (SJs) have many significant applications, and the number of applications is increasing all the time. In this research the main focus is on primary flow control, which can be used effectively to increase heat transfer. This paper deals with experimental research on the effect of two SJs operated in bifurcated mode and used to control an axisymmetric air jet. First, the control synthetic jets were measured alone. After an adjustment, the primary axisymmetric jet was added into the system. For comparison, the primary flow without synthetic jet control was also measured. All experiments were performed using the PIV method, for which synchronization between the synthetic jets and the PIV system was necessary.
Boundary Control of Linear Evolution PDEs - Continuous and Discrete
Rasmussen, Jan Marthedal
Consider a partial differential equation (PDE) of evolution type, such as the wave equation or the heat equation. Assume now that you can influence the behavior of the solution by setting the boundary conditions as you please. This is boundary control in a broad sense. A substantial amount of literature exists in the area of theoretical results concerning control of partial differential equations. The results have included existence and uniqueness of controls, minimum time requirements, regularity of domains, and many others. Another huge research field is that of control theory for ordinary differential equations. This field has mostly concerned engineers and others with practical applications in mind. This thesis makes an attempt to bridge the two research areas. More specifically, we make finite dimensional approximations to certain evolution PDEs, and analyze how properties of the discrete...
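A minimal sketch of the finite-dimensional approximation step described above: the 1-D heat equation is semi-discretized into x' = Ax + Bu, with the control entering only through a boundary node. Grid size, diffusivity and the boundary input are illustrative assumptions.

```python
import numpy as np

n, L, kappa = 50, 1.0, 1.0
h = L / (n + 1)
x_grid = np.linspace(h, L - h, n)

# Interior finite-difference Laplacian with Dirichlet ends; the right-end value is
# the boundary control u(t), which enters the system only through the last row.
A = kappa / h ** 2 * (np.diag(-2.0 * np.ones(n))
                      + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
B = np.zeros((n, 1))
B[-1, 0] = kappa / h ** 2

x = np.sin(np.pi * x_grid)            # initial temperature profile
target = 0.5                          # desired boundary temperature (assumed)
dt = 0.4 * h ** 2 / kappa             # explicit Euler step within the stability limit

for k in range(int(2.0 / dt)):
    u = target                        # simplest open-loop boundary control
    x = x + dt * (A @ x + B.flatten() * u)

print("temperature near the controlled end after t = 2:", x[-3:].round(3))
```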
Turnpike theory of continuous-time linear optimal control problems
Zaslavski, Alexander J
Individual turnpike results are of great interest due to their numerous applications in engineering and in economic theory; in this book the study is focused on new results of turnpike phenomenon in linear optimal control problems. The book is intended for engineers as well as for mathematicians interested in the calculus of variations, optimal control, and in applied functional analysis. Two large classes of problems are studied in more depth. The first class studied in Chapter 2 consists of linear control problems with periodic nonsmooth convex integrands. Chapters 3-5 consist of linear control problems with autonomous nonconvex and nonsmooth integrands. Chapter 6 discusses a turnpike property for dynamic zero-sum games with linear constraints. Chapter 7 examines genericity results. In Chapter 8, the description of structure of variational problems with extended-valued integrands is obtained. Chapter 9 ends the exposition with a study of turnpike phenomenon for dynamic games with extended value integran...
Implementation of an operator model with error mechanisms for nuclear power plant control room operation
Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek
SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate the human operator's cognitive characteristics during emergency situations at nuclear power plants. An operator model with error mechanisms has been developed and integrated into SACOM to simulate the human operator's cognitive information processing based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with an emergency operation scenario has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for simulating the operator's cognitive behavior in emergency situations.
Continuous-Wave Operation of a Frequency-Tunable 460-GHz Second-Harmonic Gyrotron for Enhanced Nuclear Magnetic Resonance
Torrezan, Antonio C.; Han, Seong-Tae; Mastovsky, Ivan; Shapiro, Michael A.; Sirigiri, Jagadishwar R.; Temkin, Richard J.; Griffin, Robert G.; Barnes, Alexander B.
The design, operation, and characterization of a continuous-wave (CW) tunable second-harmonic 460-GHz gyrotron are reported. The gyrotron is intended to be used as a submillimeter-wave source for 700-MHz nuclear magnetic resonance experiments with sensitivity enhanced by dynamic nuclear polarization. The gyrotron operates in the whispering-gallery mode TE11,2 and has generated 16 W of output power with a 13-kV 100-mA electron beam. The start oscillation current measured over a range of magnetic field values is in good agreement with theoretical start currents obtained from linear theory for successive high-order axial modes TE11,2,q. The minimum start current is 27 mA. Power and frequency tuning measurements as a function of the electron cyclotron frequency have also been carried out. A smooth frequency tuning range of 1 GHz was obtained for the operating second-harmonic mode either by magnetic field tuning or beam voltage tuning. Long-term CW operation was evaluated during an uninterrupted period of 48 h, where the gyrotron output power and frequency were kept stable to within ±0.7% and ±6 ppm, respectively, by a computerized control system. Proper operation of an internal quasi-optical mode converter implemented to transform the operating whispering-gallery mode to a Gaussian-like beam was also verified. Based on the images of the gyrotron output beam taken with a pyroelectric camera, the Gaussian-like mode content of the output beam was computed to be 92% with an ellipticity of 12%. PMID:23761938
Decontamination and recovery of a nuclear facility to allow continued operation
Cavaghan, Josh
A power supply failure caused a loss of power to key ventilation systems in an operating nuclear facility. The in-cell depression was lost, which led to an egress of activity through prepared areas and into the normal operating areas. After an initial programme of radiological monitoring to quantify and categorise the activity in the operating areas, a plan was developed for the decontamination and remediation of the plant. The scope of the recovery plan was substantial and featured several key stages. The contamination was almost entirely 137Cs, reflecting the α:β/γ ratio for the facility. In addition to the physical remediation work, several administrative controls were introduced such as new local rules, safety signage to indicate abnormal radiological conditions in certain areas and training of the decontamination teams. All areas of plant which were contaminated were returned to normal access arrangements and the plant was successfully returned to full operational capability, <12 months from the date of the event. (authors)
Influence of Insulation Monitoring Devices on the Operation of DC Control Circuits
Olszowiec, Piotr, E-mail: [email protected] [Erea Polaniec (Poland)
The insulation level of DC control circuits is an important safety-critical factor and, thus, should be subject to continuous and periodic monitoring. The methods used for monitoring the insulation in live circuits may, however, disturb the reliable operation of control relays. The risks of misoperation and failure to reset of relays posed by the operation of various insulation monitoring and fault location systems are evaluated.
Operators manual for a computer controlled impedance measurement system
Gordon, J.
Operating instructions for a computer controlled impedance measurement system based on Hewlett Packard instrumentation are given. Hardware details, program listings, flowcharts and a practical application are included.
Seismic qualification program plan for continued operation at DOE-SRS nuclear material processing facilities
Talukdar, B.K.; Kennedy, W.N.
The Savannah River Facilities were for the most part constructed and maintained to standards that were developed by Du Pont and are not rigorously in compliance with the current General Design Criteria (GDC), DOE Order 6430.1A requirements. In addition, many of the facilities were built more than 30 years ago, well before DOE standards for design were issued. The Westinghouse Savannah River Company (WSRC) has developed a program to address the evaluation of the Nuclear Material Processing (NMP) facilities against GDC requirements. The program includes a facility baseline review, assessment of areas that are not in compliance with the GDC requirements, planned corrective actions or exemptions to address the requirements, and a safety assessment. The authors, from their direct involvement with the program, describe the program plan for seismic qualification, including other natural phenomena hazards, for existing NMP facility structures to continue operation. Professionals involved in similar efforts at other DOE facilities may find the program useful.
[Controlling and operation management in hospitals].
Vagts, Dierk A
The economic pressure on the health system, and especially on hospitals, is growing rapidly. Hence, economic knowledge becomes imperative for people in medical executive positions. In advanced and forward-looking hospitals, controlling is gaining more and more weight because it takes over a coordinating responsibility. Ideally, controlling guides the teamwork of managers (CEOs) and medical executives by weighing medical necessities against the economic framework. Controlling contributes to achieving optimal efficiency of a hospital in a highly competitive environment by providing medical and economic data on a regular basis. A close, open-minded and trusting cooperation between all people involved is imperative. Hence, controlling in the proper meaning of the word cannot flourish in dominant and hierarchical hospital structures.
Command and Control for Joint Air Operations
systems, to include collaborative air planning tools such as the theater battle management core system ( TBMCS ). Operational level air planning occurs in...sight communications and data exchange equipment in order to respond to joint force requirements. For example, the TBMCS is often used. The use of ATO...generation and dissemination software portions of TBMCS has been standardized. This ATO feature allows the JAOC to be interoperable with other
Successful continuous injection of coal into gasification and PFBC system operating pressures exceeding 500 psi - DOE funded program results
Saunders, T.; Aldred, D.; Rutkowski, M. [Stamet Inc., North Hollywood, CA (United States)]
The current US energy program is focussed towards commercialisation of coal-based power and IGCC technologies that offer significant improvements in efficiency and reductions in emissions. For gasification and pressurised fluidized bed combustors to be widely accepted, certain operational components need to be significantly improved. One of the most pressing is provision of reliable, controlled and cost-effective solid fuel feeding into the pressure environment. The US Department of Energy has funded research to develop the unique Stamet 'Posimetric® Solids Pump' to be capable of feeding coal into current gasification and PFBC operating pressures. The research objective is a mechanical rotary device able to continuously feed and meter coal into pressured environments of at least 34 bar (500 psi). The research program comprised an initial design and testing phase to feed coal into 20 bar (300 psi) and a second phase for feeding into 34 bar (500 psi). The first phase target was achieved in December 2003. Following modification and optimization, in January 2005, the Stamet Pump achieved a world-record pressure level for continuous injection of coal of 38 bar (560 psi). Research is now targeting 69 bar (1000 psi). The paper reviews the successful pump design, optimisations and results of the testing. 16 figs., 2 tabs.
Design of Air Traffic Control Operation System
Gabriela STROE
This paper presents a numerical simulation framework for different aircraft, based on the specific aircraft data that can be incorporated in the model and the equations of motion that are consequently solved. Aircraft flight design involves various technical steps and requires the use of sophisticated software with modeling and simulation capabilities. Within the flight simulation model, the aerodynamic model can be regarded as the most complex and most important. With appropriate aerodynamic modeling, the aerodynamic forces and moments acting on the aircraft's center of gravity can be computed accurately. These forces and moments are then used to solve the equations of motion. The development of control and computing technology makes advanced flight control strategies possible. Advanced control techniques tend to make the control design and its implementation more complicated, with more control loops or channels; accordingly, the autopilot of modern aircraft includes a variety of automatic control systems that aid and support flight navigation and flight management and that enhance and/or augment the stability characteristics of the airplane. In this context it is therefore very important to choose the dynamics that will satisfy the performance and robustness specifications.
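The simulation core described above reduces to integrating the aircraft equations of motion once the aerodynamic forces are modelled. A hedged 3-DOF point-mass sketch follows; the mass, drag polar and flight condition are assumptions for a generic transport aircraft, not data for any specific type.

```python
import numpy as np

g, rho, S, m = 9.81, 1.0, 122.6, 60000.0        # SI units; constant air density assumed
CD0, k_ind = 0.025, 0.045                       # parabolic drag polar (assumed)

def derivatives(state, thrust, CL, bank):
    """3-DOF point-mass equations of motion: state = [V, gamma, psi, x, y, h]."""
    V, gam, psi, x, y, h = state
    q = 0.5 * rho * V ** 2 * S
    L, D = q * CL, q * (CD0 + k_ind * CL ** 2)
    return np.array([
        (thrust - D) / m - g * np.sin(gam),                # dV/dt
        (L * np.cos(bank) - m * g * np.cos(gam)) / (m * V),# dgamma/dt
        L * np.sin(bank) / (m * V * np.cos(gam)),          # dpsi/dt
        V * np.cos(gam) * np.cos(psi),                     # dx/dt
        V * np.cos(gam) * np.sin(psi),                     # dy/dt
        V * np.sin(gam),                                   # dh/dt
    ])

# Roughly trimmed gentle 15-degree banked turn (thrust and CL are assumptions).
state = np.array([150.0, 0.0, 0.0, 0.0, 0.0, 1500.0])
dt = 0.5
for _ in range(int(120.0 / dt)):                # two minutes of flight
    state += dt * derivatives(state, thrust=47000.0, CL=0.44, bank=np.radians(15))

print("position (km):", (state[3:5] / 1e3).round(1), " altitude (m):", round(float(state[5])))
```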
Synthetic olive mill wastewater treatment by Fenton's process in batch and continuous reactors operation.
Esteves, Bruno M; Rodrigues, Carmen S D; Madeira, Luís M
Degradation of total phenol (TPh) and organic matter (expressed as total organic carbon, TOC) of a simulated olive mill wastewater was evaluated by the Fenton oxidation process under batch and continuous mode conditions. A mixture of six phenolic acids usually found in these agro-industrial wastewaters was used for this purpose. The study focused on the optimization of key operational parameters of the Fenton process in a batch reactor, namely Fe2+ dosage, hydrogen peroxide concentration, pH, and reaction temperature. On the assessment of the process efficiency, > 99% of TPh and > 56% of TOC removal were attained when [Fe2+] = 100 ppm, [H2O2] = 2.0 g/L, T = 30 °C, and initial pH = 5.0, after 300 min of reaction. Under those operational conditions, experiments on a continuous stirred-tank reactor (CSTR) were performed for different space-time values (τ). TOC and TPh removals of 47.5 and 96.9%, respectively, were reached at steady-state (for τ = 120 min). High removal of COD (> 75%) and BOD5 (> 70%) was achieved for both batch and CSTR optimum conditions; analysis of the BOD5/COD ratio also revealed an increase in the effluent's biodegradability. Despite the high removal of lumped parameters, the treated effluent did not meet the Portuguese legal limits for direct discharge of wastewaters into water bodies, which indicates that a coupled chemical-biological process may be the best solution for real olive mill wastewater treatment.
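The reported trend of removal with space-time is consistent with simple CSTR behaviour: for an effective first-order decay, steady-state removal is X = kτ/(1 + kτ). The sketch below only illustrates that saturation; the rate constant is an assumption, not a value fitted to the data.

```python
# Steady-state removal of a lumped pollutant in a CSTR with first-order kinetics.
k_eff = 0.26                      # effective first-order constant, 1/min (assumed)

for tau in (30.0, 60.0, 120.0):   # space-time values, min
    X = k_eff * tau / (1.0 + k_eff * tau)
    print(f"tau = {tau:5.0f} min  ->  steady-state removal = {100 * X:.1f} %")
```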
Performance of a continuously operated flocculent sludge UASB reactor with slaughterhouse wastewater
Sayed, S.; Zeeuw, W. de
This investigation was carried out to assess the performance of a continuously operated, one-stage, flocculent sludge upflow anaerobic sludge blanket (UASB) reactor treating slaughterhouse wastewater at a process temperature of 30 °C. The results indicate that the type of substrate ingredients, coarse suspended solids, colloidal and soluble compounds in the wastewater, affect the performance of the reactor because of different mechanisms involved in their removal and their subsequent conversion into methane. Two different mechanisms are distinguished. An entrapment mechanism prevails for the elimination of coarse suspended solids while an adsorption mechanism is involved in the removal of the colloidal and soluble fractions of the wastewater. The results obtained lead to the conclusion that the system can satisfactorily handle organic space loads up to 5 kg COD m⁻³ day⁻¹ at 30 °C. The data indicate, however, that continuing heavy accumulation of substrate components in the reactor is detrimental to the stability of the anaerobic treatment process as the accumulation can lead to sludge flotation and consequently to a complete loss of the active biomass from the reactor.
Removal of triazine herbicides from aqueous systems by a biofilm reactor continuously or intermittently operated.
Sánchez-Sánchez, R; Ahuatzi-Chacón, D; Galíndez-Mayer, J; Ruiz-Ordaz, N; Salmerón-Alcocer, A
The impact of pesticide movement via overland flow or tile drainage water on the quality of receiving water bodies has been a serious concern in the last decades; thus, for remediation of water contaminated with herbicides, bioreaction systems designed to retain biomass have been proposed. In this context, the aim of this study was to evaluate the atrazine and terbutryn biodegradation capacity of a microbial consortium, immobilized in a biofilm reactor (PBR), packed with fragments of porous volcanic stone. The microbial consortium, constituted by four predominant bacterial strains, was used to degrade a commercial formulation of atrazine and terbutryn in the biofilm reactor, intermittently or continuously operated at volumetric loading rates ranging from 44 to 306 mg L(-1) d(-1). The complete removal of both herbicides was achieved in both systems; however, higher volumetric removal rates were obtained in the continuous system. It was demonstrated that the adjuvants of the commercial formulation of the herbicide significantly enhanced the removal of atrazine and terbutryn. Copyright © 2013 Elsevier Ltd. All rights reserved.
Continuous, saturation, and discontinuous tokamak plasma vertical position control systems
Mitrishkin, Yuri V., E-mail: [email protected] [M. V. Lomonosov Moscow State University, Faculty of Physics, Moscow 119991 (Russian Federation); Pavlova, Evgeniia A., E-mail: [email protected] [M. V. Lomonosov Moscow State University, Faculty of Physics, Moscow 119991 (Russian Federation); Kuznetsov, Evgenii A., E-mail: [email protected] [Troitsk Institute for Innovation and Fusion Research, Moscow 142190 (Russian Federation); Gaydamaka, Kirill I., E-mail: [email protected] [V. A. Trapeznikov Institute of Control Sciences of the Russian Academy of Sciences, Moscow 117997 (Russian Federation)
Highlights: • Robust new linear state feedback control system for tokamak plasma vertical position. • Plasma vertical position relay control system with voltage inverter in sliding mode. • Design of full models of multiphase rectifier and voltage inverter. • First-order unit approximation of full multiphase rectifier model with high accuracy. • Wider range of unstable plant parameters of stable control system with multiphase rectifier. - Abstract: This paper is devoted to the design and comparison of unstable plasma vertical position control systems in the T-15 tokamak with the application of two types of actuators: a multiphase thyristor rectifier and a transistor voltage inverter. An unstable dynamic element obtained by the identification of plasma-physical DINA code was used as the plasma model. The simplest static feedback state space control law was synthesized as a linear combination of signals accessible to physical measurements, namely the plasma vertical displacement, the current, and the voltage in a horizontal field coil, to solve the pole placement problem for a closed-loop system. Only one system distinctive parameter was used to optimize the performance of the feedback system, viz., a multiple real pole. A first-order inertial unit was used as the rectifier model in the feedback. A system with a complete rectifier model was investigated as well. A system with the voltage inverter model and static linear controller was brought into a sliding mode. As this takes place, real time delays were taken into account in the discontinuous voltage inverter model. The comparison of the linear and sliding mode systems showed that the linear system enjoyed an essentially wider range of the plant model parameters where the feedback system was stable.
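The design step described in the abstract (static state feedback with one multiple real pole as the tuning knob, and a first-order unit standing in for the multiphase rectifier) can be sketched as follows; the plant matrices are illustrative assumptions, not the DINA-identified T-15 model.

```python
import numpy as np
from scipy.signal import place_poles

# State x = [vertical displacement z, horizontal-field-coil current i, rectifier voltage v].
# All numbers below are assumptions chosen only to give an unstable plant of this structure.
gamma = 80.0        # open-loop growth rate of the vertical instability, 1/s
k_u = 500.0         # coupling from coil current to the displacement dynamics
T_r = 3.0e-3        # rectifier (first-order unit) time constant, s

A = np.array([[gamma, k_u,        0.0],
              [0.0,   -50.0,     20.0],
              [0.0,    0.0, -1.0 / T_r]])
B = np.array([[0.0], [0.0], [1.0 / T_r]])

p = -150.0
# place_poles rejects poles whose multiplicity exceeds rank(B), so the single
# tuning parameter p is split into three nearly identical real poles.
K = place_poles(A, B, [p, 1.01 * p, 0.99 * p]).gain_matrix

print("state-feedback gains [z, i, v]:", K.round(2))
print("closed-loop poles:", np.linalg.eigvals(A - B @ K).round(1))
```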
Puerto Rico Seismic Network Operations During and After the Hurricane Maria: Response, Continuity of Operations, and Experiences
Vanacore, E. A.; Baez-Sanchez, G.; Huerfano, V.; Lopez, A. M.; Lugo, J.
The Puerto Rico Seismic Network (PRSN) is an integral part of earthquake and tsunami monitoring in Puerto Rico and the Virgin Islands. The PRSN conducts scientific research as part of the University of Puerto Rico Mayaguez, conducts the earthquake monitoring for the region, runs extensive earthquake and tsunami education and outreach programs, and acts as a Tsunami Warning Focal Point Alternate for Puerto Rico. During and in the immediate aftermath of Hurricane Maria, the PRSN duties and responsibilities evolved from a seismic network to a major information and communications center for the western side of Puerto Rico. Hurricane Maria effectively destroyed most communications on island, critically between the eastern side of the island where Puerto Rico's Emergency Management's (PREMA) main office and the National Weather Service (NWS) is based and the western side of the island. Additionally, many local emergency management agencies on the western side of the island lost a satellite based emergency management information system called EMWIN which provides critical tsunami and weather information. PRSN's EMWIN system remained functional and consequently via this system and radio communications PRSN became the only information source for NWS warnings and bulletins, tsunami alerts, and earthquake information for western Puerto Rico. Additionally, given the functional radio and geographic location of the PRSN, the network became a critical communications relay for local emergency management. Here we will present the PRSN response in relation to Hurricane Maria including the activation of the PRSN devolution plan, adoption of duties, experiences and lessons learned for continuity of operations and adoption of responsibilities during future catastrophic events.
Supervisory Control and Diagnostics System Distributed Operating System
McGoldrick, P.R.
This paper contains a description of the Supervisory Control and Diagnostics System (SCDS) Distributed Operating System. The SCDS consists of nine 32-bit minicomputers with shared memory. The system's main purpose is to control a large Mirror Fusion Test Facility
Continuous Solidification of Immiscible Alloys and Microstructure Control
Jiang, Hongxiang; Zhao, Jiuzhou
Immiscible alloys have aroused considerable interest in the last few decades due to their excellent physical and mechanical characteristics as well as potential industrial applications. To date, plenty of research has been carried out to investigate the solidification of immiscible alloys on the ground or in space, and great progress has been made. It has been demonstrated that continuous solidification techniques have a great future in the manufacturing of immiscible alloys. It is also indicated that the addition of surface-active micro-alloying elements or inoculants for the nucleation of the minority-phase droplets, and the proper application of external fields, e.g., a static magnetic field, electric current, microgravity, etc., may promote the formation of immiscible alloys with an expected microstructure. The objective of this article is to review the research work in this field.
Database mirroring in fault-tolerant continuous technological process control
R. Danel
This paper describes implementations of mirroring technology in selected database systems – Microsoft SQL Server, MySQL and Caché. By simulating critical failures, the systems' behavior and their resilience against failure were tested. The aim was to determine whether database mirroring is suitable for use in continuous metallurgical processes for ensuring a fault-tolerant solution at affordable cost. Present-day database systems are characterized by high robustness and are resistant to sudden system failure. Database mirroring technologies are reliable, and even low-budget projects can be provided with a decent fault-tolerant solution. However, the database technologies available for low-budget projects are not suitable for use in real-time systems.
Control of algal production in a high rate algal pond: investigation through batch and continuous experiments.
Derabe Maobe, H; Onodera, M; Takahashi, M; Satoh, H; Fukazawa, T
For decades, arid and semi-arid regions in Africa have faced issues related to water availability for drinking, irrigation and livestock purposes. To tackle these issues, a laboratory-scale greywater treatment system based on high rate algal pond (HRAP) technology was investigated in order to guide the operation of the pilot plant implemented on the 2iE campus in Ouagadougou (Burkina Faso). Because of the high suspended solids concentration generally found in effluents of this system, the aim of this study is to improve the performance of HRAPs in terms of algal productivity and removal. To determine the selection mechanism of self-flocculated algae, three sets of sequencing batch reactors (SBRs) and three sets of continuous flow reactors (CFRs) were operated. Despite operation with the same solids retention time and the similarity of the algal growth rates found in these reactors, the algal productivity was higher in the SBRs owing to the short hydraulic retention time of 10 days in these reactors. By using a CFR with twice the volume of our experimental CFRs, the algal concentration can be controlled during operation under similar physical conditions in both reactors.
Recommendations to ASME for code guidelines and criteria for continued operation of equipment
Harvey, J.F.
In May 1988, the American Society of Mechanical Engineers, ASME, asked the Pressure Vessel Research Council, PVRC, to review the part it should play in the continued operation of equipment originally designed and fabricated to the ASME codes and rules. This was prompted solely by an economic opportunity: the capital expenditure to replace plants was far more costly than evaluating, repairing, and extending the nominal design life of the individual components. For instance, nuclear plants are normally designed for a life of 40 years, while fossil-fired facilities may have been designed for other lifetimes, yet at the end of their original design life they may actually have many useful years remaining. While this action was economically prompted, it inherently involved two further aspects, namely (1) safety and (2) legal considerations. The safety of operating personnel is not in question. While codes for fossil components do not specify design lives, their adoption by many states provides a legal means of procedure in the event of a mishap. This recognizes a cradle-to-grave safety responsibility. It is toward maintaining ASME's leadership as a code authority that this report has been prepared.
Continuously Operating Biosensor and Its Integration into a Hermetically Sealed Medical Implant
Mario Birkholz
An integration concept for an implantable biosensor for the continuous monitoring of blood sugar levels is presented. The system architecture is based on technical modules used in cardiovascular implants in order to minimize legal certification efforts for its prospective use in medical applications. The sensor chip operates via the principle of affinity viscometry, which is realized by a fully embedded biomedical microelectromechanical system (BioMEMS) prepared in 0.25-µm complementary metal–oxide–semiconductor (CMOS)/BiCMOS technology. Communication with a base station is established in the 402–405 MHz band used for medical implant communication services (MICS). The implant shall operate within the interstitial tissue, and the hermetic sealing of the electronic system against interaction with the body fluid is established using a titanium housing. Only the sensor chip and the antenna are encapsulated in an epoxy header closely connected to the metallic housing. The study demonstrates that biosensor implants for the sensing of low-molecular-weight metabolites in the interstitial tissue may successfully rely on components already established in cardiovascular implantology.
Identification of Barriers Towards Change and Proposal to Institutionalize Continuous Improvement Programs in Manufacturing Operations
Alvair Silveira Torres Jr.
A multi-case research study was conducted in a sample of Brazilian manufacturing companies concerning their Continuous Improvement (CI) programs in manufacturing operations. Stakeholder interviews and performance analyses were conducted. The study aims to analyze the existence or absence of an institutionalized CI culture in manufacturing operations, identify barriers and difficulties within the process, and propose a model for change. As a result of the research, it was observed that despite the considerable motivation of staff, rapid gains for the company, and superior results during the early phases of the CI program, time and again such results were either not upheld or faded out over time, delivering no significant mid-term or long-term results, due to poor management of changes. This happened mainly as a result of a lack of strategic alignment at all levels of the organization, translated into measurable activities and projects, coached and mentored by the middle and upper management throughout the implementation and maintenance of the program. The selected cases showed a decline in performance after two years of CI program start-up. Learning, unity and process ownership among participants, by means of interactions, are necessary to absorb and incorporate changes, instead of merely "smart words".
Room temperature continuous wave mid-infrared VCSEL operating at 3.35 μm
Jayaraman, V.; Segal, S.; Lascola, K.; Burgner, C.; Towner, F.; Cazabat, A.; Cole, G. D.; Follman, D.; Heu, P.; Deutsch, C.
Tunable vertical cavity surface emitting lasers (VCSELs) offer a potentially low cost tunable optical source in the 3-5 μm range that will enable commercial spectroscopic sensing of numerous environmentally and industrially important gases including methane, ethane, nitrous oxide, and carbon monoxide. Thus far, achieving room temperature continuous wave (RTCW) VCSEL operation at wavelengths beyond 3 μm has remained an elusive goal. In this paper, we introduce a new device structure that has enabled RTCW VCSEL operation near the methane absorption lines at 3.35 μm. This device structure employs two GaAs/AlGaAs mirrors wafer-bonded to an optically pumped active region comprising compressively strained type-I InGaAsSb quantum wells grown on a GaSb substrate. This substrate is removed in processing, as is one of the GaAs mirror substrates. The VCSEL structure is optically pumped at room temperature with a CW 1550 nm laser through the GaAs substrate, while the emitted 3.3 μm light is captured out of the top of the device. Power and spectrum shape measured as a function of pump power exhibit clear threshold behavior and robust singlemode spectra.
Operation control equipment for BWR type reactor
Izumi, Masayuki; Takeda, Renzo.
Purpose: To improve the temperature balance in a feedwater heater by obtaining the objective value of the feedwater enthalpy from calculations on the respective measured values and controlling the opening or closing of an extraction valve so that the objective value coincides with the measured value, thereby averaging the axial power distribution. Constitution: A plurality of stages of extraction lines are connected to a turbine, and extraction valves are provided on each line. From the measured values of reactor pressure, reactor core flow rate, vapor flow rate and reactor core inlet enthalpy, the objective feedwater enthalpy is obtained using a heat balance and is fed as an extraction valve opening or closing signal from the control equipment; the extraction stages of the turbine are altered in accordance with this signal, and the feedwater enthalpy is thereby controlled. (Sekiya, K.)
On Discrete Time Control of Continuous Time Systems
Poulsen, Niels Kjølstad
This report is meant as a supplement or an extension to the material used in connection to or after the courses Stochastic Adaptive Control (02421) and Static and Dynamic Optimization (02711) given at the department Department of Informatics and Mathematical Modelling, The Technical University...
Analysis and Improvement of Control Algorithm for Operation Mode Transition due to Input Channel Trouble in Control Systems
Ahn, Myunghoon; Kim, Woogoon; Yim, Hyeongsoon [KEPCO Engineering and Construction Co., Daejeon (Korea, Republic of)]
The PI (Proportional plus Integral) controller, which is the essential functional block in control systems, can automatically perform stable control of an important plant process while reducing the steady-state error and improving the transient response. However, if the received input PV (Process Variable) is not normal due to input channel trouble, it will be difficult to control the system automatically. For this reason, many control systems are implemented to change the operation mode from automatic to manual in the PI controller when a failed input PV is detected. If the PI controller stays in automatic mode at all times, the control signal varies as changes of the input PV are continuously reflected in the control algorithm. Otherwise, since the controller changes into manual mode at t=0, the control signal is fixed at the last PI controller output and thus feedback control is not performed anymore until the operator takes an action such as changing the operation mode. As a result of analysis and simulations of the controller's operation modes in all the cases of input channel trouble, we discovered that it is more appropriate to maintain the automatic mode despite the bad quality of the PV. Therefore, we improved the control system algorithm, reflecting the analysis results, for the operator's convenience and the stability of the control system.
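A minimal sketch of the two policies compared in the abstract above, assuming a toy first-order plant and a simulated window in which the PV quality flag is bad: either the controller output is frozen (manual mode) or the PI law keeps running in automatic. The gains, the plant, the fault window and the setpoint step are all illustrative choices; the sketch shows the mode logic only, not the actual control system analysed in the paper.

# Hedged sketch: discrete PI loop comparing (a) freezing the output when the PV
# quality flag is bad ("manual" mode) with (b) staying in automatic mode.
# The measurement itself is not degraded here; only the mode logic is modelled.

def simulate(policy="stay_auto", kp=2.0, ki=0.5, dt=0.1, steps=300):
    pv = 0.0                 # process variable of a first-order lag plant (tau = 2 s)
    out = 0.0
    integral = 0.0
    history = []
    for k in range(steps):
        sp = 1.0 if k < 120 else 2.0          # setpoint step during the trouble window
        bad_quality = 100 <= k < 150          # simulated input-channel trouble window
        if bad_quality and policy == "freeze_output":
            pass                               # manual mode: hold last controller output
        else:
            err = sp - pv                      # automatic mode: keep regulating
            integral += ki * err * dt
            integral = max(-5.0, min(5.0, integral))   # simple anti-windup clamp
            out = kp * err + integral
        pv += dt * (out - pv) / 2.0            # plant update
        history.append(pv)
    return history

for policy in ("stay_auto", "freeze_output"):
    h = simulate(policy)
    print(f"{policy:14s}: PV at end of trouble window = {h[149]:.2f}, final PV = {h[-1]:.2f}")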
Continuous bilateral thoracic paravertebral blockade for analgesia after cardiac surgery: a randomised, controlled trial.
Lockwood, Geoff G; Cabreros, Leilani; Banach, Dorota; Punjabi, Prakash P
Continuous bilateral thoracic paravertebral blockade has been used for analgesia after cardiac surgery, but its efficacy has never been formally tested. Fifty adult patients were enrolled in a double-blind, randomised, controlled study of continuous bilateral thoracic paravertebral infusion of 0.5% lidocaine (1 mg·kg⁻¹·h⁻¹) for analgesia after coronary surgery. Control patients received a subcutaneous infusion of lidocaine at the same rate through catheters inserted at the same locations as the study group. The primary outcome was morphine consumption at 48 hours using patient-controlled analgesia (PCA). Secondary outcomes included pain, respiratory function, nausea and vomiting. Serum lidocaine concentrations were measured on the first two post-operative days. There was no difference in morphine consumption or in any other outcome measure between the groups. Serum lidocaine concentrations increased during the study, with a maximum of 5.9 mg·l⁻¹. There were no adverse events as a consequence of the study. Bilateral paravertebral infusion of lidocaine confers no advantage over systemic lidocaine infusion after cardiac surgery. ISRCTN13424423 (https://www.isrctn.com).
Method for evaluating operator inputs to digital controllers
Venhuizen, J.R.
Most industrial processes employ operator-interactive control systems. The performance of these control systems is influenced by the choice of control station (device through which operator enters control commands). While the importance of proper control-station selection is widely accepted, standard and simple selection methods are not available for the control station using color-graphics terminals. This paper describes a unique facility for evaluating the effectiveness of various control stations. In the facility, a process is simulated on a hybrid computer, color-graphics display terminals provide information to the operator, and different control stations accept input commands to control the simulation. Tests are being conducted to evaluate a keyboard, a graphics tablet, and a CRT touch panel for use as control stations on a nuclear power plant. Preliminary results indicate that our facility can be used to determine those situations where each type of station is advantageous
Expert operator preferences in remote manipulator control systems
Sundstrom, E.; Draper, J.V.; Fausz, A.; Woods, H.
This report describes a survey of expert remote manipulator operators designed to identify features of control systems related to operator efficiency and comfort. It provides information for designing the control center for the Single-Shell Tank Waste Retrieval Manipulator System (TWRMS) Test Bed, described in a separate report. Research questions concerned preferred modes of control, optimum work sessions, sources of operator fatigue, importance of control system design features, and desired changes in control rooms. Participants comprised four expert remote manipulator operators at Oak Ridge National Laboratory, who individually have from 9 to 20 years of experience using teleoperators. The operators had all used rate and position control, and all preferred bilateral (force-reflecting) position control. They reported spending an average of 2.75 h in control of a teleoperator system during a typical shift. All were accustomed to working in a crew of two and alternating control and support roles in 2-h rotations in an 8-h shift. Operators reported that fatigue in using remote manipulator systems came mainly from watching TV monitors and making repetitive motions. Three of four experienced symptoms, including headaches and sore eyes, wrists, and back. Of 17 features of control rooms rated on importance, highest ratings went to comfort and support provided by the operator chair, location of controls, location of video monitors, video image clarity, types of controls, and control modes. When asked what they wanted to change, operators said work stations designed for comfort; simpler, lighter hand-controls; separate controls for each camera; better placement of remote camera; color monitors; and control room layouts that support crew interaction. Results of this small survey reinforced the importance of ergonomic factors in remote manipulation
Impacts of Continuous Electron Beam Accelerator Facility operations on groundwater and surface water: Appendix 9
Lee, D.W.
The operation of the proposed Continuous Electron Beam Accelerator Facility (CEBAF) at Newport News, Virginia, is expected to result in the activation and subsequent contamination of water resources in the vicinity of the accelerator. Since the proposed site is located in the headwaters of the watershed supplying Big Bethel Reservoir, concern has been expressed about possible contamination of water resources used for consumption. Data characterizing the surface water and groundwater regime in the site area are limited. A preliminary geotechnical investigation of the site has been completed (LAW 1985). This investigation concluded that groundwater flow is generally towards the southeast at an estimated velocity of 2.5 m/y. This conclusion is based on groundwater and soil boring data and is very preliminary in nature. This analysis makes use of the data and conclusions developed during the preliminary geotechnical investigation to provide an upper-bound assessment of radioactive contamination from CEBAF operations. A site water balance was prepared to describe the behavior of the hydrologic environment that is in close agreement with the observed data. The transport of contamination in the groundwater regime is assessed using a one-dimensional model. The groundwater model includes the mechanisms of groundwater flow, groundwater recharge, radioactive decay, and groundwater activation. The model formulation results in a closed-form, exact, analytic solution of the concentration of contamination in the groundwater. The groundwater solution is used to provide a source term for a surface-water analysis. The surface-water and groundwater models are prepared for steady state conditions such that they represent conservative evaluations of CEBAF operations
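As a much-simplified illustration of the kind of closed-form result mentioned above, the sketch below evaluates a plug-flow advection-plus-decay solution, C(x) = C0·exp(−λx/v), using the 2.5 m/y velocity quoted in the abstract. The source concentration, the choice of nuclide (tritium) and the neglect of recharge and activation source terms are all assumptions, so this is not the report's model.

# Hedged sketch: steady-state plug-flow estimate of radionuclide decay along a
# 1-D groundwater flow path. The report's model also treats recharge and
# groundwater activation as source terms; only the advection + decay skeleton
# is shown, with illustrative C0 and half-life.
import math

def downgradient_concentration(c0, x_m, velocity_m_per_yr, half_life_yr):
    """C(x) = C0 * exp(-lambda * x / v): decay over the travel time x / v."""
    lam = math.log(2.0) / half_life_yr
    travel_time = x_m / velocity_m_per_yr
    return c0 * math.exp(-lam * travel_time)

v = 2.5            # m/yr, groundwater velocity quoted in the abstract
c0 = 1.0           # normalized source concentration (illustrative)
for x in (10.0, 50.0, 100.0):
    c = downgradient_concentration(c0, x, v, half_life_yr=12.3)   # e.g. tritium
    print(f"x = {x:5.0f} m  ->  C/C0 = {c:.3e}")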
Assessment of the operating conditions of coordinated Q-V controller within secondary voltage control system
Arnautović Dušan
The paper discusses the possibility of using a coordinated Q-V controller (CQVC) to perform secondary voltage control at the power plant level. The CQVC coordinates the synchronous generators' (SG) reactive power outputs in order to maintain the total reactive power delivered by the steam power plant (SPP), while at the same time maintaining a constant voltage with a programmed reactive droop characteristic at the SPP HV busbar. This busbar is the natural pilot node for secondary voltage control at the HV level, being the node with maximum power production and maximum power consumption. In addition to voltage control, the CQVC maintains a uniform allocation of reactive power reserves over all SGs in the power plant. This is accomplished by setting the reactive power of each SG at a given operating point in accordance with the available reactive power of that SG at the same point. Different limitations imposed by unit and plant equipment are superimposed on the original SG operating chart (provided by the manufacturer) in order to establish realistic limits of SG operation at a given operating point. The CQVC facilitates: i) practical implementation of secondary voltage control in the power system, as it is capable of ensuring delivery of reactive power as requested by regional voltage control while maintaining the voltage at the system pilot node; ii) the full deployment of the available reactive power of SGs, which in turn contributes to system stability; iii) assessment of the reactive power impact/contribution of each generator in providing voltage control as an ancillary service. Furthermore, it is also possible to use the CQVC to price the reactive power production cost at each SG involved and to design a reactive power bidding structure for transmission network devices by using recorded data. Practical exploitation experience acquired during more than two years of continuous CQVC operation enabled implementation of the optimal setting of the reference voltage and droop on a daily basis.
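A minimal sketch of the two ideas described above, under stated assumptions: a programmed reactive-droop characteristic at the plant busbar and a proportional split of the resulting reactive-power order according to each generator's available headroom. The droop value, voltages and Mvar limits are invented for illustration and do not come from the paper.

# Hedged sketch: (i) plant-level droop characteristic Q_total = (V_ref - V_meas)/droop,
# (ii) allocation of Q_total to units in proportion to their available reactive power,
# which keeps the reactive reserve uniformly distributed. All numbers are illustrative.

def plant_q_setpoint(v_ref, v_meas, droop_pu_per_mvar):
    return (v_ref - v_meas) / droop_pu_per_mvar

def allocate(q_total, q_available):
    """Split q_total among units in proportion to each unit's available headroom."""
    total_avail = sum(q_available)
    return [q_total * qa / total_avail for qa in q_available]

v_ref, v_meas = 1.00, 0.99              # per-unit busbar voltages
droop = 0.0004                          # pu volts per Mvar (illustrative)
q_total = plant_q_setpoint(v_ref, v_meas, droop)

q_avail = [120.0, 90.0, 60.0]           # Mvar headroom of three generators (illustrative)
setpoints = allocate(q_total, q_avail)
print(f"plant Q order: {q_total:.1f} Mvar, unit setpoints: "
      + ", ".join(f"{q:.1f}" for q in setpoints))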
Low-level wastewater treatment facility process control operational test report
Bergquist, G.G.
This test report documents the results obtained while conducting operational testing of a new TK 102 level controller and total outflow integrator added to the NHCON software that controls the Low-Level Wastewater Treatment Facility (LLWTF). The test was performed with WHC-SD-CP-OTP 154, PFP Low-Level Wastewater Treatment Facility Process Control Operational Test. A complete test copy is included in Appendix A. The new TK 102 level controller provides a signal, hereafter referred to as its cascade mode, to the treatment train flow controller, which enables the water treatment process to run for long periods without continuous operator monitoring. The test successfully demonstrated the functionality of the new controller under standard and abnormal conditions expected from LLWTF operation. In addition, a flow totalizer is now displayed on the LLWTF outlet MICON screen, which tallies the process output in gallons. This feature substantially improves the ability to retrieve daily process volumes for maintaining accurate material balances.
Fluid logic control circuit operates nutator actuator motor
Fluid logic control circuit operates a pneumatic nutator actuator motor. It has no moving parts and consists of connected fluid interaction devices. The operation of this circuit demonstrates the ability of fluid interaction devices to operate in a complex combination of series and parallel logic sequence.
Startup and operation of a plant-scale continuous glass melter for vitrification of Savannah River Plant simulated waste
Willis, T.A.
The reference process for disposal of radioactive waste from the Savannah River Plant is vitrification of the waste in borosilicate glass in a continuous glass melter. Design, startup, and operation of a plant-scale developmental melter system are discussed
Operation and Control of Enzymatic Biodiesel Production
Price, Jason Anthony; Huusom, Jakob Kjøbsted; Nordblad, Mathias
This work explores the control of biodiesel production via an enzymatic catalyst. The process involves the transesterification of oils/fats with an alcohol (usually methanol or ethanol), using enzymatic catalysts to generate mono-alkyl esters (the basis of biodiesel) and glycerol as by-product. Current literature indicates that enzymatic processing of oils and fats to produce biodiesel is technically feasible, and developments in immobilization technology indicate that enzyme catalysts can become cost effective compared to chemical processing. However, with very few exceptions, enzyme technology is not currently used in commercial-scale biodiesel production. This is mainly due to non-optimized process designs, which do not use the full potential of the catalysts in a cost-efficient way. Furthermore, it is unclear what process variables need to be monitored and controlled to ensure optimal economics...
Induction thermoelastic actuator with controllable operation regime
Doležel, Ivo; Kotlan, V.; Krónerová, E.; Ulrych, B.
Roč. 29, č. 4 (2010), s. 1004-1014. ISSN 0332-1649. R&D Projects: GA ČR GA102/09/1305. Institutional research plan: CEZ:AV0Z20570509. Keywords: control of position; thermoelastic actuator; electromagnetic field. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 0.386, year: 2010. www.emeraldinsight.com/compel.htm
Control valve friction operational experience at Darlington NGD
Speer, B.
Proper installation of valve packing is an important part of ensuring that control valves operate as intended. Darlington NGD has developed a Valve Packing Program. This program combined with valve diagnostics has enabled the station to ensure that the operability of control valves is maintained after repacking. This paper outlines the process that is used for this. (author)
SSS-A attitude control prelaunch analysis and operations plan
Werking, R. D.; Beck, J.; Gardner, D.; Moyer, P.; Plett, M.
A description of the attitude control support being supplied by the Mission and Data Operations Directorate is presented. Descriptions of the computer programs being used to support the mission for attitude determination, prediction, control, and definitive attitude processing are included. In addition, descriptions of the operating procedures which will be used to accomplish mission objectives are provided.
Electroremediation of air pollution control residues in a continuous reactor
Jensen, Pernille Erland; Ferreira, Célia M. D.; Hansen, Henrik K.
Air pollution control (APC) residue from municipal solid waste incineration is considered hazardous waste due to its alkalinity and high content of salts and mobile heavy metals. Various solutions for the handling of APC residue exist; however, most commercial solutions involve landfilling. A demand … were made with raw residue, water-washed residue, acid-washed residue and acid-treated residue, with emphasis on reduction of heavy metal mobility. Main results indicate that the reactor successfully removes the toxic elements lead, copper, cadmium and zinc from the feed stream, suggesting …
Interval-Valued Hesitant Fuzzy Multiattribute Group Decision Making Based on Improved Hamacher Aggregation Operators and Continuous Entropy
Jun Liu
Under the interval-valued hesitant fuzzy information environment, we investigate a multiattribute group decision making (MAGDM) method with continuous entropy weights and improved Hamacher information aggregation operators. Firstly, we introduce the axiomatic definition of entropy for interval-valued hesitant fuzzy elements (IVHFEs) and construct a continuous entropy formula on the basis of the continuous ordered weighted averaging (COWA) operator. Then, based on the Hamacher t-norm and t-conorm, the adjusted operational laws for IVHFEs are defined. In order to aggregate interval-valued hesitant fuzzy information, some new improved interval-valued hesitant fuzzy Hamacher aggregation operators are investigated, including the improved interval-valued hesitant fuzzy Hamacher ordered weighted averaging (I-IVHFHOWA) operator and the improved interval-valued hesitant fuzzy Hamacher ordered weighted geometric (I-IVHFHOWG) operator, the desirable properties of which are discussed. In addition, the relationship among these proposed operators is analyzed in detail. Applying the continuous entropy and the proposed operators, an approach to MAGDM is developed. Finally, a numerical example for emergency operating center (EOC) selection is provided, and comparative analyses with existing methods are performed to demonstrate that the proposed approach is both valid and practical to deal with group decision making problems.
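For reference, the scalar Hamacher t-norm and t-conorm on which such Hamacher aggregation operators are usually built have the parametric form below (a sketch for context only; the paper's adjusted operational laws are defined on IVHFEs rather than on single membership values, so this is just the scalar backbone, not the authors' definitions):

  T_{\gamma}(a,b) = \frac{ab}{\gamma + (1-\gamma)(a + b - ab)}, \qquad
  S_{\gamma}(a,b) = \frac{a + b - ab - (1-\gamma)ab}{1 - (1-\gamma)ab},
  \qquad a,b \in [0,1],\ \gamma > 0,

with \gamma = 1 recovering the algebraic product and probabilistic sum, and \gamma = 2 the Einstein operations.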
Continuous integration and quality control for scientific software
Neidhardt, A.; Ettl, M.; Brisken, W.; Dassing, R.
Modern software has to be stable, portable, fast and reliable. This is becoming more and more important for scientific software as well. But it requires a sophisticated way to inspect, check and evaluate the quality of source code with a suitable, automated infrastructure. A centralized server with a software repository and a version control system is one essential part, used to manage the code base and to control the different development versions. While each project can be compiled separately, the whole code base can also be compiled with one central Makefile. This is used to create automated, nightly builds. Additionally, all sources are inspected automatically with static code analysis and inspection tools, which check well-known error situations, memory and resource leaks, performance issues, and style issues. In combination with an automatic documentation generator it is possible to create the developer documentation directly from the code and the inline comments. All reports and generated information are presented as HTML pages on a Web server. Because this environment has increased the stability and quality of the software of the Geodetic Observatory Wettzell tremendously, it is now also available to scientific communities. One regular customer is already the developer group of the DiFX software correlator project.
Singular Perturbation for the Discounted Continuous Control of Piecewise Deterministic Markov Processes
Costa, O. L. V.; Dufour, F.
This paper deals with the expected discounted continuous control of piecewise deterministic Markov processes (PDMP's) using a singular perturbation approach for dealing with rapidly oscillating parameters. The state space of the PDMP is written as the product of a finite set and a subset of the Euclidean space ℝⁿ. The discrete part of the state, called the regime, characterizes the mode of operation of the physical system under consideration, and is supposed to have a fast (associated with a small parameter ε>0) and a slow behavior. By using a similar approach as developed in Yin and Zhang (Continuous-Time Markov Chains and Applications: A Singular Perturbation Approach, Applications of Mathematics, vol. 37, Springer, New York, 1998, Chaps. 1 and 3), the idea in this paper is to reduce the number of regimes by considering an averaged model in which the regimes within the same class are aggregated through the quasi-stationary distribution, so that the different states in this class are replaced by a single one. The main goal is to show that the value function of the control problem for the system driven by the perturbed Markov chain converges to the value function of this limit control problem as ε goes to zero. This convergence is obtained by, roughly speaking, showing that the infimum and supremum limits of the value functions satisfy two optimality inequalities as ε goes to zero. This enables us to show the result by invoking a uniqueness argument, without needing any kind of Lipschitz continuity condition.
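In generic notation (an assumed, simplified statement for orientation, not the paper's exact formulation), the object under study is an α-discounted cost and its limit:

  V^{\varepsilon}(x) \;=\; \inf_{u}\, \mathbb{E}_x\!\left[\int_0^{\infty} e^{-\alpha t}\, f\bigl(X_t^{\varepsilon}, u_t\bigr)\, dt\right],
  \qquad \lim_{\varepsilon \to 0} V^{\varepsilon}(x) \;=\; \bar{V}(x),

where \bar{V} is the value function of the averaged problem in which the regimes of each class are aggregated through their quasi-stationary distribution.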
Upgrading instrumentation and control systems for plant safety and operation
Martin, M.; Prehler, H.J.; Schramm, W.
Upgrading the electrical systems and instrumentation and control systems has become increasingly important in the past few years for nuclear power plants currently in operation. As the requirements to be met in terms of plant safety and availability have become more stringent, Western plants built in the sixties and seventies have been the subject of manifold backfitting and upgrading measures in the past. In the meantime, however, various nuclear power plants are facing much more thorough upgrading phases because of the difficulties in obtaining spare parts for older equipment systems. As digital technology has become widespread in many areas because of its advantages, and as applications are continuously expanding, conventional equipment and systems are losing more and more ground as a consequence of decreasing demand. Because of the pronounced decline in demand for conventional electronic components, equipment manufacturers can guarantee spare parts deliveries for older systems only for specific future periods of time. In addition, one-off manufacture entails high costs in purchases of spare parts. As a consequence of current thinking more and more focusing on availability and economy, upgrading of electrical systems and instrumentation and control systems is becoming a more and more topical question, for older plants even to ensure completion of full service life. (orig.)
Intelligent control system for continuous technological process of alkylation
Gebel, E. S.; Hakimov, R. A.
The relevance of intelligent control for complex dynamic objects and processes is shown in this paper. The model of a virtual analyzer based on a neural network is proposed. Comparative analysis of mathematical models implemented in MATLAB software showed that the most effective model, from the point of view of the reproducibility of the result, is the one with seven neurons in the hidden layer, trained using the scaled conjugate gradient method. Comparison of the data from the laboratory analysis and the theoretical model showed that the root-mean-square error does not exceed 3.5, and the calculated value of the correlation coefficient corresponds to a "strong" connection between the values.
Rotor experiments in controlled conditions continued: New Mexico
Boorsma, K.; Schepers, J. G.
To validate and reduce the large uncertainty associated with rotor aerodynamic and acoustic models, there is a need for detailed force, noise and surrounding flow velocity measurements on wind turbines under controlled conditions. However, high quality wind tunnel campaigns on horizontal axis wind turbine models are scarce due to the large wind tunnel size needed and consequently high associated costs. To serve this purpose an experiment using the Mexico turbine was set-up in the large low speed facility of the DNW wind tunnel. An overview of the experiments is given including a selection of results. A comparison of calculations to measurements for design conditions shows a satisfactory agreement. In summary, after years of preparation, ECN and partners have performed very successful aerodynamic experiments in the largest wind tunnel in Europe. The comprehensive high quality database that has been obtained will be used in the international Mexnext consortium to further develop wind energy aerodynamic and acoustic modeling.
Offset-Free Direct Power Control of DFIG Under Continuous-Time Model Predictive Control
Errouissi, Rachid; Al-Durra, Ahmed; Muyeen, S.M.
This paper presents a robust continuous-time model predictive direct power control for the doubly fed induction generator (DFIG). The proposed approach uses a Taylor series expansion to predict the stator current in the synchronous reference frame over a finite time horizon. The predicted stator current is directly used to compute the required rotor voltage in order to minimize the difference between the actual stator currents and their references over the prediction time. However, as the proposed strategy is sensitive to parameter variations and external disturbances, a disturbance observer is embedded into the control loop to remove the steady-state error of the stator current. It turns out that the steady-state and transient performances can be characterized by simple design parameters. In this paper, the reference of the stator current is directly calculated from the desired stator active and reactive powers...
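A minimal single-state sketch of the Taylor-series prediction idea, under stated assumptions: for a scalar current model L di/dt = −R i + v, a first-order expansion of the current over a horizon T is matched to the reference, which yields an explicit voltage command. This is a generic illustration only, not the paper's dq-frame DFIG controller, and it omits the disturbance observer used there for offset removal.

# Hedged sketch of the prediction idea only: i(t+T) ~= i + (T/L)(-R i + v), and v is
# chosen so that the prediction equals the reference at the end of the horizon.
# R, L, T and the reference are illustrative numbers.

def predictive_voltage(i_meas, i_ref, R, L, T):
    """Voltage that drives the Taylor-predicted current to i_ref after horizon T."""
    return R * i_meas + (L / T) * (i_ref - i_meas)

# quick closed-loop check against the same model
R, L, T, dt = 0.5, 0.01, 0.002, 1e-4
i, i_ref = 0.0, 10.0
for _ in range(200):
    v = predictive_voltage(i, i_ref, R, L, T)
    i += dt * (-R * i + v) / L
print(f"current after 20 ms: {i:.2f} A (reference {i_ref} A)")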
Use of technology to support information needs for continuity of operations planning in public health: a systematic review.
Reeder, Blaine; Turner, Anne; Demiris, George
Continuity of operations planning focuses on an organization's ability to deliver essential services before, during and after an emergency. Public health leaders must make decisions based on information from many sources and their information needs are often facilitated or hindered by technology. The aim of this study is to provide a systematic review of studies of technology projects that address public health continuity of operations planning information needs and to discuss patterns, themes, and challenges to inform the design of public health continuity of operations information systems. To return a comprehensive results set in an under-explored area, we searched broadly in the Medline and EBSCOHost bibliographic databases using terms from prior work in public health emergency management and continuity of operations planning in other domains. In addition, we manually searched the citation lists of publications included for review. A total of 320 publications were reviewed. Twenty studies were identified for inclusion (twelve risk assessment decision support tools, six network and communications-enabled decision support tools, one training tool and one dedicated video-conferencing tool). Levels of implementation for information systems in the included studies range from proposed frameworks to operational systems. There is a general lack of documented efforts in the scientific literature for technology projects about public health continuity of operations planning. Available information about operational information systems suggest inclusion of public health practitioners in the design process as a factor in system success.
Operating Characteristics of a Continuous Two-Stage Bubbling Fluidized-Bed Process
Youn, Pil-Sang; Choi, Jeong-Hoo
Flow characteristics and the operating range of gas velocity were investigated for a two-stage bubbling fluidized bed (0.1 m i.d., 1.2 m high) that had continuous solids feed and discharge. Solids were fed into the upper fluidized bed and overflowed into the bed section of the lower fluidized bed through a standpipe (0.025 m i.d.). The standpipe was simply a dense solids bed with no mechanical or non-mechanical valves. The solids overflowed the lower bed for discharge. The fluidizing gas was fed to the lower fluidized bed, and the exit gas was also used to fluidize the upper bed. Air was used as the fluidizing gas, and a mixture of coarse (<1000 μm in diameter, 3090 kg/m³ in apparent density) and fine (<100 μm in diameter, 4400 kg/m³ in apparent density) particles was used as the bed material. The proportion of fine particles was employed as the experimental variable. The gas velocity of the lower fluidized bed at which the standpipe was emptied by upflowing gas bypassing from the lower fluidized bed was defined as the collapse velocity. It can be taken as the maximum operating velocity of the present process. The collapse velocity decreased after an initial increase as the proportion of fine particles increased, with the maximum occurring at a fine-particle proportion of 30%. The trend of the collapse velocity was similar to that of the standpipe pressure drop. The collapse velocity was expressed as a function of the bulk density of the particles and the voidage of the static bed; it increased with an increase of bulk density but decreased with an increase of static-bed voidage.
Continuous Photo-Oxidation in a Vortex Reactor: Efficient Operations Using Air Drawn from the Laboratory.
Lee, Darren S; Amara, Zacharias; Clark, Charlotte A; Xu, Zeyuan; Kakimpa, Bruce; Morvan, Herve P; Pickering, Stephen J; Poliakoff, Martyn; George, Michael W
We report the construction and use of a vortex reactor which uses a rapidly rotating cylinder to generate Taylor vortices for continuous-flow thermal and photochemical reactions. The reactor is designed to operate under conditions required for vortex generation. The flow pattern of the vortices has been represented using computational fluid dynamics, and the presence of the vortices can be easily visualized by observing streams of bubbles within the reactor. This approach presents certain advantages for reactions with added gases. For reactions with oxygen, the reactor offers an alternative to traditional setups as it efficiently draws in air from the lab without the need to pressurize specifically with oxygen. The rapid mixing generated by the vortices enables rapid mass transfer between the gas and liquid phases, allowing for highly efficient dissolution of gases. The reactor has been applied to several photochemical reactions involving singlet oxygen (¹O₂), including the photo-oxidations of α-terpinene and furfuryl alcohol and the photodeborylation of phenylboronic acid. The rotation speed of the cylinder proved to be key for reaction efficiency, and in operation we found that the uptake of air was highest at 4000 rpm. The reactor has also been successfully applied to the synthesis of artemisinin, a potent antimalarial compound; this three-step synthesis involving a Schenck-ene reaction with ¹O₂, Hock cleavage with H⁺, and an oxidative cyclization cascade with triplet oxygen (³O₂), starting from dihydroartemisinic acid, was carried out as a single process in the vortex reactor.
Operational test procedure for pumping and instrumentation control skid SALW-6001B monitor and control system
Garcia, M.F.
This OTP shall verify and document that the monitor and control system comprised of PICS SALW-6001B PLC, 242S PLC, Operator Control Station, and communication network is functioning per operational requirements
Projection Operator: A Step Towards Certification of Adaptive Controllers
Larchev, Gregory V.; Campbell, Stefan F.; Kaneshige, John T.
One of the major barriers to wider use of adaptive controllers in commercial aviation is the lack of appropriate certification procedures. In order to be certified by the Federal Aviation Administration (FAA), an aircraft controller is expected to meet a set of guidelines on functionality and reliability while not negatively impacting other systems or the safety of aircraft operations. Due to their inherently time-variant and non-linear behavior, adaptive controllers cannot be certified via the metrics used for conventional linear controllers, such as gain and phase margin. The Projection Operator is a robustness augmentation technique that bounds the output of a non-linear adaptive controller while conforming to the Lyapunov stability rules. It can also be used to limit the control authority of the adaptive component so that the said control authority can be arbitrarily close to that of a linear controller. In this paper we will present the results of applying the Projection Operator to a Model-Reference Adaptive Controller (MRAC), varying the amount of control authority, and comparing the controller's performance and stability characteristics with those of a linear controller. We will also show how adjusting Projection Operator parameters can make it easier for the controller to satisfy the certification guidelines by enabling a tradeoff between the controller's performance and robustness.
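A minimal sketch of the standard smooth projection operator used in adaptive control, assuming a ball-shaped parameter set: it scales back the component of the raw adaptation signal that would push the estimate outside the set, so an update law theta_dot = Proj(theta, y) stays bounded. The radius, boundary-layer width and test vectors are illustrative, and this generic textbook operator is not claimed to be the specific implementation evaluated in the paper.

# Hedged sketch: smooth projection onto a ball of radius theta_max with boundary
# layer eps. Inside the set (or for inward updates) the signal passes unchanged;
# in the outer boundary layer the outward component is reduced in proportion to f.
import numpy as np

def proj(theta, y, theta_max=10.0, eps=0.1):
    """Smooth projection of the raw adaptation signal y at parameter estimate theta."""
    f = (float(theta @ theta) - theta_max**2) / (eps * theta_max**2)   # convex boundary fn
    grad = 2.0 * theta / (eps * theta_max**2)
    if f > 0.0 and float(grad @ y) > 0.0:
        # scale back the component of y that pushes theta outward, in proportion to f
        return y - (grad * float(grad @ y) / float(grad @ grad)) * f
    return y

theta = np.array([10.2, 0.0])      # estimate in the boundary layer just outside theta_max
y_out = np.array([5.0, 1.0])       # raw update pointing outward
y_in  = np.array([-5.0, 1.0])      # raw update pointing inward
print("outward update projected to:", proj(theta, y_out))
print("inward update left unchanged:", proj(theta, y_in))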
Test, Control and Monitor System (TCMS) operations plan
Macfarlane, C. K.; Conroy, M. P.
The purpose is to provide a clear understanding of the Test, Control and Monitor System (TCMS) operating environment and to describe the method of operations for TCMS. TCMS is a complex and sophisticated checkout system focused on support of the Space Station Freedom Program (SSFP) and related activities. An understanding of the TCMS operating environment is provided and operational responsibilities are defined. NASA and the Payload Ground Operations Contractor (PGOC) will use it as a guide to manage the operation of the TCMS computer systems and associated networks and workstations. All TCMS operational functions are examined. Other plans and detailed operating procedures relating to an individual operational function are referenced within this plan. This plan augments existing Technical Support Management Directives (TSMD's), Standard Practices, and other management documentation which will be followed where applicable.
The Movement Control Battalions Role in Airfield Operations
The 53rd Transportation Battalion (Movement Control) (MCB) arrived in Liberia in support of … over Internet Protocol. The MCT did not have these capabilities. During the deployment, the 53rd MCB consisted of the headquarters … The 53rd Transportation Battalion (Movement Control) assumed responsibility for airfield operations.
40 CFR 63.7335 - How do I demonstrate continuous compliance with the operation and maintenance requirements that...
… corrective action is completed. (c) To demonstrate continuous compliance with the operation and maintenance … (a) For each by-product coke oven battery, you must demonstrate …
40 CFR 60.2940 - … emission monitoring systems are operating correctly?
… Protection of Environment … (a) Conduct initial, daily, quarterly, and annual evaluations of your continuous emission monitoring systems that measure carbon monoxide and oxygen. (b) Complete your …
40 CFR 60.3039 - … emission monitoring systems are operating correctly?
… Protection of Environment … (a) Conduct initial, daily, quarterly, and annual evaluations of your continuous emission monitoring systems that measure carbon monoxide and oxygen. (b) …
21 CFR 111.117 - What quality control operations are required for equipment, instruments, and controls?
… Title 21, Food and Drugs (2010-04-01) … FOOD AND DRUG ADMINISTRATION … and Process Control System: Requirements for Quality Control, § 111.117 What quality control operations are required for equipment, instruments, and controls? …
Modeling and Control for Islanding Operation of Active Distribution Systems
Cha, Seung-Tae; Wu, Qiuwei; Saleem, Arshad
Along with the increasing penetration of distributed generation (DG) in distribution systems, there are more resources for system operators to improve the operation and control of the whole system and enhance the reliability of electricity supply to customers. Distribution systems with DG are able to operate in islanding mode intentionally or unintentionally. In order to smooth the transition from grid-connected operation to islanding operation for distribution systems with DG, a multi-agent based controller is proposed to utilize different resources in the distribution systems … to stabilize the frequency. Different agents are defined to represent different resources in the distribution systems. A test platform with a real time digital simulator (RTDS), an OPen Connectivity (OPC) protocol server and the multi-agent based intelligent controller is established to test the proposed multi…
Study on the Correlation between PSR and Korean Stress Test for Continued Operation of Aging NPP
Chung, June Ho; Kim, Tae Ryong
Nuclear Safety and Security Commission (NSSC), the Korean nuclear regulatory authority, established a stress test guideline based on the EU stress test, and KHNP prepared the execution plan in response to the guideline for the continued operation (CO) of Kori Unit 1 and Wolsong Unit 1. PSR is a comprehensive safety review program for long term operation of NPPs, which was developed by the IAEA. Korea adopted PSR in 1999 as the regulatory requirement for CO of NPPs. The IAEA standard guideline for the PSR program was updated in 2003; however, the Korean PSR has not yet been revised to apply the new IAEA guidelines. Additionally, national legal systems and guidelines associated with the adoption of stress tests are urgently required. These revisions are imperative in order to ensure the reliability of NPPs and to promote public acceptance and understanding. This study presents the technical basis and proposals for review actions necessary to address the issues and controversies surrounding the continued operation and decommissioning of aging NPPs in Korea. As discussed earlier regarding the characteristics of the Korean Stress Test, it is more comprehensive than the EU Stress Test in terms of its multilateral evaluation, which includes equipment durability, plant operation, human factors, and safety margins, hence substantially raising the significance and value of the evaluation process. Thus, the addition of the Korean Stress Test to the existing Korean Evaluation of CO is expected to greatly increase the quality of safety assessment of aging NPPs in Korea due to its stricter safety policies, hence providing a more meaningful evaluation process. However, a one-time application of the Korean Stress Test to only Kori Unit 1 and Wolsong Unit 1 would be a waste of the great effort that has been made thus far to improve the Korean Evaluation of CO and develop the Korean Stress Test. Extending the Korean Stress Test to all NPPs in Korea would maintain and ensure the reliability of NPPs as well as public …
What makes a control system usable? An operational viewpoint
Clay, M.
This report discusses the generally accepted successes and shortcomings of the various computer and hardware-based control systems at the Los Alamos Meson Physics Facility (LAMPF) from an operator's standpoint. LAMPF currently utilizes three separate control rooms that, although critically co-dependent, use distinct operating methods. The first, the Injector Control Room, which is responsible for the operation of the three ion sources, the 750 keV transport lines and the 201.25 MHz portion of the linac, uses a predominantly hardware-based control system. The second, the LANSCE Control Room, which is responsible for the operation of the Los Alamos Neutron Scattering Center, uses a graphical touch-panel interface with single-application screens as its control system. The third, the LAMPF Central Control Room, which is responsible for the overall operation of LAMPF, primarily uses a text-oriented keyboard interface with multiple applications per screen. Though each system provides generally reliable human interfacing to the enormously complex and diverse machine known as LAMPF, the operational requirements of speed, usability, and reliability are increasingly necessitating the use of a standard control system that incorporates the positive aspects of all three control systems. (orig.)
Power scaling and experimentally fitted model for broad area quantum cascade lasers in continuous wave operation
Suttinger, Matthew; Go, Rowel; Figueiredo, Pedro; Todi, Ankesh; Shu, Hong; Leshin, Jason; Lyakh, Arkadiy
Experimental and model results for 15-stage broad area quantum cascade lasers (QCLs) are presented. Continuous wave (CW) power scaling from 1.62 to 2.34 W has been experimentally demonstrated for 3.15-mm long, high reflection-coated QCLs for an active region width increased from 10 to 20 μm. A semiempirical model for broad area devices operating in CW mode is presented. The model uses measured pulsed transparency current, injection efficiency, waveguide losses, and differential gain as input parameters. It also takes into account active region self-heating and sublinearity of the pulsed power versus current laser characteristic. The model predicts that an 11% improvement in maximum CW power and increased wall-plug efficiency can be achieved from 3.15 mm×25 μm devices with 21 stages of the same design, but half the doping in the active region. For a 16-stage design with a reduced stage thickness of 300 Å, pulsed rollover current density of 6 kA/cm², and InGaAs waveguide layers, an optical power increase of 41% is projected. Finally, the model projects that the power level can be increased to ~4.5 W from 3.15 mm×31 μm devices with the baseline configuration with T0 increased from 140 K for the present design to 250 K.
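A generic thermal-rollover sketch in the spirit of such semi-empirical CW models, with entirely made-up numbers: the threshold grows with active-region temperature (characteristic temperature T0), the slope efficiency shrinks (T1), and self-heating through a thermal resistance closes the loop, solved here by damped fixed-point iteration. The authors' model additionally uses measured pulsed transparency current, injection efficiency, waveguide losses, differential gain and the sublinear pulsed L-I curve, none of which is represented below.

# Hedged sketch: NOT the authors' model. A textbook-style CW rollover estimate,
# where active-region temperature, threshold and slope efficiency are solved
# self-consistently. All parameter values are illustrative.
import math

def cw_power(I, V=12.0, R_th=6.0, T_hs=293.0,
             I_th0=0.5, T0=140.0, eta0=2.0, T1=300.0, T_ref=293.0):
    P = 0.0
    for _ in range(200):                       # fixed-point iteration on T_active
        T = T_hs + R_th * (I * V - P)          # self-heating from dissipated power
        I_th = I_th0 * math.exp((T - T_ref) / T0)
        eta = eta0 * math.exp(-(T - T_ref) / T1)
        P_new = max(0.0, eta * (I - I_th))
        if abs(P_new - P) < 1e-6:
            break
        P = 0.5 * (P + P_new)                  # damped update for stability
    return P

for I in (1.0, 1.5, 2.0, 2.5, 3.0, 3.5):
    print(f"I = {I:.1f} A  ->  P_cw ~ {cw_power(I):.2f} W")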
Fessenheim 2: ASN's green light for continuing operation - Beginning of the works for unit 1
Every 10 years a nuclear power plant operator has to carry out a re-assessment of the nuclear safety standard of its plant. This re-assessment is made of 2 parts: first, a review of safety conformity, and secondly, a thorough re-examination of safety that takes into account today's safety standards and feedback experience from similar plants. This detailed assessment of safety aims at checking that the consequences of the different aging phenomena are well mastered for at least the next 10 years. At the end of this re-assessment, the ASN (French Nuclear Safety Authority) decides whether or not plant operation may continue and can prescribe safety improvements. In the case of unit 2 of the Fessenheim plant, which has just finished its third decennial safety re-assessment, the ASN has prescribed the same improvements as for unit 1, that is to say the reinforcement of the foundation raft's resistance to corium and an improvement of the emergency cooling system. The works on unit 1 have begun despite opposition from anti-nuclear associations that question the cost of the safety upgrading (20 to 30 million euros) while the unit is expected to be decommissioned by the end of 2016. (A.C.)
Water-cooled U-tube grids for continuously operated neutral-beam injectors
Hoffman, M.A.; Duffy, T.J.
A design for water-cooled extractor grids for long-pulse and continuously operated ion sources for neutral-beam injectors is described. The most serious design problem encountered is that of minimizing the thermal deformation (bowing) of these slender grid rails, which have typical overall spans of 150 mm and diameters on the order of 1 mm. A unique U-tube design is proposed that offers the possibility of keeping the thermal bowing down to about 0.05 mm (about 2.0 mils). However, the design requires high-velocity cooling water at a Reynolds number of about 3 × 10⁴ and an inlet pressure on the order of 4.67 × 10⁶ Pa (677 psia) in order to keep the axial and circumferential temperature differences small enough to achieve the desired small thermal bowing. It appears possible to fabricate and assemble these U-tube grids out of molybdenum with high precision and with a reasonably small number of brazes.
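A back-of-the-envelope check of the bowing budget quoted above, assuming the textbook uniform-curvature estimate δ ≈ αΔT·L²/(8d) for a rail of span L and diameter d with a temperature difference ΔT across it. The molybdenum expansion coefficient is approximate and the end conditions are idealized, so this is only an order-of-magnitude illustration, not the report's detailed thermal-hydraulic analysis.

# Hedged sketch: uniform-curvature thermal bowing estimate with the span and
# diameter quoted in the abstract; alpha and the end conditions are assumptions.

alpha_mo = 5.1e-6          # 1/K, thermal expansion of molybdenum (approximate)
L = 0.150                  # m, rail span quoted in the abstract
d = 0.001                  # m, rail diameter quoted in the abstract

def midspan_bow(delta_T):
    return alpha_mo * delta_T * L**2 / (8.0 * d)

for dT in (1.0, 2.0, 5.0, 10.0):
    print(f"dT = {dT:4.1f} K  ->  bow ~ {midspan_bow(dT)*1e3:.3f} mm")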
Characterization of wastewater treatment by two microbial fuel cells in continuous flow operation.
Kubota, Keiichi; Watanabe, Tomohide; Yamaguchi, Takashi; Syutsubo, Kazuaki
A system of two serially connected single-chamber microbial fuel cells (MFCs) was applied to the treatment of diluted molasses wastewater in continuous operation mode. In addition, the effect of series and parallel connection between the anodes and the cathode on power generation was investigated experimentally. The two serially connected MFC process achieved 79.8% chemical oxygen demand removal and a Coulombic efficiency of 11.6% when the hydraulic retention time of the whole process was 26 h. The power densities were 0.54, 0.34 and 0.40 W m⁻³ when the electrodes were in individual connection, serial connection and parallel connection modes, respectively. A high open circuit voltage was obtained in the serial connection. Power density decreased at low organic loading rates (OLRs) due to the shortage of organic matter. Power generation efficiency tended to decrease as a result of enhanced methane fermentation at high OLRs. Therefore, high power density and efficiency can be achieved by using a suitable OLR range.
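For context, the Coulombic efficiency quoted above is conventionally computed for a continuously fed MFC as the ratio of electrons recovered as current to electrons available in the COD removed (8 g COD per mole of electrons). The sketch below evaluates that standard definition with illustrative numbers, which are not the measurements of the study.

# Hedged sketch: standard Coulombic-efficiency definition for a continuous-flow
# MFC. Current, flow and delta-COD values below are illustrative only.

F = 96485.0                     # C per mol of electrons

def coulombic_efficiency(current_A, flow_L_per_s, delta_cod_g_per_L):
    electrons_as_current = current_A / F                       # mol e-/s
    electrons_in_cod = flow_L_per_s * delta_cod_g_per_L / 8.0  # mol e-/s
    return electrons_as_current / electrons_in_cod

# example: 5 mA cell current, 26 h HRT through a 1 L anode, 300 mg/L COD removed
flow = 1.0 / (26 * 3600)        # L/s
ce = coulombic_efficiency(0.005, flow, 0.300)
print(f"Coulombic efficiency ~ {ce*100:.1f} %")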
Empirical investigation of workloads of operators in advanced control rooms
Kim, Yochan; Jung, Wondea; Kim, Seunghwan
This paper compares the workloads of operators in a computer-based control room of an advanced power reactor (APR 1400) nuclear power plant to investigate the effects from the changes in the interfaces in the control room. The cognitive-communicative-operative activity framework was employed to evaluate the workloads of the operator's roles during emergency operations. The related data were obtained by analyzing the tasks written in the procedures and observing the speech and behaviors of the reserved operators in a full-scope dynamic simulator for an APR 1400. The data were analyzed using an F-test and a Duncan test. It was found that the workloads of the shift supervisors (SSs) were larger than other operators and the operative activities of the SSs increased owing to the computer-based procedure. From these findings, methods to reduce the workloads of the SSs that arise from the computer-based procedure are discussed. (author)
Heavy impurity confinement in hybrid operation scenario plasmas with a rotating 1/1 continuous mode
Raghunathan, M.; Graves, J. P.; Nicolas, T.; Cooper, W. A.; Garbet, X.; Pfefferlé, D.
In future tokamaks like ITER with tungsten walls, it is imperative to control tungsten accumulation in the core of operational plasmas, especially since tungsten accumulation can lead to radiative collapse and disruption. We investigate the behavior of tungsten trace impurities in a JET-like hybrid scenario with both axisymmetric and saturated 1/1 ideal helical core in the presence of strong plasma rotation. For this purpose, we obtain the equilibria from VMEC and use VENUS-LEVIS, a guiding-center orbit-following code, to follow heavy impurity particles. In this work, VENUS-LEVIS has been modified to account for strong plasma flows with associated neoclassical effects arising from such flows. We find that the combination of helical core and plasma rotation augments the standard neoclassical inward pinch compared to axisymmetry, and leads to a strong inward pinch of impurities towards the magnetic axis despite the strong outward diffusion provided by the centrifugal force, as frequently observed in experiments.
Automation inflicted differences on operator performance in nuclear power plant control rooms
Andersson, Jonas; Osvalder, A.L.
Today it is possible to automate almost any function in a human-machine system. Therefore it is important to find a balance between automation level and the prerequisites for the operator to maintain safe operation. Different human factors evaluation methods can be used to find differences between automatic and manual operations that have an effect on operator performance; e.g. Predictive Human Error Analysis (PHEA), NASA Task Load Index (NASA-TLX), Halden Questionnaire, and Human Error Assessment and Reduction Technique (HEART). Results from an empirical study concerning automation levels, made at Ringhals power plant, showed that factors such as time pressure and criticality of the work situation influenced the operator's performance and mental workload more than differences in level of automation. The results indicate that the operator's attention strategies differ between the manual and automatic sequences. Independently of level of automation, it is essential that the operator retains control and situational understanding. When performing a manual task, the operator is 'closer' to the process and in control with sufficient situational understanding. When the level of automation increases, the demands on information presentation increase to ensure safe plant operation. The need for control can be met by introducing 'control gates' where the operator has to accept that the automatic procedures are continuing as expected. Situational understanding can be established by clear information about process status and by continuous feedback. A conclusion of the study was that a collaborative control room environment is important. Rather than allocating functions to either the operator or the system, a complementary strategy should be used. Key parameters to consider when planning the work in the control room are time constraints and task criticality and how they affect the performance of the joint cognitive system. However, the examined working situations were too different
User Control Interface for W7-X Plasma Operation
Spring, A.; Laqua, H.; Schacht, J.
The WENDELSTEIN 7-X fusion experiment will be a highly complex device operated by a likewise complex control system. The fundamental configuration of the W7-X control system follows two major design principles: It reflects the strict hierarchy of the machine set-up with a set of subordinated components, which in turn can be run autonomously during commissioning and testing. Secondly, it links the basic machine operation (mainly given by the infrastructure status and the components readiness) and the physics program execution (i.e. plasma operation) on each hierarchy level and on different time scales. The complexity of the control system implies great demands on appropriate user interfaces: specialized tools for specific control tasks allowing a dedicated view on the subject to be controlled, hiding complexity wherever possible and reasonable, providing similar operation methods on each hierarchy level and both manual interaction possibilities and a high degree of intelligent automation. The contribution will describe the operation interface for experiment control including the necessary links to the machine operation. The users of 'Xcontrol' will be both the W7-X session leaders during plasma discharge experiments and the components' or diagnostics' operators during autonomous mode or even laboratory experiments. The main 'Xcontrol' features, such as program composition and validation, manual and automatic control instruments, resource survey, and process monitoring, will be presented. The implementation principles and the underlying communication will be discussed. (author)
Optimal control of operation efficiency of belt conveyor systems
Zhang, Shirong; Xia, Xiaohua
The improvement of the energy efficiency of belt conveyor systems can be achieved at equipment or operation levels. Switching control and variable speed control are proposed in literature to improve energy efficiency of belt conveyors. The current implementations mostly focus on lower level control loops or an individual belt conveyor without operational considerations at the system level. In this paper, an optimal switching control and a variable speed drive (VSD) based optimal control are proposed to improve the energy efficiency of belt conveyor systems at the operational level, where time-of-use (TOU) tariff, ramp rate of belt speed and other system constraints are considered. A coal conveying system in a coal-fired power plant is taken as a case study, where great saving of energy cost is achieved by the two optimal control strategies. Moreover, considerable energy saving resulting from VSD based optimal control is also proved by the case study.
Zhang, Shirong [Department of Automation, Wuhan University, Wuhan 430072 (China); Xia, Xiaohua [Department of Electrical, Electronic and Computer Engineering, University of Pretoria, Pretoria 0002 (South Africa)
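As a toy illustration of the operation-level idea described in this entry (not the authors' formulation), belt speed can be scheduled against a time-of-use tariff so that a required daily tonnage is conveyed at minimum energy cost. The linear power model, tariff figures and tonnage below are all assumptions.

    # Toy time-of-use (TOU) scheduling of normalized belt speed (illustrative only).
    # Assumes a linearized power model P(v) = p0 + p1*v and throughput Q(v) = q1*v.
    import numpy as np
    from scipy.optimize import linprog

    tariff = np.array([0.05] * 8 + [0.15] * 8 + [0.09] * 8)  # $/kWh for each hour (assumed TOU prices)
    p0, p1 = 20.0, 60.0   # kW, no-load and speed-dependent power terms (assumed)
    q1 = 400.0            # t/h conveyed per unit of normalized speed (assumed)
    demand = 4800.0       # t that must be conveyed over the day (assumed)

    c = tariff * p1                        # hourly cost of the speed-dependent energy
    A_ub = -q1 * np.ones((1, 24))          # -sum(q1 * v[t]) <= -demand, i.e. meet the tonnage
    b_ub = np.array([-demand])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 24)

    cost = float(res.fun) + p0 * tariff.sum()   # add the fixed no-load energy cost
    print(res.x.round(2), f"energy cost ~= ${cost:.2f}")

The solver simply pushes the conveying work into the cheapest tariff hours, which is the qualitative effect the optimal switching and VSD-based strategies exploit.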
A Hybrid MPC-PID Control System Design for the Continuous Purification and Processing of Active Pharmaceutical Ingredients
Maitraye Sen
In this work, a hybrid MPC (model predictive control)-PID (proportional-integral-derivative) control system has been designed for the continuous purification and processing framework of active pharmaceutical ingredients (APIs). The specific unit operations associated with the purification and processing of the API have been developed from first principles and connected in a continuous framework in the form of a flowsheet model. These integrated unit operations are highly interactive and involve process delays. Therefore, a hybrid MPC-PID is a promising alternative for achieving the control loop performance mandated by the regulatory authorities. The integrated flowsheet model has been simulated in gPROMS™ (Process Systems Enterprise, London, UK) and linearized in order to design the control scheme. The ability to track the set point and reject disturbances has been evaluated. A comparative study between the performance of the hybrid MPC-PID and a PID-only control scheme has been presented. The results show that an enhanced control loop performance can be obtained under the hybrid control scheme and demonstrate that such a scheme has high potential in improving the efficiency of pharmaceutical manufacturing operations.
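A minimal sketch of the general hybrid idea, a supervisory optimization layer feeding setpoints to a regulatory PID loop, is given below, assuming a simple first-order process. It is only a stand-in to fix ideas; it is not the authors' gPROMS flowsheet, linearized model or MPC formulation.

    # Minimal cascade sketch: a crude supervisory layer (standing in for the MPC) updates the
    # setpoint that a regulatory PID loop tracks on an assumed first-order plant.
    dt, T = 0.1, 60.0
    a, b = 0.95, 0.05            # assumed discrete first-order plant: y[k+1] = a*y[k] + b*u[k]
    Kp, Ki, Kd = 4.0, 0.8, 0.1   # assumed PID gains
    target = 1.0                 # quality target the supervisory layer works toward

    y = u = integ = e_prev = sp = 0.0
    for k in range(int(T / dt)):
        if k % 50 == 0:
            # supervisory update: move the setpoint toward the target while penalizing
            # large jumps away from the current output (one-step quadratic trade-off)
            w = 0.5
            sp = (target + w * y) / (1.0 + w)
        e = sp - y
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (e - e_prev) / dt   # PID tracks the supervisory setpoint
        e_prev = e
        y = a * y + b * u                                  # plant update
    print(f"final output {y:.3f} vs target {target}")

In a real flowsheet the supervisory layer would solve a constrained prediction problem over all interacting units, while the PID layer keeps the fast regulatory loops simple and robust.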
Definition and dynamic control of a continuous chromatography process independent of cell culture titer and impurities.
Chmielowski, Rebecca A; Mathiasson, Linda; Blom, Hans; Go, Daniel; Ehring, Hanno; Khan, Heera; Li, Hong; Cutler, Collette; Lacki, Karol; Tugcu, Nihal; Roush, David
the HCCF and ∼45-70% for titers of up to 10g/L independent of UV absorbance of the HCCF. The strategy and results presented in this paper show column loading in a continuous chromatography step can be dynamically controlled independent of the cell culture feedstock and titer, and allow for enhanced process control built into the downstream continuous operations. Copyright © 2017 Elsevier B.V. All rights reserved.
Attitude Control Optimization for ROCSAT-2 Operation
Chern, Jeng-Shing; Wu, A.-M.
one revolution. The purpose of this paper is to present the attitude control design optimization such that the maximum solar energy is ingested while minimum maneuvering energy is dissipated. The strategy includes the maneuvering sequence design, the minimization of angular path, the sizing of three magnetic torquers, and the trade-off of the size, number and orientations arrangement of momentum wheels.
Decomposing Objectives and Functions in Power System Operation and Control
Heussen, Kai; Lind, Morten
The introduction of many new energy solutions requires the adaptation of classical operation paradigms in power systems. In the standard paradigm, a power system is some equivalent of a synchronous generator, a power line and an uncontrollable load. This paradigm has been challenged by a diverse mix of challenges posed by renewable energy sources, demand response technologies and smartgrid concepts, affecting all areas of power system operation. Both new control modes and changes in market design are required. This paper presents a means-ends perspective to the analysis of the control structures and operation paradigms in present power systems. In a top-down approach, traditional frequency- and area-control mechanisms are formalized. It is demonstrated that future power system operation paradigms with different generation control modes and controllable demand can be modeled in a coherent...
Design of automatic control and measurement software for radioactive aerosol continuity monitor
Mao Yong; Li Aiwu
The continuous measurement of radioactive aerosols is very important for the development of the nuclear industry, and it is the major method for measuring and detecting leakage of radioactive material. The radioactive aerosol continuity monitor is an advanced instrument for such continuous measurement. With the development of the nuclear industry and of nuclear power stations, it is necessary to design an automatic continuous measurement device. For this reason, the authors developed the first unit of the radioactive aerosol continuity monitor, which has passed the ministry appraisal. The design idea and the method of automatic control and measurement for the radioactive aerosol continuity monitor are discussed.
INTEGRATED ROBOT-HUMAN CONTROL IN MINING OPERATIONS
George Danko
This report contains a detailed description of the work conducted in the first year of the project on Integrated Robot-Human Control in Mining Operations at University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The kinematics reconfiguration type differential control of the excavator implemented with a variety of 'software machine kinematics' is the key feature of the project. This software-reconfigured excavator is better suited to execute a given digging task. The human operator retains the master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations. The operator can make adjustments to this pattern in real time, as needed, to accommodate rapidly-changing environmental conditions. A Bobcat® 435 excavator was retrofitted with electro-hydraulic control valve elements. The modular electronic control was tested and the basic valve characteristics were measured for each valve at the Robotics Laboratory at UNR. Position sensors were added to the individual joint control actuators, and the sensors were calibrated. An electronic central control system consisting of a portable computer, converters and electronic driver components was interfaced to the electro-hydraulic valves and position sensors. The machine is operational with or without the computer control system depending on whether the computer interface is on or off. In preparation for emulated mining tasks tests, typical, repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's 'Lone Tree' mine in Nevada.
New oil condition monitoring system, Wearsens® enables continuous, online detection of critical operating conditions and wear damage
Manfred Mauntz
A new oil sensor system is presented for the continuous, online measurement of wear in turbines, industrial gears, generators, hydraulic systems and transformers. Changes are detected much earlier than with existing technologies such as particle counting, vibration measurement or temperature recording. Thus targeted corrective procedures and/or maintenance can be carried out before actual damage occurs. Efficient machine utilization, accurately timed preventive maintenance, increased service life and a reduction of downtime can all be achieved. The presented sensor system effectively monitors the proper operating conditions of bearings and cogwheels in gears. The online diagnostics system measures components of the specific complex impedance of oils. For instance, metal abrasion due to wear debris, broken oil molecules, forming acids or oil soaps, results in an increase of the electrical conductivity, which directly correlates with the degree of contamination of the oil. For additivated lubricants, the stage of degradation of the additives can also be derived from changes in the dielectric constant. The determination of impurities or reduction in the quality of the oil and the quasi-continuous evaluation of wear and chemical aging follow the holistic approach of real-time monitoring of an alteration in the condition of the oil-machine system. Once the oil condition monitoring sensors are installed on the wind turbine, industrial gearbox and test stands, the measuring data can be displayed and evaluated elsewhere. The signals are transmitted to a web-based condition monitoring system via LAN, WLAN or serial interfaces of the sensor unit. Monitoring of the damage mechanisms during proper operation below the tolerance limits of the components enables specific preventive maintenance independent of rigid inspection intervals.
High-power LED light sources for optical measurement systems operated in continuous and overdriven pulsed modes
Stasicki, Bolesław; Schröder, Andreas; Boden, Fritz; Ludwikowski, Krzysztof
The rapid progress of light emitting diode (LED) technology has recently resulted in the availability of high power devices with unprecedented light emission intensities comparable to those of visible laser light sources. On this basis two versatile devices have been developed, constructed and tested. The first one is a high-power, single-LED illuminator equipped with exchangeable projection lenses providing a homogenous light spot of defined diameter. The second device is a multi-LED illuminator array consisting of a number of high-power LEDs, each integrated with a separate collimating lens. These devices can emit R, G, CG, B, UV or white light and can be operated in pulsed or continuous wave (CW) mode. Using an external trigger signal they can be easily synchronized with cameras or other devices. The mode of operation and all parameters can be controlled by software. Various experiments have shown that these devices have become a versatile and competitive alternative to laser and xenon lamp based light sources. The principle, design, achieved performances and application examples are given in this paper.
Operation control device for a nuclear reactor fuel exchanger
Aida, Takashi.
Purpose: To provide an operation control device for a nuclear reactor fuel exchanger that is reduced in size and weight and capable of flexibly meeting the complicated and versatile modes of the operation range. Constitution: The operation range of the fuel exchanger is finely divided so that operation-allowable and operation-inhibited ranges can be discriminated, and these are stored in a memory circuit. Upon operating the fuel exchanger, its position is detected and the divided-range data corresponding to the present position are taken out from the memory circuit so as to determine whether the fuel exchanger is to be run or stopped. The use of small, compact IC circuits (calculation circuit, memory circuit, data latch circuit) and input/output interface circuits contributes to the size reduction of the exchanger control system and enlarges the floor maintenance space. (Moriyama, K.)
Biological Assessment of the Continued Operation of Los Alamos National Laboratory on Federally Listed Threatened and Endangered Species
Hansen, Leslie A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Ecology and Air Quality Group
This biological assessment considers the effects of continuing to operate Los Alamos National Laboratory on Federally listed threatened or endangered species, based on current and future operations identified in the 2006 Site-wide Environmental Impact Statement for the Continued Operation of Los Alamos National Laboratory (SWEIS; DOE In Prep.). We reviewed 40 projects analyzed in the SWEIS as well as two aspects of ongoing operations to determine if these actions had the potential to affect Federally listed species. Eighteen projects that had not already received U.S. Fish and Wildlife Service (USFWS) consultation and concurrence, as well as the two aspects of ongoing operations, ecological risk from legacy contaminants and the Outfall Reduction Project, were determined to have the potential to affect threatened or endangered species. Cumulative impacts were also analyzed.
Operation condition for continuous anti-solvent crystallization of CBZ-SAC cocrystal considering deposition risk of undesired crystals
Nishimaru, Momoko; Nakasa, Miku; Kudo, Shoji; Takiyama, Hiroshi
Crystallization operations for cocrystal production carry a deposition risk of undesired crystals. At the same time, continuous manufacturing processes are attracting increasing attention. In this study, conditions for continuous cocrystallization that reduce the risk of undesired crystal deposition were investigated from the viewpoints of thermodynamics and kinetics. Anti-solvent cocrystallization was carried out in the four-component system of carbamazepine, saccharin, methanol and water. From a preliminary batch experiment, the relationships among undesired crystal deposition, the solution composition set by the mixing ratio of the solutions, and the residence time of the crystals were considered, and the conditions of the continuous experiment were then decided. Under these conditions, the continuous experiment was carried out. The XRD patterns of the crystals obtained in the continuous experiment showed that the desired cocrystals were obtained without undesired crystals. This experimental result was evaluated using multi-component phase diagrams from the viewpoint of the movement of the operation point. From this evaluation, it was found that there is a certain operating condition for which the operation point remains fixed with time in a specific domain without the deposition risk of undesired single-component crystals. This confirms, by means of multi-component phase diagrams, the possibility of continuous production of cocrystals without the deposition risk of undesired crystals.
A system approach to controlling semiconductor manufacturing operations
Σταυράκης, Γιώργος Δ.
Semiconductor manufacturers, faced with stiffening competition in both product cost and quality, require improved utilization of their development and manufacturing resources. Manufacturing philosophy must be changed from focusing on short-term results to supporting continuous improvements in both output and quality. Such improvements demand better information management to monitor and control the manufacturing process. From these considerations, a process control methodology was developed...
Economics of tobacco control research initiative: Operating costs for ...
International Development Research Centre (IDRC) Digital Library (Canada)
Economics of tobacco control research initiative: Operating costs for capacity building ... (but misinformed) beliefs about the economic benefits of the tobacco industry ... Nutrition, health policy, and ethics in the age of public-private partnerships.
Control and operation cost optimization of the HISS cryogenic system
Porter, J.; Anderson, D.; Bieser, F.
This chapter describes a control strategy for the Heavy Ion Spectrometer System (HISS), which relies upon superconducting coils of cryostable design to provide a particle bending field of 3 tesla. The control strategy has allowed full time unattended operation and significant operating cost reductions. Microprocessor control of flash boiling style LIN circuits has been successful. It is determined that the overall operating cost of most cryogenic systems using closed loop helium systems can be minimized by properly balancing the total heat load between the helium and nitrogen circuits to take advantage of the non-linearity which exists in the power input to 4K refrigeration characteristic. Variable throughput compressors have the advantage of turndown capability at steady state. It is concluded that a hybrid system using digital and analog input for control, data display and alarms enables full time unattended operation
Data mining of air traffic control operational errors
In this paper we present the results of applying data mining techniques to identify patterns and anomalies in air traffic control operational errors (OEs). Reducing the OE rate is of high importance and remains a challenge in the aviation saf...
Neutron field control cybernetics model of RBMK reactor operator
Polyakov, V.V.; Postnikov, V.V.; Sviridenkov, A.N.
Results on parameter optimization for cybernetics model of RBMK reactor operator by power release control function are presented. Convolutions of various criteria applied previously in algorithms of the program 'Adviser to reactor operator' formed the basis of the model. 7 refs.; 4 figs
Quench monitoring and control system and method of operating same
Ryan, David Thomas; Laskaris, Evangelos Trifon; Huang, Xianrui
A rotating machine comprising a superconductive coil and a temperature sensor operable to provide a signal representative of superconductive coil temperature. The rotating machine may comprise a control system communicatively coupled to the temperature sensor. The control system may be operable to reduce electric current in the superconductive coil when a signal representative of a defined superconducting coil temperature is received from the temperature sensor.
Purpose and benefit of control system training for operators
Zimoch, E.; Luedeke, A.
The complexity of accelerators is ever increasing and today it is typical that a large number of feedback loops are implemented, based on sophisticated models which describe the underlying physics. Despite this increased complexity, the machine operators must still effectively monitor and supervise the desired behavior of the accelerator. This alone is not sufficient; additionally, the correct operation of the control system must also be verified. This is not always easy, since the structure, design, and performance of the control system are usually not visualized and are often hidden from the operator. To deal better with this situation, operators need some knowledge of the control system in order to react properly in the case of problems. In fact, operators need mental models of the control system to recognize fault states and react appropriately to errors and misbehavior of both the accelerator and the control system itself. Mental models gained only from infrequent experience can be imprecise or plain wrong in the worst case. Control system training can provide a foundation for building better mental models and therefore help to enhance operator responses and machine availability. For a refinement of the mental model, repeated experience is needed. This can be provided by training sessions at the real accelerator.
Economic Benefit from Progressive Integration of Scheduling and Control for Continuous Chemical Processes
Logan D. R. Beal
Performance of integrated production scheduling and advanced process control with disturbances is summarized and reviewed with four progressive stages of scheduling and control integration and responsiveness to disturbances: open-loop segregated scheduling and control, closed-loop segregated scheduling and control, open-loop scheduling with consideration of process dynamics, and closed-loop integrated scheduling and control responsive to process disturbances and market fluctuations. Progressive economic benefit from dynamic rescheduling and integrating scheduling and control is shown on a continuously stirred tank reactor (CSTR) benchmark application in closed-loop simulations over 24 h. A fixed-horizon integrated scheduling and control formulation for multi-product, continuous chemical processes is utilized, in which nonlinear model predictive control (NMPC) and continuous-time scheduling are combined.
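The closed-loop idea can be pictured with a toy loop in which a grade schedule supplies time-varying setpoints and a simple controller tracks them on a first-order stand-in for the CSTR, so that transition dynamics and off-spec time can be accounted for against the schedule. Everything numeric below is an assumption; it is not the paper's NMPC or continuous-time scheduling formulation.

    # Toy integrated scheduling-and-control loop (illustrative assumptions throughout).
    dt = 0.05
    schedule = [(0.0, 1.0), (8.0, 2.5), (16.0, 1.5)]  # (start time in h, product-grade setpoint)
    horizon = 24.0
    a, b = 0.9, 0.1                                   # assumed discrete first-order "reactor"
    Kp, Ki = 2.0, 0.5                                 # assumed PI gains

    def setpoint(t):
        sp = schedule[0][1]
        for start, grade in schedule:
            if t >= start:
                sp = grade
        return sp

    y = integ = off_spec = 0.0
    for k in range(int(horizon / dt)):
        t = k * dt
        e = setpoint(t) - y
        integ += e * dt
        y = a * y + b * (Kp * e + Ki * integ)         # PI-controlled plant update
        if abs(e) > 0.05:                             # crude off-spec accounting during transitions
            off_spec += dt
    print(f"hours off-spec over the schedule: {off_spec:.1f}")

Evaluating the schedule and the controller together in this way is what lets rescheduling decisions reflect transition costs, which is the benefit the four integration stages progressively capture.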
Robust control system for belt continuously variable transmission; Robust seigyo wo tekiyoshita mudan hensokuki no hensokuhi servo kei no kaihatsu
Adachi, K; Wakahara, T; Shimanaka, S; Yamamoto, M; Oshidari, T [Nissan Motor Co. Ltd., Tokyo (Japan)
The continuously variable transmission control system consists of generation of a desired gear ratio and a servo gear ratio system. The servo gear ratio system must provide the desired response at all times without being influenced by external disturbances. These include oil pressure as well as variation in performance due to operating conditions or changes occurring with use. We have developed the servo gear ratio system incorporating a robust model matching method, which enables the belt continuously variable transmission to satisfy this performance requirement. 2 refs., 9 figs.
Review of selected cost drivers for decisions on continued operation of older nuclear reactors. Safety upgrades, lifetime extension, decommissioning
Lately, the approach to the operation of relatively old NPPs has become an important issue for the nuclear industry for several reasons. First, a large part of operating NPPs will reach the planned end of their lives relatively soon. Replacing these capacities can involve significant investment for the concerned countries and utilities. Second, many operating NPPs while about 30 years old are still in very good condition. Their continued safe operation appears possible and may bring about essential economic gains. Finally, with the costs of new NPPs being rather high at present, continued operation of existing plants and eventually their lifetime extension are viable options for supporting the nuclear share in power generation. This is becoming especially important in view of the growing attention to the issue of global warming and the role of nuclear energy in greenhouse gas mitigation. This report is a review of information related to three cost categories that are part of such cost-benefit analysis: costs of safety upgrades for continued operation of a nuclear unit, costs of lifetime extension and costs of decommissioning. It can serve as a useful reference source for experts and decision makers involved in the economics of operating NPPs
Integrating an incident management system within a continuity of operations programme: case study of the Bank of Canada.
Loop, Carole
Carrying out critical business functions without interruption requires a resilient and robust business continuity framework. By embedding an industry-standard incident management system within its business continuity structure, the Bank of Canada strengthened its response plan by enabling timely response to incidents while maintaining a strong focus on business continuity. A total programme approach, integrating the two disciplines, provided for enhanced recovery capabilities. While the value of an effective and efficient response organisation is clear, as demonstrated by emergency events around the world, incident response structures based on normal operating hierarchy can experience unique challenges. The internationally-recognised Incident Command System (ICS) model addresses these issues and reflects the five primary incident management functions, each contributing to the overall strength and effectiveness of the response organisation. The paper focuses on the Bank of Canada's successful implementation of the ICS model as its incident management and continuity of operations programmes evolved to reflect current best practices.
Mitigating the Long term Operating Extreme Load through Active Control
Koukoura, Christina; Natarajan, Anand
blade azimuth location are shown to affect the extreme blade load magnitude during operation in normal turbulence wind input. The simultaneously controlled operation of generator torque variation and pitch variation at low blade pitch angles is detected to be responsible for very high loads acting on the blades. Through gain scheduling of the controller (modifications of the proportional Kp and the integral Ki gains) the extreme loads are mitigated, ensuring minimum instantaneous variations in the power production for operation above rated wind speed. The response of the blade load is examined...
Post-operative pain control after tonsillectomy: dexamethasone vs tramadol.
Topal, Kubra; Aktan, Bulent; Sakat, Muhammed Sedat; Kilic, Korhan; Gozeler, Mustafa Sitki
Tramadol was found to be more effective than dexamethasone in post-operative pain control, with long-lasting relief of pain. This study aimed to compare the effects of pre-operative local injections of tramadol and dexamethasone on post-operative pain, nausea and vomiting in patients who underwent tonsillectomy. Sixty patients between 3-13 years of age who were planned for tonsillectomy were included in the study. Patients were divided into three groups. Group 1 was the control group. Patients in Group 2 received 0.3 mg/kg Dexamethasone and Group 3 received 0.1 mg/kg Tramadol injection to the peritonsillary space just before the operation. Patients were evaluated for nausea, vomiting, and pain. When the control and the dexamethasone groups were compared; there were statistically significant differences in pain scores at post-operative 15 and 30 min, whereas there was no statistically significant difference in pain scores at other hours. When the control and tramadol groups were compared, there was a statistically significant difference in pain scores at all intervals. When tramadol and dexamethasone groups were compared, there was no statistically significant difference in pain scores at post-operative 15 and 30 min, 1 and 2 h, whereas there was a statistically significant difference in pain scores at post-operative 6 and 24 h.
Control Theory Perspective of Effects-Based Thinking and Operations: Modelling "Operations" as a Feedback Control System
Farrell, Philip S
This paper explores operations that involve effects-based thinking (EBT) using Control Theory techniques in order to highlight the concept's fundamental characteristics in a simple and straightforward manner...
Post-operative bilateral continuous ultrasound-guided transversus abdominis plane block versus continuous local anaesthetic wound infusion in patients undergoing abdominoplasty
Eman Ramadan Salama
Background and Aims: Transversus abdominis plane (TAP) block and continuous local anaesthetic wound infusion are used as part of multimodal analgesia to treat postoperative pain after lower abdominal surgeries. The aim of this randomised controlled study was to assess the efficacy of the two techniques and compare them in patients undergoing abdominoplasty. Methods: Ninety female patients undergoing abdominoplasty were allocated to receive continuous wound infusion with saline (control group, GC, n = 30), continuous bilateral TAP block with 0.25% levobupivacaine (group GT, n = 30), or continuous wound infusion with 0.25% levobupivacaine (group GW, n = 30). The primary end-point was morphine requirement in the first 48 h. Numerical rating scale (NRS) scores at rest and during movement, time to first morphine dose and time to first ambulation were recorded. Results: Morphine requirement in the first 48 h was significantly higher in GC than in GW and GT (61.9 ± 12.8, 21.5 ± 9.5, and 18.9 ± 8.1 mg, respectively; P = 0.001), but GW and GT were comparable (P = 0.259). NRS was significantly higher in GC during movement in the first 24 h. GW and GT showed significantly longer times to first morphine dose (6.5 ± 1.7 and 8.9 ± 1.4 h, respectively, vs. 1.2 ± 0.3 h in GC) and significantly shorter times to first ambulation (7.8 ± 3.1 and 6.9 ± 3.4 h, respectively, vs. 13.2 ± 4.9 h in GC) (P = 0.001). Conclusion: Continuous bilateral ultrasound-guided TAP block and continuous local anaesthetic wound infusion significantly decreased total morphine consumption in the first 48 h compared to placebo; however, both treatment techniques were comparable.
Independent assessment to continue improvement: Implementing statistical process control at the Hanford Site
Hu, T.A.; Lo, J.C.
A Quality Assurance independent assessment has brought about continued improvement in the PUREX Plant surveillance program at the Department of Energy's Hanford Site. After the independent assessment, Quality Assurance personnel were closely involved in improving the surveillance program, specifically regarding storage tank monitoring. The independent assessment activities included reviewing procedures, analyzing surveillance data, conducting personnel interviews, and communicating with management. Process improvement efforts included: (1) designing data collection methods; (2) gaining concurrence between engineering and management, (3) revising procedures; and (4) interfacing with shift surveillance crews. Through this process, Statistical Process Control (SPC) was successfully implemented and surveillance management was improved. The independent assessment identified several deficiencies within the surveillance system. These deficiencies can be grouped into two areas: (1) data recording and analysis and (2) handling off-normal conditions. By using several independent assessment techniques, Quality Assurance was able to point out program weakness to senior management and present suggestions for improvements. SPC charting, as implemented by Quality Assurance, is an excellent tool for diagnosing the process, improving communication between the team members, and providing a scientific database for management decisions. In addition, the surveillance procedure was substantially revised. The goals of this revision were to (1) strengthen the role of surveillance management, engineering and operators and (2) emphasize the importance of teamwork for each individual who performs a task. In this instance we believe that the value independent assessment adds to the system is the continuous improvement activities that follow the independent assessment. Excellence in teamwork between the independent assessment organization and the auditee is the key to continuing improvement
Research on Integrated Control of Microgrid Operation Mode
Cheng, ZhiPing; Gao, JinFeng; Li, HangYu
The mode-switching control of a microgrid is the focus of its system control. According to the characteristics of the different controls, an integrated control system is put forward based on detecting the voltage and frequency deviations after the microgrid operating mode switches. This control system employs master-slave and peer-to-peer control. The wind turbine and photovoltaic (PV) units adopt P/Q control, so that the maximum power output can be achieved. The energy storage works under droop control if the system is grid-connected. When the system is off-grid, whether to employ droop control or P/f control is determined by the system status. Simulations have been carried out and show that the system performance can meet the requirements.
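A hedged sketch of the switching logic described above: voltage and frequency deviations are monitored and, once islanding is detected, the storage unit's control law is switched from droop (grid-connected) to direct P/f regulation. The thresholds and the decision rule are assumptions for illustration, not values from the paper.

    # Illustrative mode-selection rule for the storage converter (assumed thresholds).
    F_NOM, V_NOM = 50.0, 1.0   # Hz, p.u.
    F_TOL, V_TOL = 0.5, 0.1    # deviations used to declare islanding (assumed)

    def select_control_mode(freq, volt, grid_breaker_closed):
        """Return the control mode the energy storage unit should run in."""
        islanded = (not grid_breaker_closed) or abs(freq - F_NOM) > F_TOL or abs(volt - V_NOM) > V_TOL
        if islanded:
            return "P/f control"    # storage regulates frequency/voltage directly
        return "droop control"      # storage shares load with the upstream grid

    print(select_control_mode(49.3, 0.98, grid_breaker_closed=True))   # -> P/f control
    print(select_control_mode(50.0, 1.00, grid_breaker_closed=True))   # -> droop control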
Design and operation of a filter reactor for continuous production of a selected pharmaceutical intermediate
Christensen, Kim Müller; Pedersen, Michael Jønch; Dam-Johansen, Kim
in tetrahydrofuran solvent. The use of the filter reactor design was explored by examining the transferability of a synthesis step in a present full-scale semi-batch pharmaceutical production into continuous processing. The main advantages of the new continuous minireactor system, compared to the conventional semi...
Continuous improvement in teams : The (mis)fit between improvement and operational activities of improvement teams
Ros, D.J.
Since the 1970s and 1980s, increasing attention has been paid to the Japanese ways of organising production. One of the subjects often discussed is the importance of continuous incremental improvements. Nowadays, for many organisations, continuous improvement has become an important topic; many
Thermionic gun control system for the CEBAF [Continuous Electron Beam Accelerator Facility] injector
Pico, R.; Diamond, B.; Fugitt, J.; Bork, R.
The injector for the CEBAF accelerator must produce a high-quality electron beam to meet the overall accelerator specifications. A Hermosa electron gun with a 2 mm-diameter cathode and a control aperture has been chosen as the electron source. This must be controlled over a wide range of operating conditions to meet the beam specifications and to provide flexibility for accelerator commissioning. The gun is controlled using Computer Automated Measurement and Control (CAMAC IEEE-583) technology. The system employs the CAMAC-based control architecture developed at CEBAF. The control system has been tested, and early operating data on the electron gun and the injector beam transport system has been obtained. This system also allows gun parameters to be stored at the operator location, without paralyzing operation. This paper describes the use of this computer system in the control of the CEBAF electron gun. 2 refs., 6 figs., 1 tab
Operation safety of control systems. Principles and methods
Aubry, J.F.; Chatelet, E.
This article presents the main operation safety methods that can be implemented to design safe control systems taking into account the behaviour of the different components with each other (binary 'operation/failure' behaviours, non-consistent behaviours and 'hidden' failures, dynamical behaviours and temporal aspects etc). To take into account these different behaviours, advanced qualitative and quantitative methods have to be used which are described in this article: 1 - qualitative methods of analysis: functional analysis, preliminary risk analysis, failure mode and failure effects analyses; 2 - quantitative study of systems operation safety: binary representation models, state space-based methods, event space-based methods; 3 - application to the design of control systems: safe specifications of a control system, qualitative analysis of operation safety, quantitative analysis, example of application; 4 - conclusion. (J.S.)
Startup and long term operation of enhanced biological phosphorus removal in continuous-flow reactor with granules.
Li, Dong; Lv, Yufeng; Zeng, Huiping; Zhang, Jie
The startup and long-term operation of enhanced biological phosphorus removal (EBPR) in a continuous-flow reactor (CFR) with granules were investigated in this study. By gradually reducing the settling time from 9 min to 3 min, the startup of EBPR in a CFR with granules was successfully achieved in 16 days. Under continuous-flow operation, the granules showed good phosphorus and COD removal performance and were stably operated for more than 6 months. The granules were characterized by a particle size of around 960 μm, a loose structure and good settling ability. During the startup phase, polysaccharides (PS) were secreted excessively by the microorganisms to resist the influence of the change in operational mode. Results of relative quantitative PCR indicated that granules dominated by polyphosphate-accumulating organisms (PAOs) were more readily accumulated in the CFR because more excellent settling ability was needed in the system. Copyright © 2016 Elsevier Ltd. All rights reserved.
Green operations of belt conveyors by means of speed control
Belt conveyors can be partially loaded due to the variation of the bulk material flow loaded onto the conveyor. Speed control attempts to reduce the energy consumption of belt conveyors and to enable their green operations. Current research on speed control rarely takes the conveyor dynamics
On the unboundedness of control operators for bilinear systems ...
The aim of this work is to study the classes of unbounded linear control operators which ensure the existence and uniqueness of the mild and strong solutions of certain bilinear control systems. By an abstract approach, similar to that adopted by Weiss [18], we obtain a connection between these classes and those ...
Analysis of Access Control Policies in Operating Systems
Chen, Hong
Operating systems rely heavily on access control mechanisms to achieve security goals and defend against remote and local attacks. The complexities of modern access control mechanisms and the scale of policy configurations are often overwhelming to system administrators and software developers. Therefore, mis-configurations are common, and the…
Multilayer control for inverters in parallel operation without signal interconnection
Hua, Ming; Hu, Haibing; Xing, Yan
A multilayer control is proposed in this paper for inverters operating in parallel without signal interconnection. The control is embedded in each inverter and consists of three layers. The first layer is based on an improved droop method, which shares the active and reactive power in each module...
Multilayer Control for Inverters in Parallel Operation without Intercommunications
In this paper, a multilayer control is proposed for inverters able to operate in parallel without intercommunications. The first control layer is an improved droop method that introduces power-proportional terms into the conventional droop scheme, letting both active and reactive power be shared...
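For orientation, the conventional droop relations that such improved schemes build on can be written in a couple of lines; the droop coefficients below are placeholders, not values from either paper.

    # Conventional P-f / Q-V droop (placeholder coefficients; improved schemes add further terms).
    def droop(p_meas, q_meas, f_nom=50.0, v_nom=1.0, m=0.0005, n=2e-5, p0=0.0, q0=0.0):
        f_ref = f_nom - m * (p_meas - p0)   # active power pulls the frequency reference down
        v_ref = v_nom - n * (q_meas - q0)   # reactive power pulls the voltage reference down
        return f_ref, v_ref

    print(droop(p_meas=2000.0, q_meas=500.0))   # -> (49.0, 0.99)

Because each inverter computes its references only from its own power measurements, no signal interconnection between the modules is needed, which is the point of wireless parallel operation.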
Post operative pain control in inguinal hernia repair: comparison of ...
Background: Post-operative pain control is a key factor in surgery. It greatly increases patient satisfaction, and influences the hospital stay period. Local wound infiltration has often been used to control postoperative pain following hernia surgery, with the use of the conventional local anesthetics like Lidocaine or ...
Novel operation and control modes for series-resonant converters
Haan, de S.W.H.; Huisman, H.
A series-resonant converter (SRC) able to generate an output voltage either lower or higher than the source voltage is described. Moreover, a novel control scheme is presented which renders two degrees of freedom for control and which guarantees symmetrical steady-state waveforms in all operation
IMP-J attitude control prelaunch analysis and operations plan
Hooper, H. L.; Mckendrew, J. B.; Repass, G. D.
A description of the attitude control support being supplied for the Explorer 50 mission is given. Included in the document are descriptions of the computer programs being used to support attitude determination, prediction, and control for the mission and descriptions of the operating procedures that will be used to accomplish mission objectives.
Multi-rate h2 tracking control with mixed continuous-discrete performance criteria
Kahane, A.C.; Palmor, Z.J.; Mirkin, L.
Control goals defined both in continuous and discrete time arise naturally in many sampled-data tracking control problems. The design methods found in the literature deal with each kind of those control goals separately, over-emphasizing one kind at the expense of the other. We formulate and solve these tracking control problems as an H2 optimization problem with a mixed continuous/discrete performance criterion. It is argued that the proposed setup enables tradeoff between the various control goals in a natural manner and thus leads to better tracking characteristics
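For orientation only, a generic mixed continuous/discrete H2-type tracking criterion of the kind referred to above can be written as follows; the signals and weights are placeholders rather than the paper's exact functional.

    \[
      J \;=\; \int_0^{\infty} z_c(t)^{\top} Q_c\, z_c(t)\,\mathrm{d}t \;+\; \sum_{k=0}^{\infty} z_d[k]^{\top} Q_d\, z_d[k],
    \]

where z_c collects the continuous-time (intersample) tracking errors, z_d the errors at the sampling instants, and Q_c, Q_d are weighting matrices that trade the two kinds of goals off against each other.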
Concluding from operating experience to instrumentation and control systems
Pleger, H.; Heinsohn, H.
Where conclusions are drawn from operating experience to instrumentation and control systems, two general statements should be made. First: There have been breakdowns, there have also been deficiencies, but in principle operating experience with the instrumentation and control systems of German nuclear power plants has been good. With respect to the debates about the use of modern digital instrumentation and control systems it is safe to say, secondly, that the instrumentation and control systems currently in use are working reliably. Hence, there is no need at present to replace existing systems for reasons of technical safety. However, that time will come. It is a good thing, therefore, that the use of modern digital instrumentation and control systems is to begin in the field of limiting devices. The operating experience which will thus be accumulated will benefit digital instrumentation and control systems in their qualification process for more demanding applications. This makes proper logging of operating experience an important function, even if it cannot be transferred in every respect. All parties involved therefore should see to it that this operating experience is collected in accordance with criteria agreed upon so as to prevent unwanted surprises later on. (orig.)
Aiding operator performance at low power feedwater control
Woods, D.D.
Control of the feedwater system during low power operations (approximately 2% to 30% power) is a difficult task where poor performance (excessive trips) has a high cost to utilities. This paper describes several efforts in the human factors aspects of this task that are underway to improve feedwater control. A variety of knowledge acquisition techniques have been used to understand the details of what makes feedwater control at low power difficult and what knowledge and skill distinguishes expert operators at this task from less experienced ones. The results indicate that there are multiple factors that contribute to task difficulty
SMES application for frequency control during islanded microgrid operation
Kim, A-Rong; Kim, Gyeong-Hun; Heo, Serim; Park, Minwon; Yu, In-Keun; Kim, Hak-Man
Highlights: ► The operating characteristics of SMES for the frequency control of an islanded microgrid were investigated. ► The SMES contributes well for frequency control in the islanded operation. ► A dual and a single magnet type of SMES have been compared to demonstrate the performances. -- Abstract: This paper analyzes the operating characteristics of a superconducting magnetic energy storage (SMES) for the frequency control of an islanded microgrid operation. In the grid-connected mode of a microgrid, an imbalance between power supply and demand is solved by a power trade with the upstream power grid. The difference in the islanded mode is a critical problem because the microgrid is isolated from any power grid. For this reason, the frequency control during islanded microgrid operation is a challenging issue. A test microgrid in this paper consisted of a wind power generator, a PV generation system, a diesel generator and a load to test the feasibility of the SMES for controlling frequency during islanded operation as well as the transient state varying from the grid-connected mode to the islanded mode. The results show that the SMES contributes well for frequency control in the islanded operation. In addition, a dual and a single magnet type of SMES have been compared to demonstrate the control performance. The dual magnet has the same energy capacity as the single magnet, but there are two superconducting coils and each coil has half inductance of the single magnet. The effectiveness of the SMES application with the simulation results is discussed in detail.
Kim, A-Rong, E-mail: [email protected] [Changwon National University, Sarim-dong, Changwon 641-773 (Korea, Republic of); Kim, Gyeong-Hun; Heo, Serim; Park, Minwon [Changwon National University, Sarim-dong, Changwon 641-773 (Korea, Republic of); Yu, In-Keun, E-mail: [email protected] [Changwon National University, Sarim-dong, Changwon 641-773 (Korea, Republic of); Kim, Hak-Man [University of Incheon, Songdo-dong, Incheon 406-772 (Korea, Republic of)
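As a hedged illustration of how an SMES can support frequency in the islanded mode, the sketch below issues a power command proportional to the frequency deviation, saturated by an assumed converter rating and coil energy limits. Gains and ratings are invented; this is not the controller studied in the paper.

    # Toy SMES frequency-support dispatch (assumed gains and ratings).
    F_NOM = 60.0   # Hz nominal frequency (assumed)
    K_F = 500.0    # kW commanded per Hz of deviation (assumed droop-like gain)
    P_MAX = 300.0  # kW converter rating (assumed)
    E_MAX = 1000.0 # kJ usable coil energy (assumed)

    def smes_power(freq, energy_stored):
        """Positive = discharge into the microgrid, negative = charge from it."""
        p_cmd = K_F * (F_NOM - freq)              # support low frequency, absorb high frequency
        p_cmd = max(-P_MAX, min(P_MAX, p_cmd))    # converter limit
        if p_cmd > 0 and energy_stored <= 0.0:    # cannot discharge an empty coil
            p_cmd = 0.0
        if p_cmd < 0 and energy_stored >= E_MAX:  # cannot charge a full coil
            p_cmd = 0.0
        return p_cmd

    print(smes_power(59.7, energy_stored=600.0))  # -> 150.0 kW discharge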
Multilayer robust control for safety enhancement of reactor operations
Edwards, R.M.; Lee, K.Y.; Ray, A.
A novel concept of reactor power and temperature control has been recently reported in which a conventional output feedback controller is embedded within a state feedback setting. The embedded output feedback controller at the inner layer largely compensates for plant modeling uncertainties and external disturbances, and the outer layer generates an optimal control signal via feedback of the estimated plant states. A major advantage of this embedded architecture is the robustness of the control system relative to parametric and nonparametric uncertainties and thus the opportunity for designing fault-accommodating control algorithms to improve reactor operations and plant safety. The paper illustrates the architecture of the state-feedback-assisted classical (SFAC) control, which utilizes an embedded output feedback controller designed via classical techniques. It demonstrates the difference between the performance of conventional state feedback control and SFAC by examining the sensitivity of the dominant eigenvalues of the individual closed-loop systems
Controlled Quantum Operations of a Semiconductor Three-Qubit System
Li, Hai-Ou; Cao, Gang; Yu, Guo-Dong; Xiao, Ming; Guo, Guang-Can; Jiang, Hong-Wen; Guo, Guo-Ping
In a specially designed semiconductor device consisting of three capacitively coupled double quantum dots, we achieve strong and tunable coupling between a target qubit and two control qubits. We demonstrate how to completely switch on and off the target qubit's coherent rotations by presetting two control qubits' states. A Toffoli gate is, therefore, possible based on these control effects. This research paves a way for realizing full quantum-logic operations in semiconductor multiqubit systems.
ADVANCED COMPRESSOR ENGINE CONTROLS TO ENHANCE OPERATION, RELIABILITY AND INTEGRITY
Gary D. Bourn; Jess W. Gingrich; Jack A. Smith
This document is the final report for the 'Advanced Compressor Engine Controls to Enhance Operation, Reliability, and Integrity' project. SwRI conducted this project for DOE in conjunction with Cooper Compression, under DOE contract number DE-FC26-03NT41859. This report addresses an investigation of engine controls for integral compressor engines and the development of control strategies that implement closed-loop NOx emissions feedback.
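One minimal way to picture closed-loop NOx feedback is an integral trim on the air-fuel ratio driven by the error between measured and target NOx, as sketched below. The NOx-versus-AFR relation, gains and limits are invented for the sketch and are not the SwRI/Cooper control strategy.

    # Illustrative integral trim of air-fuel ratio (AFR) from a NOx error signal (assumed numbers).
    nox_target = 2.0   # g/bhp-hr emissions target (assumed)
    afr = 27.0         # current trapped air-fuel ratio (assumed lean-burn operating point)
    ki = 0.2           # integral gain on the trim (assumed)

    def measured_nox(afr):
        # stand-in engine/sensor model: a leaner mixture gives lower NOx (assumed monotone relation)
        return max(0.0, 8.0 - 0.2 * afr)

    for cycle in range(50):
        err = measured_nox(afr) - nox_target
        afr += ki * err                      # lean the mixture out while NOx is above target
        afr = min(max(afr, 25.0), 40.0)      # stay inside an assumed stable combustion window

    print(f"settled AFR ~ {afr:.1f}, NOx ~ {measured_nox(afr):.2f} g/bhp-hr")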
Control system for several rotating mirror camera synchronization operation
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized working process of the GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at different speeds or at the same speed.
Localization for off-diagonal disorder and for continuous Schroedinger operators
Delyon, F.; Souillard, B.; Simon, B.
We extend the proof of localization by Delyon, Levy, and Souillard to accommodate the Anderson model with off-diagonal disorder and the continuous Schroedinger equation with a random potential. (orig.)
78 FR 41993 - Transport Handling Specialists, Inc.-Continuance in Control Exemption-RSL Railroad, LLC
... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. FD 35726] Transport Handling Specialists, Inc.--Continuance in Control Exemption--RSL Railroad, LLC Transport Handling Specialists, Inc. (THS), has filed a verified notice of exemption (Notice) under 49 CFR 1180.2(d)(2) to continue in...
Temperature control in a continuously mixed bioreactor for solid-state fermentation
Nagel, F.J.J.I.; Tramper, J.; Bakker, M.S.N.; Rinzema, A.
A continuously mixed, aseptic paddle mixer was used successfully for solid-state fermentation (SSF) with Aspergillus oryzae on whole wheat kernels. Continuous mixing improved temperature control and prevented inhomogeneities in the bed. Respiration rates found in this system were comparable to those
Predicting bulk powder flow dynamics in a continuous mixer operating in transitory regimes
Ammarcha, Chawki; Gatumel, Cendrine; Dirion, Jean-Louis; Cabassud, Michel; Mizonov, Vadim; Berthiaux, Henri
Over recent years there has been increasing interest in continuous powder mixing processes, due mainly to the development of on-line measurement techniques. However, our understanding of these processes remains limited, particularly with regard to their flow and mixing dynamics. In the present work, we study the behaviour of a pilot-scale continuous mixer during transitory regimes, in terms of hold-up weight and outflow changes. We present and discuss experimental resu...
Method for automatic control rod operation using rule-based control
Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi
An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions and realize fast inference (fast selection of a suitable production rule), (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests on automatic control rod operation at a BWR plant start-up. The results were as follows; (1) The performance which is related to stabilization of controlled variables and time required for reactor start-up, was superior to that of other methods such as PID control and program control methods, (2) the process time to select and interpret the suitable production rule, which was the same as required for event recognition or determination of control action, was short (below 1 s) enough for real time control. The results showed that the method is effective for automatic control rod operation. (author)
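A hedged sketch of the two ingredients named in this abstract, a production rule that gates the action on the recognized plant state and a fuzzy rule base that turns the power error into a rod-speed command, is given below. The rule, membership functions and speeds are invented for illustration and are not from the paper.

    # Illustrative production rule + fuzzy inference for rod withdrawal speed (invented rules).
    def tri(x, a, b, c):
        """Triangular membership function on [a, c] peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def rod_speed_command(power, power_setpoint, period_s):
        # production rule: never withdraw rods on a short reactor period
        if period_s < 30.0:
            return 0.0
        e = power_setpoint - power                     # power error in % of rated power
        small, large = tri(e, 0.0, 1.0, 2.0), tri(e, 1.0, 3.0, 5.0)
        speeds = {"slow": 2.0, "fast": 10.0}           # cm/min withdrawal speeds (assumed)
        num = small * speeds["slow"] + large * speeds["fast"]
        den = small + large
        return num / den if den > 0.0 else 0.0         # weighted-average defuzzification

    print(rod_speed_command(power=40.0, power_setpoint=42.0, period_s=80.0))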
ZHANG Weikang
In order to satisfy all the requirements of Unmanned Underwater Vehicle (UUV) recovery tasks, a new type of abdominal operating Remotely Operated Vehicle (ROV) was developed. The abdominal operating ROV differs from a general ROV, which works with a manipulator, in that it completes the docking and recovery tasks of UUVs with its abdominal operating mechanism. In this paper, the system composition and principles of the abdominal operating ROV are presented. We then propose a framework for a control system in which an integrated industrial reinforced computer acts as the surface monitor unit, a PC104 embedded industrial computer acts as the underwater master control unit, and the other drive boards act as the driver unit. In addition, the dynamics model and a robust H-infinity controller for automatic orientation in the horizontal plane were designed and built. Single tests, system tests and underwater tests show that this control system has good real-time performance and reliability, and that it can complete the recovery task of a UUV. The presented structure and algorithm could serve as a reference for the control system development of mobile robots, drones, and biomimetic robots.
Kim, A.-Rong; Kim, Gyeong-Hun; Heo, Serim; Park, Minwon; Yu, In-Keun; Kim, Hak-Man
This paper analyzes the operating characteristics of a superconducting magnetic energy storage (SMES) system for frequency control during islanded microgrid operation. In the grid-connected mode of a microgrid, an imbalance between power supply and demand is resolved through power trading with the upstream power grid. The difference in the islanded mode is a critical problem because the microgrid is isolated from any power grid. For this reason, frequency control during islanded microgrid operation is a challenging issue. The test microgrid in this paper consisted of a wind power generator, a PV generation system, a diesel generator and a load, used to test the feasibility of the SMES for controlling frequency during islanded operation as well as during the transient state from the grid-connected mode to the islanded mode. The results show that the SMES contributes well to frequency control in islanded operation. In addition, a dual and a single magnet type of SMES have been compared to demonstrate the control performance. The dual magnet has the same energy capacity as the single magnet, but there are two superconducting coils and each coil has half the inductance of the single magnet. The effectiveness of the SMES application is discussed in detail together with the simulation results.
This report contains a detailed description of the work conducted for the project on Integrated Robot-Human Control in Mining Operations at University of Nevada, Reno. This project combines human operator control with robotic control concepts to create a hybrid control architecture, in which the strengths of each control method are combined to increase machine efficiency and reduce operator fatigue. The kinematics reconfiguration type differential control of the excavator implemented with a variety of 'software machine kinematics' is the key feature of the project. This software re-configured excavator is more desirable to execute a given digging task. The human operator retains the master control of the main motion parameters, while the computer coordinates the repetitive movement patterns of the machine links. These repetitive movements may be selected from a pre-defined family of trajectories with different transformations. The operator can make adjustments to this pattern in real time, as needed, to accommodate rapidly-changing environmental conditions. A working prototype has been developed using a Bobcat 435 excavator. The machine is operational with or without the computer control system depending on whether the computer interface is on or off. In preparation for emulated mining tasks tests, typical, repetitive tool trajectories during surface mining operations were recorded at the Newmont Mining Corporation's 'Lone Tree' mine in Nevada. Analysis of these working trajectories has been completed. The motion patterns, when transformed into a family of curves, may serve as the basis for software-controlled machine kinematics transformation in the new human-robot control system. A Cartesian control example has been developed and tested both in simulation and on the experimental excavator. Open-loop control is robustly stable and free of short-term dynamic problems, but it allows for drifting away from the desired motion kinematics of the
Experiments in nonlinear dynamics using control-based continuation: Tracking stable and unstable response curves
Bureau, Emil; Schilder, Frank; Santos, Ilmar
We show how to implement control-based continuation in a nonlinear experiment using existing and freely available software. We demonstrate that it is possible to track the complete frequency response, including the unstable branches, for a harmonically forced impact oscillator.
77 FR 25129 - Environmental Impact Statement for Issuance of a Special Use Permit for the Continued Operation...
... Use Permit for the Continued Operation of the Winchester Canyon Gun Club; Los Padres National Forest... environmental impact statement (EIS). SUMMARY: The USDA, Forest Service, Los Padres National Forest, gives...: Send written comments to: Los Padres National Forest, 6755 Hollister Avenue, Suite 150, Goleta, CA...
Absolutely Continuous Spectrum for Random Schrödinger Operators on the Fibonacci and Similar Tree-strips
Sadel, Christian, E-mail: [email protected] [University of British Columbia, Mathematics Department (Canada)
We consider cross products of finite graphs with a class of trees that have arbitrarily but finitely long line segments, such as the Fibonacci tree. Such cross products are called tree-strips. We prove that for small disorder random Schrödinger operators on such tree-strips have purely absolutely continuous spectrum in a certain set.
21 CFR 111.110 - What quality control operations are required for laboratory operations associated with the...
... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What quality control operations are required for laboratory operations associated with the production and process control system? 111.110 Section 111.110 Food... § 111.110 What quality control operations are required for laboratory operations associated with the...
Human error mode identification for NPP main control room operations using soft controls
Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol
The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has considerably changed over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. The actions for the NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls. Operators need to navigate the screens to find indicators and controls and manipulate controls using a mouse, touch screens, and so on. Due to these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed. This work classified the human errors in soft controls into six types, and the reasons that affect the occurrence of the human errors were also analyzed. (author)
Replacement of the computerized control system at NPP under operation
Ermolaev, A.D.; Rakitin, I.D.
Reasons and preconditions for replacement of the computerized control systems (CCS) at NPPs under operation are considered. Problems dealing with management of CCS replacement, maintenance of a new CCS, as well as NPP personnel training for the new system maintenance are discussed. The necessity of NPP personnel participation in these works, in order to adapt the CCS to the requirements of NPP operating personnel and to initiate the training process, is underlined. Replacement of a CCS at an NPP under operation is associated, as a rule, with obsolescence of old systems that no longer meet the growing requirements for NPP operability and safety. The principles observed in CCS replacement reduce, mainly, to the following: maximum utilization of existing equipment, metal structures, cables, instruments, power supplies and ventilation systems; a minimum of construction work and new communications; the least change to existing panels and boxes; and changes in control desks introduced on the basis of an analysis of operator actions.
The development of control technologies applied to waste processing operations
Grasz, E.; Baker, S.; Couture, S.; Dennison, D.; Holliday, M.; Hurd, R.; Kettering, B.; Merrill, R.; Wilhelmson, K.
Typical waste and residue processes involve some level of human interaction. The risk of exposure to unknown hazardous materials and the potential for radiation contamination provide the impetus for physically separating or removing operators from such processing steps. Technologies that facilitate separation of the operator from potential contamination include glove box robotics; modular systems for remote and automated servicing; and interactive controls that minimize human intervention. Lawrence Livermore National Laboratory (LLNL) is developing an automated system which by design will supplant the operator for glove box tasks, thus affording protection from the risk of radiation exposure and minimizing operator associated waste. This paper describes recent accomplishments in technology development and integration, and outlines the future goals at LLNL for achieving this integrated, interactive control capability
The approach associated with the continued operation of the Calder Hall and Chapelcross nuclear power stations to 50 years
Ayres, G.
Calder Hall was the world's first commercial nuclear power station, commencing operation in 1956, and with its sister station at Chapelcross has operated successfully, with consistently high load factors, for approximately 40 years. The first part of this paper reviews the operating history of the stations. Secondly, the paper will briefly describe both the work carried out under the Long Term Safety Review which has supported operation to 40 years and the work being carried out as part of a Periodic Safety Review to support continued operation of both stations to 50 years. The commercial improvements, some of which, of course, do have some nuclear safety significance, will be briefly described in the context of operating within what is increasingly becoming a demanding privatized electricity market in the United Kingdom. Finally, potential life limiting features will be identified and the monitoring programmes described leading to the conclusion that there is no reason why the stations should not continue to operate to at least 50 years. (author). 4 refs
Developing operator capacity estimates for supervisory control of autonomous vehicles.
Cummings, M L; Guerlain, Stephanie
This study examined operators' capacity to successfully reallocate highly autonomous in-flight missiles to time-sensitive targets while performing secondary tasks of varying complexity. Regardless of the level of autonomy for unmanned systems, humans will be necessarily involved in the mission planning, higher level operation, and contingency interventions, otherwise known as human supervisory control. As a result, more research is needed that addresses the impact of dynamic decision support systems that support rapid planning and replanning in time-pressured scenarios, particularly on operator workload. A dual screen simulation that allows a single operator the ability to monitor and control 8, 12, or 16 missiles through high level replanning was tested on 42 U.S. Navy personnel. The most significant finding was that when attempting to control 16 missiles, participants' performance on three separate objective performance metrics and their situation awareness were significantly degraded. These results mirror studies of air traffic control that demonstrate a similar decline in performance for controllers managing 17 aircraft as compared with those managing only 10 to 11 aircraft. Moreover, the results suggest that a 70% utilization (percentage busy time) score is a valid threshold for predicting significant performance decay and could be a generalizable metric that can aid in manning predictions. This research is relevant to human supervisory control of networked military and commercial unmanned vehicles in the air, on the ground, and on and under the water.
Material operating behaviour of ABB BWR control rods
Rebensdorff, B.; Bart, G.
The BWR control rods made by ABB use boron carbide (B4C) and hafnium as absorber materials within a cladding of stainless steel. The general behaviour under operation has proven to be very good. ABB and many of their control rod customers have performed extensive inspection programs of control rod behaviour. However, due to changes in the material properties under fast and thermal neutron irradiation, defects may occur in the control rods at high neutron fluences. Examinations of irradiated control rod materials have been performed in hot cell laboratories. The examinations have revealed that the defect mechanism Irradiation Assisted Stress Corrosion Cracking (IASCC) appears in the stainless steel cladding. For IASCC to occur, three factors have to act simultaneously: stress, material sensitization and an oxidising environment. Stress may be obtained from boron carbide swelling due to irradiation. Stainless steel may be sensitized to intergranular stress corrosion cracking under irradiation. Normally the reactor environment in a BWR is oxidising. The presentation focuses on findings from hot cell laboratory work on irradiated ABB BWR control rods and studies of irradiated control rod materials in the hot cells at PSI. Apart from physical, mechanical and microstructural examinations, isotope analyses were performed to describe the local isotopic burnup of boron. Consequences (such as possible B4C washout) of continued operation in an ABB BWR after the occurrence of a crack are discussed based on neutron radiographic examinations of control rods operated with cracks. (author)
Sprag solenoid brake. [development and operations of electrically controlled brake]
Dane, D. H. (Inventor)
The development and characteristics of an electrically operated brake are discussed. The action of the brake depends on energizing a solenoid which causes internally spaced sprockets to contact the inner surface of the housing. A spring forces the control member to move to the braking position when the electrical function is interrupted. A diagram of the device is provided and detailed operating principles are explained.
Utilizing Robot Operating System (ROS) in Robot Vision and Control
Palmer, "Development of a navigation system for semi-autonomous operation of wheelchairs,� in Proc. of the 8th IEEE/ASME Int. Conf. on Mechatronic ...and Embedded Systems and Applications, Suzhou, China, 2012, pp. 257-262. [30] G. Grisetti, C. Stachniss, and W. Burgard, "Improving grid-based SLAM...OPERATING SYSTEM (ROS) IN ROBOT VISION AND CONTROL by Joshua S. Lum September 2015 Thesis Advisor: Xiaoping Yun Co-Advisor: Zac Staples
Feasibility of touch-less control of operating room lights.
Hartmann, Florian; Schlaefer, Alexander
Today's highly technical operating rooms lead to fairly complex surgical workflows where the surgeon has to interact with a number of devices, including the operating room light. Hence, ideally, the surgeon could direct the light without major disruption of his work. We studied whether a gesture tracking-based control of an automated operating room light is feasible. So far, there has been little research on control approaches for operating lights. We have implemented an exemplary setup to mimic an automated light controlled by a gesture tracking system. The setup includes an articulated arm to position the light source and an off-the-shelf RGBD camera to detect the user interaction. We assessed the tracking performance using a robot-mounted hand phantom and ran a number of tests with 18 volunteers to evaluate the potential of touch-less light control. All test persons were comfortable with using the gesture-based system and quickly learned how to move a light spot on a flat surface. The hand tracking error is direction-dependent and in the range of several centimeters, with a standard deviation of less than 1 mm and up to 3.5 mm orthogonal and parallel to the finger orientation, respectively. However, the subjects had no problems following even more complex paths with a width of less than 10 cm. The average speed was 0.15 m/s, and even initially slow subjects improved over time. Gestures to initiate control can be performed in approximately 2 s. Two-thirds of the subjects considered gesture control to be simple, and a majority considered it to be rather efficient. Implementation of an automated operating room light and touch-less control using an RGBD camera for gesture tracking is feasible. The remaining tracking error does not affect smooth control, and the use of the system is intuitive even for inexperienced users.
Numerical simulation of manual operation at MID stand control room
Doca, C.; Dobre, A.; Predescu, D.; Mielcioiu, A.
Since 2000, a package of software products devoted to the numerical simulation of manual operations in the fueling machine control room has been developed at INR Pitesti. So far, the PUPITRU code has been specified, designed, worked out and implemented. The following issues were solved: graphical aspects of the specific computer-human operator interface; functional and graphical simulation of the whole associated equipment of the control desk components; implementation of the main notation as used in the automated schemes of the control desk, in view of fast identification of the switches, lamps, instrumentation, etc.; implementation within the PUPITRU code of the entire database used in the frame of MID tests; and implementation of a number of about 1000 numerical simulation equations describing specific operational MID testing situations
Operational and Strategic Controlling Tools in Microenterprises - Case Study
Konsek-Ciechońska, Justyna
Globalisation and the increasing requirements of the environment cause executives and supervisors to search for ever more perfect solutions that allow them to streamline and improve the effectiveness of company operations. One such tool, used more and more often, is controlling, the role of which has increased substantially in recent years. It is now implemented not only in large companies with foreign capital, but also in increasingly smaller entities, which have begun to notice the positive effects of implementing the principles and tools of controlling, both operational and strategic. The purpose of the article is to demonstrate the practical side of controlling tools that can be used in the operations conducted by microenterprises.
Operational Stress Control and Readiness (OSCAR): The United States Marine Corps Initiative to Deliver Mental Health Services to Operating Forces
Nash, William P
Combat/operational stress control, defined as programs and policies to prevent, identify, and manage adverse combat/operational stress reactions, is the primary responsibility of military commanders...
Configuration control during plant outages. A review of operating experience
Peinador Veira, Miguel; El Kanbi, Semir [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; Stephan, Jean-Luc [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France); Martens, Johannes [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Koeln (Germany)
After the occurrence of several significant events in nuclear power plants during shut-down modes of operation in the eighties, and from the results of probabilistic safety assessments completed in the nineties, it was clear that risk from low power and shutdown operational modes could not be neglected and had to be addressed by appropriate safety programs. A comprehensive review of operating experience from the last ten years has been conducted by the Joint Research Centre with the objective of deriving lessons learned and recommendations useful for nuclear regulatory bodies and utilities alike. This paper is focused on one particular challenge that any nuclear plant faces whenever it plans its next outage period: how to manage the configuration of all systems under a complex environment involving numerous concurrent activities, and how to make sure that systems are returned to their valid configuration before the plant resumes power operation. This study highlights the importance of conveying accurate but synthesized information on the status of the plant to the operators in the main control room. Many of the lessons learned are related to the alarm display in the control room and to the use of check lists to control the status of systems. Members of the industry and safety authorities may now use these recommendations and lessons learned to feed their own operating experience feedback programs, and check their applicability for specific sites.
TEMPERATURE CONTROL OF A CONTINUOUS STIRRED TANK REACTOR BY MEANS OF TWO DIFFERENT INTELLIGENT STRATEGIES
Rahmat, Mohd Fua'ad; Yazdani, Amir Mehdi; Movahed, Mohammad Ahmadi; Mahmoudzadeh, Somaiyeh
The Continuous Stirred Tank Reactor (CSTR) is an important subject in chemical processing and offers a diverse range of research in the area of chemical and control engineering. Various control approaches have been applied to the CSTR to control its parameters. This paper presents two different control strategies based on the combination of a novel socio-political optimization algorithm, called the Imperialist Competitive Algorithm (ICA), and the concept of gain scheduling performed by means of the l...
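As a rough illustration of the gain-scheduling half of such a strategy, the sketch below interpolates PI gains against the current temperature of a toy first-order CSTR model. The schedule points, gains, plant constants and anti-windup rule are assumptions made for illustration; the ICA-based tuning used in the paper is not reproduced here.

```python
# Gain-scheduled PI temperature control of a toy CSTR model. The schedule,
# gains and plant parameters are illustrative assumptions.
import bisect

# Assumed schedule of (operating temperature K, Kp, Ki), e.g. from offline tuning.
SCHEDULE = [(300.0, 0.20, 0.005), (350.0, 0.35, 0.010), (400.0, 0.50, 0.015)]

def scheduled_gains(T_op):
    """Linearly interpolate PI gains between schedule points."""
    temps = [p[0] for p in SCHEDULE]
    i = bisect.bisect_left(temps, T_op)
    if i == 0:
        return SCHEDULE[0][1:]
    if i == len(SCHEDULE):
        return SCHEDULE[-1][1:]
    (T0, kp0, ki0), (T1, kp1, ki1) = SCHEDULE[i - 1], SCHEDULE[i]
    w = (T_op - T0) / (T1 - T0)
    return kp0 + w * (kp1 - kp0), ki0 + w * (ki1 - ki0)

def simulate(T_set, T0=310.0, T_feed=300.0, K=80.0, tau=60.0, dt=1.0, steps=600):
    """Toy first-order reactor: dT/dt = (-(T - T_feed) + K*u) / tau, with u in [0, 1]."""
    T, integ = T0, 0.0
    for _ in range(steps):
        kp, ki = scheduled_gains(T)        # gains follow the operating point
        e = T_set - T
        u_unsat = kp * e + ki * integ
        u = max(0.0, min(1.0, u_unsat))
        if u == u_unsat:                   # anti-windup: freeze integrator while saturated
            integ += e * dt
        T += dt * (-(T - T_feed) + K * u) / tau
    return T

print(round(simulate(T_set=360.0), 2))     # should settle close to the setpoint
```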
76 FR 37100 - Intent To Prepare an Environmental Impact Statement; Continued Operation of the Department of...
..., hazardous and radioactive material transportation, energy efficiency and renewable energy, nuclear energy, fossil energy, magnetic fusion, basic energy sciences, supercomputing, and biological and environmental.... Further, an updated evaluation of SNL/NM operational and transportation accident analyses and a new...
Critical Knowledge Gaps Concerning Pharmacological Fatigue Countermeasures for Sustained and Continuous Aviation Operations
Saltzgaber, Lee
...: dextroamphetamine, modafinil, caffeine, temazepam, zolpidem, zaleplon, and melatonin. Thirty-four operationally relevant terms and phrases, such as -acceleration, memory, computational performance, and predisposition to heat injury, were used...
Qualified operator training in the simulated control room environment
Ionescu, Teodor; Studineanu, Emil; Radulescu, Catalina; Bolocan, Gabriel
Mainly designed for the training of the Cernavoda NPP Unit 2 operators, the virtual simulated environment allows the training of the already qualified operators for Cernavoda NPP Unit 1, adding to the already trained knowledge the differences which have occurred in the Unit 2 design. Using state-of-the-art computers and displays and qualified software, the virtual simulated panels could offer a viable alternative to classic hardware-based training. This approach allows quick training of the new procedures required by the new configuration of the re-designed operator panels in the main control room of Cernavoda NPP Unit 2. (authors)
Issues and approaches in control for autonomous reactor operation
Vilim, R. B.; Khalil, H. S.; Wei, T. Y. C.
A capability for autonomous and passively safe operation is one of the goals of the NERI funded development of Generation IV nuclear plants. An approach is described for evaluating the effect of increasing autonomy on safety margins and load behavior and for examining issues that arise with increasing autonomy and their potential impact on performance. The method provides a formal approach to the process of exploiting the innate self-regulating property of a reactor to make it less dependent on operator action and less vulnerable to automatic control system fault and/or operator error. Some preliminary results are given
Pure Absolutely Continuous Spectrum for Random Operators on $l^2(Z^d)$ at Low Disorder
Grinshpun, V
Absence of singular continuous component, with probability one, in the spectra of random perturbations of multidimensional finite-difference Hamiltonians, is for the first time rigorously established under certain conditions ensuring either absence of point component, or absence of absolutely continuous component in the corresponding regions of spectra. The main technical tool involved is the rank-one perturbation theory of singular spectra. The respective new result (the non-mixing property) is applied to establish existence and bounds of the (non-empty) pure absolutely continuous component in the spectrum of the Anderson model with bounded random potential in dimension d=2 at low disorder (similar proof holds for d>4). The new result also implies, via trace-class perturbation analysis, that the Anderson model with an unbounded random potential has only pure point spectrum (a complete system of localized wave-functions) with probability one in arbitrary dimension. The basic idea is to establish absence of the mixed,...
The LG-bank control concept: An improved method for PWR load-following operation
Park, Won Seok; Christenson, J.M.
In this paper the authors present the results of an investigation of a new pressurized water reactor load-following control concept that utilizes light gray (LG) banks in combination with a single high-worth bank. The investigation determined a control strategy and a set of nuclear design parameters for the control banks that permits unrestricted load-following operation over a wide power range at both beginning-of-cycle and end-of-cycle conditions. Advantages of the LG-bank control concept are that flexible load-following maneuvers can be performed without either making changes in the boron concentration or requiring the continuous insertion of a high-worth control bank. These features remove both of the disadvantages of current gray-bank load-following designs, which generally require the continuous insertion of a high-worth bank and in some cases also involve changes in the boron concentration
Integrating UF6 Cylinder RF Tracking With Continuous Load Cell Monitoring for Verifying Declared UF6 Feed and Withdrawal Operations
Krichinsky, Alan M.; Miller, Paul; Pickett, Chris A.; Richardson, Dave; Rowe, Nathan C.; Whitaker, J. Michael; Younkin, James R.
Oak Ridge National Laboratory is demonstrating the integration of UF6 cylinder tracking, using RF technology, with continuous load cell monitoring (CLCM) at mock UF6 feed and withdrawal (F and W) stations. CLCM and cylinder tracking are two of several continuous-monitoring technologies that show promise in providing integrated safeguards of F and W operations at enrichment plants. Integrating different monitoring technologies allows advanced, automated event processing to screen innocuous events thereby minimizing false alerts to independent inspectors. Traditionally, international inspectors rely on batch verification of material inputs and outputs derived from operator declarations and periodic on-site inspections at uranium enrichment plants or other nuclear processing facilities. Continuously monitoring F and W activities between inspections while providing filtered alerts of significant operational events will substantially increase the amount of valuable information available to inspectors thereby promising to enhance the effectiveness of safeguards and to improve efficiency in conducting on-site inspections especially at large plants for ensuring that all operations are declared.
The operation characteristics of biohydrogen production in continuous stirred tank reactor with molasses
Hong, C.; Wei, H.; Jie-xuan, D.; Xin, Y.; Chuan-ping, Y. [Northeast Forestry Univ., Harbin (China). School of Forestry; Li, Y.F. [Northeast Forestry Univ., Harbin (China). School of Forestry; Shanghai Univ. Engineering, Shanghai (China). College of Chemistry and Chemical Engineering
The anaerobic fermentation biohydrogen production in a continuous stirred tank reactor (CSTR) was investigated as a means of treating molasses wastewater. The research demonstrated that the reactor has the capacity to continuously produce hydrogen with an initial biomass (as volatile suspended solids) of 17.74 g/L, a temperature of approximately 35 degrees Celsius and a hydraulic retention time of 6 hours. The reactor could begin ethanol-type fermentation in 12 days and realize stable hydrogen production. The study also showed that the CSTR has favourable stability even with an organic shock loading. The hydrogen yield and chemical oxygen demand (COD) increased, as did the hydrogen content.
Role Allocations and Communications of Operators during Emergency Operation in Advanced Main Control Rooms
Lee, June Seung
The advanced main control room (MCR) in GEN III + nuclear power plants has been designed by adapting modern digital I and C techniques and an advanced man machine interface system (MMIS). Large Display Panels (LDPs) and computer based workstations are installed in the MCR. A Computerized Procedure System (CPS) and Computerized Operation Support System (COSS) with high degrees of automation are supplied to operators. Therefore, it is necessary to set up new operation concepts in advanced MCRs that are different from those applied in conventional MCRs regarding role allocations and communications of operators. The following presents a discussion of the main differences between advanced MCRs and conventional MCRs from the viewpoint of role allocations and communications. Efficient models are then proposed on the basis of a task analysis on a series of emergency operation steps
A remotely controlled CCTV system for nuclear reactor retube operations
Stovman, J.A.
This paper describes the CCTV Vault Observation Subsystem (VOS) under development for Ontario Hydro for the Pickering 'A' Nuclear Power Plant Large Scale Retubing program. This subsystem will be used by a supervisor and several operators to observe fuel channel replacement operations following plant shutdown and removal of the fuel bundles. VOS basically comprises 23 monochrome television camera driven circuits, a matrix switcher, 15 monitors, 9 tape recorders and 4 microphone driven sound circuits. Remote control of the camera's zoom lenses and mounts is via a digitally multiplexed control system. Design considerations include viewing requirements, reliability, radiation, redundance, and economic factors
Nuclear electric power safety, operation, and control aspects
Knowles, J Brian
Assesses the engineering of renewable sources for commercial power generation and discusses the safety, operation, and control aspects of nuclear electric power From an expert who advised the European Commission and UK government in the aftermath of Three Mile Island and Chernobyl comes a book that contains experienced engineering assessments of the options for replacing the existing, aged, fossil-fired power stations with renewable, gas-fired, or nuclear plants. From geothermal, solar, and wind to tidal and hydro generation, Nuclear Electric Power: Safety, Operation, and Control Aspects ass
Operating control systems in advanced types of nuclear power plants
Jeannot, A.; Quittet, Y.; Bonnemort, P.
The report presented first gives a general description of operating control of the PHENIX reactor, covering the level of automaticity and the methods of data perception. The authors then describe the control of the core, the supervision of cooling and the detection of cladding rupture. A summary description is given of the evolution of the SUPER-PHENIX reactor from its PHENIX predecessor. As regards high temperature reactors, the report discusses control rods, the regulation of the flow of coolant gas, the system of emergency stoppage and the general systems for safety and output limitation, with special attention being paid to particular aspects of some of the control systems
Note: Wide-operating-range control for thermoelectric coolers
Peronio, P.; Labanca, I.; Ghioni, M.; Rech, I.
A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.
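A minimal sketch of the structure described in this note: instead of feeding the temperature error into a PID loop, compute the heat that must be removed per unit time from an assumed linear relation with the target cold-side temperature, then map that heat load to a bias voltage. The coefficients, saturation limit and function names below are placeholders, not the device characterisation from the note.

```python
# Feedforward TEC biasing from an assumed linear heat-load/temperature relation.
# All coefficients below are illustrative placeholders.

def required_heat_W(t_cold_target_C, a=0.9, b=-0.02):
    """Assumed linear relation: heat to remove (W) versus target cold-side temperature (degC)."""
    return a + b * t_cold_target_C

def bias_voltage_V(q_watts, k_v_per_w=2.5, v_max=8.0):
    """Map the required heat load to a TEC bias voltage (assumed linear, saturated)."""
    return max(0.0, min(v_max, k_v_per_w * q_watts))

def set_point_to_voltage(t_cold_target_C):
    return bias_voltage_V(required_heat_W(t_cold_target_C))

for target in (-20.0, 0.0, 20.0):
    print(f"target {target:+.0f} degC -> bias {set_point_to_voltage(target):.2f} V")
```

Because the command comes from a fixed relation rather than from the error, the mapping is insensitive to operating-point changes in the way the note describes; in practice a slow feedback trim could still be layered on top.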
System for controlling the operating temperature of a fuel cell
Fabis, Thomas R.; Makiel, Joseph M.; Veyo, Stephen E.
A method and system are provided for improved control of the operating temperature of a fuel cell (32) utilizing an improved temperature control system (30) that varies the flow rate of inlet air entering the fuel cell (32) in response to changes in the operating temperature of the fuel cell (32). Consistent with the invention an improved temperature control system (30) is provided that includes a controller (37) that receives an indication of the temperature of the inlet air from a temperature sensor (39) and varies the heat output by at least one heat source (34, 36) to maintain the temperature of the inlet air at a set-point T.sub.inset. The controller (37) also receives an indication of the operating temperature of the fuel cell (32) and varies the flow output by an adjustable air mover (33), within a predetermined range around a set-point F.sub.set, in order to maintain the operating temperature of the fuel cell (32) at a set-point T.sub.opset.
System-wide hybrid MPC-PID control of a continuous pharmaceutical tablet manufacturing process via direct compaction.
Singh, Ravendra; Ierapetritou, Marianthi; Ramachandran, Rohit
The next generation of QbD based pharmaceutical products will be manufactured through continuous processing. This will allow the integration of online/inline monitoring tools, coupled with an efficient advanced model-based feedback control systems, to achieve precise control of process variables, so that the predefined product quality can be achieved consistently. The direct compaction process considered in this study is highly interactive and involves time delays for a number of process variables due to sensor placements, process equipment dimensions, and the flow characteristics of the solid material. A simple feedback regulatory control system (e.g., PI(D)) by itself may not be sufficient to achieve the tight process control that is mandated by regulatory authorities. The process presented herein comprises of coupled dynamics involving slow and fast responses, indicating the requirement of a hybrid control scheme such as a combined MPC-PID control scheme. In this manuscript, an efficient system-wide hybrid control strategy for an integrated continuous pharmaceutical tablet manufacturing process via direct compaction has been designed. The designed control system is a hybrid scheme of MPC-PID control. An effective controller parameter tuning strategy involving an ITAE method coupled with an optimization strategy has been used for tuning of both MPC and PID parameters. The designed hybrid control system has been implemented in a first-principles model-based flowsheet that was simulated in gPROMS (Process System Enterprise). Results demonstrate enhanced performance of critical quality attributes (CQAs) under the hybrid control scheme compared to only PID or MPC control schemes, illustrating the potential of a hybrid control scheme in improving pharmaceutical manufacturing operations. Copyright © 2013 Elsevier B.V. All rights reserved.
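The slow/fast decomposition described here is the usual motivation for a hybrid scheme in which a supervisory MPC updates setpoints at a slow rate while PID loops track them at a fast rate. The sketch below shows only that structure on a pair of toy first-order models; the models, the enumeration-based MPC, the horizons and the gains are assumptions, not the gPROMS flowsheet model or the ITAE-optimized tuning of the paper.

```python
# Hybrid MPC-PID sketch: a supervisory MPC updates the setpoint of a fast inner
# loop every N steps; a PID tracks that setpoint at every step. Models and
# gains are illustrative assumptions.
import numpy as np

def pid_step(e, state, kp=1.2, ki=0.4, kd=0.05, dt=0.1):
    """One PID update; state = (integral, previous error)."""
    integ, prev_e = state
    integ += e * dt
    deriv = (e - prev_e) / dt
    return kp * e + ki * integ + kd * deriv, (integ, e)

def mpc_setpoint(y_quality, target, horizon=10, candidates=np.linspace(0.0, 2.0, 41)):
    """Pick the inner-loop setpoint minimising predicted quality error plus a small move cost."""
    best_sp, best_cost = 0.0, np.inf
    for sp in candidates:
        y, cost = y_quality, 0.0
        for _ in range(horizon):
            y += 0.2 * (sp - y)                      # assumed slow quality dynamics
            cost += (y - target) ** 2 + 0.01 * sp ** 2
        if cost < best_cost:
            best_sp, best_cost = sp, cost
    return best_sp

dt, steps, mpc_period = 0.1, 400, 20
y_fast, y_quality = 0.0, 0.0
pid_state, setpoint, target = (0.0, 0.0), 0.0, 1.0
for k in range(steps):
    if k % mpc_period == 0:                          # slow supervisory update
        setpoint = mpc_setpoint(y_quality, target)
    u_fast, pid_state = pid_step(setpoint - y_fast, pid_state, dt=dt)
    y_fast += dt * (-y_fast + u_fast) / 0.5          # fast actuator dynamics
    y_quality += dt * 0.2 * (y_fast - y_quality)     # slow quality dynamics
print(round(y_quality, 3), round(setpoint, 3))
```

The intent of the split is the one argued above: the PID handles the fast, regulatory part, while the slower, interactive part is left to the supervisory optimiser.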
Assessing the operational life of flexible printed boards intended for continuous flexing applications: a case study.
Beck, David Franklin
Through the vehicle of a case study, this paper describes in detail how the guidance found in the suite of IPC (Association Connecting Electronics Industries) publications can be applied to develop a high level of design assurance that flexible printed boards intended for continuous flexing applications will satisfy specified lifetime requirements.
Design of a continuously operated 1-keV deuterium-ion extractor
Fink, J.H.
A novel grid structure that is cooled only by radiation and conduction is shown to be capable of continuously extracting 2.5 kA·m⁻² of 1-keV positive deuterium ions while dissipating a power loading of 0.4 MW·m⁻²
Automatic Learning of Fine Operating Rules for Online Power System Security Control.
Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis
Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.
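The rule-fitting step lends itself to a compact illustration: sample plausible short-term operating conditions, evaluate a transfer capability for each sample, and fit a linear rule by least squares. In the sketch below the capability function is synthetic and the two features stand in for the outputs of the security analysis and feature selection stages.

```python
# Monte Carlo sampling + linear least-squares fitting of a fine operating rule.
# The capability function and the features are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo samples of expected short-term condition changes (two features):
f1 = rng.uniform(0.4, 1.0, size=500)    # e.g. critical flowgate loading (p.u.)
f2 = rng.uniform(0.6, 1.2, size=500)    # e.g. regional load level (p.u.)

# Stand-in for the continuation-power-flow security analysis (synthetic, noisy).
capability = 2.0 - 1.1 * f1 - 0.4 * f2 + rng.normal(0.0, 0.02, size=500)

# Fit the rule  capability ~ c0 + c1*f1 + c2*f2  by least squares.
X = np.column_stack([np.ones_like(f1), f1, f2])
(c0, c1, c2), *_ = np.linalg.lstsq(X, capability, rcond=None)
print(f"rule: capability = {c0:.2f} {c1:+.2f}*f1 {c2:+.2f}*f2")

# Online use: evaluate the rule at the current operating point.
print("predicted capability at f1=0.8, f2=1.0:", round(c0 + c1 * 0.8 + c2 * 1.0, 2))
```

The linear form keeps the rule interpretable, which is the property the abstract emphasises; refreshing it online amounts to re-running the sampling and the fit.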
[Three methods for controlling presacral massive bleeding during pelvic operations].
Wang, Xiaoxue; Liu, Zhimin; Xie, Shangkui; Ren, Donglin; Wu, Yin'ai
To evaluate three different methods for controlling presacral massive bleeding during pelvic operations. Clinical data of 11 patients with presacral massive bleeding during pelvic operation at The Sixth Affiliated Hospital of Sun Yat-sen University and 157 Branch Hospital of Guangzhou General Hospital of Guangzhou Military Command from January 2001 to January 2016 were analyzed retrospectively. Hemostasis methods for presacral massive bleeding during operation included gauze packing (whole pressure), drawing pin (local pressure) and absorbable gauze (absorbable gauze was adhered to bleeding position with medical glue after local pressure). Efficacy of these 3 methods for controlling bleeding was evaluated and compared. Ten patients were male and 1 was female with average age of 65.2 (40 to 79) years old. Eight cases were rectal cancer, 2 were presacral malignancies and 1 was rectal benign lesion. Bleeding volume during operation was 300 to 2 500 (median 800) ml. From 2001 to 2012, 4 cases received gauze packing, of whom, 3 cases were scheduled Dixon resection before operation and then had to be referred to Hartman resection; 3 cases died of systemic failure due to postoperative chronic errhysis and infection, and 1 underwent re-operation. At the same time from 2001 to 2012, 5 cases received drawing pin, of whom, bleeding of 3 cases was successfully controlled and Dixon resection was completed. In other 2 cases with hemostasis failure, 1 case underwent re-operation following the use of gauze packing, and another 1 case received absorbable gauze hemostasis. All the 5 patients were healing. From 2013 to 2016, 2 cases completed scheduled anterior resection of rectum after successful hemostasis with absorbable gauze and were healing and discharged. Gauze packing hemostasis is a basic method for controlling presacral massive bleeding. Drawing pin and absorbable gauze hemostasis are more precise and may avoid the change of surgical procedure. But drawing pin has the
Improved operability of the CANDU 9 control centre
Macbeth, M. J.; Webster, A.
The next generation CANDU nuclear power plant being designed by AECL is the 900 MWe class CANDU 9 station. It is based upon the Darlington CANDU station design which is among the world leaders in capacity factor with low Operation, Maintenance and Administration (OM and A) costs. This Control Centre design includes the proven functionality of existing CANDU control centres (including the Wolsong 2, 3, and 4 control centre improvements, such as the Emergency Core Cooling panels), the characteristics identified by systematic design with human factors analysis of operations requirements and the advanced features needed to improve station operability which is made possible by the application of new technology. The CANDU 9 Control Centre provides plant staff with improved operability due to the combination of provenness, systematic design with human factors engineering and enhanced operating features. Significant features which contribute to this improved operability include: · Standard NSP, BOP and F/H panels with controls and indicators integrated by a standard display/presentation philosophy. · Common plant parameter signal database for extensive monitoring, checking, display and annunciation. · Powerful annunciation system allowing alarm filtering, prioritizing and interrogation to enhance staff recognition of events, plant state and required corrective procedural actions. · The use of an overview display to present immediate and uncomplicated plant status information to facilitate operator awareness of unit status in a highly readable and recognizable format. · Extensive cross checking of similar process parameters amongst themselves, with the counterpart safety system parameters, as well as with 'signature' values obtained from known steady state conditions. · Powerful calculation capabilities, using the plant wide database, providing immediately recognizable and readable output data on plant state information and plant state change
The development of a model of control room operator cognition
Harrison, C. Felicity
The nuclear generating station control room operator (CRO) is one of the main contributors to plant performance and safety. In the past, studies of operator behaviour have been made under emergency or abnormal situations, with little consideration being given to the more routine aspects of plant operation. One of the tasks of the operator is to detect the early signs of a problem, and to take steps to prevent a transition to an abnormal plant state. In order to do this, the CRO must determine that plant indications are no longer in the normal range, and take action to prevent a further move away from normal. This task is made more difficult by the extreme complexity of the control room, and by the many hindrances that the operator must face. It would therefore be of great benefit to understand CRO cognitive performance, especially under normal operating conditions. Through research carried out at several Canadian nuclear facilities, we were able to develop a deeper understanding of CRO monitoring of highly automated systems during normal operations, and specifically to investigate the contributions of cognitive skills to monitoring performance. The overall objective of this research was to develop and validate a model of CRO monitoring. The findings of this research have practical implications for systems integration, training, and interface design. The result of this work was a model of operator monitoring activities. (author)
Control principles for blackstart and island operation of microgrid
Laaksonen, H.; Kauhaniemi, K. (University of Vaasa (Finland))
In some unexpected situations a microgrid may become unstable after the transition to islanded mode, and all DG units must be disconnected from the microgrid. In case of such events, a restoration strategy for microgrid blackstart is needed. Also, if the islanded microgrid is divided into different protection zones in case of a fault, a fault management strategy capable of very fast operation is needed to maintain stability in the healthy section of the islanded microgrid. Control of the microgrid voltage and frequency during the microgrid blackstart is not possible without an energy storage unit. In this paper, the sequence of actions for microgrid blackstart operation as well as the control principles of some DG units during blackstart are defined and simulated with two different microgrid configurations. One simulation case considering the fault management strategy and control principles during a fault in an islanded microgrid is also presented. Based on the simulations, dimensioning principles for the needed energy storage and the size of simultaneously controlled loads can be drawn. (orig.)
Enhancement of Arterial Pressure Pulsatility by Controlling Continuous-Flow Left Ventricular Assist Device Flow Rate in Mock Circulatory System.
Bozkurt, Selim; van de Vosse, Frans N; Rutten, Marcel C M
Continuous-flow left ventricular assist devices (CF-LVADs) generally operate at a constant speed, which reduces pulsatility in the arteries and may lead to complications such as functional changes in the vascular system, gastrointestinal bleeding, or both. The purpose of this study is to increase the arterial pulse pressure and pulsatility by controlling the CF-LVAD flow rate. A MicroMed DeBakey pump was used as the CF-LVAD. A model simulating the flow rate through the aortic valve was used as a reference model to drive the pump. A mock circulation containing two synchronized servomotor-operated piston pumps acting as left and right ventricles was used as a circulatory system. Proportional-integral control was used as the control method. First, the CF-LVAD was operated at a constant speed. With pulsatile-speed CF-LVAD assistance, the pump was driven such that the same mean pump output was generated. Continuous and pulsatile-speed CF-LVAD assistance provided the same mean arterial pressure and flow rate, while the index of pulsatility increased significantly for both arterial pressure and pump flow rate signals under pulsatile speed pump support. This study shows the possibility of improving the pulsatility of CF-LVAD support by regulating pump speed over a cardiac cycle without reducing the overall level of support.
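A minimal sketch of the control idea: a PI controller modulates the pump speed so that the pump flow tracks a pulsatile reference over the cardiac cycle, and the flow pulsatility of the last cycle is reported. The reference waveform, first-order pump model, gains and speed limits are illustrative assumptions, not the MicroMed DeBakey characterisation or the mock-loop parameters of the study.

```python
# PI modulation of CF-LVAD speed to track a pulsatile reference flow.
# Reference waveform, pump model and gains are illustrative assumptions.
import math

def reference_flow(t, hr_bpm=75, q_mean=5.0, q_pulse=3.0):
    """Assumed aortic-valve-like reference: mean flow plus a systolic bump (L/min)."""
    period = 60.0 / hr_bpm
    phase = (t % period) / period
    return q_mean + (q_pulse * math.sin(math.pi * phase / 0.35) if phase < 0.35 else 0.0)

def pump_flow(speed_krpm, q_prev, dt, tau=0.05, gain=0.9):
    """Toy first-order pump/cannula model: flow lags a speed-proportional value."""
    return q_prev + dt * (gain * speed_krpm - q_prev) / tau

def simulate(t_end=5.0, dt=0.001, kp=2.0, ki=12.0):
    t, q, integ = 0.0, 5.0, 0.0
    q_min, q_max = float("inf"), float("-inf")
    while t < t_end:
        e = reference_flow(t) - q
        integ += e * dt
        speed = max(2.0, min(12.0, 5.5 + kp * e + ki * integ))   # speed command (krpm)
        q = pump_flow(speed, q, dt)
        if t > t_end - 60.0 / 75:                                # record the last cardiac cycle
            q_min, q_max = min(q_min, q), max(q_max, q)
        t += dt
    return q_max - q_min

print("flow pulsatility over the last cycle (L/min):", round(simulate(), 2))
```

Running the same loop with a constant speed command gives near-zero pulsatility for this toy model, which is the contrast the study quantifies with its pulsatility index.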
Operation and control of large wind turbines and wind farms
Soerensen, Poul; Hansen, Anca D.; Thomsen, Kenneth (and others)
This report is the final report of a Danish research project 'Operation and control of large wind turbines and wind farms'. The objective of the project has been to analyse and assess operational strategies and possibilities for control of different types of wind turbines and different wind farm concepts. The potentials of optimising the lifetime/energy production ratio by means of using revised operational strategies for the individual wind turbines are investigated. Different strategies have been simulated, where the power production is decreased to an optimum when taking loads and actual price of produced electricity into account. Dynamic models and control strategies for the wind farms have also been developed, with the aim to optimise the operation of the wind farms considering participation in power system control of power (frequency) and reactive power (voltage), maximise power production, keep good power quality and limit mechanical loads and life time consumption. The project developed models for 3 different concepts for wind farms. Two of the concepts use active stall controlled wind turbines, one with AC connection and one with modern HVDC/VSC connection of the wind farm. The third concept is based on pitch controlled wind turbines using doubly fed induction generators. The models were applied to simulate the behaviour of the wind farm control when they were connected to a strong grid, and some initial simulations were performed to study the behaviour of the wind farms when it was isolated from the main grid on a local grid. Also the possibility to use the available information from the wind turbine controllers to predict the wind speed has been investigated. The main idea has been to predict the wind speed at a wind turbine using up-wind measurements of the wind speed in another wind turbine. (au)
77 FR 65937 - Pioneer Railcorp-Continuation in Control Exemption-Rail Switching Services, Inc.
... control of Rail Switching Services, Inc. (RSS), upon RSS's becoming a Class III rail carrier. \\1\\ Pioneer states that it owns 100% of the common stock of its 17 Class III rail carrier subsidiaries: West Michigan...--Continuation in Control Exemption--Rail Switching Services, Inc. Pioneer Railcorp (Pioneer) and its...
76 FR 50326 - Regional Rail, LLC-Continuance in Control Exemption-Tyburn Railroad, LLC
.... Regional is a Delaware limited liability company that currently controls 2 Class III railroads, East Penn... Company and operate approximately 0.9 miles of rail lines in Morrisville, Pa. The parties intend to...
Investigation on structural integrity of graphite component during high temperature 950degC continuous operation of HTTR
Sumita, Junya; Shimazaki, Yosuke; Shibata, Taiju
Graphite material is used for the internal structures of a high temperature gas-cooled reactor. The core components and graphite core support structures are designed to maintain their structural integrity so as to keep the core cooling capability. To confirm that the core components and graphite core support structures satisfy the design requirements, the temperatures of the reactor internals are measured during reactor operation. A surveillance test of graphite specimens and an in-service inspection (ISI) using a TV camera are planned in conjunction with refueling. This paper describes the evaluation results for the integrity of the core components and graphite core support structures during the high temperature 950 °C continuous operation, a high temperature continuous operation with a reactor outlet temperature of 950 °C for 50 days, in the High Temperature Engineering Test Reactor (HTTR). The design requirements of the core components and graphite core support structures were satisfied during the high temperature 950 °C continuous operation. The dimensional change of graphite, which directly influences the temperature of the coolant, was estimated considering the temperature profiles of the fuel block. The magnitude of the irradiation-induced dimensional change considering temperature profiles was about 1.2 times larger than that under a constant irradiation temperature of 1000 °C. In addition, the programs of the surveillance test and the ISI using a TV camera are introduced. (author)
Continuous and Discrete-Time Optimal Controls for an Isolated Signalized Intersection
Jiyuan Tan
A classical control problem for an isolated oversaturated intersection is revisited with a focus on the optimal control policy to minimize total delay. The difference and connection between existing continuous-time planning models and recently proposed discrete-time planning models are studied. A gradient descent algorithm is proposed to convert the optimal control plan of the continuous-time model to the plan of the discrete-time model in many cases. Analytic proof and numerical tests for the algorithm are also presented. The findings shed light on the links between two kinds of models.
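To make the conversion step concrete, the sketch below runs a projected gradient descent over per-cycle green splits at a two-phase intersection, starting from an even split that stands in for the continuous-time plan. The arrival profile, saturation flow and queue-based delay model are toy assumptions used only to show the mechanics of descending on total delay; they are not the planning models compared in the paper.

```python
# Projected gradient descent over per-cycle green splits to reduce total delay.
# Demand, saturation flow and the delay model are toy assumptions.
import numpy as np

CYCLES, CYCLE_LEN, LOST = 20, 90.0, 10.0                         # cycle structure (s)
GREEN_TOTAL = CYCLE_LEN - LOST
SAT_FLOW = 0.5                                                   # veh/s of green, both phases
ARR1 = np.where(np.arange(CYCLES) < 8, 0.30, 0.10) * CYCLE_LEN   # phase-1 demand (veh/cycle), peaked
ARR2 = np.full(CYCLES, 0.15) * CYCLE_LEN                         # phase-2 demand (veh/cycle), steady
G_MIN, G_MAX = 10.0, GREEN_TOTAL - 10.0

def total_delay(g1):
    """Total delay (veh*s) from a per-cycle queue model; g1[k] is the phase-1 green in cycle k."""
    g2 = GREEN_TOTAL - g1
    q1 = q2 = 0.0
    delay = 0.0
    for k in range(CYCLES):
        q1 = max(0.0, q1 + ARR1[k] - SAT_FLOW * g1[k])
        q2 = max(0.0, q2 + ARR2[k] - SAT_FLOW * g2[k])
        delay += (q1 + q2) * CYCLE_LEN          # leftover queues wait roughly one more cycle
    return delay

def numerical_grad(f, x, h=1e-2):
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2 * h)
    return grad

g1 = np.full(CYCLES, GREEN_TOTAL / 2)           # even split: stand-in for the continuous plan
for _ in range(300):
    g1 = np.clip(g1 - 0.01 * numerical_grad(total_delay, g1), G_MIN, G_MAX)

print("delay with even split       :", round(total_delay(np.full(CYCLES, GREEN_TOTAL / 2)), 0))
print("delay after gradient descent:", round(total_delay(g1), 0))
```

Because the queue model has kinks, the finite-difference gradient is really a subgradient, so a small fixed step is used and the projection keeps each green within its bounds.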
Continuous Drug Infusion for Diabetes Therapy: A Closed-Loop Control System Design
Jiming Chen
While a typical way for diabetes therapy is discrete insulin infusion based on long-time interval measurement, in this paper, we design a closed-loop control system for continuous drug infusion to improve the traditional discrete methods and make diabetes therapy automatic in practice. By exploring the accumulative function of drug to insulin, a continuous injection model is proposed. Based on this model, proportional-integral-derivative (PID) and fuzzy logic controllers are designed to tackle a control problem of the resulting highly nonlinear plant. Even with serious disturbance of glucose, such as nutrition absorption at meal time, the proposed scheme can perform well in simulation experiments.
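The sketch below mimics the two pieces described here: an accumulative (two-compartment lag) mapping from the infusion rate to insulin action, and a PID loop that closes the glucose feedback, with a meal acting as a disturbance. The glucose model, compartment time constant, gains and meal profile are all illustrative assumptions, not the paper's injection model.

```python
# Closed-loop continuous-infusion sketch: PID on the glucose error drives an
# infusion whose effect accumulates through two first-order compartments.
# All parameters are illustrative assumptions.

def simulate(t_end_min=700, dt=1.0, g_target=100.0):
    g, gb = 180.0, 100.0              # glucose (mg/dL) and basal level
    x1 = x2 = 0.0                     # accumulation compartments (insulin action)
    kp, ki, kd = 0.005, 2e-5, 0.05    # PID gains (assumed)
    tau = 30.0                        # accumulation time constant (min, assumed)
    integ, prev_e = 0.0, g - g_target
    log, t = [], 0.0
    while t < t_end_min:
        e = g - g_target
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = max(0.0, kp * e + ki * integ + kd * deriv)   # infusion rate, non-negative
        x1 += dt * (u - x1) / tau                        # accumulative effect of the infusion
        x2 += dt * (x1 - x2) / tau
        meal = 2.0 if 300.0 <= t < 330.0 else 0.0        # meal disturbance (mg/dL per min)
        g += dt * (-0.01 * (g - gb) - 0.05 * x2 * g + meal)
        log.append(g)
        t += dt
    return log[-1], max(log[290:400])

final_g, post_meal_peak = simulate()
print(f"glucose at 700 min: {final_g:.1f} mg/dL; post-meal peak: {post_meal_peak:.1f} mg/dL")
```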
Controlling the Filling and Capping Operation of a Bottling Plant using PLC and SCADA
Kunal Chakraborty
This paper presents the basic stages of operation of a bottling plant, i.e. the filling and capping process. The main aim of the paper is to control the filling and capping sections of a bottling plant simultaneously. At first a set of empty bottles is run on a conveyor towards the filling section; after that operation, the filled bottles are sent towards the capping section. After a successful capping operation, the sealed bottles move towards the exit and a new set of empty bottles arrives, and in this way the process continues. The paper describes a method by which a batch of bottles can be filled and capped at the same time, which makes the operation more flexible and time saving. The filling and capping operations are controlled using Programmable Logic Controllers (PLCs), as PLCs are user-efficient, cost-effective and easy to control. By using PLC automation the whole process is kept under control. SCADA (Supervisory Control and Data Acquisition) is used to monitor the process by means of a display system.
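The filling-then-capping sequence reads naturally as a small state machine. The sketch below is ordinary Python rather than ladder logic; the timings, batch size and actuator flags are placeholders for the PLC I/O points and SCADA tags of the real plant.

```python
# State-machine sketch of the filling/capping sequence. Timings, batch size and
# the actuator flags (conveyor, fill_valve, capper) are illustrative stand-ins
# for PLC output coils.
FILL_TIME_S, CAP_TIME_S, BATCH = 3, 2, 4

def run_batches(target_bottles=12):
    state, timer, bottles_done = "WAIT_BOTTLES", 0, 0
    for tick in range(200):                      # 1 tick = 1 s scan of the logic
        if state == "WAIT_BOTTLES":
            conveyor = True                      # bring the next set of empty bottles in
            state, timer = "FILLING", 0
        elif state == "FILLING":
            conveyor, fill_valve = False, True   # stop under the filler, open the valves
            timer += 1
            if timer >= FILL_TIME_S:
                fill_valve = False
                state, timer = "CAPPING", 0
        elif state == "CAPPING":
            capper = True                        # press caps onto the filled bottles
            timer += 1
            if timer >= CAP_TIME_S:
                capper = False
                state = "EXIT"
        elif state == "EXIT":
            conveyor = True                      # send the sealed bottles out
            bottles_done += BATCH
            if bottles_done >= target_bottles:
                return tick + 1, bottles_done
            state = "WAIT_BOTTLES"
    return None, bottles_done

ticks, done = run_batches()
print(f"{done} bottles filled and capped in {ticks} scan ticks")
```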
Carbon flow electrodes for continuous operation of capacitive deionization and capacitive mixing energy generation
Porada, S.; Hamelers, H.V.M.; Bryjak, M.; Presser, V.; Biesheuvel, P.M.; Weingarth, D.
Capacitive technologies, such as capacitive deionization and energy harvesting based on mixing energy ("capmix" and "CO2 energy"), are characterized by intermittent operation: phases of ion electrosorption from the water are followed by system regeneration. From a system application point of view,
40 CFR 63.1104 - Process vents from continuous unit operations: applicability assessment procedures and methods.
...) Necessitating that the owner or operator make product in excess of demand. (e) TOC or Organic HAP concentration....306×10 −2 a Use according to procedures outlined in this section. MJ/scm = Mega Joules per standard cubic meter. scm/min = Standard cubic meters per minute. (2) Nonhalogenated process vents. The owner or...
Refining aggregation operator-based orderings in multifactorial evaluation - Part I: Continuous scales
Kyselová, D.; Dubois, D.; Komorníková, M.; Mesiar, Radko
Vol. 15, No. 6 (2007), pp. 1100-1106. ISSN 1063-6706. Institutional research plan: CEZ:AV0Z10750506. Keywords: aggregation operator; multicriteria decision making; preference relation; preorder. Subject RIV: BA - General Mathematics. Impact factor: 2.137, year: 2007
Nitrate to ammonia and ceramic (NAC) process during batch and continuous operation
Muguercia, I.; Solomon, S.; Ebadian, M.A.
The nitrate to ammonia and ceramic (NAC) process is an innovative technology for the denitration of radioactive sodium nitrate-based liquid waste found throughout Department of Energy (DOE) facilities in the United States. In the present investigation, two reaction systems were studied. The first utilized only sodium nitrate as the substrate for the aluminum. The second consisted of the multication composition of waste forms located at the Hanford facility. Studies were carried out on the batch reaction at three different starting nitrate ion concentrations, each at three different temperatures. For each of these conditions, the rate of nitrate depletion was determined, and rate constants were calculated. The reaction did not demonstrate simple kinetics; rather, it appeared to involve two zero-order reactions. Certain generalities were obtained in both the batch reaction and in the continuous process, nonetheless. It was found that the conversion of nitrate to ammonia seemed to be most efficient at the lowest temperature studied, 50 degrees C. This behavior was more obvious in the case of the unadulterated nitrate solution than with the Hanford simulant. To elaborate a practical, marketable product, it was necessary to develop a process that could be carried out in a continuous manner, whereby reactants were continuously fed into a reactor while the products of the reaction were simultaneously removed. Thus, the objective has been to develop the prototype procedures for carrying out this continuous reaction. As a corollary of this research, it was first necessary to define the characteristics of the reaction with respect to rate, conversion efficiency, and safety. To achieve this end, reactions were run under various batch conditions, and an attempt was made to measure the rates of the depletion of nitrate and the production of ammonia and hydrogen as well as pH and temperature changes
Operational controlling - a tool of translating strategy into action
For many years, enterprises have had problems realizing their strategic aims in a fast-changing and competitive business arena. Effective execution of a strategic plan requires translating it into actions, task results and indicators of everyday activities. Success in the market is attainable by communicating strategic and operating goals at each level of the organizational structure and connecting them with unit budgets or employee motivation. Balancing scorecards across the finance, customer, process and development perspectives is very useful for indicating what we control with, or what we have to achieve, but it does not answer the question of how the enterprise is managed. The main aim of the article is to show that an operational controlling system is an essential tool for translating strategy into action. The Balanced Scorecard methodology should also take into consideration the system and process connections of the enterprise with its procurement, co-operation or distribution supply chain.
Mold Heating and Cooling Pump Package Operator Interface Controls Upgrade
Josh A. Salmond
The modernization of the Mold Heating and Cooling Pump Package Operator Interface (MHC PP OI) consisted of upgrading the antiquated single board computer with a proprietary operating system to off-the-shelf hardware and off-the-shelf software with customizable software options. The pump package is the machine interface between a central heating and cooling system that pumps heat transfer fluid through an injection or compression mold base on a local plastic molding machine. The operator interface provides the intelligent means of controlling this pumping process. Strict temperature control of a mold allows the production of high quality parts with tight tolerances and low residual stresses. The products fabricated are used on multiple programs.
Control Operator for the Two-Dimensional Energized Wave Equation
Sunday Augustus REJU
This paper studies the analytical model for the construction of the two-dimensional energized wave equation. The control operator is given in terms of the space and time t independent variables. The integral quadratic objective cost functional is subject to the constraint of two-dimensional energized diffusion, heat and a source. The operator that is obtained extends the Conjugate Gradient Method (ECGM) as developed by Hestenes et al. (1952) [1]. The new operator enables the computation of the penalty cost, optimal controls and state trajectories of the two-dimensional energized wave equation when applied to the Conjugate Gradient methods in (Waziri & Reju, LEJPT & LJS, Issues 9, 2006) [2-4], to appear in this series.
Real-Time Plasma Control Tools for Advanced Tokamak Operation
Varandas, C. A. F.; Sousa, J.; Rodrigues, A. P.; Carvalho, B. B.; Fernandes, H.; Batista, A. J.; Cruz, N.; Combo, A.; Pereira, R. C.
Real-time control will play an important role in the operation and scientific exploitation of the new generation fusion devices. This paper summarizes the real-time systems and diagnostics developed by the Portuguese Fusion Euratom Association based on digital signal processors and field programmable gate arrays
|
CommonCrawl
|
Boundary-Layer Meteorology
A Simple Method for Simulating Wind Profiles in the Boundary Layer of Tropical Cyclones
George H. Bryan
Rochelle P. Worsnop
Julie K. Lundquist
Jun A. Zhang
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Temporal spectra from LES produce an inertial subrange for frequencies \(\gtrsim \)0.1 Hz, but only when the horizontal grid spacing \(\lesssim \)20 m.
Keywords: Boundary-layer dynamics · Large-eddy simulation · Single-column modelling · Tropical cyclone
Numerical model simulations have been used to help understand tropical cyclones for decades. Standard three-dimensional simulations can, of course, represent many dynamical processes in a tropical cyclone, including the centrifugal acceleration associated with rapidly rotating flow, and the large-scale pressure gradient acceleration that acts to counter centrifugal effects. However, substantial computational resources are often needed for three-dimensional simulations because tropical cyclones span several hundreds of kilometres in the horizontal, and require several days of model integration time. As a relatively inexpensive option, two-dimensional simulations using axisymmetric equations can be used to study many aspects of tropical cyclones. However, even axisymmetric model simulations become expensive to run, especially with continued advances in representation of physical processes such as atmospheric radiation, multi-moment microphysical schemes, and ocean feedback effects, to name but a few. Additionally, axisymmetric models are complex and difficult to modify, and interactions between the various physical parametrizations make it difficult to isolate cause and effect if tropical cyclone structure varies among different simulations.
Moreover, interest is growing in the use of large-eddy simulation (LES) to study the boundary layer of tropical cyclones (roughly, the lowest kilometre above the surface). The primary advantage of LES, of course, is that the statistical properties of turbulent flow can be predicted primarily by the model's governing equations, with only a small role being played by a subgrid turbulence scheme. Furthermore, coherent structures within the tropical cyclone boundary layer such as quasi-two-dimensional roll vortices (e.g., Foster 2005; Morrison et al. 2005) can only be resolved using grid spacing of \(\approx \)100 m or less, i.e., typical resolution for LES. However, tropical cyclones extend hundreds of km horizontally, and so LES becomes prohibitively expensive unless the domain size is restricted, thereby making it difficult to account for the dynamical processes in rapidly rotating flow mentioned above (e.g., centrifugal acceleration).
For these reasons, an inexpensive and simple method to study the boundary layer of a tropical cyclone is developed and evaluated herein. This new method can be used in a "single column modelling" framework, in which height z is the only coordinate. In this case, the large-scale inertial and pressure-gradient acceleration terms are included in the governing equations via mesoscale tendency terms (described in the next section) that are similar to those pioneered for boundary-layer modelling by Sommeria (1976). With only minor modifications, these mesoscale tendency terms can also be included within LES, which allows domain sizes to be only a few km in horizontal extent, making high-resolution LES (with grid spacing of O(10) m or less) computationally tractable.
Conceptual schematic of the modelling framework. A LES domain is shown as a white box. (For single-column model simulations, the model domain is technically infinitesimal in horizontal extent.) The distance from the centre of the tropical cyclone to the centre of the model domain is R. Colour shading illustrates idealized near-surface wind speed U, which is largest (\(U = U_{{max}}\)) at the radius of maximum winds (\(R_{{max}}\)). The direction of near-surface flow outside \(R_{{max}}\) is illustrated with arrows
With these points in mind, the overall conceptual set-up for this new methodology is that the scale of the LES domain is much smaller than the scale of the entire tropical cyclone, as illustrated schematically in Fig. 1. We envision domain sizes of O(5 km) on a side, which is similar to the horizontal grid spacing for present-day numerical weather prediction systems (e.g., Tallapragada et al. 2014), and is comparable to typical LES domain sizes. The primary circulation of a tropical cyclone, quantified by the tangential velocity (i.e., magnitude of flow around circles centered on the tropical cyclone), is imposed as an initial condition for model simulations and, as discussed below, we consider the radial gradient of tangential velocity to be an important input parameter in our framework because it allows us to include large-scale advection tendencies as in Sommeria (1976); a key difference from Sommeria's approach, however, is that we utilize model-predicted wind profiles in the mesoscale tendency terms, i.e., advection tendencies are not simply specified and held fixed throughout the simulation. The goal is to allow the model to predict details of the secondary circulation (via the radial velocity profile) after a user essentially specifies the primary circulation (via a few input parameters, namely, a reference wind-speed profile, and its radial gradient). This general concept is very similar to recent studies of tropical cyclones by Kepert and Wang (2001), Foster (2009), and Kepert (2012), although a key difference is that our approach is essentially one-dimensional (in height, z) rather than two-dimensional (radius and height). It is also not clear how several terms from these studies should be included in small-domain, three-dimensional LES (see Fig. 1). Some further comparison to previous studies is provided below (Sect. 2.3).
Compared to other recent studies of the tropical cyclone boundary layer, our method accounts for the advection of momentum by the secondary circulation of the tropical cyclone. Of note, Nakanishi and Niino (2012) and Green and Zhang (2015) considered centrifugal acceleration terms in their model equations, as does our new method, but we show herein that inclusion of radial advection is needed to produce mean wind profiles that are similar to observed profiles in tropical cyclones.
In Sect. 2, we explain the design of the mesoscale tendency terms, using a previously analyzed axisymmetric model simulation for reference. Single-column model simulations are presented and evaluated in Sect. 3, followed by large-eddy simulations in Sect. 4. We summarize this work and provide concluding remarks in Sect. 5.
2 Processes in the Tropical Cyclone Boundary Layer
2.1 Axisymmetric Model Details
To clarify the most important processes in the tropical cyclone boundary layer that must be included for an accurate simulation, we first examine an axisymmetric model simulation of an idealized tropical cyclone. This simulation produces similar flow structures compared with composite observations encompassing many tropical cyclones (e.g., Zhang et al. 2011) such as inflow-layer depth \(\approx \)1 km, surface inflow angle \(\approx \)23\(^\circ \), and a height of maximum wind speed roughly 500 m above sea level (a.s.l.), as discussed in more detail by Bryan (2012).
Extensive details of this axisymmetric model can be found in Bryan and Rotunno (2009). The simulation analyzed here is nearly the same as the "Setup B" configuration of Bryan (2012) except for a few details of the surface-layer and planetary boundary-layer (PBL) schemes, as explained below. The radial grid spacing is 1 km for radius (r) <250 km, and vertical grid spacing is 20 m at the surface but increases gradually to 250 m at the top of the domain (\(z = 25\) km a.s.l.).
Output from an axisymmetric model simulation of a tropical cyclone: a components of velocity (tangential velocity \(u_\phi \) in shading, and radial velocity \(u_r\) in black contours), b terms in the \(u_\phi \) budget at \(r = 40\) km, and c terms in the \(u_r\) budget at \(r = 40\) km. In a the contour interval for \(u_r\) is 4 m s\(^{-1}\), negative values are dashed, and the zero contour is excluded; the blue contour denotes the eyewall (estimated by vertical velocity \(w =\) 1 m s\(^{-1}\)). In b and c, the grey curves show Coriolis acceleration, the red curves show the tendency from the PBL parametrization, and the blue curves show the mesoscale tendency terms. Black curves in b and c denote various components of the mesoscale tendency, as indicated: dotted lines radial advection; dashed lines centrifugal acceleration; dash-dotted lines pressure-gradient acceleration
PBL processes must be parametrized in axisymmetric models, and the PBL scheme for this simulation is a variant of that used by Rotunno and Emanuel (1987), also known as a "Louis PBL scheme" (e.g., Kepert 2012) (after Louis 1979). This scheme determines an eddy viscosity K from the local vertical deformation (\(S_v\)) and moist Brunt–Vaisala frequency (\(N_m\)),
$$\begin{aligned} K = {l_v} ^2 ( {S_v}^2 - {N_m}^2 )^{1/2} \end{aligned}$$
(for formulations of \({S_v}^2\) and \({N_m}^2\), see Bryan and Rotunno 2009, p. 1773). The vertical length scale \(l_v (z)\) is determined from the relation \(l_v^{-2} = l_\infty ^{-2} + [\kappa ( z + z_0 ) ]^{-2}\) (e.g., Mason and Thompson 1992) where \(l_\infty = 75\) m (following Bryan 2012), \(\kappa = 0.4\), and \(z_0\) is the aerodynamic roughness length. At the surface, a bulk exchange scheme (e.g., Rotunno and Emanuel 1987) is used to determine heat fluxes, with the surface exchange coefficient for both sensible and latent heat a constant, \(1.2 \times 10^{-3}\) (based on Zhang et al. 2008a). The surface stress is based on a simple formulation for the drag coefficient \(C_d\) (following Donelan et al. 2004) where
$$\begin{aligned} C_d = {\left\{ \begin{array}{ll} 1 \times 10^{-3} &{}\text{ for } U_{10} \le 5 \text{ m } \text{ s }^{-1} \\ 1 \times 10^{-3} + c ( U_{10} - 5 ) &{}\text{ for } 5 \text{ m } \text{ s }^{-1}< U_{10} < 25 \text{ m } \text{ s }^{-1} \\ 2.4 \times 10^{-3} &{}\text{ for } U_{10} \ge 25 \text{ m } \text{ s }^{-1} . \end{array}\right. } \end{aligned}$$
in which \(U_{10}\) is wind speed at a height of 10 m a.s.l., and \(c = 7 \times 10^{-5}\) s m\(^{-1}\). A standard logarithmic-layer relation for neutral conditions, \(U (z) = ( u_* / \kappa ) \ln { \left[ ( z+z_0) / z_0 \right] }\), is used to determine both friction velocity \(u_*\) and \(U_{10}\) using an iterative scheme, given horizontal wind speed at the lowest model level. Roughness length is determined from \(C_d\) using the relation \(z_0 = 10 / \exp { (\kappa / \sqrt{C_d}})\), and for reference, we note that \(z_0\) = 2.8 mm for \(U_{10} \ge 25\) m s\(^{-1}\).
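To make the surface-layer formulation concrete, the following is a minimal Python sketch of the piecewise drag law, the roughness-length relation, and the iterative logarithmic-layer fit described above. The function names and the fixed iteration count are our own illustrative choices, not part of the model code itself.

```python
import numpy as np

def drag_coefficient(U10):
    """Piecewise drag coefficient C_d as a function of 10-m wind speed,
    following the simple Donelan-type formulation quoted above."""
    if U10 <= 5.0:
        return 1.0e-3
    elif U10 < 25.0:
        return 1.0e-3 + 7.0e-5 * (U10 - 5.0)
    return 2.4e-3

def surface_layer(U_lowest, z_lowest, kappa=0.4, n_iter=20):
    """Iteratively recover u_*, U_10, and z_0 from the wind speed at the
    lowest model level, using the neutral log-law
    U(z) = (u_*/kappa) ln[(z + z_0)/z_0] and z_0 = 10 / exp(kappa / sqrt(C_d))."""
    U10 = U_lowest                                     # first guess
    for _ in range(n_iter):
        Cd = drag_coefficient(U10)
        z0 = 10.0 / np.exp(kappa / np.sqrt(Cd))        # roughness length (m)
        ustar = kappa * U_lowest / np.log((z_lowest + z0) / z0)
        U10 = (ustar / kappa) * np.log((10.0 + z0) / z0)
    return ustar, U10, z0

# Example: a 35 m/s wind at a lowest model level of 20 m
ustar, U10, z0 = surface_layer(35.0, 20.0)   # z0 approaches 2.8 mm at high winds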
2.2 Mesoscale Tendency Terms
Figure 2a shows tangential velocity (\(u_\phi \), shaded) and radial velocity (\(u_r\), contours) averaged over days 8–12 of the simulation, a time period when the simulated tropical cyclone is quasi-steady (i.e., local time-tendency terms in the velocity budgets are negligible). We are here interested in the region outside the eyewall [i.e., for r greater than the radius of maximum wind (\(R_{{max}}\))] where the tropical cyclone boundary layer is typically characterized by radial inflow (\(u_r < 0 \)) and vertical advection is typically much smaller than radial advection.
The leading tendency terms of the \(u_\phi \) budget at \(r = 40\) km are shown in Fig. 2b. (All other terms are at least one order of magnitude smaller.) At this location, the tendency from the PBL parametrization (red curve) is countered by what we refer to as a "mesoscale tendency" term (blue curve), which is the sum of several terms associated with the mesoscale flow patterns in a tropical cyclone. For the \(u_\phi \) budget, the mesoscale tendency term is simply 1 / r times the radial advection of absolute angular momentum, or
$$\begin{aligned} - \frac{u_r}{r} \frac{\partial M}{\partial r} = - u_r \frac{ \partial u_\phi }{ \partial r } - \frac{ u_r u_\phi }{ r } - f u_r \, , \end{aligned}$$
where \(M \equiv r u_\phi + 0.5 f r^2, \) and f is the Coriolis parameter (assumed constant herein). The three terms on the right side of (3) are, respectively, radial advection, centrifugal acceleration, and Coriolis acceleration.
As discussed in Sect. 1, in our simple modelling framework (Fig. 1) the primary circulation of the tropical cyclone is essentially specified at the beginning of a simulation by the model user. In this spirit, we choose to specify a reference tangential velocity profile V at a distance R from the tropical cyclone centre, in addition to its radial gradient \(\partial V / \partial R\). Considering these as input parameters that do not change during a simulation, and understanding that the model-predicted profiles of wind speed are used to account for the secondary circulation of a tropical cyclone (via model-predicted profile of \(u_r\)), we consider the following form for the mesoscale tendency for \(u_\phi \) (\(M_\phi \)),
$$\begin{aligned} M_\phi = \underbrace{ - u_r \frac{\partial V}{\partial R} }_{\text {I}} \quad \underbrace{ - u_r \frac{V}{R} }_{\text {II}} , \end{aligned}$$
where term I is the radial advection and term II is the centrifugal acceleration. We do not account for the Coriolis acceleration in the mesoscale tendency terms, since this term is straightforward to represent in a numerical model of any scale. The variables V and \(\partial V / \partial R\) may be functions of height, or could be considered constant with height for shallow domains of a few km depth. Several recent studies (Nakanishi and Niino (2012), hereafter NN12; Green and Zhang (2015), hereafter GZ15) have included centrifugal-like terms in their model equations, similar to the second term of (4). However, inclusion of an advection term (first term) makes this approach clearly different from these recent studies.
Considering now the processes that affect radial velocity \(u_r\), we return our attention to the axisymmetric model and note that tendencies from the PBL scheme (red in Fig. 2c) are again countered by a mesoscale tendency (blue in Fig. 2c). For this component of velocity, the mesoscale tendency can be considered the sum of four terms: radial advection, centrifugal acceleration, Coriolis acceleration, and radial pressure-gradient acceleration. Our formulation for the mesoscale tendency to \(u_r\) (\(M_r\)) must account for these terms, although we again consider the Coriolis term to be represented in the model equations in a standard manner, and account for the other three terms for \(M_r\), as described below.
For the terms in the mesoscale tendency that represent centrifugal acceleration (\(\left. M_\phi \right| _{{cent.}} = -u_r V /R \) from above, and \(\left. M_r \right| _{{cent.}}\) to be determined below), energetic consistency requires that
$$\begin{aligned} \frac{ \partial (u_r^2/2) }{ \partial t } + \frac{ \partial (u_\phi ^2/2) }{ \partial t } = \left. u_r M_r \right| _{{cent.}} + \left. u_\phi M_\phi \right| _{{cent.}} =0. \end{aligned}$$
Consequently, from (4) and (5), the centrifugal term in the radial velocity tendency must be
$$\begin{aligned} \left. M_r \right| _{cent.} = + u_\phi \frac{V}{R} . \end{aligned}$$
This term has an unusual form in the sense that it contains both model-produced flow \(u_\phi \) and the reference velocity V; this form is a consequence of the decision to use the model-predicted \(u_r\) to advect the user-specified gradient of angular momentum in \(M_\phi \), as discussed above.
For the pressure-gradient term, we assume this is determined from a gradient wind relation,
$$\begin{aligned} - \frac{1}{\rho } \frac{\partial p}{\partial r} = - f V - \frac{V^2}{R} \end{aligned}$$
where p is pressure and \(\rho \) is air density. Here it becomes clear that the reference velocity V is the so-called gradient wind, i.e., the tangential velocity necessary to balance the pressure gradient via the Coriolis and centrifugal terms. We note that, although V is assumed to be in gradient-wind balance, the model-predicted tangential velocity \(u_\phi \) does not need to obey such a relation.
Finally, for the radial advection term, in order to have a convenient form (in the spirit of the "simple" approach desired herein) we make the approximation
$$\begin{aligned} - u_r \frac{\partial u_r}{\partial r} \approx + \frac{u_r^2}{R}, \end{aligned}$$
by using the mass-continuity equation for axisymmetric flow,
$$\begin{aligned} \frac{\partial u_r}{\partial r} + \frac{u_r}{r} + \frac{\partial w}{\partial z} = 0 \end{aligned}$$
and then assuming the last term on the left-hand side is negligible. This assumption, \(| \partial w / \partial z |<< | (1/r) \, \partial (r u_r ) / \partial r |\), contrasts with the usual assumption that these two terms are comparable (Smith 1968). However, as shown below, this assumption is reasonably accurate for the axisymmetric simulation outside of the eyewall where radial variations in \(u_r\) are greater than vertical variations in w. We also note that the advection of \(u_r\) is a fairly small component of the radial velocity budget (Fig. 2), at least for regions outside the eyewall. Finally, as in Foster (2005), NN12, and GZ15, we neglect variations in r, and simply use the constant R, for the right-hand side of (8).
Assembling the three components into one relation, we have
$$\begin{aligned} M_r = \underbrace{ +\frac{u_r^2}{R} }_{\text {I}} \quad \underbrace{ + u_\phi \frac{V}{R} }_{\text {II}} \quad \underbrace{ - f V - \frac{V^2}{R} }_{\text {III}} , \end{aligned}$$
where term I is the radial advection, term II is the centrifugal acceleration, and term III is the pressure-gradient acceleration. We consider the relations (4) and (10) to be the core of our new modelling method, which is analyzed using numerical simulations in Sects. 3 and 4. We note that, as a consequence of assumptions made in this section, this method is applicable only outside the eyewall of a tropical cyclone (i.e., for \(r > R_{{max}}\), see Fig. 1) where \(u_r\) and \(\partial V / \partial R\) are typically negative, and where vertical advection terms are typically negligible compared to radial advection.
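As a quick illustration of how (4) and (10) might be coded, here is a minimal Python sketch; the function name and argument order are arbitrary, and the inputs are assumed to be scalars or NumPy arrays representing vertical profiles.

```python
def mesoscale_tendencies(u_r, u_phi, V, R, dVdR, f):
    """Mesoscale tendency terms of Eqs. 4 and 10, given the model-predicted
    u_r and u_phi and the user-specified reference wind V, radius R, and
    radial gradient dV/dR at that radius."""
    M_phi = -u_r * dVdR - u_r * V / R                       # advection + centrifugal
    M_r = u_r**2 / R + u_phi * V / R - (f * V + V**2 / R)   # advection + centrifugal + pressure gradient
    return M_r, M_phi

# Energetic consistency of the centrifugal pieces (Eq. 5):
# u_r*(+u_phi V/R) + u_phi*(-u_r V/R) vanishes identically.
```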
2.3 Comparison to Previous Studies
Some aspects of our mesoscale tendency terms [(4) and (10)] make our study different from other recent numerical studies of the tropical cyclone boundary layer. One major difference is that we intend to utilize these relations at a single point in the flow, in which height z is the only dimension (i.e., single-column modelling). Even for our large-eddy simulations, the horizontal dimensions of the domain are presumed small (e.g., Fig. 1) and so we choose to treat the mesoscale tendencies analogously for LES (details of the LES implementation of these terms are provided in Sect. 4). Consequently, tendency terms that contain radial gradients (e.g., radial advection) must be treated differently as compared to studies that have both r and z as dimensions. In fact, a major motivation for using the approximation (8) is so \(\partial u_r / \partial r\) does not need to be specified as an input parameter. For these reasons, the radial advection and centrifugal acceleration terms are subtly different compared to other seemingly similar studies of the tropical cyclone boundary layer (e.g., Kepert and Wang 2001; Foster 2009). We also note that, for radial diffusion and turbulence, we have chosen to neglect these terms because of their complex form, and the lack of a clear method to simplify them for single-column modelling, even though radial turbulence can be important in tropical cyclones (e.g., Rotunno and Bryan 2012), although typically only in the eyewall where the present approach is not valid.
We also clarify that our approximate equations are not derived via linearization of the governing equations, as in Kepert (2001) and Foster (2005). Our underlying approach is essentially a scale analysis in which relatively small terms are neglected. The budgets from a single axisymmetric simulation of an intense tropical cyclone are used here as an example (Fig. 2), but the conclusions are consistent with our previous work (e.g., Rotunno and Bryan 2012). Although we have neglected vertical advection (we reiterate: outside of the eyewall), some studies have found it to be an important contributor, especially in the \(u_\phi \) budget (e.g., Kepert and Wang 2001, their Fig. 9). This assumption is not critical to our modelling approach, as vertical advection terms could easily be added to (4) and (10) following the approach of Sommeria (1976). Vertical velocity could also be determined from the simulated flow fields [\(u_r(z)\) and \(u_\phi (z)\)] via the continuity equation, which has been used in several types of analytical and numerical models (e.g., Ooyama 1969; Emanuel 1986; Kepert 2001, 2013). We suspect that vertical advection terms are more important for weaker storms and/or broader storms than the one shown in Fig. 2, and we plan to investigate these issues in the future.
Finally, regarding the assumption that \(\partial w / \partial z\) can be neglected in (9) [in contrast to the typical assumption (e.g., Smith 1968)], we provide the following scale analysis. For a point R in the flow with characteristic radial velocity \(\overline{U}\), we assume its radial variation is \(\delta \overline{U} / \delta R\). Further, assume the vertical variation of w is \(\delta \overline{W} / H\), where H is the approximate depth of the inflow layer. Then the \(\partial w / \partial z\) term can be neglected in (9) when \( \delta \overline{W} / H<< \delta \overline{U} / \delta R\), and if valid, it follows from (9) that \(\delta \overline{U} / \delta R \approx - \overline{U} / R\). Using these relations, we find \(\delta \overline{W}<< - \overline{U} H / R\), and since \(H/R < 0.1\) outside of the eyewall, and \(\overline{U} \approx -10\) m s\(^{-1}\), the assumption is valid when vertical velocity at the top of the inflow layer is roughly 0.05 m s\(^{-1}\) or less. (Clearly, the assumption is not valid in tropical cyclone eyewalls, and the portions of rainbands with strong updrafts.)
The approximation of (9) is only used to estimate the radial advection of radial velocity. Using output from the axisymmetric simulation, we note that the resulting approximation, (8), is reasonable over a large part of the tropical cyclone (Fig. 3). Equation 8 becomes less accurate near the tropical cyclone eyewall where the underlying approximation \(\delta \overline{W}<< - \overline{U} H / R\) no longer holds.
Vertical profiles of the radial advection of radial velocity (solid curves) and estimates of the same field using (8) (dashed curves) at various radii as indicated. The results at \(r = 100\) km are multiplied by 10 for clarity
2.4 Neglected Terms
As explained above, the mesoscale tendency terms (4) and (10) are not applicable in the eyewall of tropical cyclones, where vertical advection can be a leading-order process (e.g., Kepert and Wang 2001). They are probably also not applicable in certain parts of rainbands, especially where vertical velocity is large near the surface. Our goal has been to devise a simple set of tendency terms that can be added easily to single-column model simulations and small-domain LES for application over much (but clearly not all) of a tropical cyclone. The results reported in the following two sections demonstrate that our new approach has clear merits, especially when compared with other recently developed approaches.
To be clear about what processes are neglected in our approach, we consider the un-approximated axisymmetric velocity equations (e.g., Hinze 1975)
$$\begin{aligned} \frac{\partial u_r}{\partial t}&= -u_r \frac{\partial u_r}{\partial r} - w \frac{\partial u_r}{\partial z} + u_\phi \left( f + \frac{u_\phi }{r} \right) -\frac{1}{\rho } \frac{\partial p}{\partial r} +\frac{1}{\rho r} \left[ \frac{\partial \left( r \tau _{rr} \right) }{ \partial r } - \tau _{\phi \phi } \right] +\frac{1}{\rho } \frac{\partial \tau _{rz} }{ \partial z } \, , \end{aligned}$$
$$\begin{aligned} \frac{\partial u_\phi }{\partial t}&= -u_r \frac{\partial u_\phi }{\partial r} - w \frac{\partial u_\phi }{\partial z} - u_r \left( f + \frac{u_\phi }{r} \right) +\frac{1}{\rho r^2} \frac{\partial \left( r^2 \tau _{r \phi } \right) }{ \partial r } +\frac{1}{\rho } \frac{\partial \tau _{\phi z} }{ \partial z } \, , \end{aligned}$$
where the \(\tau \) terms could represent molecular or turbulent stresses. Defining \(u_\phi ^\prime (r,z,t) = u_\phi (r,z,t) - V (r,z)\), and making use of the continuity equation [only for the first term on the right-hand side of the \(u_r\) equation], we arrive at the following without approximation,
$$\begin{aligned} \frac{\partial u_r}{\partial t}&= \underbrace{+\frac{ u_r^2 }{ r} + u_\phi \frac{V}{r} -\frac{1}{\rho } \frac{\partial p}{\partial r} }_{\text {I}} + f u_\phi +\frac{1}{\rho } \frac{\partial \tau _{rz} }{ \partial z } \, \underbrace{ + u_r \frac{\partial w}{\partial z} + u_\phi \frac{u_\phi ^\prime }{r} +\frac{1}{\rho r} \left[ \frac{\partial \left( r \tau _{rr} \right) }{ \partial r } - \tau _{\phi \phi } \right] }_{\text {II}} \, \underbrace{ - w \frac{\partial u_r}{\partial z} }_{\text {III}}, \end{aligned}$$
$$\begin{aligned} \frac{\partial u_\phi }{\partial t}&= \underbrace{ -u_r \frac{\partial V}{\partial r} - u_r \frac{V}{r} }_{\text {IV}} -f u_r +\frac{1}{\rho } \frac{\partial \tau _{\phi z} }{ \partial z } \, \underbrace{ -u_r \frac{\partial u_\phi ^\prime }{\partial r} - u_r \frac{u_\phi ^\prime }{r} +\frac{1}{\rho r^2} \frac{\partial \left( r^2 \tau _{r \phi } \right) }{ \partial r } }_{\text {V}} \, \underbrace{ - w \frac{\partial u_\phi }{\partial z} }_{\text {VI}}, \end{aligned}$$
where term I is \(M_r\) and term IV is \(M_\phi \), terms II and V are neglected, and terms III and VI are the vertical advection terms. Here, vertical advection is singled out separately because it could be included easily in simulations, and is not a fundamental component of our mesoscale tendency terms. The terms that are not specifically labeled (i.e., Coriolis and vertical stress terms) are typically included in boundary-layer studies, even the classic Ekman-type case (explained below). The only other approximations (aside from neglecting terms II and V) use (7) for the pressure-gradient term, and to replace r by R.
3 Single Column Modelling
3.1 Methodology
For single-column modelling the only dimension is z, and the horizontal velocity equations are simply
$$\begin{aligned} \frac{\partial u_r}{\partial t}&= M_r + f u_\phi + \frac{1}{\rho } \frac{\partial \tau _{rz} }{\partial z} \end{aligned}$$
$$\begin{aligned} \frac{\partial u_\phi }{\partial t}&= M_\phi - f u_r + \frac{1}{\rho } \frac{\partial \tau _{\phi z} }{\partial z} \, \end{aligned}$$
where \(\tau _{rz}\) and \(\tau _{\phi z}\) are parametrized turbulent stresses. The same PBL and surface-layer parametrizations from the axisymmetric model (Sect. 2a) are used, and unless specified otherwise, the M terms are given by (4) and (10). We also integrate a potential temperature (\(\theta \)) equation,
$$\begin{aligned} \frac{\partial \theta }{\partial t} = - \frac{1}{\rho } \frac{\partial \tau ^\theta _z }{\partial z} \end{aligned}$$
where \(\tau ^\theta _z = - K \partial \theta / \partial z\) is the (parametrized) turbulent heat flux. Potential temperature is not needed for the velocity equations, Eqs. 15–16, but it indirectly affects the solutions through the turbulence parametrization, Eq. 1. Specifically, relatively strong stratification above the boundary layer forces the eddy viscosity K to small values above the PBL, and thus acts to limit growth of the boundary layer. Moisture is neglected herein for simplicity.
We integrate these equations using a third-order Runge–Kutta scheme, with vertical grid spacing of 25 m and a domain depth of 4 km. To prevent reflection of vertically propagating waves, we apply a Rayleigh damper above 3 km. For initial conditions, we set \(u_r = 0\) and \(u_\phi (z) = V (z)\), with \(\theta = 300\) K at the surface and increasing linearly with height at \(5 \times 10^{-3}\) K m\(^{-1}\). We use \( f = 5 \times 10^{-5}\) s\(^{-1}\) for all simulations herein.
Surface heat flux is neglected for these simulations. The implicit assumption here is that boundary-layer turbulence in tropical cyclones is driven primarily by mean vertical wind shear near the surface. Alternatively, we might hypothesize that, in an approximately steady tropical cyclone boundary layer, the net heating via the surface heat flux is exactly canceled by a mesoscale tendency, specifically horizontal advection, that could be added to (17). For convenience, we simply neglect both effects.
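The following is a deliberately stripped-down, self-contained sketch of the single-column system (15)–(17) under the set-up just described. It departs from the text in several labelled ways: forward-Euler time stepping instead of third-order Runge–Kutta, no Rayleigh damper, a dry Brunt–Vaisala frequency in the eddy-viscosity relation, a fixed roughness length, and the drag law applied directly to the lowest-level wind rather than through the iterative 10-m fit. It is meant only to show how the pieces connect, not to reproduce the published results.

```python
import numpy as np

nz, dz, dt = 160, 25.0, 0.5                  # 4-km deep column, 25-m spacing; small dt for explicit diffusion
z = (np.arange(nz) + 0.5) * dz               # mid-level heights (m)
f = 5.0e-5                                   # Coriolis parameter (s^-1)
R, V, dVdR = 40.0e3, 38.0, -8.0e-4           # input parameters as in Sect. 3.2
l_inf, kappa, z0 = 75.0, 0.4, 2.8e-3         # z0 held at its high-wind value (m)

u_r, u_phi = np.zeros(nz), np.full(nz, V)    # initial conditions
theta = 300.0 + 5.0e-3 * z

lv = (l_inf**-2 + (kappa * (z + z0))**-2) ** -0.5   # vertical length scale

def K_eddy(u_r, u_phi, theta):
    # Eq. 1, with a dry static-stability correction in place of the moist N_m
    S2 = np.gradient(u_r, dz)**2 + np.gradient(u_phi, dz)**2
    N2 = (9.81 / theta) * np.gradient(theta, dz)
    return lv**2 * np.sqrt(np.maximum(S2 - N2, 0.0))

def vdiff(q, K, sfc_flux):
    # tendency from vertical flux divergence; prescribed surface flux, zero top flux
    flux = np.zeros(nz + 1)
    flux[1:-1] = -0.5 * (K[1:] + K[:-1]) * np.diff(q) / dz
    flux[0] = sfc_flux
    return -np.diff(flux) / dz

for n in range(int(12 * 3600 / dt)):         # integrate for 12 h
    K = K_eddy(u_r, u_phi, theta)
    spd = np.hypot(u_r[0], u_phi[0])
    Cd = np.clip(1.0e-3 + 7.0e-5 * (spd - 5.0), 1.0e-3, 2.4e-3)
    M_r = u_r**2 / R + u_phi * V / R - (f * V + V**2 / R)   # Eq. 10
    M_phi = -u_r * dVdR - u_r * V / R                       # Eq. 4
    du_r = M_r + f * u_phi + vdiff(u_r, K, -Cd * spd * u_r[0])
    du_phi = M_phi - f * u_r + vdiff(u_phi, K, -Cd * spd * u_phi[0])
    dtheta = vdiff(theta, K, 0.0)                           # no surface heat flux
    u_r = u_r + dt * du_r
    u_phi = u_phi + dt * du_phi
    theta = theta + dt * dtheta
```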
3.2 Results
The first example is based on conditions in the axisymmetric model simulation from Sect. 2. We choose \(R = 40\) km and, from the axisymmetric simulation at this radius, we find that constant values \(V = 38\) m s\(^{-1}\) and \(\partial V/ \partial R = -8 \times 10^{-4}\) s\(^{-1}\) are reasonable matches to the gradient wind for \(z < 4\) km in the axisymmetric model [which is determined using (7) and the model-produced pressure field].
Profiles of \(u_r\) and \(u_\phi \) change rapidly in the first few hours of the simulation as the PBL parametrization modifies the (initially constant) wind profiles and the mesoscale tendency terms evolve with the simulated flow. Approximately steady results emerge after about 6 h. Profiles of the mesoscale tendency terms at \(t = 12\) h are shown in Fig. 4 (blue curves), along with individual components (black curves) as in Fig. 2. The similarity between Fig. 2 (panels b–c) and Fig. 4 is encouraging. Some subtle quantitative differences are apparent, such as the stronger values of radial advection and centrifugal acceleration near the surface for the single-column model. Nevertheless, the overall shape and approximate magnitude of all terms are replicated reasonably well when using the mesoscale tendency terms (4) and (10).
Tendency terms, a the \(u_\phi \) budget (as in Fig. 2b), and b the \(u_r\) budget (as in Fig. 2c), from a single-column model simulation using \(R = 40\) km, \(V = 38\) m s\(^{-1}\), and \(\partial V/ \partial R = -8 \times 10^{-4}\) s\(^{-1}\)
Profiles of a tangential velocity \(u_\phi \) and b radial velocity \(u_r\) from single-column model simulations at \(t = 12\) h. The dashed-green curve is from a simulation using classic Ekman-type mesoscale tendency terms, (18)–(19), and the red curve is from a simulation using the new method for mesoscale tendency terms, (4) and (10), with the same settings as in Fig. 4. The grey-dashed line is from the axisymmetric model at \(r = 40\) km
As in Fig. 4 but for the simulation using classic Ekman-type mesoscale tendency terms. Note a is the tangential velocity budget, b is the radial velocity budget
The predicted velocity profiles are shown in Fig. 5. The overall qualitative similarity between the single-column model (red curve in Fig. 5) and the axisymmetric model (grey curve) is also quite good, particularly in terms of the shapes of both the \(u_r\) and \(u_\phi \) profiles. [The small-scale fluctuations in the profiles near the top of the boundary layer (at \(z = 1.4\) km in this case) are present in all of our single-column simulations, and denote where the simple PBL scheme (Eq. 1) is not used when stratification becomes large; this is a minor cosmetic feature that can be eliminated by requiring that K have a minimum value of O(1 m\(^2\) s\(^{-1}\)) (not shown).] The most noteworthy difference between the axisymmetric model and single-column model results is the magnitude of \(u_r\) near the surface, which is \(\approx \)20 % weaker for the single-column model (Fig. 5b). There are several possible reasons for these differences, such as the assumed constant values for V and \(\partial V / \partial R\) in this simulation, or the complete neglect of moisture, for example. It is more likely that the centrifugal acceleration is too large near the surface (\(z < 100\) m) compared to the axisymmetric model simulation because we use V instead of \(u_\phi \) in (4) and (10). However, we note that our results are reasonably accurate, and clearly an improvement over other approaches (as shown below).
To place these results into context, we also ran simulations with a classic Ekman-type formulation of the mesoscale tendency terms, which accounts only for large-scale geostrophic pressure gradients. (This approach is common in LES of shear-driven boundary layers.) Assuming \((1/\rho ) \partial p / \partial r = f V\), then mesoscale tendency terms for our set-up are simply
$$\begin{aligned} M_r^{\text {Ekman}} =&- f V \, , \end{aligned}$$
$$\begin{aligned} M_\phi ^{\text {Ekman}} =&\, 0 \, . \end{aligned}$$
As in Fig. 5 but using different values of \(\partial V / \partial R\) with the new method for mesoscale tendency terms. (The red curve is the same as in Fig. 5)
We hereafter call this a classic "Ekman-type" formulation in the sense that the velocity equations include only large-scale pressure-gradient terms to oppose viscous terms, although unlike the classic analytic solution (see, e.g., Wyngaard 2010, pp. 208–209) the steady assumption is not made and viscosity is not assumed to be constant. Results using (18) and (19) [in place of \(M_r\) and \(M_\phi \) in (15) and (16)] are shown as green-dashed curves in Fig. 5. This simulation produces smaller \(u_\phi \) at all heights, and stronger radial inflow above 200 m a.s.l. that extends over a deeper layer (up to 2 km a.s.l.). The mesoscale tendency terms in the Ekman-type simulation (Fig. 6) are roughly one order of magnitude smaller than with the new method (cf. Fig. 4). A consequence of these weak tendencies is that the PBL scheme reduces \(u_\phi \) too much. In comparison, the new method has large-amplitude tendency terms that oppose the PBL tendencies. Results from our new method, in comparison with results from the Ekman-type method (Fig. 5), show obvious merits.
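For completeness, the Ekman-type forcing (18)–(19) amounts to a trivial replacement of the tendency function; a short sketch (the function name is ours) is:

```python
def ekman_tendencies(V, f):
    """Classic Ekman-type forcing, Eqs. 18-19: only the large-scale geostrophic
    pressure-gradient acceleration, with no advection or centrifugal terms."""
    return -f * V, 0.0   # (M_r, M_phi)
```

Swapping this in for the mesoscale-tendency sketch above allows the kind of comparison shown in Fig. 5.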
A possible shortcoming of the new method is a strong sensitivity to the input parameter \(\partial V / \partial R\). As an example, wind profiles at \(t = 12\) h from five simulations using different values of \(\partial V / \partial R\) are shown in Fig. 7; as \(\partial V / \partial R\) increases in magnitude, \(u_\phi \) decreases and \(u_r\) increases in magnitude. In fact, the largest magnitude of \(\partial V / \partial R\) produces results similar to the classic Ekman-type solution; the reason is that the \(-u_r (\partial V / \partial R)\) term becomes similar in magnitude to the \(-u_r ( V / R )\) term in (4), and thus both terms cancel each other, leaving no mechanism to oppose the PBL tendency.
The value of \(\partial V / \partial R\) for this first simulation was determined from our axisymmetric model simulation. For actual tropical cyclone cases, the value of \(\partial V / \partial R\) may be uncertain, particularly for tropical cyclones far from land with few observations. However, we note that, if tangential velocity \(u_\phi \) is a function of r according to a power law that satisfies \(u_\phi = V\) at \(r = R\), i.e.,
$$\begin{aligned} u_\phi = V \left( \frac{r}{ R } \right) ^{-n} \end{aligned}$$
then it follows that
$$\begin{aligned} \frac{\partial u_\phi }{\partial r} = -n \frac{V}{R} \left( \frac{r}{R} \right) ^{-n-1} = -n \frac{u_\phi }{R} \left( \frac{r}{R} \right) ^{-1} \, , \end{aligned}$$
$$\begin{aligned} \frac{ \partial u_\phi }{ \partial r } = -n \frac{u_\phi }{R} \, {\text { at }} \, r = R \, . \end{aligned}$$
As in Fig. 5 except using profiles of V and \(\partial V / \partial R\) that decrease linearly with height from maximum values at the surface of \(V = 40\) m s\(^{-1}\) and \(\partial V / \partial R = -8 \times 10^{-4}\) s\(^{-1}\). In addition, the method of Nakanishi and Niino (2012) (see Eq. 23) is shown by a solid-blue curve, and the method of Green and Zhang (2015) (see Eq. 24) is shown as a dashed-blue curve. The black curve is a composite of dropsonde observations for which mean 500–1000 m a.s.l. wind speed is between 35 and \(45\hbox { m s}^{-1}\)
Thus, our input parameter \(\partial V / \partial R\) can be related to the other two input parameters (V and R) through a decay rate n. We note that the parameters used for the simulation in the first example give \(\partial V / \partial R = -0.8 \times (V/R)\); consistently, the simulated vortex in the axisymmetric simulation has \(u_\phi \sim r^{-0.8}\) near the surface at \(r = 40\) km. Therefore, if \(\partial V / \partial R\) is not known from observations, then a guess can be made for the decay rate n, and the relation \(\partial V / \partial R = -n (V/R)\) can be used once V and R are specified. Typical values for n for tropical cyclones of various intensity were determined by Mallen et al. (2005, their Table 2). We also note that the largest absolute value of \(\partial V / \partial R\) for Fig. 7 corresponds to \(n \approx 1\), i.e., \(u_\phi \sim r^{-1}\), a potential vortex (Burggraf et al. 1971).
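A quick numerical illustration of the decay-rate relation, using the rounded values quoted in this paragraph:

```python
# dV/dR = -n (V/R) with V = 38 m/s at R = 40 km and decay rate n = 0.8
V, R, n = 38.0, 40.0e3, 0.8
dVdR = -n * V / R   # = -7.6e-4 s^-1, consistent with the -8e-4 s^-1 used in Sect. 3.2
```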
3.3 Comparison to Observations and Other Methods
To further assess the fidelity of these simulations, we compare model results with composite wind-speed profiles from observed tropical cyclones based on dropsonde data from the National Oceanographic and Atmospheric Administration (NOAA). We use the quality controlled dataset of Wang et al. (2015) which contains 17 years of data from 120 tropical cyclones. (We note that dropsondes from the U.S. Air Force, and from field projects that did not use NOAA aircraft, are not included in this dataset.) For this first case, we searched for all dropsonde profiles in which the average wind speed between 500 m and 1000 m a.s.l. was between 35 and 45 m s\(^{-1}\). We used only soundings that had at least two data points in this layer and that were separated by at least 250 m, to exclude soundings with a substantial amount of missing data. We also required each sounding to have at least one data point below 100 m a.s.l., and excluded any sounding located more than 300 km from the tropical cyclone centre. From this procedure we obtained 688 soundings that were then averaged to produce composite wind profiles, using 10 m vertical grid spacing, which are shown as black curves in Fig. 8. (Our analysis does not account for observed storm motion, and does not exclude dropsondes from the eyewall of tropical cyclones.) We note that the radial velocity profile (Fig. 8b) suggests very deep inflow up to 2.7 km a.s.l., although the strongest inflow (which is most important dynamically) is confined to the lowest \(\approx \)1 km a.s.l. The level of strongest inflow is at 100 m a.s.l., and the surface (10 m a.s.l.) inflow angle is 22\(^\circ \), both of which are similar to average values from previous studies (e.g., Powell et al. 2009; Zhang et al. 2011; Zhang and Uhlhorn 2012).
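A sketch of the compositing procedure is given below; the data structure (a list of soundings, each with hypothetical fields 'z', 'wspd', and 'radius_km') is assumed purely for illustration and is not the format of the Wang et al. (2015) dataset.

```python
import numpy as np

def composite_wind_profile(soundings, wmin=35.0, wmax=45.0, dz_out=10.0, ztop=3000.0):
    """Apply the selection criteria described in the text and average the
    surviving soundings onto a 10-m vertical grid."""
    zgrid = np.arange(0.0, ztop + dz_out, dz_out)
    kept = []
    for s in soundings:
        order = np.argsort(s['z'])
        z, w = np.asarray(s['z'])[order], np.asarray(s['wspd'])[order]
        layer = (z >= 500.0) & (z <= 1000.0)
        if layer.sum() < 2 or np.ptp(z[layer]) < 250.0:
            continue                              # too little data in the 500-1000 m layer
        if not (wmin <= w[layer].mean() <= wmax):
            continue                              # outside the target wind-speed bin
        if z.min() > 100.0 or s['radius_km'] > 300.0:
            continue                              # need near-surface data; within 300 km of centre
        kept.append(np.interp(zgrid, z, w))
    if not kept:
        return zgrid, None
    return zgrid, np.mean(kept, axis=0)
```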
The next simulation is based on this composite profile. We assume that V decreases linearly with height from 40 m s\(^{-1}\) at the surface to zero at 18 km a.s.l. (to roughly match the composite profile, whereas in the previous subsection we assumed V = constant to roughly match the gradient-wind profile from the axisymmetric model). Similarly, we assume that \(\partial V / \partial R\) decreases linearly from its maximum value at the surface to zero at 18 km a.s.l. For lack of any specific data, we retain \(\partial V/ \partial R = -8 \times 10^{-4}\) s\(^{-1}\) (at the surface), and \(R = 40\) km.
Results using the new method for mesoscale tendencies after 12 h of integration (red curves in Fig. 8) are remarkably similar to the observed profiles, especially for \(z < 800\) m. In terms of specific quantitative features, the height of maximum inflow is 90 m a.s.l., the inflow-layer depth is 1.1 km, and the surface inflow angle is 23\(^\circ \), all of which are comparable with previous observational composites (e.g. Powell et al. 2009; Zhang et al. 2011; Zhang and Uhlhorn 2012). In contrast, a simulation using the Ekman-type method (green-dashed curve in Fig. 8) produces qualitatively similar results as before, i.e., \(u_\phi \) is too low and \(u_r\) is generally too high.
For comparison, we also show results using two recently developed techniques to simulate the boundary layer of tropical cyclones. NN12 added terms to the governing equations that approximate the centrifugal acceleration in a tropical cyclone, as well as a large-scale pressure-gradient term. Using our nomenclature and set-up (i.e., single-column modelling), their technique can be written,
$$\begin{aligned} M_r^{NN12} =&+ \frac{ u_\phi ^2 }{R} - \left( f V + \frac{V^2}{R} \right) \, , \end{aligned}$$
(23a)
$$\begin{aligned} M_\phi ^{NN12} =&- \frac{u_r u_\phi }{R} \, . \end{aligned}$$
(23b)
GZ15 also considered pressure-gradient and centrifugal acceleration terms, but derived their terms using a different procedure and set of assumptions than NN12. In our nomenclature, their approach can be written,
$$\begin{aligned} M_r^{GZ15} =&+2 \, u_\phi \frac{ V }{R} - \left( f V +2 \frac{V^2}{R} \right) \, , \end{aligned}$$
(24a)
$$\begin{aligned} M_\phi ^{GZ15} =&-2 \, u_r \frac{ V }{R} \, . \end{aligned}$$
(24b)
[The terms with parentheses in (23a) and (24a) are the pressure-gradient terms.] Results using these two methods (using the same profile of V as in the previous paragraph) are shown as blue curves in Fig. 8. Both of these techniques produce shallower inflow layers, by approximately a factor of 2, compared to the new method and the observational composite. Further, the surface inflow angle from these techniques is smaller (by 25–50 %) than the observed average value of 23\(^\circ \) (e.g., Powell et al. 2009; Zhang and Uhlhorn 2012). The results have notably greater \(u_\phi \) than the observed profile for \(z < 500\) m, and vertical wind shear (\(\partial U / \partial z\)) is 20 % larger in the surface layer (roughly, lowest 100 m a.s.l.). Very similar mean wind profiles were produced (using LES) by NN12 (their Fig. 3) and GZ15 (their Fig. 4).
Time series of a inflow-layer depth, \(z_{infl}\), b height of minimum radial velocity, \(z_{u{{-min}}}\), and c inflow angle at 10 m a.s.l., \(\beta _{\text {10-m}}\), for the same simulations shown in Fig. 8
As in Fig. 8 except for higher wind speed (\(V = 60\) m s\(^{-1}\) and \(\partial V / \partial R = -1.1 \times 10^{-3}\) s\(^{-1}\) at the surface). The black curve is a composite of dropsonde observations for which mean 500–1000 m a.s.l. wind speed is between 55 and \(65 \hbox { m s}^{-1}\)
Time series of some notable flow parameters are shown in Fig. 9, where the inflow-layer depth, \(z_{{infl}}\), is defined here as the lowest height at which \(u_r\) exceeds \(-3\) m s\(^{-1}\) (Fig. 9a). The height of strongest inflow, \(z_{u{{-min}}}\), is defined as the height at which \(u_r\) is a minimum (Fig. 9b), and the surface inflow angle, \(\beta _{\text {10-m}}\), is the inflow angle at 10 m a.s.l. (Fig. 9c). In all cases, the variations near the beginning of the simulations are associated with decaying inertial oscillations (e.g., Lewis and Belcher 2004). The Ekman-type method produces oscillations with the longest period, by far (\({>}\)10 h); for the NN12 and GZ15 methods, the periods are very short (\(\approx \)1 h) and oscillations last for nearly 12 h. For the new method, the period is roughly 2 h, and the signal is practically zero after two oscillations. The primary conclusions from Fig. 9, though, are that the new method produces the most accurate quantitative results (\(z_{{infl}} \approx \) 1 km, \(z_{u{{-min}}} \approx \) 0.1 km, and \(\beta _{\text {10-m}} \approx \) 23\(^\circ \)) and that these values clearly converge at the analysis time used above (\(t = 12\) h).
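The diagnostics plotted in Fig. 9 are simple to compute from a pair of wind profiles; a sketch (assuming NumPy arrays, with the lowest level at or interpolated to 10 m a.s.l.) is:

```python
import numpy as np

def bl_diagnostics(z, u_r, u_phi):
    """Inflow-layer depth (lowest height at which u_r exceeds -3 m/s), height
    of strongest inflow, and 10-m inflow angle, as defined in the text."""
    above = np.where(u_r > -3.0)[0]
    z_infl = z[above[0]] if above.size else np.nan
    z_umin = z[np.argmin(u_r)]
    beta_10m = np.degrees(np.arctan2(-u_r[0], u_phi[0]))
    return z_infl, z_umin, beta_10m
```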
It could be argued that the techniques of NN12 and GZ15 are preferable to simulations that neglect any inertial terms, i.e., compared to the classic Ekman-type method that only accounts for geostrophic pressure-gradient acceleration (green-dashed curve in Fig. 8). Indeed, the NN12 and GZ15 techniques produce some qualitatively accurate results, such as greatest radial velocity near the surface, and greatest wind shear in the lowest 100 m a.s.l., which is broadly consistent with the observed tropical cyclone boundary layer. Nevertheless, it seems clear that our new formulation (Eqs. 4 and 10) produces better quantitative comparison to observations. The primary reason is the inclusion of radial advection terms, as discussed above.
As one final test of the single-column model, we evaluate simulations at higher wind speeds. In this case, we first calculated a composite of dropsonde observations, using the same methodology as above, but for soundings in which the mean wind speed in the 500–1000 m layer was between 55 and 65 m s\(^{-1}\) (and the same method for excluding soundings with few data points); 269 soundings met all criteria. Based on the average results (black curve in Fig. 10) we ran simulations using \(V = 60\) m s\(^{-1}\) at the surface decreasing to zero at 18 km a.s.l. The average radius of the dropsondes was roughly 40 km, so we set \(R = 40\) km. Simulations with the Ekman-type method, the NN12 method, and the GZ15 method exhibit the same qualitative differences from observations as before (Fig. 10).
For the new method, we ran a series of simulations with different values for \(\partial V / \partial R\), and show in Fig. 10 the case that best matches the dropsonde composite: \(\partial V / \partial R = -1.1 \times 10^{-3}\) s\(^{-1}\), corresponding to \(n = 0.7\). Results using the new technique (red in Fig. 10) again produce the best match to the observed composite. There are a few subtle differences from the dropsonde composite (\(\approx 5\) m s\(^{-1}\)) that may be attributable to the inclusion of dropsondes from the eyewall of tropical cyclones. (As noted in Sect. 2, our new method is not applicable in the eye and eyewall of tropical cyclones.) Nevertheless, these analyses show that the new method accurately reproduces robust qualitative aspects of the tropical cyclone boundary layer (e.g., depth and magnitude of the inflow layer) that several previous approaches do not.
4 Large-Eddy Simulations
In this section we evaluate the new method when used within LES, which can provide further insight into turbulent processes within the tropical cyclone boundary layer. For example, vertical momentum fluxes (which are notoriously difficult to measure within a tropical cyclone boundary layer due to hazardous conditions) can be estimated using LES results.
Here we use the numerical model "Cloud Model version 1" (CM1) that has been used for several LES studies in recent years (e.g., Kang and Bryan 2011; Kang et al. 2012; Wang 2014; Nowotarski et al. 2014; Markowski and Bryan 2016). Details of the model used here, including a new "two part" subgrid model near the surface, are provided in the Appendix. The parametrization of surface stress uses the same scheme as in Sect. 2, although for LES we use time-averaged values of wind speed at the lowest model level to calculate an average stress, and then calculate instantaneous values following Moeng (1984).
As with NN12 and GZ15, we use a Cartesian grid for our simulations, and for convenience, have chosen to make x and y from our Cartesian grid align with r and \(\phi \), respectively, in a cylindrical grid. In other words, we locate the model domain to the east of the tropical cyclone centre, as illustrated schematically in Fig. 1. But our method for the mesoscale tendency terms for the LES model in the two horizontal directions [\(M_1^{LES}\) and \(M_2^{LES}\)] differs from NN12 and GZ15, who derived their terms from the governing equations for flow in cylindrical coordinates. We simply adopt the method developed for single-column modelling (Sect. 2) but replace the variables \(u_r\) and \(u_\phi \) with domain-average zonal \(\left\langle u\right\rangle \) and meridional \(\left\langle v\right\rangle \) components of velocity, where angled brackets denote a horizontal average over the domain at each model level. Specifically, we use,
$$\begin{aligned} M_1^{LES} =&\, + \frac{\left\langle u\right\rangle ^2}{R} + \left\langle v\right\rangle \frac{V}{R} - \left( f V + \frac{V^2}{R} \right) \, , \end{aligned}$$
$$\begin{aligned} M_2^{LES} =&- \left\langle u\right\rangle \frac{\partial V}{\partial R} - \left\langle u\right\rangle \frac{V}{R} \, , \end{aligned}$$
and note that these mesoscale tendency terms are functions of t and z only. (In contrast, the methods advocated by NN12 and GZ15, discussed below, are calculated independently at each gridpoint.) We intentionally chose this form based on the methodology put forth in Sect. 1 (and discussed, for example, by Sommeria 1976), i.e., that mesoscale tendency terms should account for processes on scales larger than the size of the model domain. In other words, the large-scale gradients in wind speed (which are needed for calculations of large-scale advection) are presumed to not exist on the small domain, and so we assume constant (in x and y) values of large-scale gradient that are specified at the beginning of a simulation; mesoscale horizontal advection terms (first terms on the right side of Eqs. 25 and 26) are then applied uniformly across the entire domain at each timestep. As demonstrated below, this methodology produces realistic wind profiles as compared to observations in the tropical cyclone boundary layer, and also avoids the potential problem of introducing an instability via the vorticity equation (NN12, GZ15). That is, because these tendencies are not functions of x or y, they cannot contribute to the vertical vorticity tendency.
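A sketch of how (25)–(26) could be evaluated on a Cartesian LES grid follows; the array layout (nz, ny, nx) and the function name are our own choices, and in a real model the returned profiles would simply be added to the u and v tendencies at every grid point.

```python
import numpy as np

def les_mesoscale_tendencies(u, v, V, R, dVdR, f):
    """Eqs. 25-26 built from horizontally averaged velocities, so that the
    forcing depends on height (and time) only."""
    u_bar = u.mean(axis=(1, 2))                 # <u>(z)
    v_bar = v.mean(axis=(1, 2))                 # <v>(z)
    M1 = u_bar**2 / R + v_bar * V / R - (f * V + V**2 / R)   # x (radial) equation
    M2 = -u_bar * dVdR - u_bar * V / R                        # y (tangential) equation
    # reshape for broadcasting: the same tendency is applied at every column
    return M1[:, None, None], M2[:, None, None]
```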
Horizontal wind speed (m s\(^{-1}\)) at \(t = 4\) h from LES at a 52 m a.s.l. and b 252 m a.s.l. using the same input parameters as in Fig. 8
The three terms on the right-hand side of Eq. 25 represent, respectively: mean radial advection of radial velocity in a tropical cyclone; the centrifugal acceleration term associated with the mean flow in a tropical cyclone; and the large-scale pressure gradient in a tropical cyclone. For (26), the two terms on the right-hand side represent, respectively: mean radial advection of tangential velocity, and centrifugal acceleration. As with the single-column modelling approach, the model user must specify three parameters: R, V, and \(\partial V / \partial R\). Finally, we note that, unlike our simple framework for single-column modelling, the potential temperature field plays a direct role in the simulated flow, via the buoyancy term in the velocity equation, Eq. 31a.
For the first case with LES, we use one of the examples from the previous section: \(R = 40\) km; \(V = 40\) m s\(^{-1}\) at the surface decreasing linearly to zero at 18 km a.s.l.; and \(\partial V / \partial R = -8 \times 10^{-4}\) s\(^{-1}\) at the surface decreasing linearly to zero at 18 km a.s.l. The LES grid for this case has 512 \(\times \) 512 \(\times \) 512 grid points spanning 5.12 km \(\times \) 5.12 km horizontally. [A few simulations with larger domains in the horizontal were tested and found to produce very similar results (not shown).] The horizontal grid spacing is constant (10 m). In the vertical, the domain is 3 km deep; the vertical grid spacing is 5 m below 2 km, but increases gradually to 12.5 m between 2 and 3 km a.s.l. Rayleigh damping is used above 2 km for u, v, w, and \(\theta ^\prime \) (defined in the Appendix) to damp vertically propagating waves. Periodic boundary conditions are used in both horizontal directions.
Horizontal wind speed U from observations and LES using a classic Ekman-type formulation of mesoscale tendency terms and b the new method for mesoscale tendency terms, using the same input parameters as in Fig. 8. The red curves denote domain-averaged horizontal wind speed \(\left\langle U\right\rangle \), averaged between 5 and 6 h, and gray shading encloses the minimum and maximum instantaneous values of U during this time. The black curve is the composite from dropsonde observations as in Fig. 8. Black dots denote flight-level observations from NOAA P3 flights (events listed in Table 1 of Zhang and Drennan (2012)) for which average wind speed was within 4 m s\(^{-1}\) of model results at the same level for the simulation in panel (b)
Instantaneous fields of horizontal wind speed U are shown in Fig. 11. Linear "streaks" are apparent in U near the surface (Fig. 11a). Such structures have been observed in hurricane boundary layers (e.g., Wurman and Winslow 1998; Morrison et al. 2005; Zhang et al. 2008b; Kosiba and Wurman 2014) and are thought to be important for strong wind gusts near the surface in tropical cyclone boundary layers. Farther aloft (Fig. 11b), linear features are less obvious, although it is clear that the simulated boundary layer is turbulent.
Vertical profiles of averaged horizontal wind speed \(\left\langle U\right\rangle = \left\langle (u^2 + v^2)^{1/2} \right\rangle \) are shown as red curves in Fig. 12, where the simulation for Fig. 12a uses a classic Ekman-type method with only large-scale geostrophic pressure gradient,
$$M_1^{\text{LES,Ekman}} = - f V \, ,$$
$$M_2^{\text{LES,Ekman}} = 0 \, ,$$
and the simulation for Fig. 12b uses the new mesoscale tendency terms, (25)–(26). We also ran simulations using the equation sets advocated by NN12 and GZ15 (not shown). Differences in mean flow fields using these different methods are qualitatively the same as they were for single-column modelling. That is, tangential velocity (not shown) tends to be lowest with the Ekman-type method but greatest with the NN12 and GZ15 methods, and radial inflow (not shown) tends to be deepest with the Ekman-type method but shallowest with the NN12 and GZ15 methods. The profile of average horizontal wind speed from the composite dropsonde profile (see Sect. 3c for details) is shown as a black curve in Fig. 12. Results using the Ekman-type approach are notably weaker than the observed composite (Fig. 12a), whereas results from the new method are in excellent agreement (Fig. 12b). Based on these results, only results using the new method are analyzed for the remainder of this section.
Fig. 13 Total turbulent stress \(| \tau | / \rho \) from the LES simulation as in Fig. 12b. The solid-red curve denotes total stress averaged between 5 and 6 h, and grey shading denotes minimum and maximum values from one-minute output during this time. The dashed-red curve denotes the average parametrized part of the stress. Black dots denote observed values of turbulent stress as determined by French et al. (2007) for the same cases used for Fig. 12b
Horizontal wind speed is plotted in Fig. 12 because it gives us an opportunity to compare with in situ low-level data collected in 2003–2004 by NOAA aircraft during the Coupled Boundary Layer Air-Sea Transfer experiment (CBLAST, Black et al. 2007). Specifically, we use data collected in Hurricanes Fabian (2003), Isabel (2003), Frances (2004), and Jeanne (2004) [see Table 1 in Zhang and Drennan (2012)]. A total of 69 "flux runs" below 800 m altitude are analyzed. Quality control and analysis procedures are explained in previous studies (e.g., French et al. 2007; Zhang et al. 2009). For Fig. 12 the black dots are in situ wind-speed measurements, averaged along the length of the flux run. We selected all cases in which this value was within 4 m s\(^{-1}\) of the average model value, \(\left\langle U \right\rangle \), at the same height in the simulation with the new mesoscale tendency terms (Fig. 12b). This procedure yields 26 observational cases. Additional measurements from these same 26 cases are included in other analyses below.
Figure 13 shows the vertical profile of total turbulent stress \(\tau \), calculated as
$$| \tau | / \rho = \left[ \left\langle u^\prime w^\prime \right\rangle ^2 + \left\langle v^\prime w^\prime \right\rangle ^2 \right] ^{1/2} \, ,$$
where prime superscripts denote a difference from domain-average values (e.g., \(v^\prime = v - \left\langle v \right\rangle \)). For the observations, we present averages along each flux run, and for simulations we use horizontal domain averages. The model output (solid-red curve in Fig. 13) includes the parametrized value from the subgrid turbulence model (see Appendix), which is shown as a dashed-red curve for reference. The model profiles are averaged over hours 5 and 6 of the simulation, and maximum/minimum instantaneous values are denoted by grey shading. Overall, results (Fig. 13) show a substantial positive bias by the model: the mean absolute difference is 0.36 m\(^{2}\) s\(^{-2}\). The cause of this difference is not clear, but may be related to differences in parametrized and actual surface stress (i.e., drag coefficient). Also, as with the single-column model (Sect. 3), we neglected surface heat flux (except for \(t < 50\) min, to spin up turbulence; see Appendix), and we also neglected all moist processes. The averaging methods are also different (i.e., along a flight leg for the observations, as compared to domain-average for the LES). Nevertheless, comparison of LES and turbulence observations in the tropical cyclone boundary layer is quite rare and this first look, without attempts to tune model parametrizations, is encouraging.
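As an illustration of this diagnostic, a short numpy sketch (with randomly generated placeholder fields ordered z, y, x) of the resolved part of the stress profile is given below; in the model output discussed above, the horizontally averaged parametrized stresses from the subgrid scheme would be added to the resolved fluxes before the magnitude is taken.

```python
import numpy as np

def resolved_stress_profile(u, v, w):
    """Resolved kinematic stress magnitude [<u'w'>^2 + <v'w'>^2]^(1/2) at each level.

    Primes are deviations from the horizontal domain average at each level;
    subgrid contributions (tau^t_13, tau^t_23) are not included here.
    """
    up = u - u.mean(axis=(1, 2), keepdims=True)
    vp = v - v.mean(axis=(1, 2), keepdims=True)
    wp = w - w.mean(axis=(1, 2), keepdims=True)
    uw = (up * wp).mean(axis=(1, 2))   # <u'w'>(z)
    vw = (vp * wp).mean(axis=(1, 2))   # <v'w'>(z)
    return np.sqrt(uw**2 + vw**2)      # m^2 s^-2

# Example with placeholder fields
rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal((50, 64, 64)) for _ in range(3))
tau_over_rho = resolved_stress_profile(u, v, w)
```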
Many PBL parametrization schemes for numerical weather prediction models make use of an eddy viscosity \(K_m\), including the simple scheme used herein (see Eq. 1). The effective eddy viscosity in a turbulent flow can be estimated using the relation
$$\begin{aligned} K_m = ( \tau / \rho ) \left[ \left( \frac{\partial \left\langle u\right\rangle }{\partial z} \right) ^2 + \left( \frac{\partial \left\langle v\right\rangle }{\partial z} \right) ^2 \right] ^{-1/2} \, . \end{aligned}$$
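The relation above can be evaluated directly from horizontally averaged profiles; the following sketch (placeholder profiles, not model output) is one way to do so, with a small floor on the shear to avoid division by zero:

```python
import numpy as np

def effective_eddy_viscosity(tau_over_rho, u_mean, v_mean, z):
    """K_m = (tau/rho) / |d<U>/dz|, with <U> the mean horizontal wind vector."""
    dudz = np.gradient(u_mean, z)
    dvdz = np.gradient(v_mean, z)
    shear = np.sqrt(dudz**2 + dvdz**2)
    return tau_over_rho / np.maximum(shear, 1.0e-10)

# Example with idealized placeholder profiles
z = np.linspace(5.0, 1000.0, 200)                 # heights (m)
u_mean = 30.0 * np.log(z / 0.01) / np.log(1.0e5)  # rough log-law profile (m/s)
v_mean = -5.0 * np.ones_like(z)
tau_over_rho = 0.5 * np.exp(-z / 600.0)           # decaying stress (m^2/s^2)
K_m = effective_eddy_viscosity(tau_over_rho, u_mean, v_mean, z)
```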
We plot estimated eddy viscosity values from in situ turbulence observations (as determined by Zhang and Drennan (2012)) as black dots in Fig. 14. Zhang and Drennan (2012) did not report uncertainty for their estimates, so we provide a rough estimate here. The uncertainty arises from two parts: (1) uncertainty in the calculation of momentum flux; and (2) uncertainty in the calculation of the vertical gradient of wind speed. Drennan et al. (2007) and French et al. (2007) discussed factors that affect turbulent flux calculations using the eddy-correlation method. For momentum flux, the uncertainty is small (<5 %) given the thorough quality control of the CBLAST data. The uncertainty in the calculation of vertical gradient of wind speed is also small, as the uncertainty in the wind observations is only \(\approx \)1 % (Hock and Franklin 1999). (Of note, dropsonde data were collocated with the flux runs for CBLAST.) Overall, we estimate an uncertainty of <10 % in the observational values of vertical eddy viscosity.
Fig. 14 Effective eddy viscosity \(K_m\) from the same LES as in Fig. 12b. The solid-red curve is the total, and the dashed-red curve is the subgrid component, averaged between 5 and 6 h. Grey shading denotes minimum and maximum values from 1-min output during this time. Black dots are observed estimates from NOAA P3 flux runs (Zhang and Drennan 2012) for the same cases used for Fig. 12b
Fig. 15 Power spectral density for the along-wind velocity component at approximately 200 m a.s.l. from LES (for three different grid resolutions, as indicated by the legend) and for flight-level observations (solid black)
Fig. 16 As in Fig. 15, but for vertical velocity
The LES results (red in Fig. 14) again have a mean positive bias compared with the observational estimates. In this case, the mean absolute difference is 22.0 m\(^2\) s\(^{-1}\). Nevertheless, we are encouraged to see comparable magnitudes in the lower half of the boundary layer (below roughly 500 m a.s.l.). An over-prediction of boundary-layer depth by our LES seems apparent when comparing the observations at \(z \approx 0.75\) km to model results. We find this over-prediction can be ameliorated by including subsidence terms in (25)–(26) (not shown).
4.3 Spectral Analysis
As another method to evaluate the LES, we examine temporal spectra of wind components near the surface. The observational data for this analysis were collected on 12 September 2003 in Hurricane Isabel, at an average radius of 130 km from the tropical cyclone centre, and at an average altitude of 194 m a.s.l. Mean flight-level wind speed was 33 m s\(^{-1}\). The flight leg was \(\approx \)54 km in length (\(\approx \)6 min in time), which was one of the longest flux runs during CBLAST. Power spectral density is calculated using a fast Fourier transform of in situ 40-Hz data. Results are shown as black curves for the along-wind component of velocity in Fig. 15 and for vertical velocity in Fig. 16. The power spectral densities of both velocity components have roughly \(f^{-5/3}\) structure, indicative of a turbulent inertial subrange, for \(f > 0.1\) Hz, similar to a recent analysis (of a different dataset) by Nolan et al. (2014).
For LES, the input parameters are chosen to match data from this case. We estimate V using dropsonde data (Fig. 2 from Zhang and Drennan 2012); based on values near the top of the boundary layer, we choose \(V = 37\) m s\(^{-1}\). For \(\partial V / \partial R\), we use the radial gradient of wind speed from flight-level data, which is approximately \(-1.6 \times 10^{-4}\) s\(^{-1}\). Finally, the mean radius of the flight gives \(R = 130\) km.
For this simulation, the model domain extends 6 km \(\times \) 6 km horizontally, and is 4 km deep with a Rayleigh damper above 3 km. As a test of sensitivity to resolution, we use three different grid spacings: \(\Delta x =\) 31.25, 15.625, and 7.8125 m. In all cases, \(\Delta z = \Delta x / 2\). For the analysis of temporal spectra, we record wind speed at every timestep at a location in the middle of the domain. To compare model results to the CBLAST observations, we use an ensemble average of power spectral density calculated using 50 %-overlapping, 6-min segments of the model time series at \(z \approx 190\) m. A 6-min segment was chosen to match the duration of the CBLAST data, resulting in 39 segments for the ensemble average.
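A sketch of this spectral processing, using scipy's Welch estimator with 6-min, 50 %-overlapping segments, is given below; the output interval dt_out and the synthetic 2-h time series are placeholders (for a 2-h record this segmentation yields the 39 segments mentioned above):

```python
import numpy as np
from scipy.signal import welch

dt_out = 0.25                           # assumed output interval (s)
fs = 1.0 / dt_out                       # sampling frequency (Hz)
t = np.arange(0.0, 2 * 3600.0, dt_out)  # 2 h of single-point output
u_series = 33.0 + np.random.default_rng(1).standard_normal(t.size)  # placeholder

seg_len = int(6 * 60 * fs)              # 6-min segments, matching the CBLAST leg
f, psd = welch(u_series, fs=fs,
               nperseg=seg_len,
               noverlap=seg_len // 2,   # 50 % overlap
               detrend='linear')
```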
For all three model resolutions, the power spectral densities of the along-wind component of wind speed (Fig. 15) are similar to each other in the low-frequency portion of the spectrum, from the lowest frequency (\(2.7 \times 10^{-3}\) Hz) to a critical frequency \(f_c\) above which the spectra decrease rapidly. As expected, \(f_c\) is larger for simulations with smaller grid spacing. The magnitude of \(f_c\) is apparently related to the mean advective velocity \(U_a\) divided by the smallest resolvable scale in the simulated flow, i.e., \(f_c \approx U_a / (6 \Delta )\), where \(\Delta \) is horizontal grid spacing, and \(6 \Delta \) is approximately the smallest scale that is unaffected by numerical filtering in CM1 (see, e.g., Appendix of Bryan et al. 2003).
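For reference, this rough estimate of the cut-off frequency can be tabulated for the three grid spacings used here, assuming an advective velocity of roughly 33 m s\(^{-1}\) (the flight-level wind speed quoted above); the exact value of \(U_a\) is an assumption:

```python
U_a = 33.0                             # assumed mean advective velocity (m/s)
for dx in (31.25, 15.625, 7.8125):     # horizontal grid spacings (m)
    print(f"dx = {dx:7.4f} m  ->  f_c ~ {U_a / (6.0 * dx):.2f} Hz")
```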
There is a notable difference between model spectra and the observational spectrum at low frequencies (\(f < 0.1\) Hz) which is likely related to the different measuring techniques (i.e., at a single point for the model results, and along a flight path for the observations). The difference is especially pronounced for the w spectra.
Most important, though, the LES spectra have \(f^{-5/3}\) structure, similar to the observational spectrum, for a certain range of frequencies (specifically, between about 0.08 Hz and the cut-off frequency \(f_c\)). This behaviour is most apparent for the two highest resolution simulations, and suggests that \(\Delta < 20\) m is needed to produce an inertial subrange in the tropical cyclone boundary layer for this model; we note that this conclusion is technically only valid for this level (190 m a.s.l.), and that higher resolution may be needed near the surface. As further evidence for the existence of an inertial subrange in our simulations, we note that the cross-stream and vertical velocity spectra have amplitude roughly four-thirds that of the along-stream spectra (as expected from theory; e.g., Wyngaard 2010) for frequencies of \(\approx \)0.08–0.5 Hz (not shown).
An inexpensive method to simulate wind profiles in the boundary layer of tropical cyclones is developed and evaluated. The method is intended for single-column modelling and for three-dimensional modelling with small domains (of order 5 km in extent), and can be used with a PBL parametrization or within large-eddy simulations. The key step is to account for processes that occur on scales larger than the proposed model domain.
The core of the new procedure is to add "mesoscale tendency" terms to the horizontal velocity equations to account for large-scale radial advection, centrifugal acceleration, and pressure-gradient acceleration within a tropical cyclone. The method utilizes three simple input parameters: R, the distance from the centre of the tropical cyclone; V, a reference wind profile; and \(\partial V / \partial R\), the radial gradient of V. Ideally, these three parameters can be determined by observations from within a tropical cyclone, or from a reference simulation (such as the axisymmetric simulation used in Sects. 2–3). The value for \(\partial V / \partial R\) is particularly difficult to estimate from observed storms, and model output can be quite sensitive to its value. But as shown in Sect. 3.2, if the tangential velocity varies as a power law, i.e., \(u_\phi \sim r^{-n}\), then the relation \(\partial V / \partial R = - n (V / R)\) can be used to estimate \(\partial V / \partial R\) once V and R are chosen, and assuming a reasonable guess can be made for n.
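As a small worked example of this relation (the power-law exponent n is a user assumption), note that n = 0.8 with V = 40 m s\(^{-1}\) and R = 40 km recovers the surface value of \(-8 \times 10^{-4}\) s\(^{-1}\) used for the first LES case above:

```python
def dVdR_from_power_law(V, R, n):
    """Estimate dV/dR (s^-1) assuming u_phi ~ r^(-n) near radius R (m)."""
    return -n * V / R

print(dVdR_from_power_law(V=40.0, R=40.0e3, n=0.8))   # -8.0e-04 s^-1
```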
Two slightly different formulations for the mesoscale tendency terms are presented: one intended for single-column modelling, given by (4) and (10); and one formulation to be used with LES, (25)–(26). Previous methods for simulating the boundary layer of tropical cyclones were evaluated alongside the new method, including a classic Ekman-type method that adds only a large-scale (geostrophic) pressure gradient acceleration to the horizontal velocity equations, and two recently developed methods that also account for centrifugal-acceleration-like terms (Nakanishi and Niino 2012; Green and Zhang 2015). The new method, which also accounts for large-scale radial (i.e., horizontal) advection, has clear advantages: it produces realistic mean wind-speed profiles compared with composites of dropsonde observations in tropical cyclones. Also, comparison of LES results with low-level data from NOAA flights during CBLAST shows reasonable results, including an effective eddy viscosity of O(50 m\(^2\) s\(^{-1}\)) and temporal spectra with frequency dependence of \(f^{-5/3}\) for frequencies \(\gtrsim 0.1\) Hz.
Before concluding, we note again that simulations discussed herein neglected surface heat flux and all moist processes, for simplicity. We also do not include any mesoscale tendency term for \(\theta \), which would tend to cool the boundary layer. Consequently, \(\theta \) profiles tend to be nearly well mixed in our simulations (not shown), whereas \(\theta \) profiles tend to be stratified (for \(z > 100\) m) in observed tropical cyclones (e.g., Kepert et al. 2016). Despite these approximations, the modelled wind profiles compare well with observations from the tropical cyclone boundary layer. Our results suggest that effects from stratification and moisture play a relatively minor role, compared to shear-driven turbulence mechanisms, in the boundary layer of tropical cyclones. Nevertheless, stratification and moisture should be considered in future work, which would necessitate inclusion of mesoscale tendency terms (i.e., radial advection) in equations for potential temperature and water vapour, and perhaps also the evaporation of water, which Kepert et al. (2016) found to be a major contributor to the \(\theta \) profile in tropical cyclones.
The authors thank Richard Rotunno for bringing this analysis to our attention.
The National Center for Atmospheric Research is sponsored by the National Science Foundation. Rochelle Worsnop was supported by NSF Grant DGE-1144083. Jun Zhang was supported by NOAA HFIP Grant NA14NWS4680028. High-performance computing support was provided by NCAR's Computational and Information Systems Laboratory (Allocation Numbers NMMM0026 and UCUB0025 on Yellowstone, ark:/85065/d7wd3xhc). The authors thank Richard Rotunno and Peter Sullivan for their informal reviews of this manuscript, as well as Kerry Emanuel, Benjamin Green, Daniel Stern, and the anonymous reviewers.
Appendix: Details of the Large-Eddy Simulations
This study utilizes "Cloud Model version 1" (CM1) (e.g., Bryan and Fritsch 2002; Bryan and Rotunno 2009), which uses a three-step Runge–Kutta method for the time integration scheme, and a fifth-order scheme for advection terms, following Wicker and Skamarock (2002). CM1 integrates a variety of equation sets including compressible, anelastic, or incompressible equations. This study uses the solver for compressible equations using the split-explicit technique (e.g., Skamarock and Klemp 1994), primarily because it is the most efficient of the CM1 solvers for distributed-memory supercomputers.
Using tensor notation (subscript i or \(j = 1,2,3\)) the governing equations are,
$$\frac{\partial u_i}{\partial t} = - u_j \frac{\partial u_i}{\partial x_j} - c_p \theta \frac{\partial \pi ^\prime }{\partial x_i} + \delta _{i3} g \frac{\theta ^\prime }{\theta _0} + \varepsilon _{ij3} u_j f + \frac{1}{\rho } \frac{\partial \tau ^t_{ij} }{\partial x_j} + \delta _{i1} M_1^{\text{LES}} + \delta _{i2} M_2^{\text{LES}} \, , \qquad \text{(31a)}$$
$$\frac{\partial \theta ^\prime }{\partial t} = - u_j \frac{\partial \theta }{\partial x_j} - \frac{1}{\rho } \frac{\partial \tau ^{\theta }_{j} }{\partial x_j} \, , \qquad \text{(31b)}$$
$$\frac{\partial \pi ^\prime }{\partial t} = - u_j \frac{\partial \pi }{\partial x_j} - \frac{R_a}{c_v} \pi \frac{\partial u_j}{\partial x_j} \, , \qquad \text{(31c)}$$
$$\frac{\partial e}{\partial t} = - u_j \frac{\partial e}{\partial x_j} + \frac{1}{\rho } \left[ \left( \tau ^t_{ij} + \tau ^d_{ij} \right) \frac{\partial u_i}{\partial x_j} + \frac{g}{\theta _0} \tau ^\theta _{3} + \frac{\partial \tau ^e_j}{\partial x_j} \right] + \epsilon \, , \qquad \text{(31d)}$$
where \(u_i\) is the model-predicted (i.e., resolved) velocity in Cartesian coordinates (\(u_1 = u\), \(u_2 = v\), \(u_3 =w\)), \(\theta \) is potential temperature, \(\pi = (p/p_r)^{R_a/c_p}\) is non-dimensional pressure, and e is subgrid TKE. Other symbols are defined as follows: g is the acceleration due to gravity, \(c_p\) and \(c_v\) are specific heats of air at constant pressure and volume respectively, \(R_a\) is the gas constant, \(p_r = 1000\) hPa is a reference pressure, \(\tau ^t_{ij}\) is stress from the subgrid turbulence model, \(\tau ^d_{ij}\) represents stress from the diffusive component of the advection scheme (explained below), \(\tau ^\theta _j\) is subgrid diffusivity for \(\theta \), \(\tau ^e_j\) is diffusivity for e, and \(\epsilon \) is dissipation. The equation of state \(\pi = ( \rho R_a \theta / p_r)^{R_a/c_v} \) is used to determine \(\rho \). Prime superscripts in (31) denote perturbations from a one-dimensional, time-independent, hydrostatic base state that is denoted by subscript 0; for example, \(\pi ^\prime (x,y,z,t) = \pi (x,y,z,t) - \pi _0(z)\). The \(\pi _0\) profile obeys the hydrostatic equation, \(d \pi _0 / d z = - g / (c_p \theta _0)\). Herein, \(\theta _0\) is assumed to vary linearly with height at a rate of 5 K km\(^{-1}\); we use surface values of \(\theta _0 (z=0) = 300\) K and \(\pi _0 (z=0) = 1\). At the initial time, \(v = V\) and \(u = w = \theta ^\prime = \pi ^\prime = e = 0\), except random small-amplitude (±0.1 K) perturbations are used for \(\theta ^\prime \) in the lowest 100 m a.s.l. As in Moeng and Sullivan (1994) we apply a surface heat flux for the first 50 min of the simulation to facilitate development of turbulence. Moist processes are neglected herein for simplicity.
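A minimal sketch of this base-state construction (a simple first-order upward integration of the hydrostatic relation on a uniform grid; not the model's actual discretization) is:

```python
import numpy as np

g, c_p = 9.81, 1004.0                 # m s^-2, J kg^-1 K^-1
dz = 5.0
z = np.arange(0.0, 3000.0 + dz, dz)
theta0 = 300.0 + 0.005 * z            # base-state theta: 300 K at z = 0, +5 K/km

# Integrate d(pi0)/dz = -g / (c_p * theta0) upward from pi0(0) = 1
pi0 = np.empty_like(z)
pi0[0] = 1.0
for k in range(1, z.size):
    theta_mid = 0.5 * (theta0[k] + theta0[k - 1])
    pi0[k] = pi0[k - 1] - g / (c_p * theta_mid) * dz
```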
The LES subgrid model is a "two part" scheme following Sullivan et al. (1994), which is an extension of the frequently used TKE-based scheme for LES (e.g., Deardorff 1980; Moeng 1984). The subgrid stress terms are parametrized as follows,
$$\begin{aligned} \tau ^t_{11} &= 2 \rho K_m \gamma \frac{\partial u}{\partial x} \, , & \tau ^t_{22} &= 2 \rho K_m \gamma \frac{\partial v}{\partial y} \, , \\ \tau ^t_{33} &= 2 \rho K_m \gamma \frac{\partial w}{\partial z} \, , & \tau ^t_{12} &= \rho K_m \gamma \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right) \, , \\ \tau ^t_{13} &= \rho K_m \gamma \left( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \right) + \rho K_w \frac{\partial \widetilde{u}}{\partial z} \, , & \tau ^t_{23} &= \rho K_m \gamma \left( \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} \right) + \rho K_w \frac{\partial \widetilde{v}}{\partial z} \, , \end{aligned} \qquad \text{(32)}$$
where \(K_m = c_m l e^{1/2}\) is a standard subgrid eddy viscosity, \(K_w\) is a supplemental near-surface eddy viscosity used to address the common problem of excessive mean vertical wind shear near the surface in LES (following Sullivan et al. 1994, their Eq. 23), and \(\gamma \) is a non-dimensional parameter used to blend these two eddy viscosities. We set the constant \(c_m = 0.1\) and use the same formulation for length scale l as Stevens et al. (1999). The variables \(\widetilde{u}\) and \(\widetilde{v}\) in (32) are average values: although Sullivan et al. (1994) used domain averages, we use 2-min time averages for these variables, and also to compute \(K_w\). For our applications, we find that these two types of averaging procedures produce very similar results for simulations of horizontally homogeneous shear-driven boundary layers, but that time averaging is more clearly applicable for the horizontally heterogeneous hurricane boundary layers that we are studying with a different application of CM1 (to be reported elsewhere).
Similar to other LES models, we use \(\tau _j^\theta = - \rho K_h ( \partial \theta / \partial x_j)\), \(\tau _j^e = 2 \rho K_m ( \partial e / \partial x_j )\), and \(\epsilon = c_\epsilon e^{3/2} / l\), where \(K_h = c_h K_m\), and the parameters \(c_h\) and \(c_\epsilon \) are formulated as in Stevens et al. (1999). One notable difference from other LES models used in the atmospheric sciences is that our subgrid TKE equation, Eq. 31d, includes a second scale transfer term, \(\tau ^d_{ij} ( \partial u_i / \partial x_j )\). This term is associated with the fifth-order advection scheme in CM1, which contains flow-dependent diffusion (see, e.g., Wicker and Skamarock 2002). The specific formulation for \(\tau ^d_{ij}\) for CM1 was documented by Bryan and Rotunno (2014, pg. 1130). We note that this term is not needed in (31d) for numerical purposes, but is included primarily for completeness; also, this term is necessary for precise calculations of TKE budgets for CM1 (not shown).
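For orientation, the TKE-based subgrid diagnostics named above can be written compactly as in the sketch below; the values of c_h and c_eps are placeholders here, since their full formulation (following Stevens et al. 1999) is not reproduced in this appendix:

```python
import numpy as np

def subgrid_diagnostics(e, l, c_m=0.1, c_h=3.0, c_eps=0.7):
    """Subgrid eddy viscosity, eddy diffusivity, and dissipation from subgrid TKE e
    (m^2 s^-2) and length scale l (m). c_h and c_eps are placeholder constants."""
    K_m = c_m * l * np.sqrt(e)     # eddy viscosity (m^2 s^-1)
    K_h = c_h * K_m                # eddy diffusivity for theta (m^2 s^-1)
    eps = c_eps * e**1.5 / l       # dissipation rate (m^2 s^-3)
    return K_m, K_h, eps

K_m, K_h, eps = subgrid_diagnostics(e=0.5, l=10.0)
```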
Black PG, D'Asaro EA, Drennan WM, French JR, Niiler PP, Sanford TB, Terrill EJ, Walsh EJ, Zhang JA (2007) Air-sea exchange in hurricanes: synthesis of observations from the coupled boundary layer air-sea transfer experiment. Bull Am Meteorol Soc 88:357–374
Bryan GH (2012) Effects of surface exchange coefficients and turbulence length scales on the intensity and structure of numerically simulated hurricanes. Mon Weather Rev 140:1125–1143
Bryan GH, Fritsch JM (2002) A benchmark simulation for moist nonhydrostatic numerical models. Mon Weather Rev 130:2917–2928
Bryan GH, Rotunno R (2009) The maximum intensity of tropical cyclones in axisymmetric numerical model simulations. Mon Weather Rev 137:1770–1789
Bryan GH, Rotunno R (2014) Gravity currents in confined channels with environmental shear. J Atmos Sci 71:1121–1142
Bryan GH, Wyngaard JC, Fritsch JM (2003) Resolution requirements for the simulation of deep moist convection. Mon Weather Rev 131:2394–2416
Burggraf OR, Stewartson K, Belcher R (1971) Boundary layer induced by a potential vortex. Phys Fluids 14:1821–1833
Deardorff JW (1980) Stratocumulus-capped mixed layer derived from a three-dimensional model. Boundary-Layer Meteorol 18:495–527
Donelan MA, Haus BK, Reul N, Plant WJ, Stiassnie M, Graber HC, Brown OB, Saltzman ES (2004) On the limiting aerodynamic roughness of the ocean in very strong winds. Geophys Res Lett 31:L18306. doi: 10.1029/2004GL019460
Drennan WM, Zhang JA, French JR, McCormick C, Black PG (2007) Turbulent fluxes in the hurricane boundary layer. Part II: Latent heat flux. J Atmos Sci 64:1103–1115
Emanuel KA (1986) An air-sea interaction theory for tropical cyclones. Part I: Steady-state maintenance. J Atmos Sci 43:585–604
Foster RC (2005) Why rolls are prevalent in the hurricane boundary layer. J Atmos Sci 62:2647–2661
Foster RC (2009) Boundary-layer similarity under an axisymmetric, gradient wind vortex. Boundary-Layer Meteorol 131:321–344
French JR, Drennan WM, Zhang JA, Black PG (2007) Turbulent fluxes in the hurricane boundary layer. Part I: Momentum flux. J Atmos Sci 64:1089–1102
Green BW, Zhang F (2015) Idealized large-eddy simulations of a tropical cyclone-like boundary layer. J Atmos Sci 72:1743–1764
Hinze JO (1975) Turbulence. McGraw-Hill, New York
Hock TF, Franklin JL (1999) The NCAR GPS dropwindsonde. Bull Am Meteorol Soc 80:407–420
Kang SL, Bryan GH (2011) A large-eddy simulation study of moist convection initiation over heterogeneous surface fluxes. Mon Weather Rev 139:2901–2917
Kang SL, Lenschow D, Sullivan P (2012) Effects of mesoscale surface thermal heterogeneity on low-level horizontal wind speeds. Boundary-Layer Meteorol 143:409–432
Kepert JD (2001) The dynamics of boundary layer jets within the tropical cyclone core. Part I: Linear theory. J Atmos Sci 58:2469–2484
Kepert JD (2012) Choosing a boundary layer parameterization for tropical cyclone modeling. Mon Weather Rev 140:1427–1445
Kepert JD (2013) How does the boundary layer contribute to eyewall replacement cycles in axisymmetric tropical cyclones? J Atmos Sci 70:2808–2830
Kepert JD, Wang Y (2001) The dynamics of boundary layer jets within the tropical cyclone core. Part II: Nonlinear enhancement. J Atmos Sci 58:2469–2484
Kepert JD, Schwendike J, Ramsay H (2016) Why is the tropical cyclone boundary layer not "well mixed"? J Atmos Sci 73:957–973
Kosiba KA, Wurman J (2014) Finescale dual-Doppler analysis of hurricane boundary layer structures in Hurricane Frances (2004) at landfall. Mon Weather Rev 142:1874–1891
Lewis DM, Belcher SE (2004) Time-dependent, coupled, Ekman boundary layer solutions incorporating Stokes drift. Dyn Atmos Oceans 37:313–351
Louis JF (1979) A parametric model of vertical eddy fluxes in the atmosphere. Boundary-Layer Meteorol 17:187–202
Mallen KJ, Montgomery MT, Wang B (2005) Reexamining the near-core radial structure of the tropical cyclone primary circulation: implications for vortex resiliency. J Atmos Sci 62:408–425
Markowski PM, Bryan GH (2016) LES of laminar flow in the PBL: a potential problem for convective storm simulations. Mon Weather Rev 144:1841–1850
Mason PJ, Thompson DJ (1992) Stochastic backscatter in large-eddy simulations of boundary layers. J Fluid Mech 242:51–78
Moeng CH (1984) A large-eddy-simulation model for the study of planetary boundary-layer turbulence. J Atmos Sci 41:2052–2062
Moeng CH, Sullivan PP (1994) A comparison of shear- and buoyancy-driven planetary boundary layer flows. J Atmos Sci 51:999–1022
Morrison I, Businger S, Marks F, Dodge P, Businger JA (2005) An observational case for the prevalence of roll vortices in the hurricane boundary layer. J Atmos Sci 62:2662–2673
Nakanishi M, Niino H (2012) Large-eddy simulation of roll vortices in a hurricane boundary layer. J Atmos Sci 69:3558–3575
Nolan DS, Zhang JA, Uhlhorn EW (2014) On the limits of estimating the maximum wind speeds in hurricanes. Mon Weather Rev 142:2814–2837
Nowotarski CJ, Markowski PM, Richardson YP, Bryan GH (2014) Properties of a simulated convective boundary layer in an idealized supercell thunderstorm environment. Mon Weather Rev 142:3955–3976
Ooyama K (1969) Numerical simulation of the life cycle of tropical cyclones. J Atmos Sci 26:3–40
Powell MD, Uhlhorn EW, Kepert JD (2009) Estimating maximum surface winds from hurricane reconnaissance measurements. Weather Forecast 24:868–883
Rotunno R, Bryan GH (2012) Effects of parameterized diffusion on simulated hurricanes. J Atmos Sci 69:2284–2299
Rotunno R, Emanuel KA (1987) An air-sea interaction theory for tropical cyclones. Part II: Evolutionary study using a nonhydrostatic axisymmetric numerical model. J Atmos Sci 44:542–561
Skamarock WC, Klemp JB (1994) Efficiency and accuracy of the Klemp-Wilhelmson time-splitting technique. Mon Weather Rev 122:2623–2630
Smith RK (1968) The surface boundary layer of a hurricane. Tellus 20:473–484
Sommeria G (1976) Three-dimensional simulation of turbulent processes in an undisturbed trade-wind boundary layer. J Atmos Sci 33:216–241
Stevens B, Moeng CH, Sullivan PP (1999) Large-eddy simulations of radiatively driven convection: sensitivities to the representation of small scales. J Atmos Sci 56:3963–3984
Sullivan PP, McWilliams JC, Moeng CH (1994) A subgrid-scale model for large-eddy simulations of boundary layers. Boundary-Layer Meteorol 71:247–276
Tallapragada V, Kieu C, Kwon Y, Trahan S, Liu Q, Zhang Z, Kwon IH (2014) Evaluation of storm structure from the operational HWRF during 2012 implementation. Mon Weather Rev 142:4308–4325
Wang J, Young K, Hock T, Lauritsen D, Behringer D, Black M, Black PG, Franklin J, Halverson J, Molinari J, Nguyen L, Reale T, Smith J, Sun B, Wang Q, Zhang JA (2015) A long-term, high-quality, high-vertical-resolution GPS dropsonde dataset for hurricane and other studies. Bull Am Meteorol Soc 96:961–973
Wang W (2014) Analytically modelling mean wind and stress profiles in canopies. Boundary-Layer Meteorol 151:239–256
Wicker LJ, Skamarock WC (2002) Time splitting methods for elastic models using forward time schemes. Mon Weather Rev 130:2088–2097
Wurman J, Winslow J (1998) Intense sub-kilometer-scale boundary layer rolls observed in Hurricane Fran. Science 280:555–557
Wyngaard JC (2010) Turbulence in the atmosphere. Cambridge University Press, Cambridge, 393 pp
Zhang JA, Drennan WM (2012) An observational study of vertical eddy diffusivity in the hurricane boundary layer. J Atmos Sci 69:3223–3236
Zhang JA, Uhlhorn EW (2012) Hurricane sea surface inflow angle and an observation-based parametric model. Mon Weather Rev 140:3587–3605
Zhang JA, Black PG, French JR, Drennan WM (2008a) First direct measurements of enthalpy flux in the hurricane boundary layer: the CBLAST results. Geophys Res Lett 35:L14813. doi: 10.1029/2008GRL034374
Zhang JA, Katsaros KB, Black PG, Lehner S, French JR, Drennan WM (2008b) Effects of roll vortices on turbulent fluxes in the hurricane boundary layer. Boundary-Layer Meteorol 128:173–189
Zhang JA, Drennan WM, Black PG, French JR (2009) Turbulence structure of the hurricane boundary layer between the outer rainbands. J Atmos Sci 66:2455–2467
Zhang JA, Rogers RF, Nolan DS, Marks FD Jr (2011) On the characteristic height scales of the hurricane boundary layer. Mon Weather Rev 139:2523–2535
Bryan, G.H., Worsnop, R.P., Lundquist, J.K. et al. Boundary-Layer Meteorol (2017) 162: 475. https://doi.org/10.1007/s10546-016-0207-0
EURASIP Journal on Information Security
Image life trails based on contrast reduction models for face counter-spoofing
Balaji Rao Katika (ORCID: 0000-0002-7315-142X) and Kannan Karthik
EURASIP Journal on Information Security, volume 2023, Article number: 1 (2023)
Natural face images are both content and context-rich, in the sense that they carry significant immersive information via depth cues embedded in the form of self-shadows or a space varying blur. Images of planar face prints, on the other hand, tend to have lower contrast and also suppressed depth cues. In this work, a solution is proposed, to detect planar print spoofing by enhancing self-shadow patterns present in face images. This process is facilitated and siphoned via the application of a non-linear iterative functional map, which is used to produce a contrast reductionist image sequence, termed as an image life trail. Subsequent images in this trail tend to have lower contrast in relation to the previous iteration. Differences taken across this image sequence help in bringing out the self-shadows already present in the original image. The proposed solution has two fronts: (i) a calibration and customization heavy 2-class client specific model construction process, based on self-shadow statistics, in which the model has to be trained with respect to samples from the new environment, and (ii) a subject independent and virtually environment independent model building procedure using random scans and Fourier descriptors, which can be cross-ported and applied to new environments without prior training. For the first case, where calibration and customization is required, overall mean error rate for the calibration-set (reduced CASIA dataset) was found to be 0.3106%, and the error rates for other datasets such OULU-NPU and CASIA-SURF were 1.1928% and 2.2462% respectively. For the second case, which involved building a 1-class and 2-class model using CASIA alone and testing completely on OULU, the error rates were 5.86% and 2.34% respectively, comparable to the customized solution for OULU-NPU.
Given the seamless integration of functionalities and technologies inside smart-phones, it is imperative not only to incorporate biometric access control features, but also to include algorithms and architectures which can detect and protect the contents against any form of impersonation or biometric-spoofing [1]. The face as a biometric establishes an individual's identity in a social setting, and this entrenchment permits easy traceability both in the digital space and across surveillance networks. Phone models therefore tend to use the owner's face as a biometric unlocking feature [2]. It is practical to assume that the natural face capturing environment, which involves taking a single shot image of a person standing in front of a camera, is well defined under somewhat constrained settings (of course with some variability in lighting and pose). A spoofing operation, however, can be effected on multiple fronts: (i) presenting a planar printed photo, as a mask, of the person who is being impersonated; (ii) replaying a video sequence from a tablet or another cell-phone of the target; and (iii) wearing a carefully designed prosthetic (with a certain texture and having appropriate slits) of the target individual.
There are many applications, particularly involving smart phones, where prosthetic based spoofing is unlikely. This is mainly because the customized design of a prosthetic tailored to mimic a particular individual's face (who owns the smart-phone) is an extremely difficult scientific exercise. This problem is exacerbated by the fact that to prepare a 3D mask [3] (flexible or rigid), tuned to a particular individual's most recent facial parameters, one needs to first prepare a cast of the person's face or derive some form of holographic representation of the individual's facial parameters surreptitiously. This is an extremely expensive and time consuming affair. Hence, much of the spoofing technology is likely to be directed towards planar spoofing, wherein low or high-resolution facial images of individuals are downloaded from the web and either printed and presented, or presented via tablets, to a particular face authentication/identification engine. Since most authentication engines look for facial similarity, the modality in which the authentication is done tends to ignore formatting anomalies connected with the spoofing operation. One of the reasons why an authentication engine gets fooled by a planar print is that, while from a machine vision perspective this engine is designed to be robust to pose and illumination variations, this robustness comes at the price of overlooking format changes associated with the manner in which facial parameters are presented to the camera [4, 5]. Hence, there is a need for a counter-spoofing algorithmic layer, which searches for some form of naturalness, through some statistical lens, with respect to the facial parameters presented to the camera.
Counter-spoofing based on physical models
When the spoof-type is planar with a high probability, the counter spoofing solution can be designed more effectively by picking that statistical or forensic lens which separates the natural face class from the planar spoofed version. Very often the selection of this lens is governed by the manner in which the planar print representation is viewed or analyzed. When a planar printed photo is presented to the camera, on physical grounds it is easy to see that there are multiple fronts on the basis of which the so called naturalness can be compromised: (i) a planar presentation does not have depth, hence, the blur-profile in the target image is largely homogeneous [6,7,8], and (ii) the reprinting process to synthesize a planar print brings about a progressive degradation in contrast [9], clarity, specularity [10], quality [11], or color-naturalness [12].
One type of statistical lens for detecting planar spoofing is a specularity check [13]. If the paper printing of the target's face is done on a glossy type of paper, this results in a dominant specular component [10, 13] in the trapped image. While the non-specular component is a function of the object's color reflectivity profile and texture/roughness, its specular component is a measure of the object surface geometry witnessed by the camera in relation to a fixed light source. In the case of a natural face, on account of a natural depth variation, the magnitude of the specular component is likely to be highly heterogeneous, while it is largely homogeneous for planar-print presentations [13]. In Emmanuel et al. [14], primary low rank specular features were derived from training face-images belonging to both classes. However, a principal components analysis (PCA) model was built for the natural face space alone, in Balaji et al. [10]. The training samples were projected onto this natural eigenspace. Since the spoof projections were ideally expected to correspond to the null space in relation to this PCA model, they were observed to have much lower magnitudes as compared to natural specular samples. Since the natural variability associated with the specular component is a function of many factors such as ethnicity, facial profile, presence of cosmetics, and other facial elements such as glasses and beards, this remains a non-robust primary feature.
Planar geometric constraints also impact the manner in which other parameters are influenced, such as contrast [9] or sharpness (or its opposite, blur) [6,7,8].
When natural photographs are either re-printed or re-imaged and re-presented to a still camera, there is a reduction in contrast which follows a power law drop [9]. This reduces the dynamic range in the intensity profile considerably, eventually resulting in a more homogeneous contrast profile throughout the image. This contrast homogeneity can be measured by fusing local contrast statistics, using a global variance measure [9]. One of the main issues with this choice of high-level feature is the lack of consistency when it comes to print re-production. There are high quality printers available for re-creating the original subject-face in virtually the exact same form before presenting it as a mask to the camera. Thus, this cannot be treated as a universal feature from the point of view of planar printing.
Alternatively, in literature, while examining the planar-spoofing problem, it was observed that in the case of closed cropped natural faces, the natural depth (or distance) variation with respect to the camera often had a tendency to reflect as a spatially varying blur [6, 8, 15] in the captured image. In the work of Kim et al. [6], two sets of images were taken of the same subject. In one case, the depth of field was narrowed deliberately to induce a significant blur deviation across the entire natural image. In case of a planar spoofing, the blur differential between the original and de-focused image is likely to be very small. This dis-similarity in the de-focus patterns was used by Kim et al. [6] to detect planar spoofing.
In another blur variability detection procedure [15], a camera with a variable focus was used in the experiment and was designed to focus manually at two different points on the person's natural face: (i) nose of the individual which is closest to the camera and the (ii) the ear of the individual which is the farthest from the camera. In the manual search procedure, the focal length adjustment was done to ensure clarity of one of these two facial-entities (nose or ear). It was observed that in the case of the natural face, the number of iterations required for the two cases were very different. On the other hand for a planar spoof presentation, virtually the same number of iterations were required to produce either a clear nose or a clear ear image. This difference between convergence trends was used to detect planar spoofing.
In an isolated image analysis setting (without deploying multiple entrapments and variable focus cameras), a pin-hole camera model was presented in [8] to bring out the problem connected with this blur phenomenon. A simple sharpness profile analysis based on gradients and gradient-thresholding was done to generate a statistic which gave an approximate measure of sharpness for the presented image. In the case of planar spoofing, since the referential plane of focus (or object plane) need not coincide precisely with the spoof-print presentation, a homogeneous blur is likely to be superimposed on top of the original natural blur trapped in the printed version. Because of this, the average sharpness of the planar print version is expected to be much lower as compared to mean sharpness computed from a natural face image. The statistic proved to be sub-optimal, particularly for cases where the plane of focus was close to the print-object plane for print-presentations. The other problem was that with regular cameras in which the depth of field covers the complete face, the blur deviation is likely to be subtle. Thus, this blur diversity cannot be easily trapped without deploying a highly precise single face image based depth map computation algorithm.
Entrapment of scene related immersive information, particularly regarding the positioning of light sources [16], is possible in the case of natural faces. This is because for portions of the face which are smooth in nature, such as the cheeks and the forehead, the surface normal directions, for a fixed ethnic group of individuals, can be reliably estimated based on 3D registration frames. This becomes a referential pattern available in the repository. Now, when the subject presents his/her face to the camera, at precisely the same spatial locations, based on the apparent intensity gradient and the known source co-ordinates relative to the subject, the surface normal directions are re-estimated. When there is a similarity in direction at a majority of the points where the measurements are taken, then the presentation can be declared as a natural one. When the estimated surface normal directions deviate considerably from the test subject, then it is highly probable that this inconsistency is due to a planar spoofing. While the approach is interesting, there are some issues with it:
Multiple light sources are required at the surveillance point (at least two as in [16]), so that the same subject's face presentation can be illuminated from multiple directions. The overall setup requires additional lights, timers and switches and the per-subject assessment time is significant. This makes this architecture quite infeasible in large scale public scanning environments.
Intra-natural face class errors associated with the normal direction estimation tend to climb if there are pose, scale, and expression changes in the individual [16].
Since the points at which the measurements are taken must be registered in space, in a subject independent setting, identification of these key-points becomes a noisy affair for an arbitrary pose and scale presentation. This presents itself as what can be called subject-mixing noise or registration noise [4].
Planar spoofing (both print and digitized presentations) tends to imbibe some form of radiometric distortion which stems from the additional printing and re-imaging stages, which are constrained and lossy in nature [12]. Thus, an image of a planar printed face may not exhibit, on one hand, all the true colors which were originally present in the natural face image of the same subject. Given the availability of both natural and spoof samples, this radiometric model can be estimated at a generic level but confined to a subject/client specific analysis [17]. When a test image arrives, its affiliation with the subject-specific radiometric distortion model is done via some form of regression analysis to establish the trueness or naturalness of the image. There are several issues with this arrangement:
To ensure that only the illumination and color profile confined to the facial-region of a particular subject is analyzed, the background is painted and cropped via a segmentation procedure. The close cropping is extreme to the extent that no part of the person's hair or lower neck/shoulders are included in the segmented region. When this close cropping is not done, then both the radiometric (real, planar) model-estimation, along with the detection procedure, becomes noisy and quite unreliable.
When there are subtle pose changes, considerable illumination variation, and scale changes in the training sets, the model learning procedure (even on a subject specific note) becomes highly unreliable. Because of this lack of model reliability, the accuracy reported for difficult datasets such as CASIA [18] was found to be on the lower side.
Counter-spoofing based on image texture and quality analysis
It was proposed in Maatta et al. [19] that planar spoofing tends to bring about a change in texture and facial perspective (apparent or projected face) compared to real facial images. Local binary patterns (LBPs) [17, 19, 20], Gabor features, and Histogram of Gradients (HoG) descriptors can therefore be used to capture texture statistics linked to both the classes and build a 2-class SVM model. But without a crisp differential noise analysis with respect to natural and planar spoof representations, the features/statistics picked may not be robust enough.
In the same context of texture, facial micro-analysis via landmark identification can be used to track faces across real-time surveillance videos [21]. Once facial landmarks, such as eye centers and nose tips, are identified from a sequence of frames using standard face detection protocols, pixel information from their local neighborhoods can be collated to construct a statistical model for each landmark. These so called landmark-descriptors, when stitched together in the form of a connected graph, can be tracked across videos. In a dynamic camera and still face arrangement, multiple collections of landmark-sets taken from a series of video frames can be used to recreate a generic 3D model of the person's face [22]. In the case of planar spoofings, these gathered measurements will result in the re-creation of face surfaces which are largely flat and lacking in depth information. There are several issues with this arrangement:
Relative movement between the subject and the camera is a must in this arrangement, either to re-create a 3D-representation by aligning the landmark features from multiple frames or to establish whether the presentation is planar in nature. This relative dynamism may not always be feasible at an un-manned surveillance point, particularly when the camera is expected to move relative to a static face.
If too many landmark-points are identified, the graph structure is expected to become unstable (leading to alignment problems) when there is a pose variation or an illumination profile change. Too few landmark points will result in an imprecise model in the context of 3D surface reconstruction. Under varying ethnic origins, this optimization problem will turn subject specific and difficult to handle. Cross-porting a particular counter-spoofing architecture/arrangement tuned to one dataset may not be very effective on a dataset housing subjects from a different geographical region.
Mixed bag techniques
Apart from model based approaches, in Wen et al. [23], statistics based on a mixed bag of features, ranging from texture and color diversity to degree of blurriness, were deployed, assuming that the extended acquisition pipeline (in a spoof-environment), connected with a re-printing and re-imaging procedure, tends to alter and impose constraints on this bag of features on a multitude of fronts. There were several issues with this arrangement:
In a diverse planar spoofing environment, there exist several uncertainties related to the spoofing-medium: (i) for paper-print-presentations, the nature of the paper (glossy/non-glossy), printing resolution, and print color quality remain unknowns; (ii) for tablet and other digitized presentations, the nature and extent of re-sampling noise [19], resolution, color re-transformation, and reproduction remain unknown. Thus, using a common and diverse statistical lens to segregate natural and planar-spoofings may not be very effective. What works for one type of spoofing may not work for another.
The other main problem in conducting the training in a subject independent fashion is the influx of content dependent noise connected with subject-type variability [4] which stems from differences in facial parameters such as eye structures, their separation, nose profiles, and cheek and jaw-bone patterns. This is where client/subject dependent models [17, 20] tend to outshine the subject independent ones [9, 11].
Texture analysis in a broader context can be visualized as a quality assessment measure, wherein in most cases natural images are expected to possess a higher quality and clarity as compared to spoofed images [24, 25]. This blind quality assessment is brought about via a differential analysis wherein differential information between the original and its low pass filtered version is analyzed. Natural faces tend to exhibit a greater noise differential as compared to planar prints. Statistics such as pixel difference, correlation, and edge based measures were used to quantify the differential noise parameters and subsequently the overall quality score. There were several issues with this arrangement:
Since edge-related statistics are heavily dependent on the subject facial profiles, the measures were not subject-agnostic, inviting subject-specific content interference or "subject mixing noise" [4].
There was no scientific basis or analytical justification for choosing such a potpourri of statistics for performing this noise analysis. Hence, these features/statistics were not all that precise.
The differential noise and image quality analysis was done in a 2-class setting (real versus spoof), and assuming prior availability of sample training images from the spoof-segment, which is impractical.
Subject mixing noise
Overall, in the approaches discussed so far, features connected with intensity, contrast [9, 12], blur/sharpness [7, 8], specularity [10], and differential statistics such as localized binary patterns (LBPs) and its variants, collected in a regular fashion, are pooled together to generate a 2-class model assuming that spoof-print samples are available. The problem with this paradigm is that in this frame one cannot avoid what can be called "subject mixing noise," as subject-related perceptual content tends to interfere with the regularized measurements. This "mixing" problem stems from a lack of proper face registration due to pose and face-scale changes [4]. This problem can be mitigated to some extent in a client-authentication rather than a client-identification setting by restricting the analytical and decision space to specific subjects/clients [17, 20].
Since the facial parameters such as eye-type and relative positioning, nose (size and shape), mouth, and cheek bones are distinct but largely fixed for a given individual, registered measurements taken in a certain order for a natural image can be weighed against those taken from a print-spoof image without worrying about "subject-mixing noise." There are many more choices as far as feature selections are concerned in a client specific arrangement as opposed to a client agnostic one. While lack of portability and customization of the detection algorithm is a drawback of this architecture, a big advantage is the higher accuracy one can achieve, since the "subject mixing noise" is nullified provided, pose variation and scale change is minimal.
Identity independent counter-spoofing via random scans
This so called subject-mixing noise can be combated in a subject agnostic setting by noting that short-term pixel intensity correlation profiles carry significant immersive information regarding both the type of object presented to the camera and the lighting environment [4, 5]. Thus, by trapping this short-term correlation profile without inviting content dependent texture-noise, one can detect natural presentations. The first, second or third order pixel correlation profiles can be trapped by executing a simple random walk [4] from the center of the image. Multiple realizations of this random walk phenomenon can be used to auto-populate the features associated with a natural image. By ignoring the macro-structure in the face image, only the format differences are extracted via first order differential scan statistics [4]. This allows this random walk based counter-spoofing algorithm to transcend a variety of planar-spoof-media, lending itself as a monolithic yet universal solution. While such a random walk approach can tell the difference between an over-smoothed prosthetic and a natural face [5], albeit with a reduced degree of reliability, it has a tendency to hit an error-rate ceiling when the acquisition format or scene variability in the inlier/natural face class is on the higher side. The error rates reported for CASIA-CASIA are therefore likely to saturate at EER = 1.89% and 2.16% for printed and digital planar spoof-sets respectively. This may not even decrease, even if one drifts to a client/subject specific frame.
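To make the idea concrete, a toy sketch of one random-scan realization and its first-order differential statistics is given below; it only illustrates the general mechanism described in [4, 5], and the walk rule, step count, and image are placeholders:

```python
import numpy as np

def random_scan_diff_stats(img, n_steps=2000, seed=0):
    """One random-walk scan from the image centre; returns first-order
    differences of the sampled intensities (a format-sensitive statistic)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    y, x = h // 2, w // 2
    samples = [float(img[y, x])]
    moves = np.array([(-1, 0), (1, 0), (0, -1), (0, 1)])
    for _ in range(n_steps):
        dy, dx = moves[rng.integers(4)]
        y = int(np.clip(y + dy, 0, h - 1))
        x = int(np.clip(x + dx, 0, w - 1))
        samples.append(float(img[y, x]))
    return np.diff(np.asarray(samples))

# Example on a synthetic grayscale image
img = np.random.default_rng(1).integers(0, 256, size=(128, 128)).astype(float)
diffs = random_scan_diff_stats(img)
```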
Motivation and problem statement
In this work, as opposed to a universal one, a spoof model directed approach on client-specific grounds has been proposed wherein the spoofing frame is considered as a planar print presentation. This streamlining permits the design and deployment of a much more precise solution with a higher detection accuracy as compared to the universal case. As discussed earlier, this client specific weighing (in the image analysis domain, natural versus spoof) allows a mitigation of "subject mixing noise." The counter-spoofing system here knows the identity of the face presented to the camera and can access stored samples related to that "presented-subject" from the repository, with a client/subject-dependent [17, 20], 2-class support vector machine (SVM) model and use that prior data to perform the classification of this new test image sample. The main contributions in this work are:
Proposition of a new contrast reductionist frame for planar print counter-spoofing, by deploying a discrete logistic map at the pixel level [26]. This has been termed as an image life trail wherein the contrast of the original test image (real or spoof) drops with each iteration and eventually reaches a virtually zero contrast state (saturation point).
A self-shadow enhancement procedure which feeds on this life trail to make the self-shadows trapped in natural images much more prominent. It has been observed that planar-print spoof images tend to have suppressed self-shadows as compared to natural ones, which serves as a discriminatory feature for segregating the two classes.
A simple statistical model based on the dynamic range of the intensity distributions of the real and spoof/print classes has been used to justify the choice of the first, first-difference ratio statistic for enhancing self-shadow information, to arrive at the optimal choice of the exponent \(\alpha ^*\) via a calibration process, and to shape the final feature used to build the subject-specific 2-class model.
The proposed overall architecture has been split into two segments/blocks: (i) feature extraction, based on contrast reductionist image life trails leading to the extraction of critical information pertaining to self-shadows found in natural face-images (Fig. 1), and (ii) the training, subject-specific model building and final testing procedure shown in Fig. 2.
Block-diagram of feature extraction procedure
Block-diagram of training and classification/detection module
The section-wise organization is as follows: the proposed self-shadow formulation, i.e., the basis for this work, in which contrast-reduced life trails are generated using logistic maps [26], is discussed in Section 2. The analytical frame, in which the image is abstracted as a random variable, is used in Section 3 to validate some of the claims, particularly those linked to the life trails and the convergence rates of real and print images. The self-shadow image statistic derived from the image life trail, and its further enhancement, are supported with an analytical justification in Section 4. Once the primary statistics have been finalized, every new illumination environment demands a re-calibration and training for its own subjects; a method for arriving at the operating point for every new dataset is discussed in Section 5. The database description is given in Table 1 and the experimental results are presented in Section 8. Finally, to impart a certain flexibility, a path is proposed in which cross-porting can be done with a random-scan front end followed by a Fourier descriptor, to build subject-agnostic models in Section 9.
Table 1 Selective face anti-spoofing datasets and related parameters
Motivation and formulation for extracting self-shadows
Natural faces taken under constrained lighting conditions, with a frontal camera view and the light source positioned at an incline relative to the face, tend to exhibit what are known as self-shadows. A self-shadow is formed mainly for the following reasons: (i) the natural face exposed to a particular lighting environment has an irregular 3-dimensional surface contour, depending on the facial features of the individual; (ii) when light is projected onto one side of the face, the elevated parts of the face, such as the nose, high cheek bones, and the facial curvature on either side of the cheeks, tend to serve as occlusions to the projected light, leaving behind a self-shadow or a partial shadow on the other side. An example of this has been illustrated via a clay model as shown in Figs. 3 and 4. The camera positioned in front of the individual can be marked as the referential northern direction, relative to the person's face (which is in the southern direction). This camera (viz., an attached and aligned cell-phone camera unit), together with the clay face, is kept fixed for the entire experiment. There are three light-source orientations relative to the clay-face model, indicated in a yellow shade in Fig. 3.
Experimental setup using a clay model and a fixed cell-phone camera for producing natural images with self-shadows
Images captured using the experimental setup (Fig. 3), for three different table lamp positions (north-west, west and south-west)
The images captured with this arrangement for three different source locations are shown in Fig. 4a–c. In Fig. 4a, the light source is positioned at the top-left-front of the person's face, beside the camera unit (north-west direction); in Fig. 4b, the source is positioned towards the left of the person and partly in front (west position); while in Fig. 4c, the source is positioned behind the person in the south-west position. Self-shadows are evident in all three images but are minimal for the north-west position and maximal when the light source is behind the clay face (south-west position).
Claim 1
The first claim is that these self-shadows can be enhanced by first deploying an iterative contrast reducing procedure using a non-linear logistic map and then taking a relative difference ratio with the parent image. This difference image carries precious information related to the self-shadows.
Claim 2
The second claim is that in the case of camera imaging of a planar print of a particular subject's face, these self-shadows remain in a suppressed state. The original self-shadows that were trapped in the planar print of a natural facial image are no longer fully visible, mainly owing to the secondary lighting environment, which leads to the formation of a much more uniformly illuminated image.
To facilitate an enhancement of this self-shadow pattern in the natural image, a non-linear logistic mapping [26] is deployed. This is an iterated function system that operates repeatedly on an initial scalar value and eventually converges to a "fixed point." One advantage of this logistic map is that, on average, the convergence rate is quite fast and the fixed point is reached quickly, irrespective of the initial state.
Logistic maps and image life trails
Assume \(I_0(x,y)\) to be the normalized intensity value at a particular spatial location (x, y) in an \(N\times N\) face image of a particular subject, such that \(I_0(x,y)\in [0,1]\), where \(I_0(x,y) = 0\) represents a completely black pixel and \(I_0(x,y) = 1\) a completely white pixel. The logistic map is a contrast-reducing mapping: when it is applied to a "swarm" of image pixels independently, after a few iterations the entire image reduces to a zero-contrast image. We define an image "swarm" as the collection of all the intensity states of the \(N^2\) pixels undergoing this non-linear transformation. The length of this contrast-reductionist trail has been termed an "image life trail." The life-line here refers to the number of iterations required for the parent image to reach a virtually zero-contrast image, or to reach a point wherein almost all the pixels in the image swarm have come close to the fixed-point value. To begin with, this pixel swarm is defined as follows:
$$\begin{aligned} SWARM(I_0) = \{I_0(x,y), s.t. x,y\in \{1,2,...,N\}\} \end{aligned}$$
This non-linear iterated function system is defined as [26],
$$\begin{aligned} I_{n+1}(x,y) = 2I_n(x,y)(1-I_n(x,y)) \end{aligned}$$
with the initial value \(I_0(x,y) \in (0,1)\), where \(I_n(x,y)\) is the value at the \(n^{th}\) iteration (\(n > 0\)) with \(I_n(x,y) \in (0,1)\). Irrespective of the initial value, the logistic map directs the value towards what is well known as a fixed point, which in this case happens to be 0.5. By design, with every iteration this value drifts closer and closer to the fixed point.
When such a map is applied to the swarm on a pixel-by-pixel basis, the entire swarm undergoes a transformation with each iteration, eventually producing a sequence of low-contrast images (Fig. 5). Finally, the swarm results in a zero-contrast image when almost all the pixels have converged to a value close to the fixed point 0.5 (which corresponds to a gray-level value of 128).
Contrast reductionist life trails for real and spoof image samples using the logistic map
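To make the life-trail construction concrete, the following minimal NumPy sketch (an illustrative re-implementation, not the authors' code; the random test image is only a stand-in for a normalized face crop) applies the pixel-wise logistic map repeatedly and prints the standard deviation of the swarm at each iteration as a crude contrast measure, showing the drift towards the zero-contrast state at 0.5.

```python
import numpy as np

def logistic_life_trail(img, n_iter=15):
    """Apply the pixel-wise logistic map I_{n+1} = 2 I_n (1 - I_n) to a normalized
    image and return the list of images along the life trail (index 0 = original)."""
    trail = [img.astype(np.float64)]
    for _ in range(n_iter):
        prev = trail[-1]
        trail.append(2.0 * prev * (1.0 - prev))
    return trail

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy = rng.uniform(0.05, 0.95, size=(250, 250))   # stand-in for a normalized face image
    for n, frame in enumerate(logistic_life_trail(toy)):
        # The swarm standard deviation acts as a crude contrast measure.
        print(f"iter {n:2d}: mean = {frame.mean():.4f}, std = {frame.std():.4f}")
```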
Dynamic ranges of real and print face-images
At this point in the life-trail analysis, it is important to draw a distinction between the trails of a natural and a spoof/print image. Any pixel having a particular normalized intensity in the range (0, 1) will eventually converge to the fixed point 0.5 upon repeated application of the logistic map. However, the trail dynamics of the pixel swarm, or rather its collective convergence, will depend on the slowest among the myriad pixel convergence trails (over the image), which is a function of the intensity value spread (or rather the dynamic intensity range). The smaller the dynamic range, the faster the convergence. Hence, trails of low-contrast spoof images are likely to converge much faster as compared to natural face images.
In other words, the natural version decays much more slowly. It was surmised in [9] that, given two registered face images (belonging to the same subject), the original normalized intensity version can be linked to the planar-printed version via a power-law relation,
$$\begin{aligned} I_{pp}(x,y)\approx I_{ORIG}(x,y)^{\gamma } \end{aligned}$$
where \(\gamma >1\), and subsequent images of planar prints can be represented by the relation,
$$\begin{aligned} I_{pp[m]}(x,y) \approx I_{pp[m-1]}(x,y)^{\gamma } \end{aligned}$$
where \(m\ge 2\), with \(I_{ORIG}(x,y) \in [0,1]\) and \(I_{pp[m]}(x,y) \in [0,1]\). This implies that with subsequent printing, both the moderately dark zones and the lighter zones become darker. Eventually, as the planar printing is iterated, the entire image becomes completely dark. Hence, the planar printing procedure modeled via a gamma power law is also a contrast-reductionist transformation, wherein the transformed image has a lower intensity dynamic range as compared to the original image. It also follows that a planar print version will always have a lower contrast than the parent original image.
Consider the generation and deployment of a contrast score metric for measuring the dynamic range, with scores generated for eight subjects from the CASIA dataset (both real and spoof) [29]. Based on the metric used, the scores produced for the natural faces are higher than those of the spoof/print versions of the same subjects. Since all images have been resized to \(N \times N\), let the normalized intensity value at position (x, y) be represented/mapped as:
$$\begin{aligned} I((x-1)N+y) = I_0(x,y)\in [0,1] \end{aligned}$$
with \(x,y\in \{1,2,...,N\}\). Pull out the non-trivial intensity values and let \(I_{NZ}(k), k\in \{1,2,...,M\}\) (\(M \le N^2\)) be given by,
$$\begin{aligned} I_{NZ}(k) = I_0(x,y); \text { provided } I_0(x,y) > \epsilon _1 \end{aligned}$$
Using these non-zero intensity values, compute the mean and standard deviation over the entire image,
$$\begin{aligned} \mu _G= & {} \frac{1}{M}\sum _{k=1}^{M} I_{NZ}(k)\nonumber \\ \sigma _G= & {} \sqrt{\frac{1}{M}\sum _{k=1}^{M} (I_{NZ}(k)-\mu _G)^{2}} \end{aligned}$$
The final contrast score can be computed as [9], with a slight modification to account for images with very dark foregrounds:
$$\begin{aligned} CON = \frac{\sigma _G}{\mu _G} \end{aligned}$$
To check the validity of this contrast metric from a perceptual viewpoint, the scores produced for real and print versions are shown in Fig. 6. Print versions tend to have lower contrast scores as compared to natural faces.
Real and planar prints with contrast scores as per Eqn. (4)
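A minimal sketch of this contrast score (Eq. (4)) is given below; the threshold \(\epsilon _1\) used to discard near-black pixels and the synthetic test images are illustrative assumptions.

```python
import numpy as np

def contrast_score(img, eps1=1e-3):
    """Contrast score CON = sigma_G / mu_G over the non-trivial (> eps1) normalized
    intensities of an image, following Eq. (4); eps1 is a user-chosen threshold."""
    vals = img[img > eps1]          # I_NZ: non-trivial intensity values
    return vals.std() / vals.mean()

# A print-like image with a compressed dynamic range scores lower than a natural-like one.
rng = np.random.default_rng(1)
natural_like = rng.uniform(0.05, 0.95, size=(250, 250))
print_like = rng.uniform(0.20, 0.60, size=(250, 250))
print("CON natural-like:", round(contrast_score(natural_like), 4))
print("CON print-like  :", round(contrast_score(print_like), 4))
```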
To link this apparent contrast degradation seen in print images with the exponential gamma law presented earlier in this section and in [9], the same dynamic-range numbers have been computed using the standard deviations \(\sigma\) (over the intensity profiles) on synthetically produced images, obtained by applying this gamma exponentiation to the natural faces of subjects. For all the intensity values in the set derived from a natural image, the exponential law is applied as,
$$\begin{aligned} I_{synthetic}(i,\gamma ) = I(i)^{\gamma } \end{aligned}$$
where \(\gamma >1\) and \(I(i) \in SET_{0}\). The dynamic-range scores for \(\gamma =1\) (i.e., no transformation), and then for \(\gamma = 1.5, 3, 5\), for natural face images of four subjects are shown in Fig. 7. A simple statistical model is used to understand the differences between natural and print versions and how the contrast-reductionist life trails evolve in both cases.
Impact of the Gamma power law on the degradation of the contrast profile of the original image (in the corresponding synthetic versions). Results are shown for γ = 1 (no transformation) and for 1.5, 3, 5
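The synthetic gamma-law degradation can be reproduced in a few lines; the sketch below (illustrative, with a random stand-in for a natural face) reports the intensity standard deviation as the dynamic-range score for the same \(\gamma\) values used in Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(2)
natural = rng.uniform(0.05, 0.95, size=(250, 250))   # stand-in for a natural face image

# I_synthetic = I ** gamma; the intensity spread (sigma) shrinks as gamma grows,
# mimicking the contrast loss of repeated planar printing.
for gamma in (1.0, 1.5, 3.0, 5.0):
    synthetic = natural ** gamma
    print(f"gamma = {gamma:>3}: sigma = {synthetic.std():.4f}, mean = {synthetic.mean():.4f}")
```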
Analytical frame for validation
The motive of this section is to abstract the image (real or print) as a random variable, bring out various elements of the problem connected with the image trail, and at the same time partly validate some of the results analytically. Two facial images of the same subject (one original and one print version) are expected to have intensity distributions that are similar in shape up to a scale factor. However, the planar print version is expected to exhibit a lower dynamic range with respect to the intensity distribution. The following aspects are evaluated in the subsequent subsections:
Statistical model and convergence to a fixed point and subsequent proof given in Appendix A.
Life trail dynamics discussing the rate of convergence of the real and print-abstractions as random sequences with proof details in Appendix B.
Fixed point and convergence analysis based on a simple statistical model
In this section, a simple statistical model is presented, to reflect the difference in dynamic range of natural and print images. The original image is modeled as a random variable \(X_0\) with a uniform distribution over the range (0, 1), while the print version is mapped to a uniform random variable \(Y_0\), with reduced range (0, 1/a), where \(a > 1\).
The impact of the iterative logistic map on these two types of random variables is examined, and some of the proofs are elaborated in Appendix A.
Let \(f_{X_0}(x)\), \(x\in (0,1)\), represent the referential probability density function (PDF) of a normal face image, corresponding to the global pixel-intensity distribution. In a crude way, its low-contrast version after planar printing is defined via the functional mapping given by the exponential law discussed in the earlier section,
$$\begin{aligned} Y_0 = X_0^{\gamma } \end{aligned}$$
and this is expected to have a PDF,
$$\begin{aligned} f_{Y_0}(y) = a \times f_{X_0}(ay) \end{aligned}$$
With \(a>1\), a shrinking of the referential density function is created without compromising the overall structure of the intensity probability density function (the number of inflection points and their relative positioning remain the same). Note that \(y\in [0, \frac{1}{a}]\), \(a > 1\), with \(a = e^{1/\gamma }\) and \(\gamma >1\). Upon application of the logistic map [26] to both the natural random variable and its planar-printed, low-contrast counterpart \(Y_0=X_0^{\gamma }\), secondary random variables (after one iteration) \(X_{1}\) and \(Y_{1}\) are formed: \(X_{1}= 2X_0(1-X_0)\) and \(Y_{1}=2Y_0(1-Y_0)\).
It can be shown that if \(X_0 \sim UNIFORM[0,1]\), then over successive iterations of this logistic map,
$$\begin{aligned} X_{n} = 2X_{n-1}(1-X_{n-1}); n\ge 1 \end{aligned}$$
the PDF of the transformed natural random variable, \(X_n\), via this logistic map in the \(n^{th}, n\ge 1\) iteration is,
$$\begin{aligned} f_{X_{n}}(x) = \left( \frac{1}{2}\right) ^{n-1} \left[ \frac{1}{\sqrt{1-2x}}\right] ^n \end{aligned}$$
with \(x\in [0,0.5]\), which implies that once the logistic map is applied, for all the following iterations the points stay on the left side of \(x = 0.5\) and approach the fixed point from the left. As n becomes large, it can be shown that \(f_{X_{n}}(x) \approx \delta (x-\frac{1}{2})\), i.e.,
$$\begin{aligned} X_n \rightarrow 0.5 \text { with prob. '1' for large } n \end{aligned}$$
Similarly starting off with \(Y_0 ~ UNIFORM[0, (1/a)]; a > 1\) (uniformly distributed but reduced dynamic range) and applying the logistic map several times, one can manipulate the equations to obtain the result:
$$\begin{aligned} Y_n \longrightarrow 0.5 \text { with probability '1' for } n>>1 \end{aligned}$$
This is illustrated in Appendix A.
Life trail dynamics
The intention here is to demonstrate that when an image having a higher dynamic range in terms of intensity is subjected to the same logistic mapping, the convergence rate towards the fixed point is slower; for images with smaller dynamic ranges, the convergence is faster. The iterative functional mappings for both the natural abstraction (modeled by the random variable X) and the print abstraction (modeled by the random variable Y) are:
$$\begin{aligned} X_{n} = 2X_{n-1}[1-X_{n-1}] \end{aligned}$$
with \(n>0\), and \(X_{0} = X \sim UNIFORM[0,1]\).
$$\begin{aligned} Y_{n} = 2Y_{n-1}[1-Y_{n-1}] \end{aligned}$$
with \(n>0\), and \(Y_{0}= Y \sim UNIFORM[0,1/a]\) such that \(a>1\). To monitor and track the fixed-point convergence, the normalized first-order difference metrics are defined as,
$$\begin{aligned} G_{n} =\frac{X_{n}-X_{n-1}}{X_{n-1}} = 1-2X_{n-1}; n\ge 1 \end{aligned}$$
$$\begin{aligned} H_{n} = \frac{Y_{n}-Y_{n-1}}{Y_{n-1}}= 1-2Y_{n-1}; n\ge 1 \end{aligned}$$
It is shown in Appendix B that the print-abstraction error sequence \(H_{n}\) converges faster than its counterpart \(G_{n}\), the real-image-abstraction error sequence. Thus, it follows that the parent print sequence \(Y_n\), because of its reduced dynamic range, converges faster than the corresponding parent real-image sequence \(X_n\). In other words, life trails of low-contrast print images are shorter than the trails of real images.
Actual image life trails
Waiting for a precise convergence of all points is not necessary; in a practical image-analysis setting, the convergence is approximate and designed to meet perceptual grounds with respect to a zero-contrast image.
For a particular pixel positioned at location (x, y) and subjected to this non-linear mapping, the pixel is considered active if its value in the next iteration is significantly different from the earlier value. When two or more successive values are close, the pixel, in an approximate sense, is assumed to have reached a saturation point and to be close enough to the fixed point. If \(I_n\) is the intensity level at iteration n, the pixel is considered to have converged and reached a saturation point if,
$$\begin{aligned} \frac{|I_{n} - I_{n-1}|}{I_{n}} < \epsilon \end{aligned}$$
All the pixels with a non-zero intensity state are expected to drift towards the fixed point 0.5 eventually. Note that the convergence rates are non-uniform and are a function of the initial value (or intensity state) of a particular pixel within the swarm. Hence, the greater the spread of intensity levels (or diversity in the intensity profile), the slower the swarm convergence. The entire swarm \(SWARM(I_0)\) is said to have converged at iteration \(n = s\), where s is the approximate saturation point of the complete image swarm, if more than a fraction \(\gamma\) of the \(N^2\) pixels (\(\gamma \ge 0.9\)) have individually met the convergence constraint given in Eq. 10. This swarm convergence trend has been tracked using a saturation curve based on a function P(n), where n is the iteration number. Typical saturation curves for natural and spoof images are shown in Fig. 8.
Saturation curves which bring out the trends linked to the rate at which the initial image samples (either natural or spoof) converge to a zero-contrast image in the life trail
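A sketch of the approximate swarm-convergence test of Eq. (10) follows (illustrative; \(\epsilon\), the converged fraction \(\gamma = 0.9\), and the synthetic wide/narrow-range images are assumptions). It returns the saturation point s and the saturation curve P(n).

```python
import numpy as np

def saturation_point(img, eps=1e-2, frac=0.9, max_iter=30):
    """Iterate the logistic map on an image swarm and return (s, curve), where s is the
    first iteration at which at least `frac` of the pixels satisfy the per-pixel
    convergence test |I_n - I_{n-1}| / I_n < eps, and curve[n-1] is that fraction P(n)."""
    prev = np.clip(img.astype(np.float64), 1e-6, 1.0 - 1e-6)
    curve = []
    for n in range(1, max_iter + 1):
        cur = 2.0 * prev * (1.0 - prev)
        frac_converged = float(np.mean(np.abs(cur - prev) / cur < eps))
        curve.append(frac_converged)
        if frac_converged >= frac:
            return n, curve
        prev = cur
    return max_iter, curve

# A wider dynamic range (natural-like) saturates later than a compressed range (print-like).
rng = np.random.default_rng(3)
print("natural-like :", saturation_point(rng.uniform(0.02, 0.98, (250, 250)))[0], "iterations")
print("print-like   :", saturation_point(rng.uniform(0.20, 0.60, (250, 250)))[0], "iterations")
```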
Figure 5 shows the contrast life trails of both natural and spoof images along with their termination/saturation points. The overall swarm converges only when almost all the pixels have converged, and the final image saturation time therefore depends, to a large extent, on the MAXIMUM over the saturation timings of the individual pixels. It is obvious that the more diverse the intensity profile, i.e., the greater the spread of intensity values, the slower the swarm convergence. Natural face images tend to exhibit a higher dynamic range with respect to intensity in comparison with their planar-print counterparts. The planar-print versions tend to be of lower quality, typically lower contrast [9] and limited color [17], as compared to the natural face images. Consequently, on a subject-specific note, these planar-print images tend to have a shorter overall swarm life trail as compared to natural images. This can be seen in Fig. 5.
In the CASIA data-set, it was observed that there were some cases where the print versions had a very high quality and good clarity. Such cases turn out to be anomalies when examined from a life trail perspective. An example of this is CASIA subject-11 shown in Fig. 5e, f, wherein the print quality almost matches the natural face quality.
Images with scale changes also tend to exhibit some form of anomalous behavior. Certain subjects tend to present their faces much closer to the camera than others. A scale increase of the face turns out to be tantamount to a contrast reduction, as the amount of detail in the image is reduced by this zoom-in effect.
The swarm activity trails can be captured in the form of a global image saturation level recorded at each iteration. These saturation graphs can be termed S-graphs, and they tend to reflect an inverse trend in some cases. Hence, under scale variations and printing-quality differences, spoof detection based on trail length alone may not prove to be fully effective. To attack this lack of universality with respect to the life-trail lengths or S-curve trends, the focus is shifted to self-shadows. These self-shadow-enhanced versions can be siphoned off and generated from the same image life trail when the original image swarm is passed through the logistic map.
Enhancing the self-shadows
One trend that is universal and remains independent of scale changes in natural images and printing-quality variations is the notion of perceptible self-shadows. These self-shadows are less prominent in spoof-print images, where they remain in a suppressed mode mainly owing to printing limitations and the superposition of secondary frontal lighting during the re-imaging process. In particular, in the case of planar printing, the same natural image, originally gathered through some unknown route, is printed and presented again to an unmanned camera unit with a view to overcoming the counter-spoofing system. Typically, such presentations are designed for low-end systems such as smartphones, which rely on their local mobile cameras for performing facial recognition to grant access to legitimate cell users. Since in the case of planar spoofing the attacker must ensure a full-face presentation with proper uniform illumination to gain access to a phone unit which belongs to another individual, a part of the originally trapped self-shadow information present in the printed photo tends to get suppressed by this secondary lighting. It is precisely this difference that this body of work picks out by extracting and enhancing the self-shadows.
This type of analysis is viable in indoor lighting and capture scenarios where invariably the sources are positioned towards one side of the individual's face creating in some cases a partial self-shadow. Given the original intensity normalized image \(I_0(x,y)\), when this is passed through the logistic map [26] (one iteration only), a contrast reduced image is obtained, \(I_1(x,y)\) such that,
$$\begin{aligned} I_1(x,y) = 2I_0(x,y)[1-I_0(x,y)] \end{aligned}$$
A differential image can be generated from the life trail in one of the following ways,
$$\begin{aligned} R_1(x,y) = |I_1(x,y) - I_0(x,y)| = | I_0(x,y)-2\left[ I_0(x,y)\right] ^{2} | \end{aligned}$$
$$\begin{aligned} R_2(x,y) = \left[ \frac{|I_1(x,y) - I_0(x,y)|}{I_0(x,y)}\right] = |1-2I_0(x,y)| \end{aligned}$$
$$\begin{aligned} R_3(x,y) = \left[ R_2(x,y)\right] ^{\alpha } \end{aligned}$$
where \(\alpha \ge 1\). Since all these ratios can be expressed exclusively as functions of the original intensity pattern \(I_0(x,y)\), each can be treated as an intensity transformation.
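The three differential statistics can be computed directly from one life-trail step, as in the sketch below (a small \(\epsilon\) guard against division by zero is an added assumption).

```python
import numpy as np

def self_shadow_statistics(i0, alpha=2.5, eps=1e-6):
    """Compute the differential images derived from one life-trail step:
    R1 = |I1 - I0|, R2 = |I1 - I0| / I0 = |1 - 2*I0|, R3 = R2 ** alpha."""
    i0 = np.clip(i0.astype(np.float64), eps, 1.0)    # avoid division by zero
    i1 = 2.0 * i0 * (1.0 - i0)                       # one logistic-map iteration
    r1 = np.abs(i1 - i0)
    r2 = r1 / i0                                     # equals |1 - 2*I0|
    r3 = r2 ** alpha                                 # enhanced self-shadow statistic
    return r1, r2, r3
```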
The TWIN-image [30] in Fig. 9 has been used to illustrate the impact of the exponent \(\alpha\) under two different illumination conditions: diffused lighting (right image), with virtually no self-shadows, and regular outdoor lighting (left image), with the facial image showing prominent self-shadows. The main objective was to illustrate that when the exponent \(\alpha\) is increased from 1 to a larger number, the visual separation between the two images (RIGHT vs LEFT), which share virtually the same pose, is best for some intermediate value of \(\alpha\). The right twin image represents a spoofed low-contrast image with virtually no self-shadows, while the left twin image mimics a natural image with prominent self-shadows, further enhanced by the introduction of the exponential parameter \(\alpha\).
TWIN-image where one version is taken under normal outdoor lighting and the other one under diffused sunlight
This exponentiation leads to an intensity transformation which makes the penumbral zones (zones with partial self-shadows) darker, while the parts with no penumbra are left relatively lighter. This is precisely why a power-law arrangement of the form \(y=x^2\) or \(y=x^{\alpha }\) with \(\alpha >1\) was deployed. Thus, the final enhanced image statistic was \(E_{\alpha }(x,y) = R_{n=1}(x,y)^{\alpha }\).
For most natural images, it was found that when this \(\alpha\) was increased beyond a certain point, even the non-penumbral zones were darkened. On the other hand, too small a value of \(\alpha\) did not have much of an impact on the original self-shadows. This process of arriving at the optimal \(\alpha\) can be done more reliably with an analytical twist using the same probability model discussed earlier.
Justification for first, first-order difference ratio
An analytical argument as to why the first, first-order difference provides maximum information related to the self-shadows is provided in this segment. Consider the normalized error term for the natural-image abstraction, \(G_{n}= (1-2X_{0})^{2n}\) for \(n \ge 2\) and \(G_1 = 1-2X_{0}\), where \(X_{0}\) has a uniform PDF over the interval [0, 1].
For \(n\ge 2\), the PDF of \(G_{n}\) can be derived using the classical random-variable transformation analysis [31] as,
$$\begin{aligned} f_{G_{n}}(g)= \frac{1}{2n}\left( g^{(\frac{1}{2n}-1)}\right) \end{aligned}$$
where \(g \in [0,1]\). The continuous/differential entropy ([32]) of \(G_{n}\) can be evaluated as,
$$\begin{aligned} H[G_{n}]= -E_{G_n}\left[ ln\left( f_{G_n}(G)\right) \right] \end{aligned}$$
where the expectation is with respect to \(G_{n}=G\).
$$\begin{aligned} H[G_{n}]= -\int _{g=0}^{1} f_{G_{n}}(g) ln[f_{G_{n}}(g)] dg \end{aligned}$$
It can be shown that this evaluates to,
$$\begin{aligned} H[G_{n}] = ln(2n)-2n+1 \end{aligned}$$
which is a decreasing function of n; the value obtained for \(n=2\) is \(H[G_{2}] = 2\times 0.693-3= -1.6137\). For \(n=1\), since the random variable \(G_1= 1-2X_{0}\) is uniform over the interval \([-1,1]\), the entropy \(H[G_{1}]= \ln (2) = 0.693\) is MAXIMUM and is greater than the entropies evaluated for \(n\ge 2\). The entropy therefore decays with n.
This implies that the self-shadow statistic provides maximal information when \(G_{1}\) (i.e., \(n=1\)) is used as the normalized ratio statistic. All differences of order larger than \(n=1\) provide less information than the first-difference ratio. Since the distribution of \(G_{1}\) is uniform, in a broad sense it can serve as a SUFFICIENT STATISTIC for trapping maximal self-shadow information.
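For completeness, the closed form quoted above follows directly from the density (a short derivation under the stated uniform model; it is not spelled out in the text):
$$\begin{aligned} H[G_{n}]&= -\int _{0}^{1} c\, g^{c-1}\left[ \ln c + (c-1)\ln g\right] dg, \qquad c=\tfrac{1}{2n}\\ &= -\ln c - (c-1)\int _{0}^{1} c\, g^{c-1}\ln g \, dg = -\ln c + \frac{c-1}{c}\\ &= \ln (2n) - 2n + 1, \end{aligned}$$
using \(\int _{0}^{1} c\, g^{c-1}\ln g \, dg = -1/c\). Setting \(n=1\), with \(G_1\) uniform on \([-1,1]\), instead gives \(H[G_1]=\ln 2\), which exceeds all of these values.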
Connection of the exponential parameter with the statistical model
As seen in the earlier section, the first-difference normalized ratio traps the self-shadow pattern with a certain degree of statistical sufficiency. Thus, it is enough to use this ratio statistic to derive the final feature vector for building a subject-specific 2-class SVM model. From the point of view of model building, there were two motives for choosing the additional exponent parameter rather than feeding on the ratio statistic alone:
While the conditional ratio statistics \(G_{1} = 1-2X_{0}\) and \(H_{1}= 1-2Y_{0}\), where \(X_{0}: UNIF[0,1]\) and \(Y_{0}: UNIF[0, \frac{1}{a}]\), carry sufficient information to trap the self-shadow profile, one factor of prime importance is the class separation between real and spoof. It may be possible to post-process these statistics in such a way that the self-shadow profiles associated with real and spoof images are pushed further apart. This has been attempted via an exponentiation procedure, as the exponentiation is likely to modify the dynamic ranges of both ratios.
Let \(R_{REAL} =[G_{1}]^\alpha\) and \(R_{SPOOF} =[H_{1}]^\alpha\) with \(\alpha >0\). Define \(\Delta H(\alpha ) = H[R_{REAL}]-H[R_{SPOOF}]\) as the difference between the information contained in the self-shadow profiles of the real and spoof versions, where \(H[R_{REAL}] =-E_{R}[\ln f_{REAL}(R)]\). The selection of \(\alpha\) must ensure that \(-\Delta H(\alpha )\) is as small as possible.
On the other hand, the absolute information contained in the self-shadow profile of the natural face image, i.e., \(H[R_{REAL}] = E_{REAL}(\alpha )\), should not be reduced significantly, as this would impede the detection procedure.
The selection of the exponent \(\alpha\) is based on judicious tradeoff between maximizing the self-shadow information present in natural faces while at same time increasing the class-separation between the self-shadow distributions of the real and spoof classes. These two requirements are slightly conflicting.
Thus, the exponential parameter must be chosen so that \(-\Delta H(\alpha )\) is lowered as much as possible without compromising the absolute entropy of the modified ratio statistic corresponding to the real face image, i.e., \(E_{REAL}(\alpha )\) must remain as large as possible.
It can be shown that,
$$\begin{aligned} \left| G_{1} \right| : UNIF[0,1] \end{aligned}$$
$$\begin{aligned} \left| H_{1} \right| : UNIF[0, \frac{1}{a}] \end{aligned}$$
with \(a>1\). Using the random variable transformation formulation from Papoulis et al. [31],
$$\begin{aligned} f_{R_{REAL}} (r) = (\frac{1}{\alpha }) r^{\frac{1}{\alpha }-1} \end{aligned}$$
where \(r\in [0,1]\) and
$$\begin{aligned} f_{R_{SPOOF}}(r) = \left( \frac{a}{\alpha }\right) r^{\frac{1}{\alpha }-1} \end{aligned}$$
where \(r \in [0, (\frac{1}{a})^{\alpha } ]\). Subsequently,
$$\begin{aligned} H[R_{REAL}] = ln[\alpha ]+1-\alpha \end{aligned}$$
$$\begin{aligned} H[R_{SPOOF}] = ln(\frac{\alpha }{a})+(1-\alpha )(1+ln(a)) \end{aligned}$$
for \(a>1\) and \(\alpha > 0\). This gives two important metrics, (i) connected with the difference between real and spoof self-shadow entropies,
$$\begin{aligned} -\Delta H(\alpha )= & {} H[R_{SPOOF}] - H[R_{REAL}]\end{aligned}$$
$$\begin{aligned}= & {} -\alpha \times ln(a) \end{aligned}$$
and (ii) absolute entropy of the natural face self-shadow statistic as,
$$\begin{aligned} E_{REAL}(\alpha ) = ln (\alpha )+1-\alpha \end{aligned}$$
When the dynamic-range parameter a is known, or is estimated from the real and spoof versions corresponding to a particular calibration set, the operating point is decided by the point of intersection of the two constraints for the measured \(\hat{a}\). This is illustrated in Fig. 11. For different values of a, different sets of constraints are obtained, out of which one has to be picked based on the computed \(\hat{a}\). Keeping in mind that the attacker will ensure a reasonable quality for the planar prints, one need not expect a to go above 2 units. A value of \(a=2\) would correspond to a 50% drop in the dynamic range of the print version in relation to the natural intensity profile (Fig. 10).
Impact of changes in the exponential parameter α on both the versions from the TWIN-image set [28]. As the exponent increases, the self-shadows become much more discernible for the version where the lighting is normal. Beyond a certain point the ratio images corresponding to both the normal version and the diffused version become dark
The selection of the operating point, as the point of intersection towards the right of the RED and BLUE curves, to maximize class separation (not the one on the left) is shown for different values of α
Operating point and initial calibration
The right choice of the exponent \(\alpha\), which strikes a balance between the quantum of self-shadow information obtained from the differential ratio statistic of natural faces and the entropy-based class separation, is decided by a calibration process. The family of curves (seen in Fig. 11) depends on the knowledge of the dynamic-range parameter \(\hat{a}\) connected with the print-spoof intensity profile. It is therefore imperative to have a procedure for estimating this parameter \(\hat{a}\), on both relativistic and approximate grounds, via measurements taken over the real and spoof image sets derived from calibration data. This calibration procedure for \(\alpha\) is designed as follows:
Take 5 subjects with a total of 75 samples from each of the real and spoof classes, from the dataset being scrutinized.
For a particular image sample in the real class, generate the global contrast score [9] (obtained from Eq. (4)):
$$\begin{aligned} CR_i = \sigma _i/\mu _i \end{aligned}$$
The mean contrast score for natural faces is,
$$\begin{aligned} CON_{REAL(E)} = \frac{1}{N_{CALIBREAL}}\sum _{i \in SET_{CALIBREAL}} CR_{i} \end{aligned}$$
where \(N_{CALIBREAL}\) is the number of real subject face samples and \(SET_{CALIBREAL}\) is the set of indices of the real images deployed for calibration.
Similarly, for the spoof/print segment from the calibration set,
$$\begin{aligned} CON_{SPOOF(E)} = \frac{1}{N_{CALIBSPOOF}}\sum _{i \in SET_{CALIBSPOOF}} CS_{i} \end{aligned}$$
where \(N_{CALIBSPOOF}\) is the number of spoof/print subject face samples and \(SET_{CALIBSPOOF}\) is the set of indices of the spoof images deployed for calibration.
To cross-reference this measurement profile against the analytical model and the curves shown in Fig. 11, the mean contrast score of the real-calibration set is referenced against the spoof set taking a ratio of the two:
$$\begin{aligned} \hat{a}_F = \frac{CON_{REAL(E)}}{CON_{SPOOF(E)}} \end{aligned}$$
Note that if this relativistic normalized dynamic-range parameter \(\hat{a}_F\) is close to UNITY or smaller than unity, then the counter-spoofing system based on contrast-reductionist life trails will not be very effective. However, because of the physical acquisition process, the spoof print version will always have a lower contrast than the corresponding original version. This induces a high likelihood of the EVENT \(\hat{a}_{F} > 1\) in the measurements taken over the calibration set. This also explains why this method may not work on backlit planar images produced by tablets and laptops.
Use the family of curves from Fig. 11 (or an elaborate lookup table) and pick out the optimal value of \(\alpha\) for that dataset based on the quantum value associated with \(\hat{a}_F \in \{1.1, 1.3, 1.5, 1.7, 1.9, 2.1\}\). For the CASIA dataset, with 5 subjects and 75 samples per class, the parameters estimated were \(CON_{REAL(E)} = 0.5889\), \(CON_{SPOOF(E)}= 0.4716\), and \(\hat{a}_{F} = 1.2487\), pointing to an operating point of \(\alpha _{CASIA} = 2.7\).
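A minimal sketch of this lookup is given below. It assumes that the operating point is read off as the right-hand intersection of \(E_{REAL}(\alpha )=\ln \alpha +1-\alpha\) and \(-\Delta H(\alpha )=-\alpha \ln a\), and that \(\hat{a}_F\) is first snapped to the nearest quantum; neither detail is stated verbatim in the text, so the numbers are illustrative.

```python
import numpy as np

def operating_alpha(a, alpha_grid=np.linspace(1.0, 25.0, 24001)):
    """Right-hand intersection of E_REAL(alpha) = ln(alpha) + 1 - alpha with
    -dH(alpha) = -alpha * ln(a): the largest grid alpha at which
    ln(alpha) + 1 - alpha * (1 - ln(a)) >= 0."""
    f = np.log(alpha_grid) + 1.0 - alpha_grid * (1.0 - np.log(a))
    return float(alpha_grid[np.where(f >= 0.0)[0][-1]])

# Calibration numbers quoted in the text for CASIA (illustrative re-computation).
quanta = np.array([1.1, 1.3, 1.5, 1.7, 1.9, 2.1])
a_hat = 0.5889 / 0.4716                           # CON_REAL(E) / CON_SPOOF(E)
a_q = quanta[np.argmin(np.abs(quanta - a_hat))]   # snap to the nearest quantum
print(f"a_hat = {a_hat:.4f}, quantum = {a_q}, alpha* ~ {operating_alpha(a_q):.2f}")
```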
Final feature extraction procedure and client-specific classification
Block diagrams of the feature extraction procedure, followed by the classification and testing, are shown in Figs. 1 and 2 respectively.
Secondary statistics
To derive the feature sets and statistics for every image \(I_0\), a size normalization was done and all images were resized to \(N\times N\) pixels, with \(N = 250\). The enhanced self-shadow image R(x, y) is constructed by passing the swarm \(SWARM(I_0)\) through the logistic map to produce the contrast-reduced image represented by \(SWARM(I_1)\) in the life trail. A secondary differential ratio image, as discussed earlier, is then generated:
$$\begin{aligned} E_{\alpha }(x,y) = R_3(x,y) = \left[ \frac{|I_1(x,y) - I_0(x,y)|}{I_0(x,y)}\right] ^{\hat{\alpha }} \end{aligned}$$
where \(\hat{\alpha }\) can be obtained via a calibration process discussed in the previous section. This self shadow enhanced image with parameter \(\hat{\alpha }\) is placed in a rectangular grid and intensity standard deviations are computed for every patch. The patch size was chosen as \(10\%\) of the image size for this initial simulation setup. The secondary statistics matrix can be written as,
$$\begin{aligned} S = \left( \begin{array}{cccc} \sigma _{1,1} &{} \sigma _{1,2} &{} ... &{} \sigma _{1,n} \\ \sigma _{2,1} &{} \sigma _{2,2} &{} ... &{} \sigma _{2,n} \\ ... &{} ... &{} ... &{} ... \\ \sigma _{n,1} &{} \sigma _{n,2} &{} ... &{} \sigma _{n,n} \\ \end{array}\right) \end{aligned}$$
with,
$$\begin{aligned} \sigma _{i,j} = \sqrt{\frac{1}{W^2}\sum \sum _{(x,y)\in PATCH(i,j)} (R_3(x,y)-\mu _{i,j})^2} \end{aligned}$$
$$\begin{aligned} \mu _{i,j} = \frac{1}{W^2}\sum \sum _{(x,y)\in PATCH(i,j)} R_3(x,y) \end{aligned}$$
The complete algorithm from the image to the final feature and scalar statistics (both normalized and un-normalized) is discussed below:
Complete algorithm: generating self-shadow statistics from images
Step 0: Image size normalization while preserving the aspect ratio
Resizing the original \(N_{1}\times N_{2}\) image to \(N\times N\), with \(N = 250\)
Step 1: Formation of swarm/collection of pixel intensity values over the entire image
$$\begin{aligned} DOMAIN_{0} = \left\{ (x,y) ~\text {s.t.}~ x\in \{1,2,\ldots ,250\},\ y\in \{1,2,\ldots ,250\} \right\} \end{aligned}$$
$$\begin{aligned} SWARM_{0} = \{ I_{0}(x,y):\ s.t.\ (x,y) \in DOMAIN_{0}\} \end{aligned}$$
where \(I_{0}(x,y)\in [0,1]\) is the normalized luminance intensity level in the facial image.
Step 2: Application of the non-linear mapping to the entire swarm individually. Evaluate this iteratively for the entire SWARM for \(n=1,n=2,\ldots ,n=n_{TYPICAL}\) where \(n_{TYPICAL} =30\).
$$\begin{aligned} \forall (x,y) \in DOMAIN_{0},~ I_{n}(x,y) = 2I_{n-1}(x,y)\left[ 1- I_{n-1}(x,y)\right] \end{aligned}$$
Based on observations across subjects picked from the CASIA dataset, typical convergence timing, in terms of number of iterations for natural images, is around 10 and for spoof images is around 8. To ensure complete convergence as far as the life trail is concerned, the maximum number of iterations has been set to \(n_{TYPICAL}>> MAX(N_{TYP-NAT}, N_{TYP-SPOOF})\).
Step 3: Self-shadow enhancement via first-order differences as one traverses the LIFE trail
Stop at the first iteration, \(I_{n=1}(x,y),\ (x,y) \in DOMAIN_{0}\), and define
$$\begin{aligned} R(x,y)= & {} \frac{\left( | I_{1}(x,y)- I_{0}(x,y) |\right) }{I_{0}(x,y)}\\ E_{\alpha }(x,y)= & {} R(x,y)^{\hat{\alpha }} \end{aligned}$$
Step 4: Computing the patch-wise intensity diversity statistic. Let \(\beta \in (0,1)\) be the fractional patch size with respect to the ratio image \(E_{\alpha } (x,y)=R(x,y)^{\alpha }\), which is of the same size as the original image, i.e., \(250\times 250\). Set \(\beta =\beta ^{*}\in (0,1)\) (with \(\beta \in \{2\%, 5\%, 10\%, 20\%\}\) of \(N = 250\)), based on the simulation experiments conducted and the tuning procedure related to a specific dataset. Let the patch size be \(W\times W\) with \(W= \lfloor \beta \times N\rfloor\), and let \((x_{p}, y_{p})\) be the top-left corner of a patch within the ratio image statistic \(E_{\alpha }(x,y)\).
$$\begin{aligned} DOMAIN_{Patch(p)} = \left\{ (x,y) ~\text {s.t.}~ x\in \{x_{p},\ldots ,x_{p}+W-1\},\ y \in \{y_{p},\ldots ,y_{p}+W-1\} \right\} \end{aligned}$$
\(\forall (x,y) \in DOMAIN_{Patch(p)}\), compute
$$\begin{aligned} \mu _{p}= & {} \frac{1}{W^2} \sum _{(x,y)\in DOMAIN_{Patch(p)}}E_{\alpha }(x,y)\\ \sigma _{p}= & {} \sqrt{\frac{1}{W^2} \sum _{(x,y)\in DOMAIN_{Patch(p)}} \left[ E_{\alpha }(x,y)-\mu _p\right] ^2} \end{aligned}$$
Step 5: Statistics for analysis. Two types of statistics were computed. \(TYPE-1\): pure variances from the ratio-image patches and their mean as the scalar statistic. This arrangement suffered from a statistical aperture effect with respect to a fractional increase of the patch size (i.e., an increase in \(\beta\)). Hence, a normalized version was developed as \(TYPE-2\). The latter, TYPE-2, was deployed in the final test, while TYPE-1 was used in the calibration segment with respect to the trimmed version of the CASIA dataset (14 subjects). The scalar feature parameter for a given image can be chosen as the mean diversity over the ratio-image patches,
$$\begin{aligned} STAT_{RAW}(I_{0}) = \frac{1}{N_{patches}} \sum _{\forall patches} \sigma _{p}~{\textbf {TYPE1}} \end{aligned}$$
$$\begin{aligned} LSTAT_{NORM}(I_{0}) = \frac{2}{N_{patches}} \sum _{\forall patches} \left[ | ln(\sigma _{p})| \right] ~{\textbf {TYPE2}} \end{aligned}$$
The vector feature is a simple raster scan of all the \(\sigma\) parameters.
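A compact end-to-end sketch of Steps 0–5 is given below (an illustrative re-implementation: resizing to \(N\times N\) is assumed to have been done upstream, patches are tiled without overlap, and a small \(\epsilon\) guards the logarithm; \(\hat{\alpha }\) and \(\beta\) come from the calibration described earlier).

```python
import numpy as np

def extract_features(img, alpha_hat=2.7, beta=0.1, eps=1e-6):
    """Steps 0-5 of the algorithm on an N x N image normalized to [0, 1].
    Returns (sigma_vector, STAT_RAW (TYPE-1), LSTAT_NORM (TYPE-2))."""
    n = img.shape[0]
    i0 = np.clip(img.astype(np.float64), eps, 1.0)
    i1 = 2.0 * i0 * (1.0 - i0)                       # one life-trail step (Steps 2-3)
    e_alpha = (np.abs(i1 - i0) / i0) ** alpha_hat    # enhanced self-shadow statistic
    w = int(beta * n)                                # patch size (Step 4)
    sigmas = []
    for x in range(0, n - w + 1, w):
        for y in range(0, n - w + 1, w):
            sigmas.append(e_alpha[x:x + w, y:y + w].std())
    sigmas = np.asarray(sigmas)                      # raster-scanned vector feature
    stat_raw = sigmas.mean()                                    # TYPE-1 scalar statistic
    lstat_norm = 2.0 * np.mean(np.abs(np.log(sigmas + eps)))    # TYPE-2 scalar statistic
    return sigmas, stat_raw, lstat_norm
```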
2-class SVM models for each client/subject
The original CASIA set [18] was deployed in the final testing round (50 subjects, \(3\times 30\) variations per subject at three different quality levels: low, medium, and high). From the original CASIA set, a reduced version was used as a calibration set for algorithm refinement and final feature selection, keeping difficult subjects and their variations in the backdrop. The final-round test databases chosen for unbiased evaluation were OULU-NPU [27] and CASIA-SURF [28].
The reduced CASIA set had 14 subjects with 30 variations per subject covering both natural and print-spoof images. Thus, there were a total of 420 natural images and 420 print-spoof images across the 14 subjects. Out of these 14 subjects, subjects 4, 6, and 11 were identified as anomalous and difficult ones (Fig. 12), keeping in mind various factors:
From the point of view of subject 4, there was a significant scale change/increase since the subject was closer to the camera than normal. This reduced the dynamic range in the intensity space leading to shorter life trails for natural faces as compared to the spoof ones (Fig. 12a, first and second images).
From the point of view of subject 6, there were cases where the light source was present in front but above the subject. This suppressed the self-shadow profile considerably for some natural images (Fig. 12a, third and fourth images).
In subject 11, the problem was very different and existed in the spoofing segment (Fig. 12b, fifth and sixth images), wherein the printing and re-imaging quality was very high and comparable to that of a natural face image.
Anomalous cases in CASIA which have a tendency to induce mis-classifications (Subjects 4, 6 and 11); (a) Some natural variations; (b) Spoof variations; Ordering is Subject 4, 6 and then 11
Thus, the life trail lengths turned out to be similar for natural and spoof faces for these anomalous cases.
To check the precision of the proposed algorithm, the CASIA set was segregated subject-wise (across both natural and spoof segments) and 50% of the variations per natural or print version were used to build a 2-class, subject-specific SVM model [17, 20]. The remaining 50% of the samples from both the natural and spoof segments were used for testing. The t-SNE maps [33] of the reduced CASIA test set on a subject-specific basis are shown in Fig. 13, with the corresponding error rates for the test samples shown alongside. The overall mean equal error rate (EER) across all subjects for this reduced calibration CASIA dataset is 0.48% for the ratio-mapping parameter \(\alpha = 2.5\). The error rates climb for values less than \(\alpha = 2.5\) and larger than \(\alpha = 3.5\). The client/subject-specific cluster separations have been generated using t-SNE mappings [33] (a stochastic map which presents a fairly realistic lower-dimensional representation of higher-dimensional data) in Fig. 13. In all the subject-specific subplots of the test data, Fig. 13a–n, the cluster separation was found to be excellent, attesting to and reinforcing CLAIMS 1 and 2.
Cluster separation (subject-wise) in a 2-class setting, for the reduced CASIA dataset comprising 14 subjects (out of a total of 50), in which 50% of the variations per subject were used for testing
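The subject-specific training/testing protocol can be sketched as follows (a scikit-learn based illustration: the feature matrices are assumed to hold the raster-scanned patch \(\sigma\) vectors of one subject, and the RBF-SVM hyperparameters are generic defaults rather than the authors' settings).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_and_test_subject(real_feats, spoof_feats, seed=0):
    """Build and evaluate a 2-class SVM for a single subject/client.
    real_feats / spoof_feats: arrays of shape (n_samples, n_features) holding
    the raster-scanned patch-sigma vectors for that subject."""
    X = np.vstack([real_feats, spoof_feats])
    y = np.hstack([np.ones(len(real_feats)), np.zeros(len(spoof_feats))])
    # 50% of the variations per class for training, 50% held out for testing.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=seed)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # accuracy on the held-out 50%
```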
Database description
In this section, a description of the three datasets, CASIA [18], OULU [27], and CASIA-SURF [28], is first provided. This is followed by the second phase of the calibration protocol, in which the parameter \(\beta ^*\) is decided based on a parameter sweep for the database-specific values of \(\alpha ^*\) obtained using the calibration protocol discussed earlier. Based on these optimized parameters, subject-specific model building, testing, and comparisons form the last few subsections.
A summary of the datasets used for final-round testing of the proposed life-trail algorithm is provided in Table 1. The original CASIA face dataset [18], shown in Fig. 14, which was created from Chinese individuals, shows significant variability on both the natural-face front and the planar-spoofing front. The variability for natural faces encompasses minor pose variations, significant light-source positional variations, scale changes, etc. The variability for print spoofing stems from color variations and minor scale variations, depending on the manner in which the printing was done. The CASIA print set has 50 subjects, with images captured at three acquisition resolutions (low, medium, and high) and 30 variations per subject per resolution level for both natural and print classes. The OULU-NPU dataset [27], shown in Fig. 15, on the other hand, contains spoof samples related to print-photo and video attacks, along with natural face samples. The face presentation attack sub-database consists of 4950 real-access and attack videos recorded using the front-facing cameras of six different smartphones over a varied price range. The print attack was created using two printers (printer 1 and printer 2) and two display devices (display 1 and display 2), out of which 20 subjects were publicly available. The enrolled users were mostly Europeans and people from the Middle East. Pose and scale changes were minimal here.
Examples (both natural and spoof-print versions) from the original CASIA dataset [18]
Examples (both natural and spoof-print versions) from the original OULU-NPU dataset [27]
The CASIA-SURF [28] shown in Fig. 16 is a wide dataset with real and spoof samples along with depth profiles. This dataset contained samples of 1000 Chinese individuals from 21000 videos across three modalities (RGB, Depth, IR). There were six scenarios under which the print-photo attacks were implemented:
Attack 1: Person holding his/her flat face photo with the eye-region cut
Attack 2: Person holding his/her curved face photo with eye-region cut
Attack 3: Person holding his/her flat face photo with eye and nose regions cut
Attack 4: Person holding his/her curved face photo with eye and nose regions cut
Attack 5: Person holding his/her flat face photo when eye, nose and mouth regions are cut
Attack 6: Person holding his/her curved face photo when eye, nose and mouth regions are cut
Examples (both natural and spoof-print versions) from the original CASIA-SURF dataset [28]
Final customized calibration and testing on different datasets
There are two parameters which are a function of the acquisition process and the environment in which the face images are generated: (i) the exponent \(\alpha\), associated with the first, normalized first-difference ratio statistic, which captures the self-shadow information with a certain degree of sufficiency; and (ii) the patch-size fraction \(\beta \in [0,1]\), which decides the dimensionality of the feature space.
In close-cropped images from datasets such as CASIA and CASIA-SURF, the face is virtually fully inscribed inside the "image rectangle" (taken as the referential 1:1 scenario). Here, the patch fraction \(\beta\) is expected to be around 10% to 20%. However, in datasets such as OULU, where the face is a small part of a bigger background (the ratio of face area to the whole rectangular area drops to about 1:4), the optimal patch fraction \(\beta\) is expected to decrease, keeping the volume of perceptual information connected with self-shadow details the same.
To shortlist the optimal parameter for each dataset, 5 subjects with a total of 75 samples from each class were chosen and used to generate the class-separation scores. To compensate for the statistical aperture effect stemming from the patch-size increase, a normalizing factor inversely proportional to the square root of the patch size was introduced (this is the TYPE-2 statistic in the scalar abstraction of the complete algorithm, Step 5).
If \(\sigma _p\) is the patch standard deviation, the quantum of self-shadow information present in it can be approximately represented as,
$$\begin{aligned} L_p = |ln\left( \epsilon +\sigma _p\right) | \end{aligned}$$
where \(\epsilon\) is a small positive number. The average self-shadow information for a given image can then be computed as,
$$\begin{aligned} LSTAT = \frac{1}{N_{patches}} \sum _{p=1}^{N_{patches}}L_p \end{aligned}$$
Let \(u_1, u_2,...,u_r\) be the LSTAT-scores computed from the natural face calibration set and let \(v_1, v_2,...v_r\) (\(r = 75\)) be the LSTAT-scores produced from the spoof-set. From these conditional LSTAT-scores, two conditional means and two conditional variances are computed:
$$\begin{aligned} \mu _{REAL}= & {} \frac{1}{r} \sum _{k=1}^{r} u_k\nonumber \\ \mu _{SPOOF}= & {} \frac{1}{r} \sum _{k=1}^{r} v_k\nonumber \\ \sigma _{REAL}^2= & {} \frac{1}{r} \sum _{k=1}^{r} (u_k-\mu _{REAL})^2\nonumber \\ \sigma _{SPOOF}^2= & {} \frac{1}{r} \sum _{k=1}^{r} (v_k-\mu _{SPOOF})^2 \end{aligned}$$
The separation between the two clusters, as a function of the parameter \(\beta\) for a particular calibrated \(\alpha ^{*}\), can be determined based on the symmetric version of the Kullback-Leibler (KL) divergence [34], under a conditional Gaussian assumption for the two classes, real and spoof. This metric, based on the KL divergence between two univariate Gaussian distributions, can be computed as:
$$\begin{aligned} SEPARATION_{KLD} = \left( \frac{\sigma _{REAL}^2}{\sigma _{SPOOF}^2} + \frac{\sigma _{SPOOF}^2}{\sigma _{REAL}^2}\right) + \left( \mu _{REAL} - \mu _{SPOOF}\right) ^{2}\left( \frac{1}{\sigma _{REAL}^2} + \frac{1}{\sigma _{SPOOF}^2}\right) \end{aligned}$$
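A direct transcription of this separation score (with the conditional means and variances of the LSTAT scores computed as above) is sketched below; the containers holding the per-\(\beta\) calibration scores in the usage comment are placeholders.

```python
import numpy as np

def separation_kld(lstat_real, lstat_spoof):
    """Symmetric KL-based separation between the conditional LSTAT distributions,
    under a univariate Gaussian assumption for the real and spoof classes."""
    mu_r, mu_s = np.mean(lstat_real), np.mean(lstat_spoof)
    var_r, var_s = np.var(lstat_real), np.var(lstat_spoof)
    return (var_r / var_s + var_s / var_r) \
        + (mu_r - mu_s) ** 2 * (1.0 / var_r + 1.0 / var_s)

# beta* for a dataset can then be chosen as the candidate value maximizing this score,
# e.g. (placeholder dicts keyed by beta, filled from the calibration runs):
# best_beta = max(betas, key=lambda b: separation_kld(real_scores[b], spoof_scores[b]))
```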
The impact of a \(\beta\) parameter sweep for specific values of \(\alpha\) (obtained via the initial exponential-parameter calibration procedure) is shown in Table 2. For a specific database, when \(\beta\) is varied for a fixed \(\alpha\), the separation scores show a clear maximum at some \(\beta = \beta ^*\). It was observed that for the CASIA-SURF dataset, where the dynamic ranges of the natural and spoof/print faces were close, the optimum was \(\beta _{CASIA-SURF} = 0.15\), corresponding to \(\alpha _{CASIA-SURF} = 1.7\). On the other hand, for the standard CASIA dataset, such fine-grained scrutiny of the self-shadow image was not required, and the optimum was \(\beta _{CASIA} = 0.25\) for \(\alpha _{CASIA} = 2.7\). For OULU, since the face information was a small part of a larger background, it was natural to expect the optimal \(\beta ^*\) to drop, to \(\beta _{OULU} = 0.1\) for \(\alpha _{OULU} = 3.4\). The final parameters from the two-stage calibration procedure are captured in Table 3.
Table 2 Separation scores for all three datasets: CASIA, OULU, and CASIA-SURF
Table 3 Database and optimal parameter values for various databases, based on the tuning procedure
Testing: Experimental results and comparison with literature
There are two primary paradigms designed to suit two different types of applications: (i) the subject identity is not known a priori, i.e., a face is presented to the camera and the counter-spoofing system must decide whether the face presentation is natural [4, 15, 18, 23, 35], and (ii) the subject identity is known to the counter-spoofing system (more like an authentication environment) [17, 20].
The proposed image-trail architecture was evaluated in a client-specific frame (i.e., Type (ii), subject ID known). Since client-specific architectures effectively suppress subject-mixing or registration noise, the error scores are much lower here (Table 4) than the subject-independent error scores (Table 5). The best among the subject-independent methods is the random walk/scan-based algorithm [4, 5], which uses short-stepped random walks not just to trap the short-term spatial correlation statistics but also to generate several equivalent randomly scanned realizations of the same parent face image, transforming an image feature into a blob (or an ensemble) that can be used highly reliably to capture the natural immersive environment in a truly subject-agnostic fashion. Error rates for the print-presentation attack (CASIA) for the random scan algorithm were reported as 3.5122% (without auto-population) and 1.8920% (with auto-population). This became one of the benchmark error measures against which the proposed life-trail-based approach in a client-specific setting needed to be compared.
Table 4 State of the art methods within a client specific frame.\(C_{sp}\): represents the subject-specific or client specific mode of training and testing; in\(Protocol-I\)below used on the OULU set, the same\(C_{sp}\)mode has been deployed
Table 5 State of the art methods which assume a client/subject independent frame and the corresponding error rates
For the complete CASIA print dataset (50 subjects, \(3\times 30\) variations per subject over three quality levels), the proposed life-trail algorithm showed a comparable error rate of 0.310% (Table 4). With respect to state-of-the-art client-specific face counter-spoofing architectures, the proposed life-trail algorithm performed better than most on the planar-printing front.
The error rate of the proposed algorithm observed for the OULU-NPU dataset [27] was 1.192%, and that for CASIA-SURF [28] was 2.246%. These numbers are comparable with the convolutional neural network (CNN)-based solutions shown in Table 4. Notice that in the case of CASIA-SURF, the CNN-based solutions available in [43] augment the RGB information with depth-map information to support the learning process; with pure RGB information, their error numbers would be higher.
Random scan extension to facilitate cross-validation
Random scans [4, 5] were developed to capture acquisition-noise statistics while suppressing both content and subject-content interference. Contiguous random scans in the form of space-filling curves (SFCs) [44] were originally designed for communication applications to facilitate compression of videos after shuffling. These contiguous random scans, when deployed towards face counter-spoofing, have a few interesting properties:
The scans preserve the first, second and third order pixel-intensity correlation statistics in a particular image.
By executing the same scan multiple times on the same image or patch, one can auto-populate the features or statistics derived from a typical scan, at an ensemble level. An illustration of a short contiguous scan is given in Fig. 17.
CONTIGUOUS RANDOM WALK: Destination pixel marked in RED and last-mile entry is from the bottom pixel (i.e. pixel located below the final destination pixel)
Secondary differential statistics can be computed over the scanned vectors of the first, second, and third order to trap the mean acquisition-noise energy over the entire image. Thus, every image can be abstracted as a 3-dimensional feature vector, which may contain crucial information regarding a certain phenomenon such as BLUR-diversity (due to a PINHOLE LENS effect [8]) or self-shadow prominence (in this paper).
The features and statistics are content and subject agnostic.
One has to note that these contiguous random walks tend to diverge considerably beyond a certain number of steps. Viewed conversely, given a walk length of d units, one can construct a graph from the destination pixel back to one of its myriad origins d walk units (foot-steps) away. This has been illustrated in Fig. 17, where the final destination is flagged by a red circle and the length of the walk has been chosen as \(d=3\) units. The original source pixels from which a 3-unit walk could have originated, and the distinct paths traversed, are shown in Fig. 17 for the case where the final-mile entry is from the bottom. The entry can similarly be from the left, the right or above. Thus, the number of distinct paths is \(N_{paths}= 9\times 4 = 36\) for a walk length of \(d = 3\) units. Some exemplar generated walk patterns are shown in Fig. 18. Since the random scan can be fed with any target image or image-like statistic, for this application, which is concerned with life trails and self-shadows, it is fed with the following first, first-difference ratio image,
$$\begin{aligned} G_1(x,y) = |X_1(x,y) - X_0(x,y)|/X_0(x,y) \end{aligned}$$
$$\begin{aligned} X_1(x,y) = 2X_0(x,y)[1-X_0(x,y)] \end{aligned}$$
with \(X_0(x,y) = IM_{norm}(x,y)\in (0,1)\) representing the original normalized real and natural intensity image. An enhanced version of this is created by raising \(G_1(x,y)\) to the power \(\alpha = 2.5\) (fixed) to ensure that the self-shadows in the natural face image are brought out much more clearly. This enhanced version is given by,
$$\begin{aligned} G_{1E}(x,y) = G_1(x,y)^{\alpha } \end{aligned}$$
This enhanced first, first-difference ratio \(G_{1E}\) is then fed to the random scan algorithm. Let the scanned self-shadow intensity vector of length L walk units for a particular instance \(k\in \{1,2,\ldots,N_S\}\) be:
$$\begin{aligned} \bar{S}(G_{1E},k) = [s_{k,1}, s_{k,2},...,s_{k,L}] \end{aligned}$$
where \(N_S\) is the size of the ensemble of scans or number of differently scanned vectors from the same self-shadow image/statistic.
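To make this feature-extraction step concrete, the following minimal Python sketch (not the authors' code) computes \(G_1\) and \(G_{1E}\) and builds an ensemble of \(N_S\) scanned vectors. The walk generator here is a plain 4-connected random walk clipped at the image border; the actual contiguous scan construction of [4, 5] may differ, so all function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def enhanced_ratio_statistic(x0, alpha=2.5):
    """First, first-difference ratio G1 = |X1 - X0| / X0 with X1 = 2*X0*(1 - X0),
    enhanced by exponentiation with alpha (alpha = 2.5 is the fixed value quoted in the text)."""
    x0 = np.clip(x0, 1e-6, 1.0)          # guard against division by zero
    x1 = 2.0 * x0 * (1.0 - x0)           # one step of the logistic map
    g1 = np.abs(x1 - x0) / x0
    return g1 ** alpha                   # G_1E

def contiguous_random_scan(img, length, rng):
    """One contiguous random walk of 'length' samples starting at the image centre;
    4-connected moves, clipped at the border (an illustrative simplification)."""
    h, w = img.shape
    r, c = h // 2, w // 2
    samples = [img[r, c]]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(length - 1):
        dr, dc = moves[rng.integers(4)]
        r = int(np.clip(r + dr, 0, h - 1))
        c = int(np.clip(c + dc, 0, w - 1))
        samples.append(img[r, c])
    return np.asarray(samples)

rng = np.random.default_rng(0)
x0 = rng.uniform(0.05, 0.95, size=(21, 21))       # stand-in for IM_norm
g1e = enhanced_ratio_statistic(x0)
ensemble = [contiguous_random_scan(g1e, length=21 * 21, rng=rng) for _ in range(15)]
```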
Fig. 18 Some exemplar random walk patterns leading to the central pixel. Walk length is d = 7 units and the size of the ensemble (or extent of auto-population) was 15 scans
Need for a secondary Fourier descriptor
Note that the motive in this segment is to detect the presence of the self-shadow, irrespective of its size, shape and position. The size, shape, prominence and location of the self-shadow are a function of the interplay between the light-source orientation and the face-surface topography being photographed by a frontal camera. Most poses are assumed to be full-frontal, but mild scale changes and pose variations are allowed and expected. Thus, under natural lighting conditions there are clear bright and dark zones; the only issue is that the fraction of the zone that is dark and constitutes the self-shadow remains uncertain. When an image of a planar print is analyzed, in relation to the diffused-lighting analogy of the TWIN-image of Fig. 9, the difference between the two cases lies in the presence of darker zones for natural images versus suppressed umbral-penumbral zones for planar print images. Thus, a spectral analysis, eventually leading to the computation of parameters such as the spread of power over the discrete frequency space, should be able to segregate natural spectra derived from self-shadow images from print spectra.
In this section, it is claimed that the bandwidth of the first, first-difference ratio statistic \(G_1\) carries enough discriminatory information to distinguish natural face images from their print-spoofed versions via a self-shadow spectral analysis. Furthermore, it is also claimed that a contiguous random walk starting from the center of the image preserves the correlation statistics, and hence some of the 2D spectral parameters, of the self-shadow image-statistic. Thus, by executing the random walk on the self-shadow image statistic \(G_1\) and then analyzing its magnitude spectrum, one can construct a robust Fourier descriptor for the natural face image.
The 1D-discrete Fourier transform (DFT) of the scanned vector \(\bar{S}(G_{1E},k)\), corresponding to instance or walk realization k is given by,
$$\begin{aligned} FS_{G_{1E},k}(r) = \sum _{n=0}^{L-1} s_{k,n} W_L^{nr} \end{aligned}$$
where \(W_L\) is the twiddle factor, given by \(W_L = e^{-j2\pi /L}\) with \(j = \sqrt{-1}\). The magnitude spectrum is given by,
$$\begin{aligned} MFS_{G_{1E},k}(r) = \sqrt{FS_{G_{1E},k}(r)\times FS_{G_{1E},k}^{*}(r)} \end{aligned}$$
Assuming L to be an integer multiple of four, the following band-related, spectral cumulative statistics are computed:
$$\begin{aligned} A(1) &= \frac{4}{L} \sum _{r=0}^{L/4 - 1} MFS_{G_{1E},k}(r)\\ A(2) &= \frac{4}{L} \sum _{r=L/4}^{L/2 - 1} MFS_{G_{1E},k}(r)\\ A(3) &= \frac{4}{L} \sum _{r=L/2}^{3L/4 - 1} MFS_{G_{1E},k}(r)\\ A(4) &= \frac{4}{L} \sum _{r=3L/4}^{L-1} MFS_{G_{1E},k}(r) \end{aligned}$$
To ensure robustness to self-shadow variations (mainly in shape and size), another set of spectral statistics is derived from the above set:
$$\begin{aligned} B(1) &= A(1) + A(2)\\ B(2) &= A(2) + A(3)\\ B(3) &= A(3) + A(4)\\ B(4) &= A(2) + A(3) + A(4) \end{aligned}$$
The final feature or descriptor, which can now be deployed in the cross-validation experiment, is:
$$\begin{aligned} FD_{CRV,k} = [A(1), A(2), A(3), A(4), B(1), B(2), B(3), B(4)] \end{aligned}$$
for scan-instance k.
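A minimal sketch of the descriptor computation is given below, assuming the scanned vectors produced by the earlier snippet; it uses numpy.fft.fft for the DFT, and the helper name and normalization details are assumptions rather than the authors' implementation.

```python
import numpy as np

def fourier_descriptor(scanned):
    """Band-averaged magnitude-spectrum descriptor FD = [A(1..4), B(1..4)].
    Assumes len(scanned) is a multiple of 4, as in the text."""
    L = len(scanned)
    mfs = np.abs(np.fft.fft(scanned))        # |FS(r)| = sqrt(FS(r) * conj(FS(r)))
    q = L // 4
    A = [4.0 / L * mfs[i * q:(i + 1) * q].sum() for i in range(4)]   # band averages A(1)..A(4)
    B = [A[0] + A[1], A[1] + A[2], A[2] + A[3], A[1] + A[2] + A[3]]  # pooled bands B(1)..B(4)
    return np.array(A + B)

# Example: one descriptor per scan instance k in the ensemble
# descriptors = np.stack([fourier_descriptor(s) for s in ensemble])
```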
Calibration with CASIA dataset
Sixty images across 10 subjects from the CASIA set (real and spoof) were used for testing the spectral frame and for calibrating the parameters. All images were resized to \(100 \times 100\), and then the self-shadow image-statistic was computed. The exponent \(\alpha\) was fixed at 2.5, and the enhanced self-shadow image-statistic fed to the random walk process was:
$$\begin{aligned} G_{1E}(\alpha ) = G_1^{\alpha } \end{aligned}$$
The final feature descriptors were produced for each image. Fig. 19 shows the t-SNE plot of the real descriptors versus the print descriptors. A good separation with a small overlap can be seen in Fig. 19.
Fig. 19 Clusters from 60 natural spectral descriptors and 60 print spectral descriptors. \(N_S = 1\) (only one random scan was generated per image statistic) and the scan parameters were: image-statistic or patch size \(W \times W\), W = 21, and walk length (complete) covering the full image-statistic, \(L = W^2\)
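A plot of this kind can be reproduced with scikit-learn's t-SNE implementation; the snippet below is a hedged sketch in which the descriptor arrays are only stand-ins for the real and print descriptors computed with the earlier functions.

```python
import numpy as np
from sklearn.manifold import TSNE

# real_fd, print_fd: (60, 8) arrays of Fourier descriptors; random placeholders here,
# for illustration only.
rng = np.random.default_rng(0)
real_fd = rng.random((60, 8))
print_fd = rng.random((60, 8))

X = np.vstack([real_fd, print_fd])
labels = np.array([0] * len(real_fd) + [1] * len(print_fd))   # 0 = real, 1 = print

# 2D embedding of the 8-dimensional descriptors, as in Fig. 19
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
# emb[labels == 0] and emb[labels == 1] can then be scatter-plotted to inspect the separation.
```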
Cross-validation with OULU dataset
To check whether the model developed using subjects and images from the CASIA set can be applied to other datasets, one can take one of two pathways:
Single-sided training: Characterize the natural face class alone [9, 11], via the self-shadow feature and its secondary statistics, using the random scans and the Fourier descriptor. The training model is confined to the CASIA dataset and is built across subjects (this subject-agnostic training is helped in part by the contiguous random scan, which does not require feature registration [4]). Once the 1-class SVM [11] model is built, it is then tested on another dataset, OULU [27]. The complete image set from OULU, natural and print versions, is used for testing. No part of OULU is used in model building.
Two-class training: Here, the training model is built with natural and spoof samples from CASIA. Testing is done in the same way as described earlier over OULU.
The parameters for the 1-class and 2-class model building were as follows:
1-class model: 14 subjects from a reduced CASIA dataset, with 15 variations per subject for the natural face class alone, were used to form the 1-SVM model with the final random scan-induced Fourier descriptors discussed earlier in this section. All images were resized to \(100 \times 100\); the ratio statistic \(G_1\) was computed first and then raised to the exponent \(\alpha = 2.5\) (fixed). These enhanced self-shadow statistics \(G_1^{\alpha }\) were then resized to \(21 \times 21\) and subjected to a random scan followed by a Fourier analysis to generate the final descriptors (a training sketch is given after this list).
2-class model: The only difference here is that 14 subjects with 15 variations across subjects from BOTH CASIA classes were used to form the 2-class SVM model. All other parameters remained the same.
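A rough sketch of the two training pathways, using scikit-learn's OneClassSVM and SVC as stand-ins for the models described above, is given below; the kernels and hyperparameters are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

# Descriptor arrays assumed precomputed with the earlier snippets:
# casia_real, casia_spoof, oulu_real, oulu_spoof each of shape (n_samples, 8).

def one_class_pathway(casia_real, oulu_real, oulu_spoof):
    """Single-sided training: model the natural class only (CASIA), test on OULU."""
    model = OneClassSVM(kernel="rbf", nu=0.1).fit(casia_real)
    X_test = np.vstack([oulu_real, oulu_spoof])
    y_true = np.array([1] * len(oulu_real) + [-1] * len(oulu_spoof))  # +1 inlier, -1 outlier
    return np.mean(model.predict(X_test) != y_true)                   # overall error rate

def two_class_pathway(casia_real, casia_spoof, oulu_real, oulu_spoof):
    """Two-class training on CASIA real + spoof descriptors, tested on OULU."""
    X = np.vstack([casia_real, casia_spoof])
    y = np.array([1] * len(casia_real) + [0] * len(casia_spoof))
    clf = SVC(kernel="rbf").fit(X, y)
    X_test = np.vstack([oulu_real, oulu_spoof])
    y_true = np.array([1] * len(oulu_real) + [0] * len(oulu_spoof))
    return np.mean(clf.predict(X_test) != y_true)
```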
Table 6 shows the error rates on the OULU set when the proposed random scan based, Fourier-descriptor, self-shadow model (learnt on the CASIA dataset) was applied. The error rates for both the one- and two-class SVMs were on the lower side (5.86% and 2.34%, respectively) and comparable with the results obtained when customized calibration was done for the OULU dataset (1.19%). This modified approach de-links the training dataset from the testing dataset and makes the solution more general (Table 7).
Table 6 Error rates when features are trained on CASIA and tested on OULU exclusively
Table 7 EEE for optimal values of \(\alpha ^{*}\) and \(\beta ^{*}\) for
In this paper, a novel contrast-reductionist life trail based image sequence is generated using a non-linear logistic map, in such a way that successive images down the pipeline have a progressively lower contrast than the previous iterations. Eventually, the sequence converges to a zero-contrast image. A simple statistical model was used not just to prove this convergence but also to establish that the first, first-difference ratio statistic from the life trail carries sufficient, and indeed maximum, information pertaining to self-shadows. This corroborated the observations from the TWIN-image life trail analysis. The model also provided an insight into the selection of the optimal parameter \(\alpha ^*\) based on an intersection between two constraints: (i) the absolute self-shadow entropy from the natural face ratio-statistic after exponentiation, and (ii) the class separation parameter \(-\Delta {H}(\alpha )\), leading to the crystallization of the operating point \(\alpha ^*\), provided the dynamic range parameter \(\hat{a}_F = \hat{a}\) can be extracted via measurement (Fig. 19).
For each dataset being tested, a small fraction of samples (from both classes) was set aside for calibration, which was done in two phases in a subject-agnostic fashion: (i) estimation of \(\alpha ^*\) based on measurements and the two constraints, and (ii) varying the patch fraction \(\beta\) to trap the localized entropy score related to the self-shadow statistic and checking the separation between the real and spoof conditional distributions. The \(\beta\) corresponding to the highest separation value was chosen as the optimal \(\beta ^*\).
When tested on three datasets, error rates for the proposed algorithm when applied to CASIA (the calibration database) and OULU-NPU and CASIA-SURF were found to be 0.3106% (\(\alpha ^* = 2.7, \beta ^* = 0.25\)), 1.1928% (\(\alpha ^* = 3.4, \beta ^* = 0.1\)), and 2.2462% (\(\alpha ^* = 1.7, \beta ^* = 0.15\)) respectively for planar-print-type spoofing operations.
To impart a certain degree of flexibility in the solution and to avoid repeated calibration and tuning each time the acquisition and illumination environment changes, a model was built on the basic enhanced self-shadow statistic using random scans, to make the information gathering subject agnostic. The basic idea was to focus only on detecting the presence of self-shadows and not on profiling the shape, position, and prominence of the self-shadow present. The moment the focus shifted from profiling to detection, this called for a Fourier analysis, particularly because self-shadow statistics from natural images tend to have dominant higher frequencies and exhibit higher bandwidths than their print counterparts. Based on this random scan-induced Fourier descriptor, the proposed model, which was trained on the CASIA set alone, was found to be very effective when cross-ported to OULU.
The proposed algorithm and pipeline has other distinct advantages:
Since the main computation involves a swarm of parallel pixel-wise intensity manipulations using the logistic map, the model building is very simple and fast. Interestingly, the computation is so simple that not even a basic image filtering operation is performed. In a way, this trail building process demands a certain purity in the acquired image. While resizing introduces some quantum of interpolation noise, self-shadow profiles are not compromised. Thus, owing to its simplicity, it can be used as a quick check in most counter-spoofing applications.
A high accuracy was obtained with the proposed frame, both with calibration and customization and also while cross-porting (with a random scan inclusion and Fourier-descriptor subject-agnostic twist) to other datasets and environments such as OULU-NPU.
We note, however, that while the proposed solutions (including cross-validation) are precise enough to detect print-planar spoofing, they may not be effective against digital planar image presentation attacks based on tablets and laptops. This is because the back-lighting tends to enhance the self-shadow profiles present even in digitally spoofed segments.
CASIA [18], MSU-MFSD database [23].
SVM: Support Vector Machine
CNN: Convolutional Neural Network
L.A.M. Pereira, A. Pinto, F.A. Andaló, A.M. Ferreira, B. Lavi, A. Soriano-Vargas, M.V.M. Cirne, A. Rocha, The rise of data-driven models in presentation attack detection, in Deep Biometrics. Unsupervised and Semi-Supervised Learning. ed. by R. Jiang, C.-T. Li, D. Crookes, W. Meng, C. Rosenberger (Springer, Cham, 2020), pp.289–311. https://doi.org/10.1007/978-3-030-32583-1_13
K. Patel, H. Han, A.K. Jain, Secure face unlock: spoof detection on smartphones. IEEE Trans. Inf. Forensic Secur. 11(10), 2268–2283 (2016)
N. Erdogmus, S. Marcel, Spoofing in 2D face recognition with 3D masks and anti-spoofing with Kinect (IEEE BTAS, USA, 2013)
B.R. Katika, K. Karthik, Face anti-spoofing by identity masking using random walk patterns and outlier detection. Patt. Anal. Appl. 1–20 (2020)
K.. Karthik, B.R. Katika, Identity independent face anti-spoofing based on random scan patterns, in 2019 8th PREMI International Conference on Pattern Recognition and Machine Intelligence (PREMI). (Springer, India, 2019)
S. Kim, S. Yu, K. Kim, Y. Ban, S. Lee, in Biometrics (ICB), 2013 International Conference On. Face liveness detection using variable focusing. (IEEE, 2013). pp. 1–6
Y. Kim, S.Y.J.L. Jaekenun, in Conference of the Optical Society of America, vol. 26. Masked fake face detection using radiance measurements. (2009) pp. 1054–1060
K. Karthik, B.R. Katika, in Industrial and Information Systems (ICIIS), 2017 IEEE International Conference On. Face anti-spoofing based on sharpness profiles (IEEE, 2017) pp. 1–6
K. Karthik, B.R. Katika, in Communication Systems, Computing and IT Applications (CSCITA), 2017 2nd International Conference On. Image quality assessment based outlier detection for face anti-spoofing. (IEEE, 2017), pp. 72–77
B.R. Katika, K. Karthik, Face anti-spoofing based on specular feature projections, in Proceedings of 3rd International Conference on Computer Vision and Image Processing. ed. by M. Chaudhuri, M. Nakagawa, P. Khanna, S. Kumar (Springer, Singapore, 2020), pp.145–155
S.R. Arashloo, J. Kittler, W. Christmas, An anomaly detection approach to face spoofing detection: a new formulation and evaluation protocol. IEEE Access 5, 13868–13882 (2017)
T. Edmunds, A. Caplier, Face spoofing detection based on colour distortions. IET Biom. 7(1), 27–38 (2017)
X. Gao, T.-T. Ng, B. Qiu, S.-F. Chang, in 2010 IEEE International Conference on Multimedia and Expo. Single-view recaptured image detection based on physics-based features. (2010). pp. 1469–1474. https://doi.org/10.1109/ICME.2010.5583280
E.J. Candès, X. Li, Y. Ma, J. Wright, Robust principal component analysis? CoRR abs/0912.3599 (2009)
S. Kim, Y. Ban, S. Lee, Face liveness detection using defocus. Sensors. 15(1), 1537–1563 (2015). https://doi.org/10.3390/s150101537
X. Zhang, X. Hu, M. Ma, C. Chen, S. Peng, in 2016 23rd International Conference on Pattern Recognition (ICPR). Face spoofing detection based on 3d lighting environment analysis of image pair. (2016). pp. 2995–3000. https://doi.org/10.1109/ICPR.2016.7900093
I. Chingovska, A.R. Dos Anjos, S. Marcel, Biometrics evaluation under spoofing attacks. IEEE Trans. Inf. Forensics Secur. 9(12), 2264–2276 (2014)
Z. Zhang, J. Yan, S. Liu, Z. Lei, D. Yi, S.Z. Li, in Biometrics (ICB), 2012 5th IAPR International Conference On. A face antispoofing database with diverse attacks. (IEEE, 2012). pp. 26–31
J. Määttä, A. Hadid, M. Pietikäinen, Face spoofing detection from single images using texture and local shape analysis. IET Biom. 1(1), 3–10 (2012)
J. Yang, Z. Lei, D. Yi, S.Z. Li, Person-specific face antispoofing with subject domain adaptation. IEEE Trans. Inf. Forensics Secur. 10(4), 797–809 (2015)
J.M. Saragih, S. Lucey, J.F. Cohn, Deformable model fitting by regularized landmark mean-shift. Int. J. Comput. Vis. 91(2), 200–215 (2011)
T. Wang, J. Yang, Z. Lei, S. Liao, S.Z. Li, in 2013 International Conference on Biometrics (ICB). Face liveness detection using 3D structure recovered from a single camera. (IEEE, 2013). pp. 1–6
D. Wen, H. Han, A.K. Jain, Face spoof detection with image distortion analysis. IEEE Trans. Inf. Forensics Secur. 10(4), 746–761 (2015)
J. Galbally, S. Marcel, in Pattern Recognition (ICPR), 2014 22nd International Conference On. Face anti-spoofing based on general image quality assessment. (IEEE, 2014), pp. 1173–1178
J. Galbally, S. Marcel, J. Fierrez, Image quality assessment for fake biometric detection: application to iris, fingerprint, and face recognition. IEEE Trans. Image Process. 23(2), 710–724 (2014)
E.W. Weisstein, Logistic map
Z. Boulkenafet, J. Komulainen, L. Li, X. Feng, A. Hadid, in 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). Oulu-npu: a mobile face presentation attack database with real-world variations. (IEEE, 2017), pp. 612–618
S. Zhang, X. Wang, A. Liu, C. Zhao, J. Wan, S. Escalera, H. Shi, Z. Wang, S.Z. Li, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. A dataset and benchmark for large-scale multi-modal face anti-spoofing. (2019), pp. 919–928
Z. Zhang, D. Yi, Z. Lei, S.Z. Li, in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference On. Face liveness detection by learning multispectral reflectance distributions. (IEEE, 2011), pp. 436–441
C. Bell, Bright sunlight (2015)
A. Papoulis, Probability, random variables, and stochastic processes. (McGraw-Hill, 1991)
T.M. Cover, J.A. Thomas, Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing) (Wiley-Interscience, USA, 2006)
L. Van der Maaten, G. Hinton, Visualizing data using t-sne. J. Mach. Learn. Res. 9(11), (2008)
Kullback-Leibler divergence
J. Galbally, S. Marcel, J. Fierrez, Biometric antispoofing methods: a survey in face recognition. IEEE Access. 2, 1530–1552 (2014)
I. Chingovska, A.R. Dos Anjos, On the use of client identity information for face antispoofing. IEEE Trans. Inf. Forensics Secur. 10(4), 787–796 (2015)
Y. Sun, H. Xiong, S.M. Yiu, Understanding deep face anti-spoofing: from the perspective of data. Vis. Comput. 37, 1015–1028 (2021)
Z. Boulkenafet, J. Komulainen, Z. Akhtar, A. Benlamoudi, D. Samai, S.E. Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, L. Qin, et al. in 2017 IEEE International Joint Conference on Biometrics (IJCB). A competition on generalized software-based face presentation attack detection in mobile scenarios. (IEEE, 2017), pp. 688–696
Z. Boulkenafet, J. Komulainen, Z. Akhtar, A. Benlamoudi, D. Samai, S.E. Bekhouche, A. Ouafi, F. Dornaika, A. Taleb-Ahmed, L. Qin, F. Peng, L.B. Zhang, M. Long, S. Bhilare, V. Kanhangad, A. Costa-Pazo, E. Vazquez-Fernandez, D. Perez-Cabo, J.J. Moreira-Perez, D. Gonzalez-Jimenez, A. Mohammadi, S. Bhattacharjee, S. Marcel, S. Volkova, Y. Tang, N. Abe, L. Li, X. Feng, Z. Xia, X. Jiang, S. Liu, R. Shao, P.C. Yuen, W.R. Almeida, F. Andalo, R. Padilha, G. Bertocco, W. Dias, J. Wainer, R. Torres, A. Rocha, M.A. Angeloni, G. Folego, A. Godoy, A. Hadid, in 2017 IEEE International Joint Conference on Biometrics (IJCB). A competition on generalized software-based face presentation attack detection in mobile scenarios. (2017), pp. 688–696. https://doi.org/10.1109/BTAS.2017.8272758
X. Yang, W. Luo, L. Y. Bao, D. Gong, S. Zheng, Z. Li, W. Liu, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Face anti-spoofing: model matters, so does data. (2019), pp. 3507–3516
A. Jourabloo, Y. Liu, X. Liu, in Proceedings of the European Conference on Computer Vision (ECCV). Face de-spoofing: anti-spoofing via noise modeling. (2018), pp. 290–306
Z. Wang, Z. Yu, C. Zhao, X. Zhu, Y. Qin, Q. Zhou, F. Zhou, Z. Lei, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Deep spatial gradient and temporal depth learning for face anti-spoofing. (2020), pp. 5042–5051
A. Liu, Z. Tan, J. Wan, S. Escalera, G. Guo, S.Z. Li, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Casia-surf: A benchmark for multi-modal cross-ethnicity face anti-spoofing. (2021) pp. 1179–1187
Y. Matias, A. Shamir, in Conference on the Theory and Application of Cryptographic Techniques. A video scrambling technique based on space filling curves. (Springer, 1987), pp. 398–417
Department of Electronics and Electrical Engineering, Indian Institute of Technology Guwahati, Guwahati, 781039, Assam, India
Balaji Rao Katika & Kannan Karthik
Both authors read and approved the final manuscript.
Correspondence to Balaji Rao Katika.
Appendix A: Proof of convergence of the print-image-mapped Y-sequence
Starting with \(Y_0 \sim \mathrm{UNIFORM}[0, 1/a]\), \(a > 1\) (uniformly distributed but with a reduced dynamic range), and applying the logistic map several times, we prove that \(Y_n\) converges to the zero-contrast fixed point 1/2 with probability 1.
The iterative function map with respect to the Y-sequence is,
$$\begin{aligned} Y_{n} = 2Y_{n-1}(1-Y_{n-1}); n\ge 1 \end{aligned}$$
Thus, subtracting \(Y_n\) from 0.5, we get,
$$\begin{aligned} \frac{1}{2} - Y_n &= \frac{1}{2} - 2Y_{n-1} + 2Y_{n-1}^2 \\ &= 2 \left( \frac{1}{2} - Y_{n-1}\right) ^2 \\ &= 2\times 2^2 \times \left( \frac{1}{2} - Y_{n-2}\right) ^4 \end{aligned}$$
Multiplying both sides by a factor of 2, the above equation can be re-written as,
$$\begin{aligned} 1-2Y_n &= 2^4 \times \left( \frac{1}{2} - Y_{n-2}\right) ^4 = \left( 1 - 2Y_{n-2}\right) ^4 = \left( 1 - 2Y_0\right) ^{2^n} \end{aligned}$$
Let,
$$\begin{aligned} Z = (1 - 2Y_0)^2 \end{aligned}$$
It can be shown that the positive power of the random variable Z, i.e.,
$$\begin{aligned} Q_n = Z^n; n>> 1 \end{aligned}$$
will approach a deterministic zero with probability '1' as n becomes very large, i.e.,
$$\begin{aligned} f_{Q_n}(z) \longrightarrow \delta (z) \text { for } n\gg 1 \end{aligned}$$
where, \(\delta (.)\) is the DIRAC-DELTA function. This leads to the result that for large n,
$$\begin{aligned} Prob(Z^n \longrightarrow 0) \longrightarrow 1; n>> 1 \end{aligned}$$
Based on Eqs. (51), (52) and (53), this implies that
$$\begin{aligned} Prob(1 - 2Y_n \longrightarrow 0) \longrightarrow 1; n>> 1 \end{aligned}$$
Since \(Y_n \in [0,0.5]\) for \(n\ge 1\), it follows that \(Prob(Y_n \longrightarrow 1/2) \longrightarrow 1\) for \(n \gg 1\).
This completes the proof.
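The convergence statement can also be checked numerically; the following Monte Carlo sketch (with an arbitrarily chosen dynamic-range parameter a) simply iterates the logistic map on a uniform ensemble of starting values and tracks how much of the ensemble has collapsed onto the zero-contrast fixed point 1/2.

```python
import numpy as np

# Y0 ~ Uniform[0, 1/a] with a > 1, iterated through Y_n = 2*Y_{n-1}*(1 - Y_{n-1});
# the mass of the ensemble should concentrate at 0.5 as n grows.
rng = np.random.default_rng(0)
a = 1.25                                       # illustrative choice of a > 1
y = rng.uniform(0.0, 1.0 / a, size=100_000)    # ensemble of initial values Y0

for n in range(1, 11):
    y = 2.0 * y * (1.0 - y)
    # fraction of the ensemble within 1e-3 of the fixed point 0.5
    frac = np.mean(np.abs(y - 0.5) < 1e-3)
    print(f"n = {n:2d}   P(|Y_n - 0.5| < 1e-3) ~= {frac:.4f}")
```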
Appendix B: Convergence rates of real and print life trail sequences
It is to be shown that the print-abstraction sequence \(Y_n\) converges faster than the real-image-abstraction sequence \(X_n\). It suffices to show that the dynamics associated with the error sequence \(H_n\) are greater than those of the original error sequence \(G_n\). This means that the drift to a zero-contrast image is faster for a print version than for a natural one.
To monitor and track the convergence rates of the two trails, the normalized first-order difference (or error) metrics are defined, consistently with the relations used below, as \(G_n = (X_n - X_{n-1})/X_{n-1}\) and \(H_n = (Y_n - Y_{n-1})/Y_{n-1}\) for \(n\ge 1\).
Furthermore, it can be shown that,
$$\begin{aligned} G_{n}-G_{n-1} &= 2[X_{n-2}-X_{n-1}] = 2\left[ \frac{X_{n-2}-X_{n-1}}{X_{n-2}}\right] X_{n-2} = -2G_{n-1}X_{n-2}; \quad n\ge 2 \end{aligned}$$
It can then be shown that,
$$\begin{aligned} G_{n} = G_{n-1}(1-2X_{n-2}) = G_{n-1}^2; n\ge 2 \end{aligned}$$
It follows that,
$$\begin{aligned} G_{n}= (G_{0})^{2n}; n \ge 2 \end{aligned}$$
with \(G_1 = 1-2X_0\); Similarly,
$$\begin{aligned} H_{n}= (H_{0})^{2n} = (2Y_0 -1)^{2n}; n\ge 2 \end{aligned}$$
where \(H_1 = 1-2Y_0\). Now, let the expected value be \(E[H_{n}]=\mu _H\). Then,
$$\begin{aligned} E[H_{n}] = a \int _{u=0}^{1/a}{\left( 2u-1\right) ^{2n}\, du} = \frac{a}{2(2n+1)}\left[ 1 + {\left( \frac{2}{a} - 1\right) }^{2n+1}\right] \end{aligned}$$
Taking the limit as \(a \longrightarrow 1^+\), since a is a number larger than '1',
$$\begin{aligned} E[H_{n}] \longrightarrow E[G_{n}] = \frac{1}{2(2n+1)}; \quad \text{for } n\ge 2 \end{aligned}$$
It can be shown and verified analytically/numerically that, for three different values of the parameter \(a \in \{1.1, 1.25, 1.5\}\),
$$\begin{aligned} E[H_{n}] > E[G_{n}] \quad \text{for } n > 1 \end{aligned}$$
This result is in fact valid for all \(a > 1\). This means that the error sequence associated with the print version has a greater magnitude as compared to the original error sequence. Two observations can be drawn from this:
The original (natural image related) sequence \(X_n\) decays much slower as compared to the print (i.e. spoof image related) sequence \(Y_n\). This happens because \(E[H_n] > E[G_n]\) for all \(n > 1\) and for \(a > 1\).
The separation between the two errors, \(E[H_{n}] - E[G_{n}]\), is maximum for \(n=1\), which implies that the class separation is maximum for the first, first-order difference.
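The inequality can be illustrated by evaluating the closed-form expectation above for the three quoted values of a; the short script below follows the appendix's expressions directly and is only a verification aid.

```python
import numpy as np

def expected_H(a, n):
    """Closed-form E[H_n] from the appendix, for Y0 ~ Uniform[0, 1/a] with a > 1."""
    return a / (2 * (2 * n + 1)) * (1 + (2.0 / a - 1) ** (2 * n + 1))

def expected_G(n):
    """E[G_n] = 1 / (2*(2n + 1)), as given in the appendix."""
    return 1.0 / (2 * (2 * n + 1))

for a in (1.1, 1.25, 1.5):
    gap = [expected_H(a, n) - expected_G(n) for n in range(2, 7)]
    print(f"a = {a}: E[H_n] - E[G_n] for n = 2..6 -> {np.round(gap, 5)}")
    assert all(g > 0 for g in gap)   # E[H_n] > E[G_n], as claimed
```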
Katika, B.R., Karthik, K. Image life trails based on contrast reduction models for face counter-spoofing. EURASIP J. on Info. Security 2023, 1 (2023). https://doi.org/10.1186/s13635-022-00135-8
Face counter-spoofing
Self-shadows
Image life trail
Contrast reduction
Logistic maps
Iterated function
|
CommonCrawl
|
Degree of Financial Leverage – DFL Definition
By Adam Hayes
What Is a Degree of Financial Leverage - DFL?
A degree of financial leverage (DFL) is a leverage ratio that measures the sensitivity of a company's earnings per share (EPS) to fluctuations in its operating income, as a result of changes in its capital structure. The degree of financial leverage (DFL) measures the percentage change in EPS for a unit change in operating income, also known as earnings before interest and taxes (EBIT).
This ratio indicates that the higher the degree of financial leverage, the more volatile earnings will be. Since interest is usually a fixed expense, leverage magnifies returns and EPS. This is good when operating income is rising, but it can be a problem when operating income is under pressure.
The Formula for DFL Is
$$\text{DFL}=\frac{\%\text{ change in EPS}}{\%\text{ change in EBIT}}$$
DFL can also be represented by the equation below:
$$\text{DFL}=\frac{\text{EBIT}}{\text{EBIT} - \text{Interest}}$$
What Does Degree of Financial Leverage Tell You?
The higher the DFL, the more volatile earnings per share (EPS) will be. Since interest is a fixed expense, leverage magnifies returns and EPS, which is good when operating income is rising but can be a problem during tough economic times when operating income is under pressure.
DFL is invaluable in helping a company assess the amount of debt or financial leverage it should opt for in its capital structure. If operating income is relatively stable, then earnings and EPS would be stable as well, and the company can afford to take on a significant amount of debt. However, if the company operates in a sector where operating income is quite volatile, it may be prudent to limit debt to easily manageable levels.
The use of financial leverage varies greatly by industry and by the business sector. There are many industry sectors in which companies operate with a high degree of financial leverage. Retail stores, airlines, grocery stores, utility companies, and banking institutions are classic examples. Unfortunately, the excessive use of financial leverage by many companies in these sectors has played a paramount role in forcing a lot of them to file for Chapter 11 bankruptcy.
Examples include R.H. Macy (1992), Trans World Airlines (2001), Great Atlantic & Pacific Tea Co (A&P) (2010) and Midwest Generation (2012). Moreover, excessive use of financial leverage was the primary culprit that led to the U.S. financial crisis between 2007 and 2009. The demise of Lehman Brothers (2008) and a host of other highly levered financial institutions are prime examples of the negative ramifications that are associated with the use of highly levered capital structures.
The degree of financial leverage (DFL) is a leverage ratio that measures the sensitivity of a company's earnings per share to fluctuations in its operating income, as a result of changes in its capital structure.
This ratio indicates that the higher the degree of financial leverage, the more volatile earnings will be.
The use of financial leverage varies greatly by industry and by the business sector.
Example of How to Use DFL
Consider the following example to illustrate the concept. Assume hypothetical company BigBox Inc. has operating income or earnings before interest and taxes (EBIT) of $100 million in Year 1, with interest expense of $10 million, and has 100 million shares outstanding. (For the sake of clarity, let's ignore the effect of taxes for the moment.)
EPS for BigBox in Year 1 would thus be:
$$\frac{\text{Operating Income of \$100 Million} - \text{\$10 Million Interest Expense}}{\text{100 Million Shares Outstanding}}=\$0.90$$
The degree of financial leverage (DFL) is:
$$\frac{\$100\text{ Million}}{\$100\text{ Million} - \$10\text{ Million}}=1.11$$
This means that for every 1% change in EBIT or operating income, EPS would change by 1.11%.
Now assume that BigBox has a 20% increase in operating income in Year 2, taking EBIT to $120 million, while interest expense remains unchanged at $10 million. EPS for BigBox in Year 2 would thus be:
$$\frac{\$120\text{ Million} - \$10\text{ Million}}{\text{100 Million Shares Outstanding}}=\$1.10$$
In this instance, EPS has increased from 90 cents in Year 1 to $1.10 in Year 2, which represents a change of 22.2%.
This could also be obtained from the DFL number = 1.11 x 20% (EBIT change) = 22.2%.
If EBIT had decreased instead to $70 million in Year 2, what would have been the impact on EPS? EPS would have declined by 33.3% (i.e., DFL of 1.11 x -30% change in EBIT). This can be easily verified since EPS, in this case, would have been 60 cents, which represents a 33.3% decline.
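The arithmetic in this example is easy to verify with a short script; the function names below are illustrative and simply encode the two DFL relationships stated earlier.

```python
def dfl(ebit, interest):
    """Degree of financial leverage: DFL = EBIT / (EBIT - interest)."""
    return ebit / (ebit - interest)

def eps(ebit, interest, shares):
    """Earnings per share, ignoring taxes as in the example."""
    return (ebit - interest) / shares

ebit_y1, interest, shares = 100e6, 10e6, 100e6
print(round(eps(ebit_y1, interest, shares), 2))   # 0.9  -> Year 1 EPS of $0.90
print(round(dfl(ebit_y1, interest), 2))           # 1.11 -> DFL

# A 20% rise in EBIT lifts EPS by roughly DFL x 20% = 22.2%
ebit_y2 = 1.2 * ebit_y1
print(round(eps(ebit_y2, interest, shares), 2))   # 1.1  -> Year 2 EPS of $1.10
change = (eps(ebit_y2, interest, shares) / eps(ebit_y1, interest, shares) - 1) * 100
print(round(change, 1))                           # 22.2
```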
|
CommonCrawl
|
The ASKAP Variables and Slow Transients (VAST) Pilot Survey
Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 12 October 2021, e054
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
Effects of the first COVID-19 lockdown on quality and safety in mental healthcare transitions in England
Natasha Tyler, Gavin Daker-White, Andrew Grundy, Leah Quinlivan, Chris Armitage, Stephen Campbell, Maria Panagioti
Published online by Cambridge University Press: 31 August 2021, e156
The COVID-19 pandemic forced the rapid implementation of changes to practice in mental health services, in particular transitions of care. Care transitions pose a particular threat to patient safety.
This study aimed to understand the perspectives of different stakeholders about the impact of temporary changes in practice and policy of mental health transitions as a result of coronavirus disease 2019 (COVID-19) on perceived healthcare quality and safety.
Thirty-four participants were interviewed about quality and safety in mental health transitions during May and June 2020 (the end of the first UK national lockdown). Semi-structured remote interviews were conducted to generate in-depth information pertaining to various stakeholders (patients, carers, healthcare professionals and key informants). Results were analysed thematically.
The qualitative data highlighted six overarching themes in relation to practice changes: (a) technology-enabled communication; (b) discharge planning and readiness; (c) community support and follow-up; (d) admissions; (e) adapting to new policy and guidelines; (f) health worker safety and well-being. The COVID-19 pandemic exacerbated some quality and safety concerns such as tensions between teams, reduced support in the community and increased threshold for admissions. Also, several improvement interventions previously recommended in the literature, were implemented locally.
The practice of mental health transitions has transformed during the COVID-19 pandemic, affecting quality and safety. National policies concerning mental health transitions should concentrate on converting the mostly local and temporary positive changes into sustainable service quality improvements and applying systematic corrective policies to prevent exacerbations of previous quality and safety concerns.
39607 Mapping the Draining Lymph Nodes in Central Nervous System Malignancies
Andrew T. Coxon, Barry A. Siegel, Tanner M. Johanns, Gavin P. Dunn
Journal: Journal of Clinical and Translational Science / Volume 5 / Issue s1 / March 2021
Published online by Cambridge University Press: 30 March 2021, p. 37
ABSTRACT IMPACT: We seek to determine which lymph nodes drain the human brain. OBJECTIVES/GOALS: Lymphatic vessels drain lymphatic fluid from the central nervous system (CNS), but the specific lymph nodes that these vessels drain to remain unknown in humans. We intend to use technetium tilmanocept (TcTM) to map the draining lymph nodes of the CNS in humans. METHODS/STUDY POPULATION: Patients having a tumor resected are eligible for the trial. All patients will have TcTM injected intracranially after tumor resection. Six patients will be enrolled in Cohort 1 to define the time course of drainage to the lymph nodes. Patients in Cohort 1 will be imaged with planar LS within 7 hours of injection and the following day. Either 12 or 24 patients will be enrolled into Cohort 2 to localize the draining lymph nodes with SPECT-CT. The optimal imaging timepoint from Cohort 1 will be used for Cohort 2. Patients in Cohort 2 will be stratified depending on whether their tumor is in the frontal, parietal, occipital, or temporal lobe. RESULTS/ANTICIPATED RESULTS: We anticipate that we will detect TcTM in the deep cervical lymph nodes after injection into the brain. It is unclear exactly which lymph nodes the tracer will go to. We hypothesize that the results among patients will be similar, but interindividual variation is a possibility. Furthermore, patients with disease in different lobes of the brain may have different lymph drainage patterns. DISCUSSION/SIGNIFICANCE OF FINDINGS: We seek to answer a fundamental question of human anatomy: which lymph nodes drain the human brain? Additionally, knowing which nodes drain the human brain could shape future research on immunotherapy in patients with brain cancer or autoimmune diseases such as multiple sclerosis.
A national effectiveness trial of an eHealth program to prevent alcohol and cannabis misuse: responding to the replication crisis
Nicola C. Newton, Cath Chapman, Tim Slade, Louise Birrell, Annalise Healy, Marius Mather, Nyanda McBride, Leanne Hides, Steve Allsop, Louise Mewton, Gavin Andrews, Maree Teesson
Published online by Cambridge University Press: 17 June 2020, pp. 1-9
The burden of disease attributable to alcohol and other drug (AOD) use in young people is considerable. Prevention can be effective, yet few programs have demonstrated replicable effects. This study aimed to replicate research behind Climate Schools: Alcohol and Cannabis course among a large cohort of adolescents.
Seventy-one secondary schools across three States participated in a cluster-randomised controlled trial. Year 8 students received either the web-based Climate Schools: Alcohol and Cannabis course (Climate, n = 3236), or health education as usual (Control, n = 3150). Outcomes were measured via self-report and reported here for baseline, 6- and 12-months for alcohol and cannabis knowledge, alcohol, cannabis use and alcohol-related harms.
Compared to Controls, students in the Climate group showed greater increases in alcohol-related [standardised mean difference (SMD) 0.51, p < 0.001] and cannabis-related knowledge (SMD 0.49, p < 0.001), and smaller increases in the odds of drinking a full standard drink [odds ratio (OR) 0.62, p = 0.014] and of heavy episodic drinking (OR 0.49, p = 0.022). There was no evidence for differences in change over time in the odds of cannabis use (OR 0.57, p = 0.22) or alcohol harms (OR 0.73, p = 0.17).
The current study provides support for the effectiveness of the web-based Climate Schools: Alcohol and Cannabis course in increasing knowledge and reducing the uptake of alcohol. It represents one of the first trials of a web-based AOD prevention program to replicate alcohol effects in a large and diverse sample of students. Future research and/or adaptation of the program may be warranted with respect to prevention of cannabis use and alcohol harms.
Residual Stress in Focused Charged Particle Beam-Deposited Materials
Gavin Mitchson, Andrew Clarke, Jamie Johnson
Journal: Microscopy and Microanalysis / Volume 25 / Issue S2 / August 2019
The Mediating Relationship Between Maladaptive Behaviours, Cognitive Factors, and Generalised Anxiety Disorder Symptoms
Alison E.J. Mahoney, Megan J. Hobbs, Alishia D. Williams, Gavin Andrews, Jill M. Newby
Journal: Behaviour Change / Volume 35 / Issue 2 / June 2018
Cognitive theories of generalised anxiety disorder (GAD) posit that cognitive and behavioural factors maintain the disorder. This study examined whether avoidance and safety behaviours mediated the relationship between cognitive factors and GAD symptoms. We also examined the reverse mediation model; that is, whether cognitive factors mediated the relationship between maladaptive behaviours and GAD symptoms. Undergraduate psychology students (N = 125 and N = 292) completed the Worry Behaviours Inventory (a recently developed measure of maladaptive behaviours associated with GAD), in addition to measures of intolerance of uncertainty, cognitive avoidance, metacognitive beliefs, and symptoms of GAD and depression. Analyses supported the reliability and validity of the WBI. We consistently found that engagement in maladaptive behaviours significantly mediated the relationship between cognitive factors and symptoms of GAD. The reverse mediation model was also supported. Our results are consistent with the contention that cognitive and behavioural factors contribute to GAD symptom severity.
Population status of four endemic land bird species after an unsuccessful rodent eradication on Henderson Island
ALEXANDER L. BOND, M. DE L. BROOKE, RICHARD J. CUTHBERT, JENNIFER L. LAVERS, GREGORY T.W. MCCLELLAND, THOMAS CHURCHYARD, ANGUS DONALDSON, NEIL DUFFIELD, ALICE FORREST, GAVIN HARRISON, LORNA MACKINNON, TARA PROUD, ANDREW SKINNER, NICK TORR, JULIET A. VICKERY, STEFFEN OPPEL
Journal: Bird Conservation International / Volume 29 / Issue 1 / March 2019
Invasive rodents detrimentally affect native bird species on many islands worldwide, and rodent eradication is a useful tool to safeguard endemic and threatened species. However, especially on tropical islands, rodent eradications can fail for various reasons, and it is unclear whether the temporary reduction of a rodent population during an unsuccessful eradication operation has beneficial effects on native birds. Here we examine the response of four endemic land bird species on subtropical Henderson Island in the Pitcairn Island Group, South Pacific Ocean, following an unsuccessful rodent eradication in 2011. We conducted point counts at 25 sampling locations in 14 survey periods between 2011 and 2015, and modelled the abundance trends of all species using binomial mixture models accounting for observer and environmental variation in detection probability. Henderson Reed Warbler Acrocephalus taiti more than doubled in abundance (2015 population estimate: 7,194-28,776), and Henderson Fruit Dove Ptilinopus insularis increased slightly between 2011 and 2015 (2015 population estimate: 4,476–10,072), while we detected no change in abundance of the Henderson Lorikeet Vini stepheni (2015 population estimate: 554–3014). Henderson Crake Zapornia atra increased to pre-eradication levels following anticipated mortality during the operation (2015 population estimate: 4,960–20,783). A temporary reduction of rat predation pressure and rat competition for fruit may have benefitted the reed warbler and the fruit dove, respectively. However, a long drought may have naturally suppressed bird populations prior to the rat eradication operation in 2011, potentially confounding the effects of temporary rat reduction and natural recovery. We therefore cannot unequivocally ascribe the population recovery to the temporary reduction of the rat population. We encourage robust monitoring of island biodiversity both before and after any management operation to better understand responses of endemic species to failed or successful operations.
Maladaptive Behaviours Associated with Generalized Anxiety Disorder: An Item Response Theory Analysis
Alison E.J. Mahoney, Megan J. Hobbs, Jill M. Newby, Alishia D. Williams, Gavin Andrews
Journal: Behavioural and Cognitive Psychotherapy / Volume 46 / Issue 4 / July 2018
Background: Cognitive models of generalized anxiety disorder (GAD) suggest that maladaptive behaviours may contribute to the maintenance of the disorder; however, little research has concentrated on identifying and measuring these behaviours. To address this gap, the Worry Behaviors Inventory (WBI) was developed and has been evaluated within a classical test theory (CTT) approach. Aims: As CTT is limited in several important respects, this study examined the psychometric properties of the WBI using an Item Response Theory approach. Method: A large sample of adults commencing treatment for their symptoms of GAD (n = 537) completed the WBI in addition to measures of GAD and depression symptom severity. Results: Patients with a probable diagnosis of GAD typically engaged in four or five maladaptive behaviours most or all of the time in an attempt to prevent, control or avoid worrying about everyday concerns. The two-factor structure of the WBI was confirmed, and the WBI scales demonstrated good reliability across a broad range of the respective scales. Together with previous findings, our results suggested that hypervigilance and checking behaviours, as well as avoidance of saying or doing things that are worrisome, were the most relevant maladaptive behaviours associated with GAD, and discriminated well between adults with low, moderate and high degrees of the respective WBI scales. Conclusions: Our results support the importance of maladaptive behaviours to GAD and the utility of the WBI to index these behaviours. Ramifications for the classification, theoretical conceptualization and treatment of GAD are discussed.
Borderline personality disorder: patterns of self-harm, reported childhood trauma and clinical outcome
Mark Andrew McFetridge, Rebecca Milner, Victoria Gavin, Liat Levita
Journal: BJPsych Open / Volume 1 / Issue 1 / June 2015
Consecutive admissions of 214 women with borderline personality disorder were investigated for patterns of specific forms of self-harm and reported developmental experiences. Systematic examination of clinical notes found that 75% had previously reported a history of childhood sexual abuse. These women were more likely to self-harm, and in specific ways that may reflect their past experiences. Despite this, treatment within a dialectical behaviour therapy-informed therapeutic community leads to relatively greater clinical gains than for those without a reported sexual abuse trauma history. Notably, greater behavioural and self-reported distress and dissociation were not found to predict poor clinical outcome.
Internet cognitive–behavioural treatment for panic disorder: randomised controlled trial and evidence of effectiveness in primary care
Adrian R. Allen, Jill M. Newby, Anna Mackenzie, Jessica Smith, Matthew Boulton, Siobhan A. Loughnan, Gavin Andrews
Journal: BJPsych Open / Volume 2 / Issue 2 / March 2016
Internet cognitive–behavioural therapy (iCBT) for panic disorder of up to 10 lessons is well established. The utility of briefer programmes is unknown.
To determine the efficacy and effectiveness of a five-lesson iCBT programme for panic disorder.
Study 1 (efficacy): Randomised controlled trial comparing active iCBT (n=27) and waiting list control participants (n=36) on measures of panic severity and comorbid symptoms. Study 2 (effectiveness): 330 primary care patients completed the iCBT programme under the supervision of primary care practitioners.
iCBT was significantly more effective than waiting list control in reducing panic (g=0.97, 95% CI 0.34 to 1.61), distress (g=0.92, 95% CI 0.28 to 1.55), disability (g=0.81, 95% CI 0.19 to 1.44) and depression (g=0.79, 95% CI 0.17 to 1.41), and gains were maintained at 3 months post-treatment (iCBT group). iCBT remained effective in primary care, but lower completion rates were found (56.1% in study 2 v. 63% in study 1). Adherence appeared to be related to therapist contact.
The five-lesson Panic Program has utility for treating panic disorder, which translates to primary care. Adherence may be enhanced with therapist contact.
Psychometric Properties of the Worry Behaviors Inventory: Replication and Extension in a Large Clinical and Community Sample
Alison E. J. Mahoney, Megan J. Hobbs, Jill M. Newby, Alishia D. Williams, Gavin Andrews
Journal: Behavioural and Cognitive Psychotherapy / Volume 46 / Issue 1 / January 2018
Published online by Cambridge University Press: 31 July 2017, pp. 84-100
Background: The use of maladaptive behaviors by individuals with generalized anxiety disorder (GAD) is theoretically important and clinically meaningful. However, little is known about the specificity of avoidant behaviors to GAD and how these behaviors can be reliably assessed. Aims: This study replicated and extended the psychometric evaluation of the Worry Behaviors Inventory (WBI), a brief self-report measure of avoidant behaviors associated with GAD. Method: The WBI was administered to a hospital-based sample of adults seeking treatment for symptoms of anxiety and/or depression (n = 639) and to a community sample (n = 55). Participants completed measures of symptom severity (GAD, depression, panic disorder, health anxiety, and personality disorder), and measures of checking, reassurance-seeking and behavioral inhibition. Analyses evaluated the factor structure, convergent, divergent, incremental, and discriminant validity, as well the temporal stability and treatment sensitivity of the WBI. Results: The two-factor structure found in the preliminary psychometric evaluation of the WBI was replicated. The WBI was sensitive to changes across treatment and correlated well with measures of GAD symptom severity and maladaptive behaviors. The WBI was more strongly related to GAD symptom severity than other disorders. The WBI discriminated between clinical and community samples. Conclusions: The WBI provides clinicians and researchers with a brief, clinically meaningful index of problematic behaviors that may guide treatment decisions and contribute to our understanding of maintaining factors in GAD.
2 - Aboriginal identity, world views, research and the story of the Burra'gorang
By Gawaian Bodkin-Andrews, University of Technology, Sydney, Aunty Frances Bodkin, D'harawal nation of south-west Sydney, Uncle Gavin Andrews, D'harawal nation of south-west Sydney, Uncle Ross Evans, Banyadjaminga SWAG Inc
Edited by Cheryl Kickett-Tucker, Curtin University of Technology, Perth
Edited in association with Dawn Bessarab, University of Western Australia, Perth, Juli Coffin, Notre Dame University, Australia, Michael Wright, Curtin University, Perth
Book: Mia Mia Aboriginal Community Development
Print publication: 24 October 2016, pp 19-36
IN RECENT TIMES there has been a growing recognition that some Aboriginal and Torres Strait Islander peoples and communities have been harmed and even divided by those who question their very right to identify as 'Indigenous or not' (Bodkin-Andrews & Carlson 2016; New South Wales Aboriginal Education Consultative Group [NSW AECG] 2011). Numerous scholars have suggested that such 'questions' are an unfortunate extension of the continual epistemological violence (a pressure on ways of knowing) that has sought to eradicate the diverse world views, histories, and knowledges of our peoples since colonisation (Bodkin 2013a; Moreton-Robinson 2011; Nakata 2012), and that they result in the emergence of stereotypical accusations of 'inauthenticity', 'wanna-be-Aborigines', 'welfare-blacks', 'fragmentation' and 'cultural absurdity' (Behrendt 2006). It is the purpose of this chapter to highlight the existence of this form of epistemological and identity-based violence and explain how it threatens our communities. In addition, such violence will be challenged by focusing on the strength of diverse world views, knowledges and unique stories that exist within Aboriginal and Torres Strait Islander communities today. We also offer you a traditional D'harawal Law Story as the central case study within this chapter. This Law Story holds valuable insights that may guide individuals and communities towards a stronger and more resilient future.
D'harawal positioning
Many respected Indigenous scholars have argued that it is critical that people who seek to work within Aboriginal communities be aware of, and transparent about, their own ways of knowing, and how this may bias their learning and actions (Foley 2003; Linklater 2014; Kovach 2009; Rigney 1999; Smith 2012). As a result, it should be understood that this chapter is written through a lens shared by the authors. This lens emanates from clans within the D'harawal nation or language group located in south-west Sydney, Australia. In our own ways, we have each struggled against the longstanding and continuing impact of colonisation, ranging from popular media misinformation to our location, learnings, stories and oral histories being contested by quasi-anthropological works relying on, and selectively ignoring, conflicting evidence from the diaries and scribblings of the early colonisers (cf. Kohen 1993).
Atom probe tomography of phosphorus- and boron-doped silicon nanocrystals with various compositions of silicon rich oxide — ERRATUM
Keita Nomoto, Sebastian Gutsch, Anna V. Ceguerra, Andrew Breen, Hiroshi Sugimoto, Minoru Fujii, Ivan Perez-Wurfl, Simon P. Ringer, Gavin Conibeer
Journal: MRS Communications / Volume 6 / Issue 4 / December 2016
Published online by Cambridge University Press: 17 October 2016, p. 469
Atom probe tomography of phosphorus- and boron-doped silicon nanocrystals with various compositions of silicon rich oxide
Journal: MRS Communications / Volume 6 / Issue 3 / September 2016
We analyze phosphorus (P)- and boron (B)-doped silicon nanocrystals (Si NCs) with various compositions of silicon-rich oxide using atom probe tomography. By creating Si iso-concentration surfaces, it is confirmed that there are two types of Si NC networks depending on the amount of excess Si. A proximity histogram shows that P prefers to locate inside the Si NCs, whereas B is more likely to reside outside the Si NCs. We discuss the difference in a preferential location between P and B by a segregation coefficient.
RISK AND RETURNS OF SPRING AND FALL CALVING FOR BEEF CATTLE IN TENNESSEE
GAVIN W. HENRY, CHRISTOPHER N. BOYER, ANDREW P. GRIFFITH, JAMES LARSON, AARON SMITH, KAREN LEWIS
Journal: Journal of Agricultural and Applied Economics / Volume 48 / Issue 3 / August 2016
We determined the profitability and risk for spring- and fall-calving beef cows in Tennessee. Simulation models were developed using 19 years of data and considered the seasonality of cattle prices and feed prices for least-cost feed rations to find a distribution of net returns for spring- and fall-calving seasons for two weaning months. Fall calving was more profitable than the spring calving for all feed rations and weaning months. Fall calving was also risk preferred over spring calving for all levels of risk aversion. Higher calf prices at weaning were the primary factor influencing the risk efficiency of fall calving.
The relational making of people and place: the case of the Teignmouth World War II homefront
GAVIN J. ANDREWS
Journal: Ageing & Society / Volume 37 / Issue 4 / April 2017
Building on the pioneering research of a small number of gerontologists, this paper explores the rarely trodden common ground between the academic domains of social gerontology and modern history. Through empirical research it illustrates the complex networking that exists through space and time in the relational making of people and places. Indeed, the study focuses specifically on the lived reality and ongoing significance of life on the small-town British coastal homefront during World War II. Seventeen interviews with older residents of Teignmouth, Devon, United Kingdom, investigate two points in their lives: the 'then' (their historical experiences during this period) and the 'then and now' (how they continue to reverberate). In particular, their stories illustrate the relationalities that make each of these points. The first involves residents' unique interactions during the war with structures and technologies (such as rules, bombs and barriers) and other people (such as soldiers and outsiders) which themselves were connected to wider historical, social, political and military networks. The second involves residents' perceptions of their own and their town's wartime histories, how this gels or conflicts with public awareness, and how this history connects to their current lives. The paper closes with some thoughts on bringing together the past, present and older people in the same scholarship.
Dense Gas Towards the RX J1713.7–3946 Supernova Remnant
Nigel I. Maxted, Gavin P. Rowell, Bruce R. Dawson, Michael G. Burton, Yasuo Fukui, Jasmina Lazendic, Akiko Kawamura, Hirotaka Horachi, Hidetoshi Sano, Andrew J. Walsh, Satoshi Yoshiike, Tatsuya Fukuda
We present results from a Mopra 7 mm-wavelength survey that targeted the dense gas-tracing CS(1-0) transition towards the young γ-ray-bright supernova remnant, RX J1713.7–3946 (SNR G 347.3−0.5). In a hadronic γ-ray emission scenario, where cosmic ray (CR) protons interact with gas to produce the observed γ-ray emission, the mass of potential CR target material is an important factor. We summarise newly discovered dense gas components, towards Cores G and L, and Clumps N1, N2, N3, and T1, which have masses of ~1–10⁴ M⊙. We argue that these components are not likely to contribute significantly to γ-ray emission in a hadronic γ-ray emission scenario. This would be the case if RX J1713.7–3946 were at either the currently favoured distance of ~1 kpc or an alternate distance (as suggested in some previous studies) of ~6 kpc.
This survey also targeted the shock-tracing SiO molecule. Although no SiO emission corresponding to the RX J1713.7–3946 shock was observed, vibrationally excited SiO(1-0) maser emission was discovered towards what may be an evolved star. Observations taken 1 yr apart confirmed a transient nature, since the intensity, line-width, and central velocity of SiO(J = 1-0,v = 1,2) emission varied significantly.
Health anxiety in Australia: prevalence, comorbidity, disability and service use
Matthew Sunderland, Jill M. Newby, Gavin Andrews
Journal: The British Journal of Psychiatry / Volume 202 / Issue 1 / January 2013
Health anxiety is associated with high distress, disability and increased health service utilisation. However, there are relatively few epidemiological studies examining the extent of health anxiety or the associated sociodemographic and health risk factors in the general population.
To provide epidemiological data on health anxiety in the Australian population.
Lifetime and current prevalence estimates, associations between comorbid disorders, psychological distress, impairment, disability and mental health service utilisation were generated using the Australian 2007 National Survey of Mental Health and Wellbeing.
Health anxiety affects approximately 5.7% of the Australian population across the lifespan and 3.4% met criteria for health anxiety at the time of the interview. Age, employment status, smoking status and comorbid physical conditions were significantly related to health anxiety symptoms. Health anxiety was associated with significantly more distress, impairment, disability and health service utilisation than that found in respondents without health anxiety.
Health anxiety is non-trivial; it affects a significant proportion of the population, and further research and clinical investigation of health anxiety is required.
Re-spacing and re-placing gerontology: relationality and affect
GAVIN J. ANDREWS, JOSHUA EVANS, JANINE L. WILES
Journal: Ageing & Society / Volume 33 / Issue 8 / November 2013
Published online by Cambridge University Press: 24 July 2012, pp. 1339-1373
This paper describes how space and place have been understood in gerontology as phenomena that are both physical and social in character, yet are relatively bounded and static. The argument is posed as to how, following recent developments in human geography, a relational approach might be adopted. Involving a twist in current thinking, this would instead understand space and place each as highly permeable, fluid and networked at multiple scales. Moreover, it is proposed that the concept of 'affect' might also be insightful, recognising space and place as being relationally configured and performed, possessing a somatically registered energy, intensity and momentum that precedes deep cognition. Three vignettes illustrate the relationalities and affects in the lives and circumstances of older people, and how focusing more explicitly on them would allow for a richer understanding of where and how they live their lives. The paper closes with some thoughts on future theoretical, methodological and disciplinary considerations.
Prevalence of anti-basal ganglia antibodies in adult obsessive–compulsive disorder: cross-sectional study
Timothy R. J. Nicholson, Sumudu Ferdinando, Ravikumar B. Krishnaiah, Sophie Anhoury, Belinda R. Lennox, David Mataix-Cols, Anthony Cleare, David M. Veale, Lynne M. Drummond, Naomi A. Fineberg, Andrew J. Church, Gavin Giovannoni, Isobel Heyman
Symptoms of obsessive–compulsive disorder (OCD) have been described in neuropsychiatric syndromes associated with streptococcal infections. It is proposed that antibodies raised against streptococcal proteins cross-react with neuronal proteins (antigens) in the brain, particularly in the basal ganglia, which is a brain region implicated in OCD pathogenesis.
To test the hypothesis that post-streptococcal autoimmunity, directed against neuronal antigens, may contribute to the pathogenesis of OCD in adults.
Ninety-six participants with OCD were tested for the presence of anti-streptolysin-O titres (ASOT) and the presence of anti-basal ganglia antibodies (ABGA) in a cross-sectional study. The ABGA were tested for with western blots using three recombinant antigens; aldolase C, enolase and pyruvate kinase. The findings were compared with those in a control group of individuals with depression (n = 33) and schizophrenia (n = 17).
Positivity for ABGA was observed in 19/96 (19.8%) participants with OCD compared with 2/50 (4%) of controls (Fisher's exact test P = 0.012). The majority of positive OCD sera (13/19) had antibodies against the enolase antigen. No clinical variables were associated with ABGA positivity. Positivity for ASOT was not associated with ABGA positivity nor found at an increased incidence in participants with OCD compared with controls.
These findings support the hypothesis that central nervous system autoimmunity may have an aetiological role in some adults with OCD. Further study is required to examine whether the antibodies concerned are pathogenic and whether exposure to streptococcal infection in vulnerable individuals is a risk factor for the development of OCD.
Law of large numbers
From Encyclopedia of Mathematics
2010 Mathematics Subject Classification: Primary: 60F05 [MSN][ZBL]
A general principle according to which under certain very general conditions the simultaneous action of random factors leads to a result which is practically non-random. That the frequency of occurrence of a random event tends to become equal to its probability as the number of trials increases (a phenomenon which was probably first noted for games of chance) may serve as the first example of this principle.
At the turn of the 17th century J. Bernoulli [B], [B2] demonstrated a theorem stating that, in a sequence of independent trials, in each of which the probability of occurrence of a certain event $ A $ has the same value $ p $, $ 0 < p < 1 $, the relationship
$$ \tag{1 } {\mathsf P} \left \{ \left | \frac{\mu _ {n} }{n} - p \right | > \epsilon \right \} \rightarrow 0 $$
is valid for any $ \epsilon > 0 $ if $ n \rightarrow \infty $; here, $ \mu _ {n} $ is the number of occurrences of the event in the first $ n $ trials and $ {\mu _ {n} } /n $ is the frequency of the occurrence. This Bernoulli theorem was extended by S. Poisson [P] to the case of a sequence of independent trials in which the probability of the occurrence of an event $ A $ varies with the number of the trial. Let this probability in the $ k $- th trial be $ p _ {k} $, $ k = 1, 2 \dots $ and let
$$ \overline{p}\; _ {n} = \ \frac{p _ {1} + \dots + p _ {n} }{n} . $$
Then, according to the Poisson theorem,
$$ \tag{2 } {\mathsf P} \left \{ \left | \frac{\mu _ {n} }{n} - {\overline{p}\; _ {n} } \right | > \epsilon \right \} \rightarrow 0 $$
for any $ \epsilon > 0 $ if $ n \rightarrow \infty $. A rigorous proof of this theorem was first given in 1846 by P.L. Chebyshev, whose method was quite different from that of Poisson and was based on certain extremal considerations, whereas Poisson deduced (2) from an approximate formula for the above probability, based on a law of Gauss which at that time had not yet been proved rigorously. Poisson was the first to use the term "law of large numbers" , by which he denoted his own generalization of the Bernoulli theorem.
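As a rough numerical illustration of the Bernoulli theorem (1), one can simulate independent trials and watch the frequency $ {\mu _ {n} } /n $ approach $ p $. The sketch below does this in Python; the value $ p = 0.3 $ and the sample sizes are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.3  # success probability, an arbitrary value chosen for illustration

for n in (10**2, 10**4, 10**6):
    trials = rng.random(n) < p      # n independent Bernoulli(p) trials
    freq = trials.mean()            # mu_n / n, the observed frequency
    print(f"n = {n:>7}:  |mu_n/n - p| = {abs(freq - p):.5f}")
```

The deviation shrinks roughly like $ n ^ {-1/2} $, in line with the probability in (1) tending to zero.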
A further natural extension of the Bernoulli and Poisson theorems is a consequence of the fact that the random variables $ \mu _ {n} $ may be represented as the sum
$$ \mu _ {n} = X _ {1} + \dots + X _ {n} $$
of independent random variables, where $ X _ {k} = 1 $ if $ A $ occurs in the $ k $- th trial and $ X _ {k} = 0 $ otherwise. The mathematical expectation $ {\mathsf E} ( \mu _ {n} / n) $( which is identical with the average of the mathematical expectations $ {\mathsf E} X _ {k} $, $ 1 \leq k \leq n $) is $ p $ in the case of Bernoulli and is $ \overline{p}\; _ {n} $ in the case of Poisson. In other words, in both cases one deals with a deviation of the arithmetical average of the variables $ X _ {k} $ from the arithmetical average of their mathematical expectations.
In his work "On average quantities" , which appeared in 1867, Chebyshev established that the relationship
$$ \tag{3 } {\mathsf P} \left \{ \left | \frac{X _ {1} + \dots + X _ {n} }{n} - \frac{ {\mathsf E} X _ {1} + \dots + {\mathsf E} X _ {n} }{n} \right | > \epsilon \right \} \rightarrow 0 $$
is valid, for any $ \epsilon > 0 $ if $ n \rightarrow \infty $, for independent random variables $ X _ {1} \dots X _ {n} \dots $ under fairly general assumptions. Chebyshev assumed that the mathematical expectations $ {\mathsf E} X _ {k} ^ {2} $ are all bounded by the same constant, even though his proof makes it clear that the requirement of boundedness of the variances of $ X _ {k} $, $ {\mathsf D} X _ {k} = {\mathsf E} X _ {k} ^ {2} - ( {\mathsf E} X _ {k} ) ^ {2} $, or even the requirement
$$ B _ {n} ^ {2} = \ {\mathsf D} X _ {1} + \dots + {\mathsf D} X _ {n} = o ( n ^ {2} ),\ \ n \rightarrow \infty , $$
is sufficient. It was thus shown by Chebyshev that the Bernoulli theorem is susceptible of extensive generalizations. A.A. Markov noted the possibility of further extensions and proposed to apply the term "law of large numbers" to all extensions of the Bernoulli theorem (and, in particular, to (3)). Chebyshev's method is based on a rigorous formulation of all the properties of the mathematical expectations and on the use of the so-called Chebyshev inequality in probability theory. For the probability (3) it yields an estimate of the form
$$ n ^ {-2} \epsilon ^ {-2} \sum _ {k=1} ^ {n} {\mathsf D} X _ {k} . $$
This estimate may be replaced by a more exact one — but under more restrictive conditions, see Bernstein inequality. Subsequent proofs of the law of large numbers are all, to varying extents, developments of Chebyshev's method. Using an appropriate "truncation" of the random variables $ X _ {k} $( replacement of these variables by auxiliary variables $ X _ {n,k} ^ \prime $, viz. $ X _ {n,k} ^ \prime = X _ {k} $ if $ | X _ {k} - {\mathsf E} X _ {k} | \leq L _ {n} $ and $ X _ {n,k} ^ \prime = 0 $ if $ | X _ {k} - {\mathsf E} X _ {k} | > L _ {n} $, where $ L _ {n} $ is a constant), Markov extended the law of large numbers to the case when the variance of the terms does not exist. He showed, for instance, that (3) is valid if, for certain constants $ \delta > 0 $ and $ L > 0 $ and for all $ n $,
$$ {\mathsf E} | X _ {n} - {\mathsf E} X _ {n} | ^ {1+ \delta } < L . $$
Khinchin's theorem (1929) is proved in a similar manner: If $ X _ {n} $ have the same distribution and if $ {\mathsf E} X _ {n} $ exists, then the law of large numbers (3) is valid.
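The Chebyshev estimate quoted above can be checked numerically. The sketch below is a minimal illustration under the arbitrary assumption of uniformly distributed summands; it compares the empirical probability of a deviation of the arithmetical average from its expectation with the bound $ n ^ {-2} \epsilon ^ {-2} \sum _ {k=1} ^ {n} {\mathsf D} X _ {k} $.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eps, reps = 1_000, 0.02, 5_000

# X_k ~ Uniform(0, 1): E X_k = 1/2 and D X_k = 1/12, so the Chebyshev bound for
# P{ |(X_1 + ... + X_n)/n - 1/2| > eps } equals (n/12) / (n^2 * eps^2) = 1/(12 n eps^2).
samples = rng.random((reps, n))
deviations = np.abs(samples.mean(axis=1) - 0.5)
empirical = (deviations > eps).mean()
bound = 1.0 / (12 * n * eps**2)
print(f"empirical probability = {empirical:.4f}, Chebyshev bound = {bound:.4f}")
```

The bound holds but is conservative, which is why sharper estimates such as the Bernstein inequality are of interest.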
It is possible to formulate more or less final versions of the law of large numbers for sums of independent random variables. To do this, it is expedient to adopt a more general point of view, involving the concept of a sequence of asymptotically-constant random variables. The random variables of a sequence $ Y _ {1} \dots Y _ {n} \dots $ are said to be asymptotically constant if there exists a sequence of constants $ C _ {1} \dots C _ {n} \dots $ such that for any $ \epsilon > 0 $, if $ n \rightarrow \infty $,
$$ \tag{4 } {\mathsf P} \{ | Y _ {n} - C _ {n} | > \epsilon \} \rightarrow 0 $$
(i.e. $ Y _ {n} - C _ {n} $ converges to zero in probability). If (4) is valid for some $ C _ {n} $, it is also valid for $ C _ {n} ^ { \prime } = m Y _ {n} $, where $ mY $ is the median (cf. Median (in statistics)) of the random variable $ Y $. Furthermore, instead of the sequence $ X _ {1} \dots X _ {n} \dots $ of independent random variables, one may also take a so-called triangular array:
$$ \begin{array}{c} X _ {1,1 } \dots X _ {1,k _ {1} } , \\ X _ {2,1 } \dots X _ {2,k _ {2} } , \\ {} \dots \dots \dots , \\ X _ {n,1 } \dots X _ {n,k _ {n} } , \\ {} \dots \dots \dots \\ \end{array} $$
of random variables (the first subscript is the row number, while the second indicates the number of the variable within the row). The random variables of each individual row are assumed to be mutually independent. A sequence of partial sums is readily reduced to a triangular array if one assumes that $ k _ {1} = 1 $, $ k _ {2} = 2, \dots $ and $ X _ {n,k } = {X _ {k} } /n $. Put
$$ Y _ {n} = X _ {n,1 } + \dots + X _ {n,k _ {n} } . $$
The general problem of the applicability of the law of large numbers to sums of independent random variables may then be stated as follows: Under what conditions are the sums $ Y _ {n} $ asymptotically constant?
This question was answered in 1928 by A.N. Kolmogorov. Assume, without loss of generality, that the medians of the variables $ X _ {n,k } $ are zero. Let $ \widetilde{X} _ {n,k } = X _ {n,k } $ if $ | X _ {n,k } | \leq 1 $ and $ \widetilde{X} _ {n,k } = 0 $ if $ | X _ {n,k } | > 1 $. The simultaneous fulfillment of the two conditions
$$ \sum _ { k = 1 } ^ { {k _ n } } {\mathsf P} \{ | X _ {n,k } | > 1 \} \rightarrow 0 \ \textrm{ if } n \rightarrow \infty $$
$$ \sum _ { k = 1 } ^ { {k _ n } } {\mathsf E} \widetilde{X} {} _ {n,k } ^ {2} \rightarrow 0 \ \textrm{ if } n \rightarrow \infty , $$
is necessary and sufficient for the sums $ Y _ {n} $ to be asymptotically constant. The sum $ \sum _ {k=1} ^ {k _ {n} } {\mathsf E} \widetilde{X} _ {n,k } $ may be taken as $ C _ {n} $. That these conditions are sufficient is readily demonstrated by Chebyshev's method. If the mathematical expectations $ {\mathsf E} X _ {n,k} $ exist, it is easy to find supplementary conditions under which it is permissible to choose $ C _ {n} = {\mathsf E} Y _ {n} $, which yields necessary and sufficient conditions for the law of large numbers in the classical formulation (3). For a sequence of independent identically-distributed variables $ \{ X _ {n} \} $ these conditions are reduced — in accordance with Khinchin's theorem quoted above — to the existence of the mathematical expectation. At the same time, the condition
$$ \tag{5 } n {\mathsf P} \{ | X _ {1} | > n \} \rightarrow 0 \ \textrm{ if } n \rightarrow \infty $$
is necessary and sufficient for the arithmetical averages $ Y _ {n} $ to be asymptotically constant. Examples of cases in which condition (5) is not met are easily found. Thus, the condition is not met if all $ X _ {n} $ have a Cauchy distribution with density $ 1/ \pi ( 1 + x ^ {2} ) $( to which corresponds the characteristic function $ \mathop{\rm exp} \{ - | t | \} $). Here, the arithmetical averages $ Y _ {n} = ( X _ {1} + \dots + X _ {n} ) / n $ have the characteristic function $ \mathop{\rm exp} \{ - n | t/n | \} = \mathop{\rm exp} \{ - | t | \} $ and, as a result, have the same distribution for any $ n $ as do the individual terms.
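The Cauchy counterexample is easy to observe in simulation: the arithmetical averages are again standard Cauchy and never concentrate. A minimal sketch (the sample sizes and the number of repetitions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# For standard Cauchy summands the average Y_n = (X_1 + ... + X_n)/n is again
# standard Cauchy, so its spread does not shrink as n grows.
for n in (10, 1_000, 100_000):
    means = rng.standard_cauchy((100, n)).mean(axis=1)   # 100 independent averages of size n
    print(f"n = {n:>6}:  median |Y_n| = {np.median(np.abs(means)):.2f}")
```

The median of $ | Y _ {n} | $ stays near 1 for every $ n $, exactly as the identical distribution of the averages predicts.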
The most important cases to which the law of large numbers does not apply involve the times of return to the starting point in a random walk. Thus, in a symmetric Bernoulli random walk the time $ T _ {n} $ passing prior to the $ n $- th return to the starting point is the sum of the $ n $ independent random variables $ X _ {1} \dots X _ {n} $, where $ X _ {1} $ is the time passed prior to the first return, $ X _ {2} $ is the time between the first and the second return, etc. The distribution of the variable $ 2T _ {n} / \pi n ^ {2} $ converges as $ n \rightarrow \infty $ to a non-degenerate limit distribution with density
$$ p ( x) = \ \frac{1}{\sqrt {2 \pi } } e ^ {- 1/( 2x) } x ^ {- 3/2 } \ \textrm{ for } x > 0 $$
and zero for $ x \leq 0 $. Accordingly, in this case the distribution of the arithmetical average of $ X _ {i} $, i.e. $ {T _ {n} } /n $, is located, roughly speaking, on a segment of length of order $ n $( whereas, in cases where the law of large numbers is applicable, it is located on segments of length $ o( 1) $).
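The growth of the return times can also be seen by simulation: in a symmetric $ \pm 1 $ walk the average gap between returns to the origin grows with the length of the walk instead of stabilising. A minimal sketch, with arbitrary step counts:

```python
import numpy as np

rng = np.random.default_rng(3)

# Count returns to 0 of a symmetric random walk and look at the average gap
# between returns; it grows roughly like sqrt(m) rather than settling down.
for m in (10_000, 100_000, 1_000_000):
    walk = np.cumsum(rng.choice((-1, 1), size=m))
    returns = np.count_nonzero(walk == 0)
    print(f"steps = {m:>9}:  returns = {returns:>4},  average gap ~ {m / max(returns, 1):,.0f}")
```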
The applicability of the law of large numbers to sums of dependent variables (both in its classical and in its more general formulations) is connected, first and foremost, with the asymptotic independence of the random variables $ X _ {i} $ and $ X _ {j} $ as the difference between their subscripts $ | i - j | $ increases. The corresponding theorems were first proved by Markov in 1907 for variables connected in a Markov chain. In fact, let $ X _ {1} \dots X _ {n} \dots $ be a homogeneous Markov chain with finite number of states, and let all one-step transition probabilities be positive. Here, the asymptotic independence of $ X _ {j} $ and $ X _ {i} $ as $ | i- j | \rightarrow \infty $ follows from the fact that the conditional distribution of $ X _ {j} $ under a fixed value of $ X _ {i} $ tends, as $ n \rightarrow \infty $, to a limit which does not depend on the chosen value of $ X _ {i} $( Markov's ergodic theorem). The law of large numbers is deduced from this theorem: It is first established that, as $ n \rightarrow \infty $,
$$ {\mathsf E} \left ( \frac{X _ {1} + \dots + X _ {n} }{n} - a \right ) ^ {2} \rightarrow 0, $$
where $ a = \lim\limits _ {k \rightarrow \infty } {\mathsf E} X _ {k} $; hence it follows that as $ n \rightarrow \infty $,
$$ {\mathsf P} \left \{ \left | \frac{X _ {1} + \dots + X _ {n} }{n} - a \ \right | > \epsilon \right \} \rightarrow 0. $$
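For a concrete illustration of the Markov-chain case, the sketch below simulates a two-state chain with all one-step transition probabilities positive (the transition matrix and the state values are arbitrary choices) and compares the time average with the stationary mean $ a $.

```python
import numpy as np

rng = np.random.default_rng(4)

P = np.array([[0.9, 0.1],      # all one-step transition probabilities are positive
              [0.4, 0.6]])
values = np.array([0.0, 1.0])  # X_k is the value attached to the current state

# The stationary distribution solves pi P = pi, giving pi = (0.8, 0.2) and a = 0.2.
n, state, total = 100_000, 0, 0.0
for _ in range(n):
    total += values[state]
    state = rng.choice(2, p=P[state])
print(f"time average = {total / n:.4f}   (stationary mean a = 0.2)")
```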
The following, more general case, is governed by Bernstein's conditions: If $ {\mathsf D} X _ {j} < L $, $ | R( X _ {i} , X _ {j} ) | \leq \phi ( | i - j |) $, where $ L $ is a constant, $ R $ is the correlation coefficient and $ \phi ( n) $ is a function which tends to zero as $ n \rightarrow \infty $, then the law of large numbers (3) is applicable to the variables $ \{ X _ {n} \} $. For sequences $ \{ X _ {n} \} $ which are stationary in the wide sense, the correlation condition may be weakened somewhat, by replacing it by
$$ \lim\limits _ {n \rightarrow \infty } { \frac{1}{n} } \sum _ {j = 1 } ^ { n } R _ {j} = 0, $$
where $ R _ {j} = R( X _ {i} , X _ {i+} j ) $.
The above-mentioned results may be extended in various ways. First, only convergence in probability was considered so far. Other types of convergence may also be considered, including convergence with probability one, mean-square convergence, etc. (in fact, many of the above conditions ensure mean-square convergence, which implies convergence in probability). The case of convergence with probability one is important, and for this reason has been especially named the strong law of large numbers.
Furthermore, many of these theorems can be applied, with suitable modifications, to random vectors having values in Euclidean spaces of any dimension, in a Hilbert space or in certain Banach spaces. For instance, if $ \{ X _ {n} \} $ is a sequence of independent identically-distributed random vectors with values in a separable Banach space, and if $ {\mathsf E} \| X _ {n} \| $( where $ \| x \| $ is the norm of $ x $) is finite, one has
$$ {\mathsf P} \left \{ \left \| \frac{X _ {1} + \dots + X _ {n} }{n} - {\mathsf E} X _ {1} \right \| > \epsilon \right \} \rightarrow 0 $$
for any $ \epsilon > 0 $ if $ n \rightarrow \infty $.
The law of large numbers, when considered in its most general form, is closely related to ergodic theorems (cf. Ergodic theorem). Clearly, many theorems are also applicable to the case of the average $ ( 1 / T) \int _ {0} ^ {T} X( t) dt $, where $ X( t) $ is a random process depending on a continuous parameter (see, for example, [L]).
Finally, instead of the sums of random variables one may consider other symmetric functions of them. This was in fact done by A.Ya. Khinchin (1951–1955) to prove certain conclusions in statistical mechanics [K]. The result obtained by him may be illustrated by the following example. Let $ X _ {n,1} \dots X _ {n,n} $ be the coordinates of a point uniformly distributed on the surface of the sphere
$$ X _ {n,1 } ^ {2} + \dots + X _ {n,n } ^ {2} = n \mu ,\ \ \mu > 0. $$
The law of large numbers then applies to a wide class of symmetric functions $ f( X _ {n,1} \dots X _ {n,n} ) $ in the sense that as $ n \rightarrow \infty $, their values are asymptotically constant (this is similar to the observation made in 1925 by P. Lévy to the effect that sufficiently regular functions of a very large number of variables are almost constant in a large part of their domain of definition).
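Khinchin's observation can also be illustrated numerically. A point uniformly distributed on the sphere $ X _ {n,1 } ^ {2} + \dots + X _ {n,n } ^ {2} = n \mu $ can be obtained by normalising a standard Gaussian vector; the sketch below, with the arbitrary choices $ \mu = 1 $ and the symmetric function $ f = n ^ {-1} \sum | x _ {i} | $, shows that the value of $ f $ is nearly the same for every draw once $ n $ is large.

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 1.0

for n in (10, 1_000, 100_000):
    draws = []
    for _ in range(5):                       # five independent points on the sphere
        g = rng.standard_normal(n)
        x = g * np.sqrt(n * mu) / np.linalg.norm(g)
        draws.append(np.mean(np.abs(x)))     # a symmetric function of the coordinates
    print(f"n = {n:>6}:  f = {[round(v, 3) for v in draws]}")
```

For large $ n $ the values cluster around $ \sqrt {2 \mu / \pi } \approx 0.798 $, the mean of $ | \xi | $ for a normal variable with variance $ \mu $.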
Extensive statistical data, illustrating the law of large numbers, can be found in older textbooks; see, for example, [M], [U].
[B] J. Bernoulli, "Ars conjectandi" , Werke , 3 , Birkhäuser (1975) pp. 107–286 (Original: Basle, 1713) MR2349550 MR2393219 MR0935946 MR0850992 MR0827905 Zbl 0957.01032 Zbl 0694.01020 Zbl 30.0210.01
[P] S.D. Poisson, "Récherches sur la probabilité des jugements en matière criminelle et en matière civile" , Paris (1837)
[C] P.L. Chebyshev, "Oeuvres de P.L. Tchebycheff" , 2 , Chelsea (1961) (Translated from Russian)
[M] A.A. Markov, "Wahrscheinlichkeitsrechnung" , Teubner (1912) (Translated from Russian) Zbl 39.0292.02
[Be] S.N. Bernshtein, "Probability theory" , Moscow-Leningrad (1946) (In Russian) MR1868030
[GK] B.V. Gnedenko, A.N. Kolmogorov, "Limit distributions for sums of independent random variables" , Addison-Wesley (1954) (Translated from Russian) MR0062975 Zbl 0056.36001
[D] J.L. Doob, "Stochastic processes" , Chapman & Hall (1953) MR1570654 MR0058896 Zbl 0053.26802
[G] U. Grenander, "Probabilities on algebraic structures" , Wiley (1963) MR0206994 Zbl 0131.34804
[K] A.Ya. Khinchin, "Symmetric functions on multi-dimensional surfaces" , To the memory of A.A. Adronov , Moscow (1955) pp. 541–574 (In Russian) MR74335
[L] M. Loève, "Probability theory" , Princeton Univ. Press (1963) MR0203748 Zbl 0108.14202
[U] J.V. Uspensky, "Introduction to mathematical probability" , McGraw-Hill (1937) MR1524355 Zbl 63.1069.01
[B2] J. Bernoulli, "On the law of large numbers" , Moscow (1986) (In Russian) Zbl 0646.01008
[R] P. Révész, "The laws of large numbers" , Acad. Press (1968) MR0245079 Zbl 0203.50403
[HP] J. Hoffman-Jørgensen, G. Pisier, "The law of large numbers and the central limit theorem in Banach spaces" Ann. Prob. , 4 (1976) pp. 587–599 MR423451
[HH] P. Hall, C.C. Heyde, "Martingale limit theory and its application" , Acad. Press (1980) MR0624435 Zbl 0462.60045
This article was adapted from an original article by Yu.V. Prohorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Prognostic impact of the prognostic nutritional index in cases of resected oral squamous cell carcinoma: a retrospective study
Atsushi Abe (ORCID: orcid.org/0000-0001-8215-2769)1,
Hiroki Hayashi1,
Takanori Ishihama1 &
Hiroshi Furuta1
The systemic inflammatory response and nutritional status of patients with malignant tumors are related to postoperative results. We examined the usefulness of the prognostic nutritional index (PNI) as a prognostic tool in patients with oral squamous cell carcinoma who underwent radical surgery.
From 2008 to 2019, 102 patients (73 males, 29 females; age, 65.6 ± 9.8 years) who visited our hospital and underwent surgical therapy were included in this study. The endpoint was the total survival period, and the evaluation markers included the lymphocyte count and albumin level in peripheral blood obtained 4 weeks preoperatively, age, sex, alcohol consumption, smoking history, site of the tumor, pathological stage, and surgery status. The PNI was calculated using serum albumin levels and the peripheral blood lymphocyte count. The relationship between the PNI and patient characteristics were analyzed using Fisher's exact test. The Kaplan–Meier method was used to evaluate the survival rate. The survival periods were compared using the log-rank method. We evaluated the prognostic factors for overall survival (OS) and disease-free survival (DFS) in a logistic regression model.
The tumor sites included the maxilla (n = 12), buccal mucosa (n = 11), mandible (n = 17), floor of the mouth (n = 9), and tongue (n = 53). The number of patients with stage I, II, III, and IV oral cancers was 28 (27.5%), 34 (33.3%), 26 (25.5%), and 14 (13.7%), respectively. During the observation period, 21 patients died of head and neck cancer. The optimal cut-off PNI value was 42.9, according to the receiver operating characteristic analysis. The proportion of patients with a short OS was lower in those with PNI higher than 42.9, and the 5-year OS in patients with PNI lower and higher than the cut-off value was 62.3% and 86.0%, respectively (P = 0.0105).
The OS of patients with PNI < 42.9 was lower than that of patients with PNI ≥ 42.9. The PNI, a preoperative marker of the systemic inflammatory and nutritional state, can help in estimating the prognosis of oral cancer.
Although the treatment of oral cancer and the post-treatment quality of life have improved, late metastasis and recurrence are possible complications [1, 2]. The prognostic factors for patients with oral cancer include tumor depth, vascular and neural invasion, cervical lymph node metastasis, and extranodal invasion [3,4,5,6,7,8]. However, pathological findings and staging alone cannot completely define prognosis. In cancers involving other organ systems, such as gastrointestinal cancer, host-related factors such as nutritional indicators and systemic inflammatory responses are useful in evaluating survival and recurrence, and the prognosis has been reported to be related to these factors [9,10,11,12,13,14,15,16,17,18,19]. The systemic inflammatory response is not only an indicator of the nutritional status [20, 21] but is also useful as a prognostic tool based on mechanisms different from those underlying tumor markers [22]. Previous reports have examined the systemic inflammatory response and the effect of the nutritional status in patients with oral cancer receiving radiation or chemotherapy; however, there are few reports on patients who have undergone surgical therapy [10, 23].
The prognostic nutritional index (PNI) is calculated from the serum albumin level and the lymphocyte count. Albumin has been reported as a biomarker of nutritional status, and its level is related to comorbidities and the prognosis of certain cancers [24, 25]. The PNI assesses susceptibility to infection by combining an evaluation of malnutrition associated with insufficient protein intake (visceral protein status) with an evaluation of immunological function. Lymphocytes take part in cell-mediated immunity and inhibit the proliferation and invasion of cancer cells [26]. Therefore, the PNI reflects the nutritional status and immunological state of the patient.
The clinicopathologic utility of the PNI has been studied for several malignant tumors, and it has been reported as an independent prognostic tool to assess patient overall survival (OS) [21, 27, 28]. However, the prognostic value of the PNI and its clinicopathologic correlation in patients with oral cancer remains unknown. Therefore, we aimed to examine whether the preoperative PNI could affect the 5-year survival rates in patients who have undergone surgical treatment for oral cancer.
Patients and evaluating parameters
We performed a retrospective cross-sectional analysis of patients with primary oral cancer. Of the 117 patients who visited Nagoya Ekisaikai Hospital and underwent radical surgical therapy for oral squamous cell carcinoma between January 2008 and June 2019, 102 were included. Fifteen patients were excluded because of recurrence, metabolic diseases (such as diabetes mellitus), missing data, or because treatment could not be continued owing to patient preference or overall condition. Data of 102 patients (73 men, 29 women; mean age, 65.6 ± 9.8 years; performance status [PS] of 1 or 2) were analyzed in this study. The clinical and histopathological features and the treatment course of the patients were retrospectively assessed using their medical records. The treatment protocol was as follows: (1) the extent of resection was determined using clinical examination, imaging, and evaluation of cervical lymph node metastasis, degree of differentiation, and degree of invasion; (2) safety margins for resection were kept at 1 cm; (3) prophylactic neck dissection was not performed for patients without lymph node involvement, although neck dissection was performed for cT3/T4N0 cases at elevated risk of occult metastasis when the resection was extensive and reconstructive surgery was required; (4) neoadjuvant chemotherapy or radiotherapy was not administered; and (5) postoperative chemoradiotherapy was administered when more than two histopathologically confirmed lymph nodes with extracapsular spread were present or when the safety margin of the resection stump was inadequate. The average observation period was 48.1 months (range, 6–252.1 months). The examined factors were the survival periods and the long-term prognosis based on PNI grouping. We assessed clinical background factors (preoperative peripheral blood lymphocyte and monocyte counts in relation to age, sex, alcohol consumption history, smoking history, site of the primary tumor, TNM classification, and tumor stage) to examine their association with the overall survival (OS) and disease-free survival (DFS). Lymphocyte and neutrophil counts were measured from peripheral blood samples obtained within 4 weeks before radical surgery. Oral cancer evaluation was based on findings from visual examination, palpation, computed tomography, and magnetic resonance imaging, together with an assessment of the site of occurrence and progression. Tumor stage was defined according to the Union for International Cancer Control classification [29]. Overall health was evaluated using the body mass index (BMI), albumin levels, and a preoperative examination. The PNI, a systemic inflammation biomarker, was calculated using the serum albumin level and peripheral blood lymphocyte count. The OS was defined as the period between the diagnosis of oral squamous cell carcinoma and death from any cause. The DFS was defined as the time from the first operation to the first documented recurrence, metastasis, or death. Patients who were alive at the end of the observation period, or whose survival status was unknown, were censored. The formula used for PNI calculation is as follows [30]:
$$\text{PNI} = \left[ 10 \times \text{serum albumin level (g/dL)} \right] + \left[ 0.005 \times \text{total peripheral lymphocyte count (per mm}^{3}\text{)} \right]$$
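As a simple check of the formula, the helper below (a hypothetical function, not part of the study's analysis code) computes the PNI from the two inputs; with illustrative values of 3.8 g/dL albumin and 1,500 lymphocytes/mm³ it returns 45.5, which would fall in the high-PNI group under the 42.9 cut-off reported later.

```python
def prognostic_nutritional_index(albumin_g_dl: float, lymphocytes_per_mm3: float) -> float:
    """PNI = 10 x serum albumin (g/dL) + 0.005 x total peripheral lymphocyte count (per mm^3)."""
    return 10.0 * albumin_g_dl + 0.005 * lymphocytes_per_mm3

print(prognostic_nutritional_index(3.8, 1500))   # -> 45.5
```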
The study was approved by the Ethical Review Board of Nagoya Ekisaikai Hospital (approval no. 2019–046), and written informed consent was obtained from all participants.
We conducted a univariate analysis to examine the association of the PNI with the prognosis, and then performed a multivariate analysis using selected prognosis-related factors. The multivariate analysis was performed by calculating the hazard ratio (HR) and 95% confidence interval (CI) using the Cox proportional hazards model. Patient characteristics and their relationships with the PNI score were analyzed using Fisher's exact test. Associations between the PNI and multiple clinicopathological parameters were examined using Fisher's exact test or the Mann–Whitney U test, as appropriate.
The PNI cut-off level was determined using receiver operating characteristic (ROC) curve and area under the curve (AUC) analyses. Kaplan–Meier curves of the estimated OS and DFS were generated, and the survival rates were compared between groups using the log-rank test. Prognostic factors for the OS and DFS were evaluated using a Cox proportional hazards regression model; variables were removed from the multivariate model only when their P values in the univariate analysis were 0.1 or higher. All tests were two-sided, and P values of 0.05 or less were considered statistically significant. All statistical analyses were performed using EZR (Jichi Medical University, Saitama, Japan), a graphical user interface for R version 2.8.1 (The R Foundation for Statistical Computing, Vienna, Austria).
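The study's analyses were run in EZR/R; as a rough sketch of how a PNI cut-off could be derived from an ROC curve, the Python snippet below uses synthetic stand-in data (the variable names, simulated values, and the use of scikit-learn are all assumptions, not the authors' code) and picks the threshold that maximises Youden's index.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(6)

event = rng.integers(0, 2, size=102)                  # 1 = death during follow-up (synthetic)
pni = rng.normal(44.0, 2.1, size=102) - 1.5 * event   # synthetic PNI, slightly lower for events

# Lower PNI should indicate higher risk, so the negated PNI is used as the score.
fpr, tpr, thresholds = roc_curve(event, -pni)
best = np.argmax(tpr - fpr)                           # Youden's J = sensitivity + specificity - 1
print("AUC:", round(roc_auc_score(event, -pni), 3))
print("optimal PNI cut-off:", round(-thresholds[best], 1))
```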
Clinicopathological characteristics of the patients
Table 1 shows the characteristics of the patients included in this study. The average age of the patients was 65.6 ± 9.8 years, and the number of men and women was 73 (71.6%) and 29 (28.4%), respectively. Sixty patients (58.8%) had a history of smoking. The BMI ranged from 14.9 to 33.5 kg/m2 (mean ± standard deviation, 22.8 ± 3.9 kg/m2). The PNI ranged from 38.8 to 49.4 (mean, 44.0 ± 2.14). We used an ROC curve analysis to evaluate whether the PNI could predict the DFS or OS. The ROC analyses showed that the optimal PNI cut-off was 42.9 (OS: sensitivity, 0.692; specificity, 0.583; AUC = 0.62; DFS: sensitivity, 0.758; specificity, 0.575; AUC = 0.66) (Figs. 1, 2). The PNI cut-off value was therefore set at 42.9, and the patients were divided into low PNI (< 42.9; OS: n = 37 [36.3%]; DFS: n = 35 [34.3%]) and high PNI (≥ 42.9; OS: n = 65 [63.7%]; DFS: n = 67 [65.7%]) groups.
Table 1 Characteristics of the patients
ROC curve for the PNI. The continuous variables PNI and OS were used as the test and state variables, respectively. The PNI cut-off value was 42.9 with the area under the curve, sensitivity, and specificity being 0.634, 0.765, and 0.524, respectively. ROC receiver operating characteristic, PNI prognostic nutritional index, OS overall survival
ROC curve for the PNI. The continuous variables PNI and DFS were used as the test and state variables, respectively. The PNI cut-off value was 42.9, with the area under the curve, sensitivity, and specificity being 0.663, 0.758, and 0.575, respectively. ROC receiver operating characteristic; PNI prognostic nutritional index, DFS disease-free survival
The OS and DFS, according to the PNI
The relationships between specific clinicopathological factors and the OS and DFS are summarized in Tables 2 and 3. The Kaplan–Meier survival curves outlining the relationship between the PNI and the OS and DFS rates (P < 0.001) are shown in Figs. 3 and 4. The low-PNI group showed significantly lower rates of OS and DFS than the high-PNI group. Univariate analysis revealed that the stage (P = 0.016), vascular invasion (P = 0.014), pre-treatment serum CRP level (P = 0.002), and PNI (P = 0.011) were associated with the rate of OS (Table 2). Univariate analysis also revealed associations between the rate of DFS and the stage (P = 0.042), albumin level (P = 0.045), pre-treatment serum CRP level (P = 0.007), lymphovascular invasion (P = 0.001), postoperative treatment (P = 0.0002), and PNI (P = 0.006) (Table 3). No multicollinearity was detected. We included the factors from the univariate analysis along with important prognostic factors (histopathological differentiation, surgical margin, vascular and perineural invasion, and postoperative treatment) as covariates in the multivariate analysis. The multivariate analysis showed that only the CRP level (HR 2.99; 95% CI 11.20–7.46; P = 0.019), perineural invasion (HR 3.73; 95% CI 1.06–13.09; P = 0.04), and PNI (HR 0.32; 95% CI 0.13–0.79; P = 0.013) were associated with the rate of OS (Table 4). The multivariate analysis also showed that the surgical margin (HR 4.10; 95% CI 1.13–14.94; P = 0.032), postoperative treatment (HR 3.71; 95% CI 1.65–8.33; P = 0.0015), and the PNI (HR 0.27; 95% CI 0.13–0.54; P = 0.0024) were independent predictors of the DFS (Table 5).
Table 2 Univariate analysis of the associations between the clinicopathological characteristics of the patients and their prognostic variables and overall survival
Table 3 Univariate analysis of the associations between the clinicopathological characteristics of the patients and their prognostic variables and DFS
Kaplan–Meier survival curves for the PNI and overall survival of oral squamous cell carcinoma patients. Kaplan–Meier curves, according to the PNI score. The OS was significantly worse in patients with a lower PNI (< 42.9) than in those with a higher PNI (≥ 42.9) (P = 0.0007886). PNI prognostic nutritional index, OS overall survival
Kaplan–Meier survival curves for the PNI and the DFS of oral squamous cell carcinoma patients. Kaplan–Meier curves, according to the PNI score. The DFS was significantly worse in patients with a lower PNI (< 42.9) than in those with a higher PNI (≥ 42.9) (P = 0.000005792). PNI prognostic nutritional index, DFS disease-free survival
Table 4 Multivariate analyses for the associations with the OS
Table 5 Multivariate analyses for the associations with the DFS
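For readers who want to reproduce this kind of survival analysis in Python rather than EZR, a minimal sketch with the lifelines package is given below. The data frame, column names, and covariates are invented for illustration only; they are not the study data, and in practice one row per patient with the follow-up time, an event flag, the PNI group, and the model covariates would be loaded from the institutional records.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical toy data: follow-up in months, death flag, PNI group, one extra covariate.
df = pd.DataFrame({
    "months":   [12, 60, 34, 8, 55, 20, 44, 15],
    "death":    [1,  0,  1,  1, 0,  0,  0,  1],
    "low_pni":  [1,  0,  0,  1, 0,  1,  0,  1],
    "crp_high": [1,  0,  1,  0, 1,  0,  0,  1],
})

low, high = df[df.low_pni == 1], df[df.low_pni == 0]
km = KaplanMeierFitter().fit(low["months"], low["death"], label="PNI < 42.9")
print("median survival (low PNI):", km.median_survival_time_)

test = logrank_test(low["months"], high["months"], low["death"], high["death"])
print("log-rank p =", test.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
print(cph.summary[["exp(coef)", "p"]])          # hazard ratios and p-values per covariate
```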
In some studies, PNI has been confirmed as a new prognostic tool for cancer, and a low PNI has been shown to be significantly associated with lower survival for pancreatic cancer, hepatocellular carcinoma, esophageal cancer, gastric cancer, colorectal cancer, renal cell carcinoma, and ovarian cancer [31,32,33,34,35,36,37]. In other reports, the cut-off value of PNI used to predict prognosis was 42–47.8. The results of our retrospective analysis showed that low preoperative PNI and high CRP levels were prognostic factors for poorer OS and DFS in patients with oral cancer. In this study, we divided the patients into two groups based on a PNI cut-off value of 42.9 derived from the ROC curve, and we compared the clinical background factors in the two groups. The cut-off value of 42.9 that we used in this study is within the range used in previous studies; therefore, it can be argued that PNI is a practical tool to assess postoperative prognosis [38,39,40]. In the multivariate analysis, a low PNI, a high CRP level, and perineural invasion were significantly associated with poorer OS. Significant differences were also observed in the HR (Hazard Ratio) with respect to the surgical margin, postoperative treatment, and PNI in the multivariate analysis for DFS. Additionally, the two groups showed differences in the DFS and the 5-year OS. These results suggest that the low PNI group has a poorer preoperative nutritional status and a higher degree of inflammatory response than the high PNI group, resulting in poor prognosis. The PNI, which is estimated using the serum albumin level and the lymphocyte count, reflects the nutritional and immunological state of the patient. Previous studies have reported the PNI as a prognostic factor affecting OS for different malignancies [41,42,43,44,45].
Microenvironmental inflammation affects the growth of tumor cells and promotes angiogenesis and metastasis [46, 47]. The immune system recognizes cancer cells and, in response, secretes inflammatory cytokines, leading to hypercytokinemia [46,47,48]. Interleukin-8 (IL-8) and vascular endothelial growth factor (VEGF) are two cancer-associated cytokines. These cytokines cause degradation of the extracellular matrix and neovascularization. Consequently, growth, invasion, and metastasis of tumors are accelerated. However, these cytokines are difficult to measure routinely [49, 50]. Blood biochemical changes caused by these cytokines can be assessed by measuring inflammatory reaction markers based on the systemic inflammatory reaction [46,47,48,49,50,51]. To date, numerous traditional systemic inflammation markers have been reported, including the Glasgow Prognostic Score [52, 53] based on plasma components, the neutrophil-to-lymphocyte ratio [54, 55] derived from the number of blood cells, the lymphocyte-to-monocyte ratio [56, 57], the CRP-to-albumin ratio [58], and the PNI [27, 59] based on serum albumin levels and lymphocyte counts. Most of these markers are based on blood cell counts, serum protein level measurements, and the ratios derived from these parameters. Albumin is a significant component of the plasma protein content and reflects the nutritional status, whereas lymphocytes reflect the immunological state; therefore, an index combining the serum albumin level and the lymphocyte count is associated with the survival of patients with cancer [60,61,62].
Low PNI levels indicate a poor prognosis for oral cancer because inflammatory cytokines such as IL-6 and IL-8 increase the number of neutrophils, decrease the number of lymphocytes, and enhance proteolysis [48,49,50,51]. Thus, a low PNI can be considered an indicator of high inflammatory cytokine levels. At the same time, the release of cytokines by cancer cells results in a rise in the serum CRP level. Elevated CRP levels have been reported to be associated with lower rates of DFS and OS in operable oral cancers [62]. Similarly, some reports have investigated the impact of serum albumin and CRP on the outcome of combination chemoradiotherapy in unresectable head and neck cancers [63]. The association between the OS and CRP was also reflected in this study.
The mechanisms underlying the associations between the systemic inflammatory response and survival in patients with oral squamous cell carcinoma are not evident. However, albumin levels and lymphocyte counts, the components used for PNI calculation, allow an estimation of cancer cachexia associated with growth factor release, impaired cell-mediated immune responses, and angiogenesis [64,65,66,67,68]. These mechanisms are complex and likely involve a combination of the factors mentioned above. Therefore, further studies involving metrics such as the PNI, along with an appropriate grading system for it, are necessary to assess its prognostic value in oral cancer. We incorporated the PNI in a prognostic model, and prospective analysis of this model in a large group of patients will be essential to assess pretreatment risk. In the following paragraphs, we provide some hypotheses to explain why a low PNI level is associated with a poor prognosis for oral cancer.
First, the levels of serum albumin, which is a chief component of plasma proteins, can reflect the nutritional status, while lymphocytes, which can eliminate cancer cells and are important components of the immune system, can reflect the immunological state. Thus, the PNI reflects the nutritional and immunological states of the host and can indicate the prognosis in patients with cancer. Consistent with this, the results of some studies have shown that the PNI, after an adjustment for other risk factors, was an independent prognostic factor for the OS.
Second, a low PNI has been reported to be associated with unfavorable tumor characteristics (increased tumor depth, lymph node metastasis, advanced TNM stage) and extensive hematogenous and lymphatic spread. In the multivariate analysis, a significant association was observed between perineural invasion and the OS. Cytokines may promote perineural invasion; however, the relationship between such invasion and the PNI is not clear at present. Perineural invasion and its relation to the PNI are future research themes in oral squamous cell carcinoma.
Multivariate analysis also showed a significant association of the surgical margin, postoperative treatment, and PNI with the DFS. Therefore, PNI has a role in predicting DFS. Moreover, a low PNI is associated with malnutrition and immunosuppression and may inhibit the success of chemoradiotherapy. In this context, PNI can be thought of as having a prognostic value in predicting DFS.
These results suggest that, in evaluating the systemic inflammatory response in oral cancer, serum proteins may reflect the actual situation better than blood cell counts. This suggestion is consistent with a previously published report [27].
Using clinical background factors including the PNI, we performed univariate and multivariate analyses that included the factors most related to prognosis and found that a low PNI value was related to prognosis. These results suggest that the PNI is independent of clinical background and surgery-related factors and that the relationship between the PNI and the prognosis may involve a mechanism different from that associated with tumor markers. Thus, the PNI can predict the prognosis of oral cancer before surgery.
A limitation of this study is the retrospective analysis of data from a single facility. Additionally, the AUC of the ROC analysis used to determine the cut-off value was relatively low, the results may have been affected by the treatment protocol, and the sample size (102 cases) was likely insufficient. Furthermore, since the average observation period was as short as 48.1 months, a larger number of cases and longer observation periods are essential. In cases involving metastasis or inflammation, inflammatory cytokines increase the production of acute-phase proteins such as CRP in the liver and reduce the production of albumin. Therefore, when using biomarkers to examine changes in nutritional status in a condition that includes an inflammatory response, it should be assumed that the inflammatory response (CRP and white blood cell count) is normal and does not vary [57]. Whether a low PNI is the cause or the effect of tumor progression remains unknown, and additional research is required to elucidate this problem.
The assessments of the PNI are cheaper than those involving tumor markers, and the PNI can be easily calculated using blood samples. Therefore, the PNI can be a prognostic factor for OS and may be a useful long-term marker for evaluating recurrence and metastasis before postoperative chemoradiotherapy and during follow-up. Furthermore, poor nutritional status leads to delay and abandonment of postoperative adjuvant therapy and immunological treatment. Thus, these findings may partially explain the relationship between low OS and low PNI in patients with oral cancer.
The PNI, a cheaper alternative to tumor markers that can be easily measured using common preoperative blood sampling techniques, can be a prognostic tool to assess the OS. This may partially explain its relationship with the survival period in patients with oral cancer. Moreover, it can be a useful long-term prognostic marker for assessing the recurrence, metastasis, and follow-up assessments. Furthermore, PNI assessments may facilitate the choice between postoperative chemoradiotherapy and adjuvant therapy.
The raw data are confidential and cannot readily be shared. Researchers need to obtain permission from the Institutional Review Board and apply for access to the data from the Ethics Committee of Nagoya Ekisaikai Hospital.
IL: Interleukin
OS: Overall survival
PNI: Prognostic nutritional index
ROC: Receiver operating characteristic
Ferlay J, Colombet M, Soerjomataram I, Mathers C, Parkin DM, Piñeros M, et al. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int J Cancer. 2019;144:1941–53.
Pfister DG, Ang KK, Brizel DM, Burtness BA, Busse PM, Caudell JJ, et al. Head and neck cancers, version 2.2013. Featured updates to the NCCN guidelines. J Natl Compr Canc Netw. 2013;11:917–23.
McMahon J, O'Brien CJ, Pathak I, Hamill R, McNeill E, Hammersley N, et al. Influence of condition of surgical margins on local recurrence and disease- specific survival in oral and oropharyngeal cancer. Br J Oral Maxillofac Surg. 2003;41:224–31.
Capote-Moreno A, Naval L, Muñoz-Guerra MF, Sastre J, Rodríguez-Campo FJ. Prognostic factors influencing contralateral neck lymph node metastases in oral and oropharyngeal carcinoma. J Oral Maxillofac Surg. 2010;68:268–75.
Tankéré F, Camproux A, Barry B, Guedon C, Depondt J, Gehanno P. Prognostic value of lymph node involvement in oral cancers: a study of 137 cases. Laryngoscope. 2000;110:2061–5.
Preda L, Chiesa F, Calabrese L, Latronico A, Bruschini R, Leon ME, et al. Relationship between histologic thickness of tongue carcinoma and thickness estimated from preoperative MRI. Eur Radiol. 2006;16:2242–8.
Yuen AP, Ng RW, Lam PK, Ho A. Preoperative measurement of tumor thickness of oral tongue carcinoma with intraoral ultrasonography. Head Neck. 2008;30:230–4.
McMillan DC. Systemic inflammation, nutritional status and survival in patients with cancer. Curr Opin Clin Nutr Metab Care. 2009;12:223–6.
Roxburgh CS, McMillan DC. Role of systemic inflammatory response in predicting survival in patients with primary operable cancer. Future Oncol. 2010;6:149–63.
We would like to thank Editage Science Communications for English language editing and publication support.
The present research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Department of Oral and Maxillofacial Surgery, Nagoya Ekisaikai Hospital, 4-66 Syounen-cho Nakagawa-ku, Nagoya, 454-8502, Japan
Atsushi Abe, Hiroki Hayashi, Takanori Ishihama & Hiroshi Furuta
AA conceived the study, carried out the design and coordination, wrote the manuscript, and gave the final approval of the version to be submitted. HF critically revised the manuscript for important intellectual content. HH and TI collected the clinical data and drafted the article. All authors read and approved the final manuscript.
Correspondence to Atsushi Abe.
All procedures were performed in accordance with the ethical standards of the institutional and/or national research committee and in line with the 1964 Declaration of Helsinki. The present retrospective cohort study was approved by the Nagoya Ekisaikai Hospital Institutional Review Board (approval number 2019–046). The ethics committee approved the procedure of this study and gave us administrative permissions to access the data used in this study. The study was conducted in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement guidelines for reporting observational studies.
Consent to publication
Abe, A., Hayashi, H., Ishihama, T. et al. Prognostic impact of the prognostic nutritional index in cases of resected oral squamous cell carcinoma: a retrospective study. BMC Oral Health 21, 40 (2021). https://doi.org/10.1186/s12903-021-01394-6
|
CommonCrawl
|
November 2015, 9(4): 1093-1137. doi: 10.3934/ipi.2015.9.1093
Locally sparse reconstruction using the $l^{1,\infty}$-norm
Pia Heins 1, , Michael Moeller 2, and Martin Burger 3
Westfälische Wilhelms-Universität Münster, Institute for Computational and Applied Mathematics, Einsteinstrasse 62, D 48149 Münster, Germany
Technische Universität München, Department of Computer Science, Informatik 9, Boltzmannstrasse 3, D 85748 Garching, Germany
Westfälische Wilhelms-Universität Münster, Institute for Computational and Applied Mathematics, Einsteinstr. 62, D 48149 Münster, Germany
Received June 2014 Revised December 2014 Published October 2015
This paper discusses the incorporation of local sparsity information, e.g. in each pixel of an image, via minimization of the $\ell^{1,\infty}$-norm. We discuss the basic properties of this norm when used as a regularization functional and associated optimization problems, for which we derive equivalent reformulations either more amenable to theory or to numerical computation. Further focus of the analysis is put on the locally 1-sparse case, which is well motivated by some biomedical imaging applications.
Our computational approaches are based on alternating direction methods of multipliers (ADMM) and appropriate splittings with augmented Lagrangians. Those are tested for a model scenario related to dynamic positron emission tomography (PET), which is a functional imaging technique in nuclear medicine.
The results of this paper provide insight into the potential impact of regularization with the $\ell^{1,\infty}$-norm for local sparsity in appropriate settings. However, it also indicates several shortcomings, possibly related to the non-tightness of the functional as a relaxation of the $\ell^{0,\infty}$-norm.
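For readers who want to experiment with this mixed norm numerically, the short Python sketch below computes an $\ell^{1,\infty}$-type norm of a coefficient matrix and contrasts it with the entrywise $\ell^{1,1}$ norm. The indexing convention used here (summing over pixels of the per-pixel maximum over coefficients) and the function names are our own assumptions for illustration and may differ from the convention adopted in the paper.

import numpy as np

def l1_inf_norm(U):
    # Assumed convention: rows index pixels, columns index coefficients;
    # take the maximum absolute coefficient in each pixel, then sum over pixels.
    return float(np.sum(np.max(np.abs(U), axis=1)))

def l1_1_norm(U):
    # Entrywise l^{1,1} norm, for comparison.
    return float(np.sum(np.abs(U)))

# A toy coefficient matrix: 3 pixels, 2 candidate basis functions each.
U = np.array([[0.0, 3.0],
              [1.0, 0.0],
              [0.5, 0.5]])
print(l1_inf_norm(U))  # 3.0 + 1.0 + 0.5 = 4.5
print(l1_1_norm(U))    # 5.0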
Keywords: Local sparsity, $\ell^{1,\infty}$-regularization, mixed norms, inverse problems, compressed sensing, variational methods.
Mathematics Subject Classification: Primary: 65F50, 49N45; Secondary: 68U1.
Citation: Pia Heins, Michael Moeller, Martin Burger. Locally sparse reconstruction using the $l^{1,\infty}$-norm. Inverse Problems & Imaging, 2015, 9 (4) : 1093-1137. doi: 10.3934/ipi.2015.9.1093
Gabriel Peyré, Sébastien Bougleux, Laurent Cohen. Non-local regularization of inverse problems. Inverse Problems & Imaging, 2011, 5 (2) : 511-530. doi: 10.3934/ipi.2011.5.511
Yingying Li, Stanley Osher. Coordinate descent optimization for l1 minimization with application to compressed sensing; a greedy algorithm. Inverse Problems & Imaging, 2009, 3 (3) : 487-503. doi: 10.3934/ipi.2009.3.487
Song Li, Junhong Lin. Compressed sensing with coherent tight frames via $l_q$-minimization for $0 < q \leq 1$. Inverse Problems & Imaging, 2014, 8 (3) : 761-777. doi: 10.3934/ipi.2014.8.761
Steven L. Brunton, Joshua L. Proctor, Jonathan H. Tu, J. Nathan Kutz. Compressed sensing and dynamic mode decomposition. Journal of Computational Dynamics, 2015, 2 (2) : 165-191. doi: 10.3934/jcd.2015002
Deren Han, Zehui Jia, Yongzhong Song, David Z. W. Wang. An efficient projection method for nonlinear inverse problems with sparsity constraints. Inverse Problems & Imaging, 2016, 10 (3) : 689-709. doi: 10.3934/ipi.2016017
Michael Herty, Giuseppe Visconti. Kinetic methods for inverse problems. Kinetic & Related Models, 2019, 12 (5) : 1109-1130. doi: 10.3934/krm.2019042
Ying Zhang, Ling Ma, Zheng-Hai Huang. On phaseless compressed sensing with partially known support. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-8. doi: 10.3934/jimo.2019014
Hui Huang, Eldad Haber, Lior Horesh. Optimal estimation of $\ell_1$-regularization prior from a regularized empirical Bayesian risk standpoint. Inverse Problems & Imaging, 2012, 6 (3) : 447-464. doi: 10.3934/ipi.2012.6.447
Martin Hanke, William Rundell. On rational approximation methods for inverse source problems. Inverse Problems & Imaging, 2011, 5 (1) : 185-202. doi: 10.3934/ipi.2011.5.185
Daijun Jiang, Hui Feng, Jun Zou. Overlapping domain decomposition methods for linear inverse problems. Inverse Problems & Imaging, 2015, 9 (1) : 163-188. doi: 10.3934/ipi.2015.9.163
Jussi Korpela, Matti Lassas, Lauri Oksanen. Discrete regularization and convergence of the inverse problem for 1+1 dimensional wave equation. Inverse Problems & Imaging, 2019, 13 (3) : 575-596. doi: 10.3934/ipi.2019027
Raffaella Servadei, Enrico Valdinoci. Variational methods for non-local operators of elliptic type. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 2105-2137. doi: 10.3934/dcds.2013.33.2105
Qia Li, Na Zhang. Capped $\ell_p$ approximations for the composite $\ell_0$ regularization problem. Inverse Problems & Imaging, 2018, 12 (5) : 1219-1243. doi: 10.3934/ipi.2018051
Huiqing Zhu, Runchang Lin. $L^\infty$ estimation of the LDG method for 1-d singularly perturbed convection-diffusion problems. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1493-1505. doi: 10.3934/dcdsb.2013.18.1493
O. Chadli, Z. Chbani, H. Riahi. Recession methods for equilibrium problems and applications to variational and hemivariational inequalities. Discrete & Continuous Dynamical Systems - A, 1999, 5 (1) : 185-196. doi: 10.3934/dcds.1999.5.185
Frederic Weidling, Thorsten Hohage. Variational source conditions and stability estimates for inverse electromagnetic medium scattering problems. Inverse Problems & Imaging, 2017, 11 (1) : 203-220. doi: 10.3934/ipi.2017010
Stanisław Migórski, Biao Zeng. Convergence of solutions to inverse problems for a class of variational-hemivariational inequalities. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4477-4498. doi: 10.3934/dcdsb.2018172
Yulong Xing, Ching-Shan Chou, Chi-Wang Shu. Energy conserving local discontinuous Galerkin methods for wave propagation problems. Inverse Problems & Imaging, 2013, 7 (3) : 967-986. doi: 10.3934/ipi.2013.7.967
Jesús Ildefonso Díaz. On the free boundary for quenching type parabolic problems via local energy methods. Communications on Pure & Applied Analysis, 2014, 13 (5) : 1799-1814. doi: 10.3934/cpaa.2014.13.1799
Bernadette N. Hahn. Dynamic linear inverse problems with moderate movements of the object: Ill-posedness and regularization. Inverse Problems & Imaging, 2015, 9 (2) : 395-413. doi: 10.3934/ipi.2015.9.395
|
CommonCrawl
|
Heat and Thermal Physics
Problems on Specific Heat with Answers for AP Physics
Category : Heat and Thermal Physics
All topics in thermodynamics, heat, and thermal processes are presented in a problem-solution format.
Some practice problems on specific heat, with solutions, are provided for high school students. In each problem, the definition and formula of specific heat are discussed.
Problem (1): A chunk of steel with a mass of 1.57 kg absorbs a net thermal energy of $2.5\times 10^{5}$ J and its temperature rises by 355°C. What is the specific heat of the steel?
Solution: the specific heat of a substance is defined as the energy $Q$ transferred to a system divided by the mass $m$ of the system and the change in its temperature $\Delta T$; its formula is \[c\equiv \frac {Q}{m\Delta T}\] Since a temperature change of 355°C equals a change of 355 K (no conversion is needed for a temperature difference), we have \begin{align*} c&=\frac {Q}{m\Delta T}\\ \\ &=\frac{2.5\times 10^{5}}{(1.57)(355)}\\ \\&\approx 448.6\quad{\rm J/kg\cdot\,^{\circ}C}\end{align*} where ${\rm J/kg\cdot\,^{\circ}C}$ is the SI unit of specific heat.
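As a quick numerical check of the formula $c=Q/(m\Delta T)$, here is a minimal Python sketch; the function name and layout are ours and are only meant to mirror the calculation in Problem (1).

def specific_heat(Q, m, dT):
    # c = Q / (m * dT); Q in joules, m in kilograms, dT is the temperature change in kelvins (or °C).
    return Q / (m * dT)

# Problem (1): 2.5e5 J absorbed by 1.57 kg of steel with a 355 °C temperature rise.
print(specific_heat(2.5e5, 1.57, 355))  # ≈ 448.6 J/(kg·°C)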
Problem (2): To one kilogram of water, 4190 J of thermal energy is added so that its temperature rises by one kelvin. Find the specific heat capacity of water.
Solution: The specific heat capacity is the amount of heat energy required to raise the temperature of a system with mass of $m$ by $\Delta T$ so applying its formula we get \begin{align*} c&=\frac {Q}{m\Delta T}\\ \\&=\frac{4190\,J}{(1\,kg)(1\,K)}\\ \\&=4190 \quad{\rm J/kg\cdot K}\end{align*}
Problem (3): m kilogram of ice is heated and its temperature increases from -10°C to -5°C. A larger amount of energy is added to the same mass of water and the change in its temperature is from 15°C to 20°C. From these observations, compare the specific heat capacity of ice and water.
Solution: the heat added to the ice and to the water is $Q_i=m_ic_i \Delta T_i$ and $Q_w=m_wc_w \Delta T_w$, respectively. Solving each for the mass and equating the two (the problem states that both samples have the same mass), we get \begin{align*} m_i &= m_w\\\frac{Q_i}{c_i\Delta T_i}&=\frac{Q_w}{c_w \Delta T_w}\\ \\ \Rightarrow \frac{c_i}{c_w} &= \Big(\frac{Q_i}{Q_w}\Big)\Big(\frac{\Delta T_w}{\Delta T_i}\Big)\\ \\ &=\Big(\frac{Q_i}{Q_w}\Big) \Big(\frac{20-15}{-5-(-10)}\Big)\\ \\&=\Big(\frac{Q_i}{Q_w}\Big)\end{align*} Since $Q_i < Q_w$, we conclude that the specific heat capacity of ice is less than that of water.
A look at the specific heat tables can confirm this. $c_i= 2090\,{\rm J/kg.K}$ and $c_w=4190\,{\rm J/kg.K}$.
Problem (4): The temperature of a sample of iron with a mass of 10.0 g changed from 50.4°C to 25.0°C with the release of 47 calories of heat. What is the specific heat of iron?
Solution: the sample cools and releases heat, so $Q=-47$ cal and $\Delta T = 25.0-50.4=-25.4\,^{\circ}{\rm C}$. Using the specific heat equation with the full mass of 10.0 g, we get \begin{align*} c&=\frac{Q}{m\Delta T}\\ \\&=\frac{-47\, {\rm cal}}{10\,{\rm g}\,(25.0-50.4)}\\ \\ &=0.185\quad {\rm cal/g\cdot\,^{\circ}C}\end{align*}
Problem (5): the temperature of a 200-g sample of an unknown substance changed from 40°C to 25°C. In the process, the substance released 569 calories of energy. What is the specific heat capacity of the substance?
Solution: again applying the specific heat definition, we have \begin{align*} c&=\frac{Q}{m\Delta T}\\ \\ &=\frac{-569\, {\rm cal}}{200\,{\rm g}\,(25-40)}\\ \\&=0.19 \quad {\rm cal/g\cdot K}\end{align*} Since thermal energy left the substance, the heat $Q$ carries a negative sign.
Problem (6): A 200-g sample of an unknown object is heated using 100 J such that its temperature rises 2°C. What is the specific heat of this unknown object?
Solution: in this problem, the heat transferred $Q$ and the change in temperature $\Delta T$ are given. Using the specific heat capacity formula, we have \begin{align*} c&=\frac{Q}{m\Delta T}\\ \\ &=\frac{100\, {\rm J}}{0.2\,{\rm kg}\,(2\,^\circ {\rm C})}\\ \\&=250 \quad {\rm J/kg\cdot ^\circ C}\end{align*}
Problem (7): What is the specific heat of metal if its mass is 27 g and it requires 420 J of heat energy to increase its temperature from 25°C to 50°C?
Solution: applying the specific heat formula, we get \begin{align*} c&=\frac{Q}{m\Delta T}\\&=\frac{420\,{\rm J}}{0.027\,{\rm kg}\times (50-25)}\\&=622.2\quad {\rm \frac{J}{kg\cdot\,^\circ C}}\end{align*}
Be sure to convert grams to kilograms.
In another type of problem, the specific heat capacity is determined by calorimetry.
In calorimetry problems, the specific heat of a sample is determined by inserting it into a certain amount of water with a known temperature.
This system is isolated so the sum of all energy gains or losses by all objects in the system is zero i.e. $\Sigma Q_i=0$ where $Q_i$ is the energy of the ith object.
If we measure the temperature of the equilibrium point, then we can solve the equation above for the unknown specific heat to find it.
Below, some problems for finding the heat capacity using this method are presented.
Problem (8): A 125-g block of a substance with unknown specific heat with temperature 90°C is placed in an isolated box containing 0.326 kg of water at 20°C. The equilibrium temperature of the system is 22.4°C. What is the block's specific heat?
Solution: since the block is hotter than the water, the block loses thermal energy and the water gains it. Equate these, $Q_{gain}=-Q_{lost}$, and solve for the unknown specific heat: \begin{align*} m_w c_w (T-T_w) &= -m_x c_x (T-T_x)\\ \\ \Rightarrow c_x&=\frac{m_w c_w (T-T_w)}{m_x (T_x-T)}\\ \\ &=\frac{(0.326)(4190)(22.4-20)}{(0.125)(90-22.4)}\\ \\&=388\quad {\rm J/kg\cdot\, ^\circ C}\end{align*}
In calorimetry problems, thermal energy is transferred from the warmer substance to the cooler object.
Therefore, a negative sign is placed in front of the warmer object's heat term to ensure that both sides of the above equality are positive.
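The same energy-balance bookkeeping can be checked numerically; the short Python sketch below is our own illustration of the calorimetry step in Problem (8) above (the function and argument names are ours, not part of the original page).

def unknown_specific_heat(m_w, c_w, T_w, m_x, T_x, T_eq):
    # Isolated system: m_w*c_w*(T_eq - T_w) = -m_x*c_x*(T_eq - T_x); solve for c_x.
    return m_w * c_w * (T_eq - T_w) / (m_x * (T_x - T_eq))

# Problem (8): 0.326 kg of water at 20 °C, 0.125 kg block at 90 °C, equilibrium at 22.4 °C.
print(unknown_specific_heat(0.326, 4190, 20, 0.125, 90, 22.4))  # ≈ 388 J/(kg·°C)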
Problem (9): A piece of metal of unknown specific heat, weighing 25 g and temperature 90°C, is dropped into 150 g of water at 10°C. They finally reach thermal equilibrium at a temperature of 24°C. Calculate the unknown specific heat capacity? (Specific heat of water is 1.0 cal/g°C)
Solution: Since the equilibrium temperature $T_f$ is lower than the metal one, so it loses as much heat as $Q_m=-m_m c_m (T_f-T_m)$ and the water gains heat as much as $Q_w=m_w c_w (T_f-T_w)$.
As a rule of thumb, keep in mind that whenever an object loses heat, a negative sign should be placed in front of its heat term.
Conservation of energy says that $Q_w=Q_m$, therefore we have \begin{align*} m_w c_w (T_f-T_w)&= -m_m c_m (T_f-T_m) \\ \\ \Rightarrow c_m &= \frac{m_w c_w (T_f-T_w)}{m_m \,(T_m-T_f)}\\ \\ &=\frac{(150)(1)(24-10)}{25\times(90-24)}\\ \\&=1.27\quad {\rm cal/g\cdot\,^\circ C}\end{align*}
Problem (10): a piece of unknown substance with a mass of 2 kg and a temperature of 280 Kelvins is brought in thermally contact with a 5-kg block of copper initially at 320 K.
This system is in isolation. After the system reaches the thermal equilibrium, its temperature becomes 290 K.
What is the specific heat of the unknown substance in cal/g.°C? (Specific heat of the copper is 0.093 cal/g.°C).
Solution: copper loses heat and unknown substance gains that amount of lost heat, by considering complete isolation. As previous questions, we get \begin{align*} m_s c_s (T_f-T_s)&= -m_c c_c (T_f-T_c) \\ \\ \Rightarrow c_s &= \frac{m_c c_c (T_c-T_f)}{m_s \,(T_f-T_s)}\\ \\&=\frac{(5000)(0.093)(320-290)}{2000\times(290-280)}\\ \\ &=0.6975 \quad {\rm cal/g\cdot K}\end{align*} In above, the subscripts s and c denote the substance and copper, respectively.
Note: in calorimetry equations where a change in temperature appears, it's possible to use either Celsius or Kelvin temperatures because a change in Celsius equals a change in Kelvin.
Problem (11): a 20-g block of a solid initially at 70°C is placed in 100 g of a fluid with a temperature of 20°C. After a while, the system reaches thermal equilibrium at a temperature of 30°C. What is the ratio of the specific heat of the solid to that of fluid?
Solution: solid loses heat and fluid absorbs it so $Q_f=-Q_s$ where f and s denote the fluid and solid, respectively. From the definition of specific heat, we can find the heat transferred during a process as below \begin{align*} m_f c_f (T-T_f) &= -m_s c_s (T-T_s)\\ \\ \Rightarrow \frac{c_f}{c_s} &=\frac{m_s (T_s-T)}{m_f (T-T_f)}\\ \\ &= \frac{(20)(70-30)}{(100)(30-20)}\\ \\ &=0.8\end{align*} Therefore, the specific heat capacity of fluid is less than that of solid.
Problem (12): A chunk of metal with mass 245.7 g at 75.2°C is placed in 115.43 g of water initially at temperature 22.6°C. The metal and water reach the final equilibrium temperature of 34.6°C. If no heat is exchanged between the system and its surrounding, what is the specific heat of the metal? (specific heat of water is 1 cal/g °C).
Solution: since no heat is lost to the surroundings, conservation of energy gives $Q_{water}=-Q_{metal}$
\begin{align*}Q_{water}&=-Q_{metal}\\m_w c_w (T_f-T_w) &=-m_{metal} \, c_{metal} (T_f-T_{metal})\\ &\\ \Rightarrow c_{metal}&=\frac{m_w c_w (T_f-T_w)}{m_{metal}(T_{metal}-T_f)}\\&\\ &=\frac{(115.43)(1)(34.6-22.6)}{(245.7)(75.2-34.6)}\\ \\&\approx 0.14 \quad {\rm cal/g\cdot^\circ C}\end{align*} where $T_f$ is the equilibrium temperature.
Now, we solve some example problems with specific heat capacity using calorimetry.
Example Problem (13): How many liters of water at 80°C should be mixed with 40 liters of water at 10°C to obtain a mixture with a final temperature of 40°C?
Solution: conservation of energy tells us that $\Sigma Q=0$, that is, $Q_{lost}+Q_{gain}=0$. Assuming no heat exchange with the environment, the hotter water loses heat and the colder water gains it. Thus, the heat lost by the 80°C water (of unknown mass $m$) is \begin{align*} Q_{lost}&=mc(T_f-T_i)\\&=mc(40-80)\\&=-40mc\end{align*} and the heat gained by the 40 liters of 10°C water is \begin{align*} Q_{gain}&=m'c(T_f-T_i)\\&=40\times c\times (40-10)\\&=1200c\end{align*} Therefore, equating these two heats gives the unknown amount of water \begin{align*} Q_{gain}&=-Q_{lost}\\ 1200c&=-(-40mc)\\ 1200&=40m\\ \Rightarrow m&=30\,{\rm liters}\end{align*}
Example Problem (14): 200 g of water at 22.5°C is mixed with 150 grams of water at 40°C. After thermal equilibrium is reached, what is the final temperature of the water?
Solution: using the principle of conservation of energy, we have $Q_{lost}+Q_{gain}=0$. Substituting the specific heat equation into it, we get \begin{gather*} m_1 c_w (T_f-T_1)+m_2 c_w (T_f-T_2)=0\\ \\ 200 (T_f-22.5)+150 (T_f-40)=0\\ \\ \Rightarrow 350T_f=10500 \\ \\ \Rightarrow T_f= 30^{\circ}C \end{gather*} where $T_f$ is the equilibrium (final) temperature of the mixture. Also, the specific heat capacity of the water is dropped from both sides.
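The mixing calculation generalizes to any two samples; the sketch below (our own, with hypothetical argument names) solves $m_1c_1(T_f-T_1)+m_2c_2(T_f-T_2)=0$ for $T_f$ and reproduces Example (14). Because both samples here are water, the common specific heat cancels, so the default values of 1.0 are harmless.

def mixing_temperature(m1, T1, m2, T2, c1=1.0, c2=1.0):
    # Equilibrium temperature of two mixed samples with no heat loss to the surroundings.
    return (m1 * c1 * T1 + m2 * c2 * T2) / (m1 * c1 + m2 * c2)

# Example (14): 200 g of water at 22.5 °C mixed with 150 g of water at 40 °C.
print(mixing_temperature(200, 22.5, 150, 40))  # 30.0 °C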
Example Problem (15): in a container, 1000 g of water and 200 g of ice are in thermal equilibrium. A piece of metal with a specific heat capacity of $c_m=400\,{\rm J/kg.K}$ and a temperature of 250°C is dropped into the mixture.
What is the minimum mass of the metal needed to melt all of the ice? (The heat of fusion of ice is $L_f=336\,{\rm \frac{kJ}{kg}}$.)
Solution: the minimum mass means that the metal releases just enough heat to melt all of the ice without raising the temperature of the system. Thus, the heat released by a mass $m$ of the metal in cooling from 250°C to 0°C is $Q_{lost}=mc(T_f-T_i)$, where $T_f$ is the equilibrium temperature, which is 0°C.
On the other hand, the energy required to melt 200 g (0.2 kg) of ice is \[Q_{gain}=m_{ice} L_f=(0.2\,{\rm kg})\times 336\times 10^{3}\,{\rm \frac{J}{kg}}=67.2\,{\rm kJ}\] where $L_f$ is the latent heat of fusion of ice. Conservation of energy now lets us equate the two and find the unknown metal mass \begin{gather*}m_{metal}\,c(T_f-T_i)+m_{ice}L_f=0 \\ \\ m_{metal} \times 400\times (0-250)+0.2\times 336 \times 10^{3}=0 \\ \\ \Rightarrow m_{metal}=0.672\ {\rm kg}=672 \ {\rm g} \end{gather*} If the metal's mass is greater than this value, the excess heat will raise the final temperature of the system.
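The minimum-mass condition of Example (15) is a one-line energy balance; the following sketch (ours, in SI units) encodes it.

def min_metal_mass(m_ice, L_f, c_metal, T_metal, T_eq=0.0):
    # Heat released by the metal cooling to T_eq exactly melts the ice: m*c*(T_metal - T_eq) = m_ice*L_f.
    return m_ice * L_f / (c_metal * (T_metal - T_eq))

# Example (15): 0.2 kg of ice, L_f = 336e3 J/kg, metal with c = 400 J/(kg·K) initially at 250 °C.
print(min_metal_mass(0.2, 336e3, 400, 250))  # 0.672 kg = 672 g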
Finally, we provide a table of specific heat capacities at room temperature.
Substance J/kg.K cal/g.K
Copper 386 0.0923
Glass 840 0.20
Aluminum 900 0.215
Mercury 140 0.033
Water 4187 1.00
Sea Water 3900 0.93
Ice (-10°C) 2220 0.530
Ethyl Alcohol 2430 0.58
Author: Ali Nemati
Last Update: 1/9/2021
|
CommonCrawl
|
R-parity violating heavy ${{\widetilde{\boldsymbol g}}}$ (Gluino) mass limit
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit j}}{{\mathit j}}{{\mathit j}}$
ATLS ${}\geq{}4{{\mathit \ell}}$, ${{\mathit \lambda}_{{12k}}}{}\not=$ 0, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ $>$ 1000 GeV
ATLS ${}\geq{}4{{\mathit \ell}}$, ${{\mathit \lambda}_{{i33}}}{}\not=$ 0, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ $>$ 500 GeV
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}{{\mathit s}}$ , ${{\mathit \lambda}_{{332}}^{''}}$ coupling
$\text{none 100 - 1410}$ 95 5 SIRUNYAN 2018EA CMS 2 large jets with four-parton substructure, ${{\widetilde{\mathit g}}}$ $\rightarrow$ 5 ${{\mathit q}}$
ATLS ${}\geq{}1{{\mathit \ell}}$+ ${}\geq{}$8 jets, Tglu3A and ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit u}}{{\mathit d}}{{\mathit s}}$ , ${{\mathit \lambda}_{{112}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$=1000 GeV
ATLS ${}\geq{}1{{\mathit \ell}}$+ ${}\geq{}$8 jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}}$ , ${{\widetilde{\mathit t}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit s}}$ , ${{\mathit \lambda}_{{323}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit t}}}}$=1000 GeV
ATLS ${}\geq{}1{{\mathit \ell}}$+ ${}\geq{}$8 jets, Tglu1A and ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit l}}$ , ${{\mathit \lambda}^{\,'}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$=1000 GeV
ATLS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ $/$ 3 ${{\mathit \ell}}$ + jets + $\not E_T$, Tglu3A, ${{\mathit \lambda}_{{112}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 50 GeV
ATLS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ $/$ 3 ${{\mathit \ell}}$ + jets + $\not E_T$, Tglu1A and ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit \ell}}$ , ${{\mathit \lambda}^{\,'}}$ coupling
ATLS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ $/$ 3 ${{\mathit \ell}}$ + jets + $\not E_T$, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit s}}{{\mathit d}}$ , ${{\mathit \lambda}_{{321}}^{''}}$ coupling
ATLS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ $/$ 3 ${{\mathit \ell}}$ + jets + $\not E_T$, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit d}}$ , ${{\mathit \lambda}_{{313}}^{''}}$ coupling
ATLS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ $/$ 3 ${{\mathit \ell}}$ + jets + $\not E_T$, ${{\widetilde{\mathit d}}_{{R}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}$( ${{\mathit t}}{{\mathit s}}$), ${{\mathit \lambda}_{{313}}^{''}}$ (${{\mathit \lambda}_{{321}}^{''}}$) coupling
$\text{none 625 - 1375}$ 95 14 AABOUD 2017AZ ATLS ${}\geq{}$7 jets+$\not E_T$, large R-jets and/or ${{\mathit b}}$-jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit s}}$ , ${{\mathit \lambda}_{{323}}^{''}}$ coupling
$\text{none 600 - 650}$ 95 15 KHACHATRYAN 2017Y CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}{{\mathit q}}{{\mathit q}}$ , ${{\mathit \lambda}_{{212}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit q}}}}$ = 100 GeV
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}{{\mathit q}}{{\mathit b}}$ , ${{\mathit \lambda}_{{213}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit q}}}}$ = 100 GeV
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}{{\mathit b}}{{\mathit b}}$ , ${{\mathit \lambda}_{{212}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit q}}}}$ = 100 GeV
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit b}}{{\mathit b}}{{\mathit b}}$ , ${{\mathit \lambda}_{{213}}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit q}}}}$ = 100 GeV
ATLS ${}\geq{}4{{\mathit \ell}^{\pm}}$, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\overline{\mathit q}}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$ , ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\mp}}{{\mathit \nu}}$
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit j}}{{\mathit j}}$
AABOUD 2018CF ATLS jets and large R-jets, Tglu2RPV and ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}$ , ${{\mathit \lambda}^{''}}$ coupling, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$=1000 GeV
CMS ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$ , ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}$ , ${{\mathit \lambda}_{{121}}}$ or ${{\mathit \lambda}_{{122}}}{}\not=$0, ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}>$ 400 GeV
ATLS jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\widetilde{\mathit t}}_{{1}}}{{\mathit t}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit s}}{{\mathit b}}$ , 400 $<$ ${\mathit m}_{{{\widetilde{\mathit t}}_{{1}}}}$ $<$ 1000 GeV
ATLS ${{\mathit \ell}}$, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ( ${{\mathit e}}$ $/$ ${{\mathit \mu}}$) ${{\mathit q}}{{\mathit q}}$ , benchmark gluino, neutralino masses
ATLS ${{\mathit \ell}}{{\mathit \ell}}$ /${{\mathit Z}}$, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ( ${{\mathit e}}{{\mathit e}}$ $/$ ${{\mathit \mu}}{{\mathit \mu}}$ $/$ ${{\mathit e}}{{\mathit \mu}}$) ${{\mathit q}}{{\mathit q}}$ , ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 400 GeV and 0.7 $<$ c$\tau _{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ $<$ $3 \times 10^{5}$ mm
ATLS ${}\geq{}$10 jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\overline{\mathit q}}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$ , ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}$ , ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$=500 GeV
ATLS ${}\geq{}$6,7 jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}$ , (light-quark, $\lambda {}^{''}$ couplings)
ATLS ${}\geq{}$6,7 jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}$ , (b-quark, $\lambda {}^{''}$ couplings)
ATLS ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$( ${{\mathit \ell}^{\mp}}$) + jets, ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ with ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit s}}$ simplified model
CMS same-sign ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\pm}}$ , ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}{{\mathit s}}$ simplified model
1 SIRUNYAN 2019F searched in 35.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for three-jet resonances produced in the decay of a gluino in R-parity violating supersymmetric models. The mass range from 200 to 2000GeV is explored in four separate mass regions. The observations show agreement with standard model expectations. The results are interpreted within the framework of R-parity violating SUSY, where pair-produced gluinos decay to a six quark final state. Gluino masses below 1500GeV are excluded at 95$\%$ C.L. See their Fig.5.
2 AABOUD 2018Z searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events containing four or more charged leptons (electrons, muons and up to two hadronically decaying taus). No significant deviation from the expected SM background is observed. Limits are set on the Higgsino mass in simplified models of general gauge mediated supersymmetry Tn1n1A/Tn1n1B/Tn1n1C, see their Figure 9. Limits are also set on the wino, slepton, sneutrino and gluino mass in a simplified model of NLSP pair production with R-parity violating decays of the LSP via ${{\mathit \lambda}_{{12k}}}$ or ${{\mathit \lambda}_{{i33}}}$ to charged leptons, see their Figures 7, 8.
3 SIRUNYAN 2018AK searched in 35.9 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events containing a single lepton, large jet and ${{\mathit b}}$-quark jet multiplicities, coming from R-parity-violating decays of gluinos. No excess over the expected background is observed. Limits are derived on the gluino mass, assuming the RPV ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}{{\mathit s}}$ decay, see their Figure 9.
4 SIRUNYAN 2018D searched in 35.9 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events containing identified hadronically decaying top quarks, no leptons, and $\not E_T$. No significant excess above the Standard Model expectations is observed. Limits are set on the stop mass in the Tstop1 simplified model, see their Figure 8, and on the gluino mass in the Tglu3A, Tglu3B, Tglu3C and Tglu3E simplified models, see their Figure 9.
5 SIRUNYAN 2018EA searched in 38.2 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for the pair production of resonances, each decaying to at least four quarks. Reconstructed particles are clustered into two large jets of similar mass, each consistent with four-parton substructure. No statistically significant excess over the Standard Model expectation is observed. Limits are set on the squark and gluino mass in RPV supersymmetry models where squarks (gluinos) decay, through intermediate higgsinos, to four (five) quarks, see their Figure 4.
6 AABOUD 2017AI searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with one or more isolated lepton, at least eight jets, either zero or many ${{\mathit b}}$-jets, for evidence of R-parity violating decays of the gluino. No significant excess above the Standard Model expectations is observed. Limits up to 2.1 TeV are set on the gluino mass in R-parity-violating supersymmetry models as Tglu3A with LSP decay through the non-zero ${{\mathit \lambda}_{{112}}^{''}}$ coupling as ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit u}}{{\mathit d}}{{\mathit s}}$ . See their Figure 9.
7 AABOUD 2017AI searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with one or more isolated lepton, at least eight jets, either zero or many ${{\mathit b}}$-jets, for evidence of R-parity violating decays of the gluino. No significant excess above the Standard Model expectations is observed. Limits up to 1.65 TeV are set on the gluino mass in R-parity-violating supersymmetry models with ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}}$ , ${{\widetilde{\mathit t}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit s}}$ through the non-zero ${{\mathit \lambda}_{{323}}^{''}}$ coupling. See their Figure 9.
8 AABOUD 2017AI searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with one or more isolated lepton, at least eight jets, either zero or many ${{\mathit b}}$-jets, for evidence of R-parity violating decays of the gluino. No significant excess above the Standard Model expectations is observed. Limits up to 1.8 TeV are set on the gluino mass in R-parity-violating supersymmetry models as Tglu1A with the LSP decay through the non-zero ${{\mathit \lambda}^{\,'}}$ coupling as ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit \ell}}$ . See their Figure 9.
9 AABOUD 2017AJ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with two same-sign or three leptons, jets and large missing transverse momentum. No significant excess above the Standard Model expectations is observed. Limits up to 1.8 TeV are set on the gluino mass in R-parity-violating supersymmetry models as Tglu3A with LSP decaying through the non-zero ${{\mathit \lambda}_{{112}}^{''}}$ coupling as ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit u}}{{\mathit d}}{{\mathit s}}$ . See their Figure 5(d).
10 AABOUD 2017AJ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with two same-sign or three leptons, jets and large missing transverse momentum. No significant excess above the Standard Model expectations is observed. Limits up to 1.75 TeV are set on the gluino mass in R-parity-violating supersymmetry models as Tglu1A with LSP decaying through the non-zero ${{\mathit \lambda}^{\,'}}$ coupling as ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit \ell}}$ . See their Figure 5(c).
11 AABOUD 2017AJ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with two same-sign or three leptons, jets and large missing transverse momentum. No significant excess above the Standard Model expectations is observed. Limits up to 1.45 TeV are set on the gluino mass in R-parity-violating supersymmetry models where ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit s}}{{\mathit d}}$ through the non-zero ${{\mathit \lambda}_{{321}}^{''}}$ coupling. See their Figure 5(b).
12 AABOUD 2017AJ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with two same-sign or three leptons, jets and large missing transverse momentum. No significant excess above the Standard Model expectations is observed. Limits up to 1.45 TeV are set on the gluino mass in R-parity-violating supersymmetry models where ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit d}}$ through the non-zero ${{\mathit \lambda}_{{313}}^{''}}$ coupling. See their Figure 5(a).
13 AABOUD 2017AJ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with two same-sign or three leptons, jets and large missing transverse momentum. No significant excess above the Standard Model expectations is observed. Limits up to 400 GeV are set on the down type squark ( ${{\widetilde{\mathit d}}_{{R}}}$ mass in R-parity-violating supersymmetry models where ${{\widetilde{\mathit d}}_{{R}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}$ through the non-zero ${{\mathit \lambda}_{{313}}^{''}}$ coupling or ${{\widetilde{\mathit d}}_{{R}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit s}}$ through the non-zero ${{\mathit \lambda}_{{321}}^{''}}$. See their Figure 5(e) and 5(f).
14 AABOUD 2017AZ searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with at least seven jets and large missing transverse momentum. Selected events are further classified based on the presence of large R-jets or ${{\mathit b}}$-jets and no leptons. No significant excess above the Standard Model expectations is observed. Limits are set for R-parity violating decays of the gluino assuming ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\widetilde{\mathit t}}_{{1}}}$ and ${{\widetilde{\mathit t}}_{{1}}}$ $\rightarrow$ ${{\mathit b}}{{\mathit s}}$ through the non-zero ${{\mathit \lambda}_{{323}}^{''}}$ couplings. The range $625 - 1375$ GeV is excluded for ${\mathit m}_{{{\widetilde{\mathit t}}_{{1}}}}$ = 400 GeV. See their Figure 7b.
15 KHACHATRYAN 2017Y searched in 19.7 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events containing at least 8 or 10 jets, possibly ${{\mathit b}}$-tagged, coming from R-parity-violating decays of supersymmetric particles. No excess over the expected background is observed. Limits are derived on the gluino mass, assuming various RPV decay modes, see Fig. 7.
17 KHACHATRYAN 2016BX searched in 19.5 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events containing 0 or 1 leptons and ${{\mathit b}}$-tagged jets, coming from R-parity-violating decays of supersymmetric particles. No excess over the expected background is observed. Limits are derived on the gluino mass, assuming the RPV ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}{{\mathit s}}$ decay, see Fig. 7 and 10.
19 AAD 2014X searched in 20.3 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events with at least four leptons (electrons, muons, taus) in the final state. No significant excess above the Standard Model expectations is observed. Limits are set on the gluino mass in an R-parity violating simplified model where the decay ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit q}}{{\overline{\mathit q}}}{{\widetilde{\mathit \chi}}_{{1}}^{0}}$ , with ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}^{\pm}}{{\mathit \ell}^{\mp}}{{\mathit \nu}}$ , takes place with a branching ratio of 100$\%$, see Fig. 8.
20 CHATRCHYAN 2014P searched in 19.4 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for three-jet resonances produced in the decay of a gluino in R-parity violating supersymmetric models. No excess over the expected SM background is observed. Assuming a 100$\%$ branching ratio for the gluino decay into three light-flavour jets, limits are set on the cross section of gluino pair production, see Fig. 7, and gluino masses below 650 GeV are excluded at 95$\%$ C.L. Assuming a 100$\%$ branching ratio for the gluino decaying to one b-quark jet and two light-flavour jets, gluino masses between 200 GeV and 835 GeV are excluded at 95$\%$ C.L.
21 AABOUD 2018CF searched in 36.1 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 13 TeV for events with several jets, possibly ${{\mathit b}}$-jets, and large-radius jets for evidence of R-parity violating decays of the gluino. No significant excess above the Standard Model expectations is observed. Limits between 1000 and 1875 GeV are set on the gluino mass in R-parity-violating supersymmetry models as Tglu2RPV with the LSP decay through the non-zero ${{\mathit \lambda}^{''}}$ coupling as ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit q}}{{\mathit q}}{{\mathit q}}$ . The most stringent limit is obtained for ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 1000 GeV, the weakest for ${\mathit m}_{{{\widetilde{\mathit \chi}}_{{1}}^{0}}}$ = 50 GeV. See their Figure 7(b). Figure 7(a) presents results for gluinos directly decaying into 3 quarks, Tglu1RPV.
22 KHACHATRYAN 2016BX searched in 19.5 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events containing 4 leptons coming from R-parity-violating decays of ${{\widetilde{\mathit \chi}}_{{1}}^{0}}$ $\rightarrow$ ${{\mathit \ell}}{{\mathit \ell}}{{\mathit \nu}}$ with ${{\mathit \lambda}_{{121}}}{}\not=$ 0 or ${{\mathit \lambda}_{{122}}}{}\not=$ 0. No excess over the expected background is observed. Limits are derived on the gluino, squark and stop masses, see Fig. 23.
24 AAD 2015X searched in 20.3 ${\mathrm {fb}}{}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events containing large number of jets, no requirements on missing transverse momentum and no isolated electrons or muons. The sensitivity of the search is enhanced by considering the number of ${{\mathit b}}$-tagged jets and the scalar sum of masses of large-radius jets in an event. No evidence was found for excesses above the expected level of Standard Model background. Exclusion limits at 95$\%$ C.L. are set on the gluino mass assuming the gluino decays to various quark flavors, and for various neutralino masses. See their Fig. $11 - 16$.
27 CHATRCHYAN 2014H searched in 19.5 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ collisions at $\sqrt {s }$ = 8 TeV for events with two isolated same-sign dileptons and jets in the final state. No significant excess above the Standard Model expectations is observed. Limits are set on the gluino mass in simplified models where the R-parity violating decay ${{\widetilde{\mathit g}}}$ $\rightarrow$ ${{\mathit t}}{{\mathit b}}{{\mathit s}}$ takes place with a branching ratio of 100$\%$, see Fig. 8.
SIRUNYAN 2019F
PR D99 012010 Search for pair-produced three-jet resonances in proton-proton collisions at $\sqrt s$ =13 TeV
AABOUD 2018Z
PR D98 032009 Search for supersymmetry in events with four or more leptons in $\sqrt{s}=13$ TeV $pp$ collisions with ATLAS
AABOUD 2018CF
PL B785 136 Search for R-parity-violating supersymmetric particles in multi-jet final states produced in $p$-$p$ collisions at $\sqrt{s} =13$ TeV using the ATLAS detector at the LHC
SIRUNYAN 2018AK
PL B783 114 Search for $R$-parity violating supersymmetry in pp collisions at $\sqrt{s} = $ 13 TeV using b jets in a final state with a single lepton, many jets, and high sum of large-radius jet masses
SIRUNYAN 2018EA
PRL 121 141802 Search for pair-produced resonances each decaying into at least four quarks in proton-proton collisions at $\sqrt{s}=$ 13 TeV
AABOUD 2017AI
JHEP 1709 088 Search for New Phenomena in a Lepton Plus High Jet Multiplicity Final State with the ATLAS Experiment using $\sqrt {s }$ = 13 TeV Proton-Proton Collision Data
KHACHATRYAN 2017Y
PL B770 257 Search for New Phenomena in Events with High Jet Multiplicity and Low Missing Transverse Momentum in Proton-Proton Collisions at $\sqrt {s }$ = 8 TeV
KHACHATRYAN 2016BX
PR D94 112009 Searches for $\mathit R$-Parity-Violating Supersymmetry in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV in Final States with $0 - 4$ Leptons
AAD 2015X
PR D91 112016 Search for Massive Supersymmetric Particles Decaying to Many Jets using the ATLAS Detector in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV
CHATRCHYAN 2014P
PL B730 193 Searches for Light- and Heavy-Flavour Three-Jet Resonances in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV
2022, 18: 1-11. doi: 10.3934/jmd.2022001
A new dynamical proof of the Shmerkin–Wu theorem
Tim Austin
Department of Mathematics, University of California, Los Angeles, Los Angeles, CA 90095-1555, USA
Received October 09, 2020 Revised September 29, 2021 Published December 2021
Let $ a < b $ be multiplicatively independent integers, both at least $ 2 $. Let $ A,B $ be closed subsets of $ [0,1] $ that are forward invariant under multiplication by $ a $, $ b $ respectively, and let $ C : = A\times B $. An old conjecture of Furstenberg asserted that any planar line $ L $ not parallel to either axis must intersect $ C $ in Hausdorff dimension at most $ \max\{\dim C,1\} - 1 $. Two recent works by Shmerkin and Wu have given two different proofs of this conjecture. This note provides a third proof. Like Wu's, it stays close to the ergodic theoretic machinery that Furstenberg introduced to study such questions, but it uses less substantial background from ergodic theory. The same method is also used to re-prove a recent result of Yu about certain sequences of sums.
Keywords: Entropy, Hausdorff dimension, multiplication-invariant sets, intersections of Cantor sets, Furstenberg intersection conjecture, Shannon–McMillan–Breiman theorem.
Mathematics Subject Classification: Primary: 11K55, 37A45; Secondary: 28A50, 28A80, 37C45.
Citation: Tim Austin. A new dynamical proof of the Shmerkin–Wu theorem. Journal of Modern Dynamics, 2022, 18: 1-11. doi: 10.3934/jmd.2022001
M. Einsiedler, E. Lindenstrauss and T. Ward, Entropy in Ergodic Theory and Topological Dynamics, book draft, available online at https://tbward0.wixsite.com/books/entropy. Google Scholar
K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, 3rd edition, John Wiley & Sons, Ltd., Chichester, 2014. Google Scholar
H. Furstenberg, Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation, Math. Systems Theory, 1 (1967), 1-49. doi: 10.1007/BF01692494. Google Scholar
H. Furstenberg, Intersections of Cantor sets and transversality of semigroups, in Problems in Analysis (Sympos. Salomon Bochner, Princeton Univ., Princeton, N.J., 1969), Princeton Univ. Press, Princeton, N.J., 1970, 41–59. Google Scholar
M. Hochman, Lectures on fractal geometry and dynamics, unpublished lecture notes, available online at http://math.huji.ac.il/~mhochman/courses/fractals-2012/course-notes.june-26.pdf. Google Scholar
P. Shmerkin, On Furstenberg's intersection conjecture, self-similar measures, and the $L^q$ norms of convolutions, Ann. of Math., 189 (2019), 319-391. doi: 10.4007/annals.2019.189.2.1. Google Scholar
M. Wu, A proof of Furstenberg's conjecture on the intersections of $\times p$- and $\times q$-invariant sets, Ann. of Math., 189 (2019), 707-751. doi: 10.4007/annals.2019.189.3.2. Google Scholar
H. Yu, Bernoulli decomposition and arithmetical independence between sequences, Ergodic Theory Dynam. Systems, 41, (2021), 1560–1600. doi: 10.1017/etds.2019.117. Google Scholar
An optimization approach to capacity evaluation and investment decision of hybrid cloud: a corporate customer's perspective
In Lee
Journal of Cloud Computing volume 8, Article number: 15 (2019)
While the rapid growth of cloud computing is driven by the surge of big data, the Internet of Things, and social media applications, an evaluation and investment decision for cloud computing has been challenging for corporate managers due to a lack of proper decision models. This paper attempts to identify critical variables for making a cloud capacity decision from a corporate customer's perspective and develops a base mathematical model to aid in a hybrid cloud investment decision under probabilistic computing demands. The identification of the critical variables provides a means by which a corporate customer can effectively evaluate various cloud capacity investment opportunities. Critical variables included in this model are an actual computing demand, the amount of private cloud capacity purchased, the purchase cost of the private cloud capacity, the price of the public cloud, and the default downtime loss/penalty cost. Extending the base model developed, this paper also takes into consideration the interoperability cost incurred in cloud bursting to the public cloud and derives the optimal investment. The interoperable cloud systems require time and investment by the users and/or cloud providers and there exists a diminishing return on the investment. Hence, the relationship between the interoperable cloud investment and return on investment is also investigated.
The evolution of the Internet, storage technologies, service-oriented architecture, and grid computing has led to the development of cloud computing. Cloud computing has emerged as a disruptive innovation offering a variety of computing services and resources to individual users and corporate customers. Cloud infrastructure that was traditionally limited to single provider data centers is now evolving to the use of infrastructure from multiple providers [1]. A wide variety of deployment models, service models, and pricing schemes are flexibly integrated with each other to help enterprises transform the way they conduct business and meet their idiosyncratic computing needs.
Cloud computing complements traditional client-server computing to support the computing needs of businesses with a range of benefits such as lower IT expenditures, resource pooling, single source application updates, and scalability. Many cloud computing providers such as Google, Microsoft, IBM, and Amazon have been heavily investing in cloud technology and are leading various cloud services markets [2]. However, the increasing dissemination of cloud services and the growing number of cloud service providers (CSPs) have resulted in uncertainty for corporate customers in adopting cloud services [3]. Opara-Martins, Sahandi, and Tian [4] find that the most cited reasons for adopting cloud computing include better scalability of IT resources (45.9%), collaboration (40.5%), cost savings (39.6%), and increased flexibility (36.9%).
The most cited challenge among corporate customers is managing costs, but they underestimate the amount of wasted cloud expense [5]. Respondents estimate 27% waste, while RightScale [5] has measured actual waste at 35%. 84% of the respondents with more than 1000 employees are using a multi-cloud strategy and 58% are using the hybrid cloud. Cloud cost saving is the top priority across all corporate customers (64%). Therefore, this study cannot emphasize enough the importance of a solid economic justification for the cloud capacity evaluation and investment decision.
Capacity planning is a challenging task when there is an unpredictable, fluctuating computing demand with many peaks and troughs [6]. Without a solid evaluation model, estimating the tradeoff between the benefits and costs incurred in order to cover peak computing demand is challenging. Therefore, overcapacity or under-capacity is a common phenomenon in the investment of cloud capacity. Overcapacity puts companies at a cost disadvantage due to a low utilization of cloud resources. On the other hand, under-capacity puts them at a strategic disadvantage due to customer/user dissatisfaction, high penalty costs, and potential sales loss.
The hybrid cloud enjoys the benefits of both private and public clouds, but the combination of the two has introduced challenging issues such as data security, performance, and cost optimization [7]. According to RightScale's 2019 State of the Cloud Report [5], the hybrid cloud is the most preferred cloud for corporate cloud capacity investment. The hybrid cloud is flexible enough to handle the spike of computing demand while at the same time reducing computing costs. The hybrid cloud is a cloud environment in which an organization owns and manages their internal private cloud and uses a public cloud or a community cloud externally when necessary. In the hybrid cloud, typically, non-critical computing resources are outsourced to the public cloud, while mission-critical applications and data are kept in the private cloud under the strict control of the organization. Researchers used the MapReduce paradigm to split a data-intensive workload into mapping tasks sorted by the sensitivity of the data, with the most sensitive data being processed at on-premise servers and the least sensitive processed in a public cloud [8]. Many leading enterprises such as Uber, GM, and JP Morgan are adopters of the hybrid cloud. For example, Uber relies on a hybrid cloud infrastructure that combines the use of public cloud services with standardized on-premise server racks in its datacenters [9].
Whether to utilize cloud computing or internal IT resources for business purposes is a critical decision for organizations and decision makers [10]. For companies, the cloud evaluation and investment decision to minimize the total computing costs requires a basic understanding of the relationships between the cost of computing resources, a penalty for computer downtime, and a probability distribution of the computing demand. In light of the lack of studies in the cloud investment area and the corporate trend of migrating to the hybrid cloud, this paper proposes an optimization approach to cloud evaluation and an investment decision of the hybrid cloud for corporate decision makers. The remainder of this paper includes a literature review on cloud capacity investment decisions, the base decision model, an illustration of the model operation, a model extension with an interoperability cost for cloud bursting, and return on investment (ROI) management.
While exact modeling of cloud centers is not feasible due to the nature of cloud centers and the diversity of user requests [11], workload modeling has been used to increase the understanding of typical cloud workload patterns and has led to more informed resource management decisions [12]. Workload modeling and characterization is especially challenging when applied in a highly dynamic environment for reasons such as heterogeneous hardware in a single data center and complex workloads composed of a wide variety of applications with different characteristics and user profiles [13]. In a large-scale system faced by time varying and regionally distributed demands for various resources, there is a tradeoff between optimizing the resource placement to meet its demand and minimizing the number of added and removed resources to the placement. Novel analytic techniques utilizing graph theory methodologies are proposed that overcome this difficulty and yield very efficient dynamic placements with bounded repositioning costs [14].
A number of studies on models of computing resources assume the exponential distribution or the Weibull distribution of computing requests and conduct mathematical analyses of system performance [11, 15, 16]. For example, a cloud center is modeled as a classic open network with a single arrival from which the distribution of response time is obtained, assuming that both interarrival and service times are exponential [17]. A recent study of Patch and Taimre [18] also assumes that tasks require an exponentially distributed service time for transient provisioning and performance evaluation of cloud computing platforms.
Wolski and Brevik [19] conduct a simulation of workload patterns with real data and compare the efficacy of the lognormal distribution, the 3-phase hyper-exponential distribution, and the Pareto distribution, and find that the use of the 3-phase hyper-exponential distribution is the most appropriate based on the Kolmogorov-Smirnov statistic, which has been widely used to compute the goodness-of-fit between the observed data and a distribution fit to it. With the goal of providing a repository where practitioners and researchers can exchange grid workload traces, a research team from the Delft University of Technology maintains an archive that contains an extensive set of grid workload data as well as the results of the Kolmogorov-Smirnov (KS) tests [20]. The Kolmogorov-Smirnov statistics of the various workloads show that overall, the Weibull, the lognormal, and the gamma distributions have a better fit than others. While a workload analysis can be used for scheduler design, it is also useful for capacity planning.
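As a rough illustration of this kind of distribution-fitting comparison (a sketch only, not the procedure used by the cited studies), the following Python snippet fits the Weibull, lognormal, and gamma distributions to a synthetic workload trace and compares their Kolmogorov-Smirnov statistics with SciPy; the trace and its generating parameters are hypothetical.

```python
# Illustrative sketch only: fit three candidate distributions to a synthetic
# workload trace and compare Kolmogorov-Smirnov (KS) statistics with SciPy.
# The Weibull parameters used to generate the "observed" data are hypothetical.
from scipy import stats

data = stats.weibull_min.rvs(c=0.7, scale=30.0, size=5000, random_state=42)

candidates = {
    "weibull_min": stats.weibull_min,
    "lognorm": stats.lognorm,
    "gamma": stats.gamma,
}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                    # fix the location at zero
    ks_stat, p_value = stats.kstest(data, name, args=params)
    print(f"{name:12s} KS statistic = {ks_stat:.4f} (p = {p_value:.3f})")
# The candidate with the smallest KS statistic provides the best fit to the trace.
```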
The efficient and accurate assessment of cloud-based infrastructure is essential for guaranteeing both business continuity and uninterrupted services for computing jobs [21]. However, efficient resource management of cloud infrastructures is challenging [22]. In the capacity planning and management area, most studies focus on micro-level scheduling such as dynamic resource allocation and prioritization of computing jobs. Widely used resource management methods, such as Amazon Web Services (AWS) Auto Scaling and the Windows Azure Autoscaling Application Block, are reactive. While these reactive approaches are an effective way to improve system availability, optimize costs, and reduce latency, they exhibit a mismatch between resource demands and service provisions, which could lead to under- or over-provisioning [23]. Hence, predictive resource scaling approaches have been proposed to overcome the limitations of the reactive approaches most often used [24,25,26].
Misra and Mondal [27] point out that a challenge arises when companies with an existing data center make a decision on whether cloud computing would be helpful for them or whether they should stick to their own in-house infrastructure for the expansion and consolidation of the existing data centers. They analyze the cost aspect with a simulation model that not only helps assess the suitability of cloud computing but also measures its profitability. Recently, Patch and Taimre [18] demonstrate that taking into account each of the characteristics of fluctuating user demand of cloud services can result in a substantial reduction of losses. Their transient provisioning framework allows for a wide variety of system behaviors to be modeled in the presence of various model characteristics through the use of matrix analytic methods to minimize the revenue loss over a finite time horizon.
de Assunção, di Costanzo, and Buyya [28] investigate the benefits that organizations can reap by using a public cloud to augment their local computing capacity. They evaluate six scheduling strategies to understand how these strategies achieve a balance between performance and usage cost, and how much they improve response time. Gmach, Rolia, Cherkasova, and Kemper [29] describe a capacity management process for resource pools. They use a trace-based approach which predicts future per-server required capacity and achieve a 35% reduction in processor usage as compared to the benchmark best practice.
The hybrid cloud can potentially reduce the financial burden of overcapacity investment and technological risks related to a full ownership of computing resources and can allow companies to operate at a cost-optimal scale and scope under demand uncertainty [30]. The hybrid cloud allows users to scale computing requirements beyond the private cloud and into the public cloud - a capability called cloud bursting in which an application runs in its own private resources for the majority of its computing and bursts into the public cloud when its private resources are insufficient for the peaks in computing demand [31]. For example, a popular and cost-effective way to deal with the temporary computational demand of big data analytics is hybrid cloud bursting that leases temporary off-premise cloud resources to boost the overall capacity during peak utilization [32]. Academic research into the hybrid cloud has focused on the middleware and abstraction layers for creating, managing, and using the hybrid cloud [8]. Commercial support for the hybrid cloud is growing in response to the increasing demand for the hybrid cloud. Container technologies will improve portability between different cloud providers.
While potential benefits of the hybrid cloud arise in the presence of variable demand for many real-world computing workloads, additional costs related to hybrid cloud management, data transfer, and development complexity must be taken into account [33]. Bandwidth, latency, location of data, and communication performance need to be considered for integrating a public cloud with a private cloud [34]. The interoperability and portability issues between the public cloud and the on-premise private cloud also continue to be barriers to its adoption. These issues arise when cloud providers develop services with non-compatible proprietary technologies [4]. While various standardized solutions have been developed for diverse cloud computing services [35], cloud providers often develop their own proprietary services as a way to lock in clients, differentiate their services, and achieve a market monopoly in the early stages of innovation. A lack of standardization poses challenges to cloud users who need to integrate diverse cloud services obtained from multiple cloud providers [36] and perform cloud bursting in the hybrid cloud environment.
While the above-mentioned studies investigate micro-level scheduling schemes for cloud resources, few studies have analyzed how cloud capacity for the public, private, and hybrid cloud interacts with the service level and prices in a given decision horizon. Furthermore, most previous studies did not attempt to develop any closed form solution for an optimal cloud capacity decision.
Cloud computing is in an expansion stage of technological diffusion and is projected to grow more rapidly with the advances in big data, artificial intelligence, and the Internet of Things. Given the growing interest in the cloud capacity decision by managers, researchers need to develop cost-benefit evaluation models and tools that will help managers make a judicious cloud capacity decision. The development of a quantifiable cost-benefit decision model is of vital importance to avoid intuition-based gut-feeling investment decisions. In an attempt to fill a gap in the current studies, the next section provides a normative model for the cloud capacity decision.
A capacity evaluation and investment decision model of hybrid cloud
This section proposes a capacity evaluation and investment decision model of the hybrid cloud and examines the relationships between model parameters and cloud investments. This model derives the optimal capacity decision of the private cloud and the public cloud to minimize the total computing costs. The optimal decision is based on the cost structure and a probability density function for computing demand in a given period. This model reflects customers' perspectives and utilizes a model presented by Henneberger [37] which extends the classic newsvendor problem and derives a critical fractile formula using an inverse cumulative distribution function of computing demand. This newsvendor problem has been widely used in inventory management. For example, Fan et al. [38] apply the newsvendor model to analyze the reduction of inventory shrinkage with the use of RFID. While Henneberger's model did not present a closed-form solution, the proposed model derives a closed-form solution for a capacity evaluation and decision problem with an exponential probability distribution of computing demand. Later, the base model is extended to find the optimal solution for two investment decision variables simultaneously: (1) the optimal cloud capacity for the private cloud and (2) the optimal investment in the interoperability enhancement for cloud bursting in the hybrid cloud environment.
A hypothetical cloud capacity evaluation and investment decision problem
A company needs to decide how much private cloud they need to invest in and how much public cloud they need to use to minimize the total computing costs in a given decision horizon. While the company can choose to use a private cloud only, a public cloud only, or a hybrid cloud, they realize that the hybrid cloud has a cost advantage when a company has a wide range of peaks and troughs in computing demand. The company can purchase a private cloud capacity in its own virtual machine (VM) environment, and can use the pay-as-you-go public cloud for peak-time computing demands. The proposed model helps find an optimal mix of the private and public cloud to minimize the total computing costs. The following is the nomenclature used throughout this paper.
x: an actual computing demand occurring in each time unit with an exponential probability distribution function
$\lambda e^{-\lambda x}$: an exponential probability distribution function for computing demand
$1 - e^{-\lambda x}$: a cumulative exponential distribution function for computing demand
c: units of private cloud capacity purchased at the beginning of a decision period
k: one-time purchase cost per unit of the private cloud capacity for a decision horizon
p: the price of the public cloud per time unit
q: the guaranteed service level
ka: the default downtime loss/penalty cost
t: the number of time units in a decision horizon
This section presents a base model for the cloud evaluation and investment decision problem. The following several assumptions used in this paper are from Henneberger [37]. It is assumed that private cloud resources are purchased or contracted at the beginning of the decision horizon, that the public cloud provider has a sufficient capacity to satisfy the peak demand of the company, and that the probability distribution for computing demand can be estimated based on real demand data and each demand can be split into the private and public clouds if cloud bursting is necessary. The unit price of the public cloud remains the same over the planning horizon.
Figure 1 shows a simulated demand distribution based on an exponential probability distribution. It is noted that a number of previous studies on models of computing resources assume the exponential distribution or the Weibull distribution of computing requests [11, 15, 16]. It is possible to use any probability distribution with real data, but no closed form solution may be obtainable and a simulation approach may be more appropriate over an analytical approach. x represents an actual computing demand and c represents the private cloud capacity purchased. When the actual computing demand x is below the purchased capacity c, the private cloud will be used. In this situation, c-x is an excess private cloud capacity. When x is the same as c, the private cloud is fully utilized without the need for the public cloud. When the computing demand x exceeds c, then x-c units of the public cloud are purchased from a public cloud provider via cloud bursting. In this model, the decision variable is the units of private cloud capacity to be purchased, c. Among the model variables, critical model variables include a probability distribution function for the computing demand, the price of the public cloud, and the purchase cost per unit of the private cloud.
An Example of a Demand Distribution based on an Exponential Probability Distribution
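The split described above can be illustrated with a minimal simulation sketch (not part of the original analysis); the capacity and rate values below simply reuse the illustration parameters given later in the paper.

```python
# Minimal simulation sketch of the demand split in Fig. 1. Demand x ~ Exp(lambda);
# the private cloud serves min(x, c) and the excess max(x - c, 0) bursts to the
# public cloud. The values of lambda and c reuse the later illustration.
import numpy as np

lam, c, n_periods = 0.001, 891.8, 100_000      # mean demand 1/lam = 1000 units
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0 / lam, size=n_periods)

private_used = np.minimum(x, c)                # served by purchased private capacity
public_used = np.maximum(x - c, 0.0)           # cloud bursting to the public cloud

print(f"mean private usage: simulated = {private_used.mean():7.1f}, "
      f"analytic = {(1 - np.exp(-lam * c)) / lam:7.1f}")
print(f"mean public usage:  simulated = {public_used.mean():7.1f}, "
      f"analytic = {np.exp(-lam * c) / lam:7.1f}")
```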
The base model
The objective of the base model is to decide the optimal private cloud capacity to minimize the total cloud costs. The base model of the hybrid cloud is given as the total cost minimization function (1).
$$ \operatorname{Min}\kern0.5em TC=k\cdot c+\left(q\cdot {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdot p+\left(1-q\right)\cdot {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdot {k}_a\right)\cdot t $$
where k ⋅ c is the total private cloud costs at the beginning of the investment horizon, \( q\cdot {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdot p \) is the costs of the public cloud per time unit, and \( \left(1-q\right)\cdot {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdot {k}_a \) is the default downtime penalty cost related to the out-of-service situation from the public cloud provider. The cost structure follows Henneberger [37]. Applying integration techniques, Eq. (1) is transformed into Eq. (2).
$$ TC=k\cdotp c+\left( qpt+\left(1-q\right){k}_at\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right) $$
Differentiating Eq. (2) with respect to c leads to:
$$ \frac{dTC}{dc}=k-{e}^{-\lambda c}\left( qpt+\left(1-q\right){k}_at\right) $$
To get the closed form optimal value of c, Eq. (3) is set to zero. Then, the optimal private cloud capacity required is:
$$ {c}^{\ast }=\frac{\ln \left(\frac{k}{\left( qpt+\left(1-q\right){k}_at\right)}\right)}{-\lambda } $$
Equation (5) is the second derivative of Eq. (2). As the second derivative is always greater than zero for any value of c, the objective function (2) is convex with a minimum at c*.
$$ \frac{d^2 TC}{dc^2}=\lambda {e}^{-\lambda c}\left( qpt+\left(1-q\right){k}_at\right) $$
The utilization rate of the private cloud c is given by:
$$ u=\frac{-\frac{1}{\lambda }{e}^{-\lambda c}+\frac{1}{\lambda }}{c} $$
An illustration of the base model operation
As an illustration of the model operation, assume the following: λ = 0.001; k = $10,000; p = $1.9; q = 0.9945; ka = $100; t = 10,000. In this scenario, the unit price of the private cloud capacity is set to about 53% of the equivalent usage of the public cloud (i.e., $10,000/($1.9*10,000)). This pricing assumption reflects the current cloud market. As of January 2018, Microsoft Azure provides cost savings of 40% to 68% for an advanced purchase of a virtual machine for one or 3 years compared to the hourly-based pay-as-you-go services (see https://azure.microsoft.com/en-us/pricing/details/virtual-machines/linux/). 451 Research [39] also reported significant cost savings with the private cloud. The expected demand is 1000 capacity units (e.g., virtual machines or physical machines) with the exponential probability (λ = 0.001). Then, the optimal private cloud capacity, c*, is 891.8 units, and the total computing cost, TC, is $18,918,135. The optimal utilization rate of the private cloud is 66.17%.
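The figures quoted above can be reproduced directly from Eqs. (2), (4), and (6); the following is a minimal sketch using the stated parameter values (small differences may arise from rounding).

```python
# Sketch reproducing the illustration above from Eqs. (2), (4) and (6);
# the parameter values are those stated in the text.
import math

lam, k, p, q, ka, t = 0.001, 10_000, 1.9, 0.9945, 100, 10_000

denom = q * p * t + (1 - q) * ka * t                      # per-unit cost over the horizon of demand exceeding c
c_star = math.log(k / denom) / (-lam)                     # Eq. (4)
tc = k * c_star + denom * math.exp(-lam * c_star) / lam   # Eq. (2)
u = (1 - math.exp(-lam * c_star)) / (lam * c_star)        # Eq. (6)

print(f"c* = {c_star:.1f} units, TC = ${tc:,.0f}, utilization = {u:.2%}")
# Expected output (up to rounding): c* ≈ 891.8, TC ≈ $18.92M, utilization ≈ 66.17%
```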
Analysis of model behaviors
This section analyzes the relationship between the capacity decision and model variables. The experiment considers a set of possible pricing scenarios and determines the optimal cloud capacity to minimize the total costs. Figures 2 and 3 show that as the price of the private cloud increases, the optimal private cloud capacity decreases rapidly from over 1400 to less than 600, but the optimal utilization rate of the private cloud increases slowly from 0.54 to 0.77. It is conventional wisdom that a higher utilization rate is desirable. A high utilization rate is not necessarily desirable since it is possible to achieve the high utilization rate with an under-capacity investment in the private cloud. It is interesting to note that the slight improvement of the utilization rate comes with a much greater reduction rate of the private cloud capacity.
Optimal Capacity of Private Cloud
Optimal Utilization Rate of Private Cloud
Figure 4 shows that as the price of the public cloud increases, the use of the public cloud decreases, but the use of the private cloud increases to meet the computing demand. Note that the total usage of both private and public cloud is 1000 units. When the price of the public cloud is $1.46, the usage of the public cloud is approximately equal to the usage of the private cloud. It is interesting to note that the usage lines of the public and the private cloud are equidistant from the horizontal line at 500 units.
Comparison of Cloud Usage
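A sketch of the price sweep behind Figure 4 is given below; the specific price points are illustrative, while the remaining parameter values follow the earlier illustration. For the exponential demand, the expected public usage per time unit is $(1/\lambda)e^{-\lambda c^{\ast}}$ and the expected private usage is $(1/\lambda)(1-e^{-\lambda c^{\ast}})$.

```python
# Sketch of the price sweep behind Fig. 4 (illustrative price points only;
# other parameter values follow the illustration above).
import math

lam, k, q, ka, t = 0.001, 10_000, 0.9945, 100, 10_000

for p in [1.0, 1.2, 1.46, 1.7, 1.9]:
    denom = q * p * t + (1 - q) * ka * t
    c_star = math.log(k / denom) / (-lam)            # Eq. (4)
    public = math.exp(-lam * c_star) / lam           # expected units bursting to the public cloud
    private = 1.0 / lam - public                     # expected units served by the private cloud
    print(f"p = {p:4.2f}: c* = {c_star:6.1f}, private = {private:5.1f}, public = {public:5.1f}")
# Near p ≈ 1.46 the two usages cross at roughly 500 units each, as noted above.
```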
An evaluation of and investment in interoperability for cloud bursting
The hybrid cloud requires interoperability of the private and the public cloud to support cloud bursting. Since cloud providers often offer their own proprietary applications, interfaces, and infrastructures, users have difficulty in migrating to cloud providers [40]. Two major approaches were proposed to improve cloud interoperability: provider-centric and user-centric approaches [41]. The provider-centric approaches are driven by a service provider who offers specific services and is willing to develop standards and technologies for customers to achieve a specified level of interoperability between the private and public cloud. On the other hand, the user-centric approaches are driven by the cloud users who either rely on their own in-house IT personnel or a third party (e.g., a cloud broker) to achieve the desired level of interoperability. For example, cloud users may develop a separate layer to handle heterogeneity in cloud environments [42]. In the hybrid computing environment, it is also challenging to decide which applications should be transitioned into the public cloud in time of cloud bursting. Lowering the bursting time and automating the movement process at the proper time require interoperable cloud systems. However, existing commercial tools rely heavily on the system administrator's knowledge to answer key questions such as when a cloud burst is needed and which applications must be moved to the cloud [31].
One benefit of interoperable cloud systems is that it reduces the cost of cloud bursting for users. The interoperable cloud systems may be achieved by using standardized data formats, open source cloud systems, application program interface (API), and specialized middleware. All these require time and investment by the users and/or cloud providers and there may exist a diminishing return on the investments. For example, cloud bursting requires that the company's software is configured to run multiple instances simultaneously and they may need to retrofit existing applications to accommodate multiple instances [43]. Therefore, it is important to evaluate the value of the interoperability-enhancing technologies to support cloud bursting.
This section focuses on the user-centric approaches where a corporate cloud customer decides to invest in interoperability enhancement such as the development of a layer to handle cloud bursting. Mathematical procedures are developed to evaluate investment decisions for the hybrid cloud capacity and the interoperability enhancement simultaneously. The base model is extended to include the investment decision variable for the interoperability enhancement. The simultaneous cloud-interoperability decisions refer to the investment decisions for which both the variable for the cloud capacity and the variable for the interoperability enhancement are solved at the same time in the hope of reducing the total cloud costs synergistically. The cloud-interoperability evaluation and investment model is given as the total cost minimization function (7).
$$ \operatorname{Min}\kern0.5em TC=k\cdotp c+\left(q\cdotp {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdotp p+q\cdotp {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdotp z+\left(1-q\right)\cdotp {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdotp {k}_a\right)\cdotp t+G $$
where z is a per-unit interoperability cost, \( q\cdotp {\int}_c^{\infty}\lambda {e}^{-\lambda x}\left(x-c\right) dx\cdotp z \) is the total interoperability cost of the public cloud per time unit when cloud bursting occurs, and G is the investment to enhance the interoperability. Note that all other terms are the same as those in the base model. The total cost minimization function (7) is transformed to Eq. (8).
$$ TC=k\cdotp c+\left( qpt+ qzt+\left(1-q\right){k}_at\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right)+G $$
Now, the per-unit interoperability cost, z, is defined as an exponential function of the investment, G, as follows.
$$ z=U+\left(W-U\right){e}^{-\beta G},\kern0.5em G\ge 0,\mathrm{and}\kern0.5em 0\le U\le z\le W $$
where W is the highest interoperability cost per time unit of the public cloud incurred when there is no investment in the interoperability enhancement and U is the lowest interoperability cost achievable with the investment of G.
Differentiating Eq. (8) with respect to c and G, respectively, leads to:
$$ \frac{dTC}{dc}=k-{e}^{-\lambda c}\left( qpt+ qzt+\left(1-q\right){k}_at\right) $$
$$ \frac{dTC}{dG}=\left(q{z}^{\prime }t\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right)+1 $$
To simplify the mathematical procedures, set $e^{-\lambda c} = g$.
Setting Eq. (11) to zero then leads to:
$$ \frac{dz}{dG}=\frac{-\lambda }{qtg} $$
The first derivative of Eq. (9) is taken with regard to G. The result is given by:
$$ \frac{dz}{dG}=-\beta \left(z-U\right)<0 $$
Setting Eqs. (12) and (13) equal leads to:
$$ z=\frac{\lambda + U\beta qtg}{\beta qtg} $$
If the optimal private cloud capacity c* is given, the optimal z* is:
$$ {z}^{\ast }=\frac{\lambda + U\beta qt{e}^{-\lambda {c}^{\ast }}}{\beta qt{e}^{-\lambda {c}^{\ast }}} $$
To find the simultaneous solution for both the private cloud capacity and the interoperability enhancement, set Eq. (10) to zero, and plug Eq. (14) in the equation to get:
$$ g=\frac{\left(\beta k-\lambda \right)}{\left(\beta q pt+ U\beta qt+\beta {k}_at-\beta q{k}_at\right)} $$
Given the optimal g* (i.e., $e^{-\lambda c^{\ast}}$), z*, c*, and G* are obtained as follows:
$$ {z}^{\ast }=\frac{\lambda + U\beta qt{g}^{\ast }}{\beta qt{g}^{\ast }} $$
$$ {c}^{\ast }=\frac{-\ln {g}^{\ast }}{\lambda } $$
$$ {G}^{\ast }=\frac{-\left(\ln \frac{\left({z}^{\ast }-U\right)}{\left(W-U\right)}\right)}{\beta } $$
Table 1 shows the improvement made by the simultaneous cloud-interoperability investment decision over the sequential cloud-interoperability investment decision and the no-investment decision. The sequential cloud-interoperability investment decision refers to the evaluation process in which only one decision variable is solved at a time. First, using Eq. (4), the optimal private cloud capacity, c*, is determined without considering the optimal interoperability investment, G*. Second, based on the optimal private cloud capacity, using Eq. (15), the investment in the optimal interoperability enhancement is determined. For example, assume the same base model's parameters (the values listed under "Base parameters" in the sensitivity analysis below). Then, the optimal private cloud capacity c* ≈ 435 and $g = e^{-\lambda c^{\ast}} = e^{-0.001 \times 435} = 0.64746$. The optimal interoperability cost, z*, is $0.4883. The optimal investment in the interoperability enhancement, G*, is $1,807,338. Table 1 shows that the sequential cloud-interoperability investment decision results in a lower interoperability cost per computing unit than the simultaneous cloud-interoperability investment decision, but it brings about overinvestment in the interoperability enhancement and consequently larger total cloud costs, as it does not take into consideration the synergy effect. On the other hand, the simultaneous cloud-interoperability investment decision leads to an optimal investment, considering the synergy effect between the interoperability cost and the cloud capacity cost. Surprisingly, the sequential interoperability investment generates a higher total cost than no interoperability investment.
Table 1 Comparison of investment approaches
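The comparison can be reproduced from Eqs. (4), (8), and (15)-(19); the sketch below uses the base parameter values of the sensitivity analysis and assumes, as an interpretation of Table 1 that is not spelled out in the text, that the no-investment case re-optimizes the private capacity with z fixed at W. Small differences from the reported figures may remain due to rounding.

```python
# Sketch comparing the three approaches via Eqs. (4), (8) and (15)-(19), using the
# base parameter values of the sensitivity analysis (p = 1.0, U = 0.1, W = 0.9,
# beta = 4e-7). It is assumed (an interpretation, not stated in the text) that the
# no-investment case re-optimizes the private capacity with z fixed at W.
import math

lam, k, p, q, ka, t = 0.001, 10_000, 1.0, 0.9945, 100, 10_000
U, W, beta = 0.1, 0.9, 4e-7

def total_cost(c, z, G):
    """Eq. (8): private capacity cost + expected bursting cost + investment."""
    return k * c + (q * p * t + q * z * t + (1 - q) * ka * t) * math.exp(-lam * c) / lam + G

# No interoperability investment: z = W, capacity from Eq. (4) with effective price p + W
c_no = math.log(k / (q * (p + W) * t + (1 - q) * ka * t)) / (-lam)
tc_no = total_cost(c_no, W, 0.0)

# Sequential: capacity from Eq. (4) ignoring z, then z* and G* from Eqs. (15) and (19)
c_seq = math.log(k / (q * p * t + (1 - q) * ka * t)) / (-lam)
g_seq = math.exp(-lam * c_seq)
z_seq = (lam + U * beta * q * t * g_seq) / (beta * q * t * g_seq)
G_seq = -math.log((z_seq - U) / (W - U)) / beta
tc_seq = total_cost(c_seq, z_seq, G_seq)

# Simultaneous: closed forms of Eqs. (16)-(19)
g_sim = (beta * k - lam) / (beta * q * p * t + U * beta * q * t + beta * ka * t - beta * q * ka * t)
c_sim = -math.log(g_sim) / lam
z_sim = (lam + U * beta * q * t * g_sim) / (beta * q * t * g_sim)
G_sim = -math.log((z_sim - U) / (W - U)) / beta
tc_sim = total_cost(c_sim, z_sim, G_sim)

for name, tc in [("no investment", tc_no), ("sequential", tc_seq), ("simultaneous", tc_sim)]:
    print(f"{name:14s} total cost = ${tc:,.0f}")
```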
Sensitivity analysis of cloud-interoperability investment decision
This section further evaluates the performance of the sequential and simultaneous investment approaches discussed above and conducts sensitivity analyses to understand model behaviors of the investment decisions by changing parameter values. The base parameter values assumed are presented below.
Base parameters
k: a cost of $10,000 per unit private cloud capacity
p: the unit price of $1.0 for public cloud service per time unit
z: the interoperability cost of $0.9 without investment
q: the guaranteed service level of 99.45%
ka: a default downtime loss/penalty cost of $100 per time unit
t: 10,000 time units in a decision horizon
U = $0.1
W = $0.9
λ = 0.001
β = 0.0000004
Figure 5 shows that as the computing demand increases, the optimal interoperability cost, z*, decreases with more investment in the interoperability enhancement, G. The decline of the per-unit interoperability cost is more rapid when the demand is smaller and the decline slows down as the demand increases.
Change of Computing Demand and Optimal Interoperability Cost
Figure 6 shows that as the demand level increases, the difference between the total cost of the simultaneous cloud-interoperability investment and that of the no-investment decision gets wider. The simultaneous cloud-interoperability investment dominates no investment at all demand levels. Figure 7 shows that the total cost of the sequential cloud-interoperability investment is higher than that of the no-investment decision when the demand level is small, but becomes lower than that of the no-investment decision as the demand gets larger.
Change of Computing Demand and Total Costs by Simultaneous Investment and No Investment
Change of Computing Demand and Total Costs by Sequential Investment and No Investment
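A rough sketch of the demand sweep behind Figures 5 through 7 follows; the demand levels are hypothetical (chosen so that the unconstrained optimum keeps z* below W), and the remaining values follow the base parameters above.

```python
# Illustrative sweep behind Figs. 5-7: vary the mean computing demand (1/lambda)
# and compare the simultaneous optimum with the no-investment case. The demand
# levels are hypothetical; other values follow the base parameters above.
import math

k, p, q, ka, t = 10_000, 1.0, 0.9945, 100, 10_000
U, W, beta = 0.1, 0.9, 4e-7

for mean_demand in [1000, 2000, 3000, 4000, 5000]:
    lam = 1.0 / mean_demand
    # Simultaneous optimum, Eqs. (16)-(19)
    g = (beta * k - lam) / (beta * q * p * t + U * beta * q * t + beta * ka * t - beta * q * ka * t)
    c = -math.log(g) / lam
    z = (lam + U * beta * q * t * g) / (beta * q * t * g)
    G = -math.log((z - U) / (W - U)) / beta
    tc_sim = k * c + (q * (p + z) * t + (1 - q) * ka * t) * g / lam + G
    # No interoperability investment: z = W, capacity from Eq. (4)
    denom_w = q * (p + W) * t + (1 - q) * ka * t
    c_no = math.log(k / denom_w) / (-lam)
    tc_no = k * c_no + denom_w * math.exp(-lam * c_no) / lam
    print(f"mean demand {mean_demand:5d}: z* = {z:.3f}, G* = ${G:,.0f}, "
          f"saving vs no investment = ${tc_no - tc_sim:,.0f}")
```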
The analysis indicates that an organization with a large cloud demand level would achieve a greater cost advantage from the interoperability enhancement than an organization with a small demand, since a marginal increase of the investment in the interoperability enhancement generates large savings for the total cloud cost when the demand is high. On the other hand, many small and medium-sized companies may find the use of a third-party cloud broker to be more cost effective than an in-house development of interoperability technologies. Small and medium-sized companies may also reap benefits from cloud providers which offer standardized interoperable cloud technologies.
Return on investment of an interoperability project
As in many other IT projects, one of the barriers to the investment in cloud technologies is difficulty in measuring the potential return on investment (ROI). ROI is one of the most crucial criteria for companies to consider when investing in a new technology. ROI analyses have been used in various technology evaluations [44]. For example, Lee [45] develops cost/benefit models to measure the impact of RFID-integrated quality improvement programs and to predict the ROI. Using these models, the decision makers decide whether and how much to invest in quality improvement projects. An ROI model was developed for an economic evaluation of cloud computing investment [27].
While a formula for calculating ROI is relatively straightforward, the ROI method is more suitable when both the benefits and the costs of an investment are easily traceable and tangible. Since an investment in cloud technologies is a capital expenditure, the investment is likely to be under scrutiny of senior management for the budget approval. The investment evaluation and decision model in the previous sections identifies the optimal investment point where the marginal cost saving equals the marginal investment. However, it is possible that the optimal solution does not necessarily meet the ROI threshold rate imposed by the organization. The ROI threshold rate is the minimum ROI level for which a company will make investments. The ROI threshold rate is usually tied to the cost of capital and is used to screen out low productivity projects under financial constraints. This section derives a formula to identify the target ROI of the interoperability enhancement project.
To illustrate the calculation of the ROI, suppose the total cloud cost without investment in interoperability is $100 m. Assume that with the investment of $10 million, the total cloud cost including the investment is $99 m. In this case the ROI is 10% (i.e., ($100 m-$99 m)/$10 m). If the ROI threshold rate of the company is 20%, the project would be rejected even though the investment of $10 million reduces the total cloud cost. The basic ROI formula is given as:
$$ ROI=\left(\frac{Total\ Cost\ with out\ investment- Total\ cost\ with\ investment}{Invesment}\right)\cdot 100 $$
Since the optimal investment, G*, is the point where the marginal cost saving equals the marginal investment, it is expected that the decrease of the investment amount increases the ROI. Equation (21) is used to measure the ROI.
$$ \frac{I-\left(k\cdotp c+\left( qpt+ qzt+\left(1-q\right){k}_at\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right)+G\right)}{G}=r $$
where I is the total cloud cost without investment in interoperability and r is the ROI target threshold.
To find the investment level G that achieves the target ROI using the Newton-Raphson method in one variable, the following function is formulated:
$$ f(G)=I-\left(k\cdotp c+\left( qpt+ qzt+\left(1-q\right){k}_at\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right)+G\right)- rG $$
Then, the first derivative of f(G) is:
$$ {f}^{\prime }(G)=-\left(q{z}^{\prime }t\right)\left(\frac{1}{\lambda }{e}^{-\lambda c}\right)-1-r $$
where $z^{\prime} = -\beta(W - U)e^{-\beta G}$.
To begin the search for the target ROI, r, set $G_0$ at the $G^{\ast}$ found in Eq. (19).
$$ {G}_1={G}_0-\frac{f\left({G}_0\right)}{f^{\prime}\left({G}_0\right)} $$
The process is repeated as:
$$ {G}_{n+1}={G}_n-\frac{f\left({G}_n\right)}{f^{\prime}\left({G}_n\right)} $$
until a sufficiently accurate value of G is reached.
As an illustration of the above Newton-Raphson method, assume that the target ROI is 20%. Using the base parameters given in the "Sensitivity analysis of cloud-interoperability investment decision" section, $G_0$ is set at $932,129. $G_1$ is $690,836 and the ROI is 14.83%. $G_2$ is $620,350 and the ROI is 18.72%. $G_3$ is $612,780 and the ROI is 19.87%. $G_4$ is $612,690 and the ROI is 19.9985%. After only four iterations of Eqs. (24) and (25), a sufficiently accurate value of G is obtained to achieve the target ROI of 20%. Compared to a time-consuming linear search for the value of G that attains the target ROI, the Newton-Raphson method achieves tremendous efficiency in the cloud investment decision and can be a useful performance management technique.
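A sketch of this iteration is given below. Two details are assumptions rather than statements from the text: the private capacity is re-optimized via Eq. (4) with the effective public price p + z(G) for each candidate G, and I is taken as the optimal total cost with no interoperability investment. Because of rounding and these assumptions, the printed iterates and ROI values may differ slightly from, and pair slightly differently than, the sequence quoted above.

```python
# Sketch of the Newton-Raphson search of Eqs. (22)-(25) for the investment G that
# hits a 20% ROI target. Assumptions (not spelled out in the text): for each G the
# capacity is re-optimized via Eq. (4) with effective public price p + z(G), and
# I is the optimal total cost with no interoperability investment.
import math

lam, k, p, q, ka, t = 0.001, 10_000, 1.0, 0.9945, 100, 10_000
U, W, beta, r = 0.1, 0.9, 4e-7, 0.20

def z_of(G):                                       # Eq. (9)
    return U + (W - U) * math.exp(-beta * G)

def total_cost(G):
    """Total cloud cost for investment G with the capacity re-optimized via Eq. (4)."""
    z = z_of(G)
    denom = q * (p + z) * t + (1 - q) * ka * t
    c = math.log(k / denom) / (-lam)
    return k * c + denom * math.exp(-lam * c) / lam + G, c

I, _ = total_cost(0.0)                             # total cost without interoperability investment

def f(G):                                          # Eq. (22)
    tc, _ = total_cost(G)
    return I - tc - r * G

def fprime(G):                                     # Eq. (23)
    _, c = total_cost(G)
    zprime = -beta * (W - U) * math.exp(-beta * G)
    return -(q * zprime * t) * math.exp(-lam * c) / lam - 1 - r

# Start from the cost-minimizing investment G* of Eqs. (16)-(19)
g = (beta * k - lam) / (beta * q * p * t + U * beta * q * t + beta * ka * t - beta * q * ka * t)
G = -math.log(((lam + U * beta * q * t * g) / (beta * q * t * g) - U) / (W - U)) / beta

for n in range(10):                                # Eqs. (24)-(25)
    step = f(G) / fprime(G)
    G -= step
    tc, _ = total_cost(G)
    print(f"G_{n + 1} = ${G:,.0f}, ROI at this G = {(I - tc) / G:.2%}")
    if abs(step) < 1.0:                            # stop once G changes by less than $1
        break
```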
A sensitivity analysis is conducted to investigate the relationship between the ROI and G. Note that to minimize the total cloud cost, the optimal investment should be $932,129 and the ROI is 14.82%. Figure 8 shows that the decrease of the investment from $932,129 generates higher ROIs, but the increase of the investment from $932,129 decreases the ROIs. If the ROI threshold rate of the company is higher than 14.82%, they need to decrease the investment even though the total cloud cost is the lowest at the investment of $932,129. With the ROI, managers would be able to develop a strong justification for the investment.
Change of the ROI over the Investment in Interoperability
Diminishing return applies to the ROI of the interoperability investment. A corporate cloud customer can find that for their given computing demand distribution, there is an optimal level of interoperability investment that minimizes the total cloud cost. Up to the optimal level of investment, any additional investment will increase the return. However, over that optimal level of investment, any additional investment will continue to diminish the return. The corporate cloud customer may have an ROI threshold rate which is equal to the company's minimum acceptable ROI rate. If the ROI at the optimal total cloud cost is greater than or equal to the ROI threshold rate, then the interoperability investment level needed for the optimal total cloud cost is justified. However, if the ROI at the optimal total cloud cost is less than the ROI threshold rate, the company needs to lower the interoperability investment to the point where the ROI equals their ROI threshold rate.
According to Forrester [46], in 2018, cloud computing became a must-have technology for every enterprise. Nearly 60% of North American enterprises are using some type of public cloud platform. Furthermore, private clouds are also growing fast, as companies not only move workloads to the public cloud but also develop on-premises private cloud in their own data centers. Therefore, this paper predicts that the corporate adoption of hybrid cloud computing for corporate cloud capacity management is an irreversible trend. The demand for the hybrid cloud will continue to grow in the future as big data, smart phones, and the Internet of Things (IoT) technologies require highly scalable infrastructure to meet the growing but fluctuating computing demand.
Advances in cloud computing hold great promise for lowering computing costs and speeding up new business developments. However, despite the popularity of cloud computing, no significant investment evaluation models are available. The existing body of studies are mostly descriptive in nature. The optimal cloud capacity investment has been elusive and therefore investments have been based on gut feelings rather than solid decision models. Hence, this paper presented a capacity evaluation and investment decision model for the hybrid cloud. A closed form solution was derived for the optimal capacity decision. This model provides a solid foundation to evaluate investment decisions for various cloud technologies, moving beyond the existing descriptive valuation studies to normative studies, the outcomes of which should be able to guide cloud professionals to plan on how much they need to invest in different cloud technologies and how they can enhance the investment benefits from a corporate customer's perspective.
While there is widespread cloud adoption and technological breakthrough, the interoperability issue remains a barrier. To address the interoperability issue in the hybrid cloud, this paper also developed the cloud-interoperability evaluation and investment model by extending the basic model. This model derives the optimal investment levels for both the cloud capacity and the interoperability enhancement simultaneously. The relationship between the investment in the interoperability enhancement and ROI was also investigated.
This study is the first effort in developing an analytical cloud capacity evaluation and investment decision model for the hybrid cloud, and analyzing the impact of the interoperability investment on the cost savings and ROI. This model can be easily adapted to a variety of investment situations in both for-profit organizations and non-profit organizations such as hospitals, governments, and libraries that have similar computing demand and cost structures.
Like many studies, this study has several limitations. First, the successful use of these models requires accurate estimation of the model parameters and the use of various modeling techniques. Future research may explore different estimation techniques for the model parameters to develop more realistic models. Second, future studies need to investigate additional variables for the model. For example, different cloud providers employ different schemes and models for pricing [47], and this diversity in pricing models makes price comparison difficult [48]. Incorporating variable pricing may be a worthwhile direction for future research. The interoperability costs may include not only technical architecture or wrappers but also data transport costs; a future model may include data transport costs in the cloud capacity decision. Lastly, despite an increased focus on cloud cost management, only a minority of companies have implemented policies to minimize wasted cloud resources, such as shutting down unused workloads or rightsizing instances [5]. Implementation of well-defined cloud governance and adaptive cloud scheduling techniques may help companies minimize wasted cloud resources and avoid overinvestment in cloud capacity. Researchers are encouraged to explore the above-mentioned limitations in their future research.
Data sharing not applicable to this article as no datasets were generated or analyzed during the current study.
Varghese B, Buyya R (2018) Next generation cloud computing: new trends and research directions. Future Gener Comput Syst 79(Part 3):849–861
Birje MN, Challagidad PS, Goudar RH, Tapale MT (2017) Cloud computing review: concepts, technology, challenges and security. Int J Cloud Comput 6(1):32–57
Hentschel R, Leyh C, Petznick A (2018) Current cloud challenges in Germany: the perspective of cloud service providers. J Cloud Comput 7:5. https://doi.org/10.1186/s13677-018-0107-6
Opara-Martins J, Sahandi R, Tian F (2016) Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective. J Cloud Comput Adv Syst Appl 5:4. https://doi.org/10.1186/s13677-016-0054-z
RightScale (2019). State of the cloud report. Available from: https://www.rightscale.com/lp/state-of-the-cloud?campaign=RS-HP-SOTC2019
Lee I (2017) Determining an optimal mix of hybrid cloud computing for enterprises. UCC '17 companion. In: Proceedings of the10th international conference on utility and cloud computing, pp 53–58
Azumah KK, Sørensen LT, Tadayoni R (2018) Hybrid cloud service selection strategies: a qualitative meta-analysis. In: 2018 IEEE 7th international conference on adaptive science & technology (ICAST), Accra, Ghana
Litoiu M, Wigglesworth J, Mateescu R (2016) The 8th CASCON workshop on cloud computing. In: Proceeding CASCON '16 proceedings of the 26th annual international conference on computer science and software engineering, pp 300–303
Computerweekly (2018) Available from: https://www.computerweekly.com/news/252452059/Uber-backs-hybrid-cloud-as-route-to-business-and-geographical-expansion. Accessed 21 January 2019.
Alashoor T (2014) Cloud computing: a review of security issues and solutions. Int J Cloud Comput 3(3):228–244
Khazaei H, Misic J, Misic VB (2011) Modelling of cloud computing centers using M/G/m queues. In: 2011 31st international conference on distributed computing systems workshops, Minneapolis, MN, USA, pp 87–92
Moreno I, Garraghan P, Townend P, Xu J (2013) An approach for characterizing workloads in google cloud to derive realistic resource utilization models. In: Proceedings of 7th international symposium on service oriented system engineering (SOSE), Redwood City, CA, pp 49–60
Magalhães D, Calheiros RN, Buyya R, Gomes DG (2015) Workload modeling for resource usage analysis and simulation in cloud computing. Comput Electr Eng 47:69–81
Rochman Y, Levy H, Brosh E (2017) Dynamic placement of resources in cloud computing and network applications. Perform Eval 115:1–37
Gupta V, Harchol-Balter M, Sigman K, Whitt W (2007) Analysis of join-the-shortest-queue routing for web server farms. Perform Eval 64(9–12):1062–1081
Yang B, Tan F, Dai Y, Guo S (2009) Performance evaluation of cloud service considering fault recovery. In: First Int'l conference on cloud computing CloudCom, Beijing, China, pp 571–576
Xiong K, Perros H (2009) Service performance and analysis in cloud computing. In: IEEE 2009 world conference on services, Los Angeles, CA, pp 693–700
Patch B, Taimre T (2018) Transient provisioning and performance evaluation for cloud computing platforms: a capacity value approach. Perform Eval 118:48–62
Wolski R, Brevik J (2014) Using parametric models to represent private cloud workloads. IEEE Trans Serv Comput 7(4):714–725
Grid Workloads Archive (2017) Available from: http://gwa.ewi.tudelft.nl/. Accessed 21 January 2019.
Araujo J, Maciel P, Andrade E, Callou G, Alves V, Cunha P (2018) Decision making in cloud environments: an approach based on multiple-criteria decision analysis and stochastic models. J Cloud Comput Adv Syst Appl 7:7. https://doi.org/10.1186/s13677-018-0106-7
López-Pires F, Barán B (2017) Cloud computing resource allocation taxonomies. Int J Cloud Computing 6(3):238–264
Balaji M, Kumar A, Rao SVRK (2018) Predictive cloud resource management framework for enterprise workloads. J King Saud Univ Comput Inf Sci 30(3):404–415
Wang CF, Hung WY, Yang CS (2014) A prediction based energy conserving resources allocation scheme for cloud computing. In: 2014 IEEE international conference on granular computing (GrC), Noboribetsu, Japan, pp 320–324
Han Y, Chan J, Leckie C (2013) Analysing virtual machine usage in cloud computing. In: 2013 IEEE ninth world congress on services, Santa Clara, CA, USA, pp 370–377
Deng D, Lu Z, Fang W, Wu J (2013) CloudStreamMedia: a cloud assistant global video on demand leasing scheme. In: 2013 IEEE international conference on services computing, Santa Clara, CA, USA, pp 486–493
Misra SC, Monda A (2011) Identification of a company's suitability for the adoption of cloud computing and modelling its corresponding return on investment. Math Comput Model 5(3/4):504–521
de Assunção MD, di Costanzo A, Buyya R (2010) A cost-benefit analysis of using cloud computing to extend the capacity of clusters. Clust Comput 13(3):335–347
Gmach D, Rolia J, Cherkasova L, Kemper A (2007) Capacity management and demand prediction for next generation data centers. In: IEEE international conference on web services, Salt Lake City, UT, USA, pp 43–50
Laatikainen G, Mazhelis O, Tyrvainen P (2016) Cost benefits of flexible hybrid cloud storage: mitigating volume variation with shorter acquisition cycle. J Syst Softw 122:180–201
Guo T, Sharma U, Shenoy P, Wood T, Sahu S (2014) Cost-aware cloud bursting for enterprise applications. ACM Trans Internet Technol (TOIT) 13(3):10 22 pages
Clemente-Castelló FJ, Mayo R, Fernández JC (2017) Cost model and analysis of iterative MapReduce applications for hybrid cloud bursting. In: CCGrid '17 Proceedings of the 17th IEEE/ACM international symposium on cluster, cloud and grid computing, Madrid, Spain, pp 858–864
Weinman J (2016) Hybrid cloud economics. IEEE Cloud Comput 3(1):18–22
Toosi AN, Sinnott RO, Buyya R (2018) Resource provisioning for data-intensive applications with deadline constraints on hybrid clouds using Aneka. Future Gener Comput Syst 79(Part 2):765–775
Petcu D (2011) Portability and interoperability between clouds: challenges and case study. In: Abramowicz W, Llorente IM, Surridge M, Zisman A, Vayssière J (eds) Towards a service-based internet. ServiceWave 2011. Lecture notes in Computer Science, vol 6994. Springer, Berlin, pp 62–74
Edmonds A, Metsch T, Papaspyrou A, Richardson A (2012) Toward an open cloud standard. IEEE Internet Comput 16(4):15–25
Henneberger M (2016) Covering peak demand by using cloud services - an economic analysis. J Decis Syst 25(2):118–135
Fan TJ, Chang XY, Gu CH, Yi JJ, Deng S (2014) Benefits of RFID technology for reducing inventory shrinkage. Int J Prod Econ 147(Part C):659–665
451 Research (2017) Can private cloud be cheaper than public cloud? Available from: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/products/vrealize-suite/vmware-paper1-can-private-cloud-be-cheaper-than-public-cloud.pdf
Mansour I, Sahandi R, Cooper K, Warman A (2016) Interoperability in the heterogeneous cloud environment: a survey of recent user-centric approaches. In: ICC '16 proceedings of the international conference on internet of things and cloud computing, Article No. 62, Cambridge: ACM, New York.
Toosi AN, Calheiros RN, Buyya R (2014) Interconnected cloud computing environments: challenges, taxonomy, and survey. ACM Comput Surv 47(1):7 47 pages
Zhang Z, Wu C, Cheung DWL (2013) A survey on cloud interoperability: taxonomies, standards, and practice. ACM SIGMETRICS Perform Eval Rev 40(4):13–22
Morpheus (2016) If cloud bursting is so great, why aren't more companies doing it? Available from: https://www.morpheusdata.com/blog/2016-10-26-if-cloud-bursting-is-so-great-why-aren-t-more-companies-doing-it
Sarac A, Absi N, Dauzère-Pérès S (2010) A literature review on the impact of RFID technologies on supply chain management. Int J Prod Econ 128(1):77–95
Lee HH (2008) The investment model in preventive maintenance in multi-level production systems. Int J Prod Econ 112(2):816–828
Forrester (2018) Predictions 2019: Cloud computing comes of age as the foundation for enterprise digital transformation. Available from: https://go.forrester.com/blogs/predictions-2019-cloud-computing/
Al-Roomi M, Al-Ebrahim S, Buqrais S, Ahmad I (2013) Cloud computing pricing models: a survey. Int J Grid Distr Comput 6(5):93–106
Lehmann S, Buxmann P (2009) Pricing strategies of software vendors. Bus Inf Syst Eng 1(6):452–462
Cecil P. McDonough Endowed Professor in Business, School of Computer Sciences, Western Illinois University, Macomb, IL, 61455, USA
In Lee
In Lee is the sole author of this study. The author read and approved the final manuscript.
Correspondence to In Lee.
Lee, I. An optimization approach to capacity evaluation and investment decision of hybrid cloud: a corporate customer's perspective. J Cloud Comp 8, 15 (2019) doi:10.1186/s13677-019-0140-0
Accepted: 15 October 2019
Engora Data Blog
Mike Woodward's thoughts on data and analytics. I cover management, law, statistics, company culture, hiring, data visualization, Python, SQL and a whole bunch of other areas.
W.E.B. Du Bois - data scientist
Changing the world through charts
Two of the key skills of a data scientist are informing and persuading others through data. I'm going to show you how one man, and his team, used novel visualizations to illustrate the lives of African-Americans in the United States at the start of the 20th century. Even though they created their visualizations by hand, these visualizations still have something to teach us over 100 years later. The team's lack of computers freed them to try different forms of data visualizations; sometimes their experimentation was successful, sometimes less so, but they all have something to say and there's a lesson here on communication for today's data scientists.
I'm going to talk about W.E.B. Du Bois and the astounding charts his team created for the 1900 Paris exhibition.
(W.E.B. Du Bois in 1904 and one of his 1900 data visualizations.)
Who was W.E.B. Du Bois?
My summary isn't going to do his amazing life justice, so I urge you to read any of the many short accounts of who he was and what he did.
To set the scene here's just a very brief list of some of the things he did. Frankly, summarizing his life in a few lines is ridiculous.
Born 1868, Great Barrington, Massachusetts
Graduate of Fisk University and Harvard - the first African-American to gain a Ph.D. from Harvard
Conducted ground-breaking sociological work in Philadelphia, Virginia, Alabama, and Georgia
His son died in 1899 because no white doctor would treat him and black doctors were unavailable
Was the primary organizer of "The Exhibit of American Negroes" at the Exposition Universelle held in Paris between April and November 1900
NAACP director and editor of the NAACP magazine The Crisis
Debated Lothrop Stoddard, a "scientific racist" in 1929 and thoroughly bested him.
Opposed US involvement in World War I and II.
Life-long peace activist and campaigner, which led to the FBI investigating him in the 1950s as a suspected communist. They withheld his passport for 8 years.
Died in Ghana in 1963.
Visualizing Black America at the start of the twentieth century
In 1897, Du Bois was a history professor at Atlanta University. His former classmate and friend, Thomas Junius Calloway, asked him to produce a study of African-Americans for the 1900 Paris world fair, the "Exposition Universelle". With the help of a large team of Atlanta University students and alumni, Du Bois gathered statistics on African-American life over the years and produced a series of infographics to bring the data to life. Most of the names of the people who worked on the project are unknown, and it's a mystery who originated the form of the plots, but the driving force behind the project was undoubtedly Du Bois. Here are some of my favorite infographics from the Paris exhibition.
The chart below shows where African-Americans lived in Georgia in 1890. There are four categories:
Red - country and villages
Yellow - cities 2,500-5,000
Blue - cities 5,000-10,000
Green - cities over 10,000
The length of the lines is proportional to the population, and the chart clearly shows that the huge majority of the population lived in the country and villages. I find the chart striking for three reasons: it doesn't follow any of the modern charting conventions, it clearly represents the data, and it's visually very striking. My criticism is that the design makes it hard to visually quantify the differences, for example, how many more people live in the country and villages compared to cities of 5,000-10,000? If I were drawing a chart with the same data today, I might use an area chart; it would quantify things better, but it would be far less visually interesting.
The next infographic is two choropleth charts that show the African-American population of Georgia counties in 1870 and 1880. Remember that the US civil war ended in 1865, and with the Union victory came freedom for the slaves. As you might expect, there was a significant movement of the now-free people. Looking at the charts in detail raises several questions, for example, why did some areas see a growth in the African-American population while other areas did not? Why did the highest populated areas remain the highest populated? The role of any good visualization is to prompt meaningful questions.
This infographic shows the income and expenditure of 150 African-American families in Georgia. The income bands are on the left-hand side, and the bar chart breaks down the families' expenses by category:
Black - rent
Purple - food
Pink - clothes
Dark blue - direct taxes
Light blue - other expenses and savings
There are several notable observations from this chart: the disappearance of rent above a certain income level, the rise in other expenses and savings with rising income, and the declining fraction spent on clothing. There's a lot on this chart and it's worthy of greater study; Du Bois' team crammed a great deal of meaning into a single page. For me, the way the key is configured at the top of the chart doesn't quite work, but I'm willing to give the team a pass on this because it was created in the 19th century. A chart like this wouldn't look out of place in a 2022 report - which in itself is startling.
My final example is a comparison of the occupations of African-Americans and the white population in Georgia. It's a sort-of pie chart, with the upper quadrant showing African Americans and the bottom quadrant showing the white population. Occupations are color-coded:
Red - agriculture, fishery, and mining
Yellow - domestic and personal service
Blue - manufacturing and mechanical industries
Grey - trade and transportation
Brown - professions
The fraction of the population in these employment categories is written on the slices, though it's hard to read because the contrast isn't great. Notably, the order of the occupations is reversed from the top to the bottom quadrant, which has the effect of making the sizes of the slices easier to compare - this can't be an accident. I'm not a fan of pie charts, but I do like this presentation.
Influences on later European movements - or not?
Two things struck me about Du Bois' charts: how modern they looked and how similar they were to later art movements like the Italian Futurists and Bauhaus.
At first glance, his charts look to me like they'd been made in the 1960s. The lettering and coloring were obviously done by hand, pre-computerization, but everything else about the charts suggests modernity, from the typography to the choice of colors to the layout. The experimentation with form is striking and is another reason why this looks very 1960s to me; perhaps the use of computers to visualize data has constrained us too much. Remember, Du Bois's mission was to explain and convince, and he chose his charts and their layout to do so, hence the experimentation with form. It's quite astonishing how far ahead of his time he was.
Italian Futurism started in 1909 and largely fizzled out at the end of the second world war due to its close association with fascism. The movement emphasized the abstract representation of dynamism and technology among other things. Many futurist paintings used a restricted color palette and have obvious similarities with Du Bois' charts, here are just a few examples (below). I couldn't find any reliable articles that examined the links between Du Bois' work and futurism.
Numbers In Love - Giacomo Balla (image from WikiArt)
Music - Luigi Russolo (image from WikiArt)
The Bauhaus design school (1919-1933) sought to bring modernity and artistry into mass production and had a profound and lasting effect on the design of everyday things, even into the present day. Bauhaus designs tend to be minimal ("less is more") and focus on functionality ("form follows function") but can look a little stark. I searched, but I couldn't find any scholarly study of the links between Du Bois and Bauhaus, however, the fact the Paris exposition charts and the Bauhaus work use a common visual language is striking. Here's just one example, a poster for the Bauhaus school from 1923.
(Joost Schmidt, Public domain, via Wikimedia Commons)
Du Bois' place in data visualization
I've read a number of books on data visualization. Most of them include Nightingale's coxcomb plots and Playfair's bar and pie charts, but none of them include Du Bois' charts. Du Bois didn't originate any new chart types, which is maybe why the books ignore him, but his charts are worth studying because of their experimentation with form, their use of color, and most important of all, their ability to communicate meaning clearly. Ultimately, of course, this is the only purpose of data visualization.
Reading more
W. E. B. Du Bois's Data Portraits: Visualizing Black America, Whitney Battle-Baptiste, Britt Rusert. This is the book that brought these superb visualizations to a broader audience. It includes a number of full-color plates showing the infographics in their full glory.
The Library of Congress has many more infographics from the Paris exhibition, it also has photos too. Take a look at it for yourself here https://www.loc.gov/collections/african-american-photographs-1900-paris-exposition/?c=150&sp=1&st=list - but note the charts are towards the end of the list. I took all my charts in this article from the Library of Congress site.
"W.E.B. Du Bois' Visionary Infographics Come Together for the First Time in Full Color" article in the Smithsonian magazine that reviews the Battle-Baptiste book (above).
"W. E. B. Du Bois' Hand-Drawn Infographics of African-American Life (1900)" article in Public Domain Review that reviews the Battle-Baptiste book (above).
Labels: data visualization, w.e.b. du bois
Prediction, distinction, and interpretation: the three parts of data science
What does data science boil down to?
Data science is a relatively new discipline that means different things to different people (most notably, to different employers). Some organizations focus solely on machine learning, while others lean on interpretation, and yet others get close to data engineering. In my view, all of these are part of the data science role.
I would argue data science generally is about three distinct areas:
Prediction. The ability to accurately extrapolate from existing data sets to make forecasts about future behavior. This is the famous machine learning aspect and includes solutions like recommender systems.
Distinction. The key question here is: "are these numbers different?". This includes the use of statistical techniques to decide if there's a difference or not, for example, specifying an A/B test and explaining its results.
Interpretation. What are the factors that are driving the system? This is obviously related to prediction but has similarities to distinction too.
(A similar view of data science to mine: Calvin.Andrus, CC BY-SA 3.0, via Wikimedia Commons)
I'm going to talk through these areas and list the skills I think a data scientist needs. In my view, to be effective, you need all three areas. The real skill is to understand what type of problem you face and to use the correct approach.
Distinction - are these numbers different?
This is perhaps the oldest area and the one you might disagree with me on. Distinction is firmly in the realm of statistics. It's not just about A/B tests or quasi-experimental tests, it's also about evaluating models too.
Confidence intervals.
Sample size calculations. This is crucial and often overlooked by experienced data scientists. If your data set is too small, you're going to get junk results, so you need to know what 'too small' is. In the real world, increasing the sample size is often not an option and you need to know why. (A minimal sample-size sketch follows this list.)
Hypothesis testing. You should know the difference between a t-test and a z-test and when a z-test is appropriate (hint: sample size).
α, β, and power. Many data scientists have no idea what statistical power is. If you're doing any kind of statistical testing, you need to have a firm grasp of power.
The requirements for running a randomized control trial (RCT). Some experienced data scientists have told me they were analyzing results from an RCT, but their test just wasn't an RCT - they didn't really understand what an RCT was.
Quasi-experimental methods. Sometimes, you just can't run an RCT, but there are other methods you can use including difference-in-difference, instrumental variables, and regression discontinuity. You need to know which method is appropriate and when.
Regression to the mean. This is why you almost always need a control group. I've seen experienced data scientists present results that could almost entirely be explained by regression to the mean. Don't be caught out by one of the fundamentals of statistics.
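As a concrete illustration of the sample size, α, β, and power items above, here's a minimal Python sketch of a per-group sample size calculation for a two-sided, two-sample z-test of proportions; the conversion rates are made-up numbers, not from any real test.

from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    # Per-group sample size for a two-sided, two-sample z-test of
    # proportions at significance level alpha with power 1 - beta.
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the test
    z_beta = norm.ppf(power)            # quantile corresponding to the power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1                   # round up to whole participants

# Illustrative numbers: detecting a lift from a 10% to a 12% conversion rate
print(sample_size_two_proportions(0.10, 0.12))  # roughly 3,800 per group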
Prediction - what will happen next?
This is the piece of data science that gets all the attention, so I won't go into too much detail.
The basics of machine learning models, including:
Generalized linear modeling
Random forests (including knowing why they are often frowned upon)
k-nearest neighbors/k-means clustering
Support Vector Machines
Gradient boosting.
Cross-validation, regularization, and their limitations (a short sketch follows this list).
Variable importance and principal component analysis.
Loss functions, including RMSE.
The confusion matrix, accuracy, sensitivity, specificity, precision-recall and ROC curves.
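Here's the short sketch promised in the list above: cross-validating a regularized model with an RMSE loss using scikit-learn. The synthetic data and the choice of ridge regression are purely illustrative.

from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic regression data, only for illustration
X, y = make_regression(n_samples=500, n_features=20, noise=10.0,
                       random_state=42)

# 5-fold cross-validated RMSE for a regularized (ridge) model
model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5,
                         scoring='neg_root_mean_squared_error')
print("RMSE per fold:", -scores)
print("Mean RMSE:", -scores.mean())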
There's one topic that's not on any machine learning course or in any machine learning book that I've ever read, but it's crucially important: knowing when machine learning fails and when to stop a project. Machine learning doesn't work all the time.
Interpretation - what's going on?
The main techniques here are often data visualization. Statistical summaries are great, but they can often mislead. Charts give a fuller picture.
Here are some techniques all data scientists should know:
Violin plots
Scatter plots and curve fitting
Regression and curve fitting.
They should also know why pie charts in all their forms are bad.
A good knowledge of how charts work is very helpful too (the psychology of visualization).
What about SQL and R and Python...?
You need to be able to manipulate data to do data science, which means SQL, Python, or R. But plenty of people use these languages without being data scientists. In my view, despite their importance, they're table stakes.
Book knowledge vs. street knowledge
People new to data science tend to focus almost exclusively on machine learning (prediction in my terminology) which leaves them very weak on data analysis and data exploration; even worse, their lack of statistical knowledge sometimes leads them to make blunders on sample size and loss functions. No amount of cross-validation, regularization, or computing power will save you from poor modeling choices. Even worse, not knowing statistics can lead people to produce excellent models of regression to the mean.
Practical experience is hugely important; way more important than courses. Obviously, a combination of both is best, which is why PhDs are highly sought after; they've learned from experience and have the theoretical firepower to back up their practical knowledge.
Labels: data science, data visualization, statistics
What's a violin plot and how to make one?
What's a violin plot?
Over the last few years, violin plots and their derivatives have become a lot more common; they're a 'sort of' smoothed histogram. In this blog post, I'm going to explain what they are and why you might use them.
To give you an idea of what they look like, here's a violin plot of attendance at English Premier League (EPL) matches during the 2018-2019 season. The width of the plot indicates the relative proportion of matches with that attendance; we can see attendance peaks around 27,000, 55,000, and 75,000, but no matches had zero attendance.
Violin plots get their name because they sort of resemble a violin (you'll have to allow some creative license here).
As we'll see, violin plots avoid the problems of box and whisker plots and the problems of histograms. The cost is greatly increased computation time, but for a modern computer system, violin plots are calculated and plotted in the blink of an eye. Despite their advantages, the computational cost is why these plots have only recently become popular.
Summary statistics - the mean etc.
We can use summary statistics to give a numerical summary of the data. The most obvious statistics are the mean and standard deviation, which are 38,181 and 16,709 respectively for the EPL 2018 attendance data. But the mean can be distorted by outliers and the standard deviation implies a symmetrical distribution. These statistics don't give us any insight into how attendance was distributed.
The median is a better measure of central tendency in the presence of outliers, and quartiles work fine for asymmetric distributions. For this data, the median is 31,948 and the upper and lower quartiles are 25,034 and 53,283. Note the median is a good deal lower than the mean and the upper and lower quartiles are not evenly spaced, suggesting a skewed distribution. The quartiles give us an indication of how skewed the data is.
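As a quick sketch of how these summary statistics might be computed in Python (the attendance array below is a random placeholder standing in for the real EPL figures):

import numpy as np

# Placeholder data standing in for the per-match attendance figures
rng = np.random.default_rng(0)
attendance = rng.integers(10_000, 80_000, size=380)

print("mean:", np.mean(attendance))
print("standard deviation:", np.std(attendance))
print("median:", np.median(attendance))
print("lower and upper quartiles:", np.percentile(attendance, [25, 75]))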
So we should be just fine with median and quartiles - or are there problems with these numbers too?
Box and whisker plots
Box and whisker plots were introduced by Tukey in the early 1970s and have evolved since then; currently, there are several slight variations. Here's a box and whisker plot for the EPL data for four seasons in the most common format; the median, upper and lower quartiles, and the lowest and highest values are all indicated. We have a nice visual representation of our distribution.
The box and whisker plot encodes a lot of data and strongly indicates if the distribution is skewed. Surely this is good enough?
The problem with boxplots
Unfortunately, box and whisker plots can hide important features of the data. This article from Autodesk Research gives a great example of the problems. I've copied one of their key figures here.
("Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing", Justin Matejka, George Fitzmaurice, ACM SIGCHI Conference on Human Factors in Computing Systems, 2017)
The animation shows the problem nicely; different distributions can give the same box and whisker plots. This doesn't matter most of the time, but when it does matter, it can matter a lot, and of course, the problem is hidden.
Histograms and distributions
If boxplots don't work, what about histograms? Surely they won't suffer from the same problem? It's true they don't, but they suffer from another problem: the choice of bin count, meaning the number of bins.
Let's look at the EPL data again, this time as a histogram.
Here's the histogram with a bin count of 9.
Here it is with a bin count of 37.
And finally, with a bin count of 80.
Which of these bin counts is better? The answer is, it depends on what you want to show. In some cases, you want a large number of bins, in other cases, a small number of bins. As you can appreciate, this isn't helpful, especially if you're at the start of your career and you don't have a lot of experience to call on. Even later in your career, it's not helpful when you need to move quickly.
An evolution of the standard histogram is using unequal bin sizes. Under some circumstances, this gives a better representation of the distribution, but it adds another layer of complexity; what should the bin sizes be? The answer again is, it depends on what you want to do and your experience.
Can we do better?
Enter the violin plot
The violin plot does away with bin counts by using probability density instead. It's a plot of the probability density function (pdf) that could have given rise to the measured data.
In a handful of cases, we could estimate the pdf using parametric means when the underlying data follows a well-known distribution. Unfortunately, in most real-world cases, we don't know what distribution the data follows, so we have to use a non-parametric method, the most popular being kernel density estimation (kde). This is almost always done using a Gaussian estimator, though other estimators are possible (in my experience, the Gaussian calculation gives the best result anyway, but see this discussion on StackOverflow). The key parameter is the bandwidth, though most kde algorithms attempt to size their bandwidth automatically. From the kde, the probability density is calculated (a trivial calculation).
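Here's a minimal sketch of that kde step using SciPy's Gaussian estimator; the attendance data below is a random placeholder, and in practice a plotting package does this for you.

import numpy as np
from scipy.stats import gaussian_kde

# Placeholder data roughly mimicking the three attendance peaks
rng = np.random.default_rng(1)
attendance = np.concatenate([rng.normal(27_000, 3_000, 200),
                             rng.normal(55_000, 4_000, 100),
                             rng.normal(75_000, 2_000, 80)])

kde = gaussian_kde(attendance)    # bandwidth is chosen automatically
xs = np.linspace(attendance.min(), attendance.max(), 500)
density = kde(xs)                 # estimated probability density

# A violin plot is this density curve mirrored about a vertical axis,
# so the width at each attendance value is proportional to the density.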
Here's the violin plot for the EPL 2018 data.
It turns out, violin plots don't have the problems of box plots as you can see from this animation from Autodesk Research. The raw data changes, the box plot doesn't, but the violin plot changes dramatically. This is because all of the data is represented in the violin plot.
Variations on a theme, types of violin plot, ridgeline plots, and mirror density plots
The original form of the violin plot included median and quartile data, and you often see violin plots presented like this - a kind of hybrid of violin and box and whisker. This is how the Seaborn plotting package draws violin plots (though this chart isn't a Seaborn visualization).
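For completeness, here's what the Seaborn call looks like in a minimal sketch; the DataFrame and its column names are made-up placeholders, and inner="quartile" adds the median and quartile markings described above.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Placeholder data: one made-up season of attendance figures
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "season": ["2018-2019"] * 380,
    "attendance": np.clip(rng.normal(38_000, 16_000, 380), 0, None),
})

sns.violinplot(data=df, x="season", y="attendance", inner="quartile")
plt.show()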
Now, let's say we want to compare the EPL attendance data over several years. One way of doing it is to show violin plots next to each other, like this.
Another way is to show two plots back-to-back, like this. This presentation is often called a mirror density plot.
We can show the results from multiple seasons with another form of violin plot called the ridgeline plot. In this form of plot, the charts are deliberately overlapped, letting you compare the shape of the distributions better. They're called ridgeline plots because they sort of look like a mountain ridgeline.
This plot shows the most obvious feature of the data, the 2019-2020 season was very different from the seasons that went before; a substantial number of matches were held with zero attendees.
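Here's a minimal matplotlib-only sketch of the ridgeline idea: per-season densities drawn with a vertical offset so the front curves partially hide the ones behind. The seasons and attendance figures are random placeholders.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Placeholder attendance data for four made-up seasons
rng = np.random.default_rng(3)
seasons = ["2016-2017", "2017-2018", "2018-2019", "2019-2020"]
data = {s: rng.normal(38_000, 16_000, 380) for s in seasons}

xs = np.linspace(0, 90_000, 500)
densities = {s: gaussian_kde(data[s])(xs) for s in seasons}
step = 0.7 * max(d.max() for d in densities.values())

fig, ax = plt.subplots()
for i, season in enumerate(reversed(seasons)):
    offset = i * step  # overlap the curves, ridgeline-style
    ax.fill_between(xs, offset, offset + densities[season],
                    zorder=len(seasons) - i, facecolor="white",
                    edgecolor="black")
    ax.text(0, offset, season, ha="left", va="bottom")
ax.set_yticks([])
plt.show()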
When should you use violin plots?
If you need to summarize a distribution and represent all of the data, a violin plot is a good choice. Depending on what you're trying to do, it may be a better choice than a histogram and it's always a better choice than box and whisker plots.
Aesthetically, a violin plot or one of its variations is very appealing. You shouldn't underestimate the importance of clear communication and visual appeal. If you think your results are important, surely you should present them in the best way possible - and that may well be a violin plot.
"Statistical computing and graphics, violin plots: a box plot-density trace synergism", Jerry Hintze, Ray Nelson, American Statistical Association, 1998. This paper from Hintze and Nelson is one of the earlier papers to describe violin plots and how they're defined, it goes into some detail on the process for creating them.
"Violin plots explained" is a great blog post that provides a nice explanation of how violin plots work.
"Violin Plots 101: Visualizing Distribution and Probability Density" is another good blog post proving some explanation on how to build violin plots.
Labels: data visualization
Reconstructing an unlabelled chart
What were the numbers?
Often in business, we're presented with charts where the y-axis is unlabeled because the presenter wants to conceal the numbers. Are there ways of reconstructing the labels and figuring out what the data is? Surprisingly, yes there are.
Given a chart like this:
you can often figure out what the chart values should be.
The great Evan Miller posted on this topic several years ago ("How To Read an Unlabeled Sales Chart"). He discussed two methods:
Poisson distribution
In this blog post, I'm going to take his gcd work a step further and present code and a process for reconstructing numbers under certain circumstances. In another blog post, I'll explain the Poisson method.
The process I'm going to describe here will only work:
Where the underlying data is integers
Where there's 'enough' range in the underlying data.
Where the maximum underlying data is less than about 200.
Where the y-axis includes zero.
Let's start with some results and the process.
I generated this chart without axes labels, the goal being to recreate the underlying data. I measured screen y-coordinates of the top and bottom plot borders (187 and 677) and I measured the y coordinates of the top of each of the bars. Using the process and code I describe below, I was able to correctly recreate the underlying data values, which were \([33, 30, 32, 23, 32, 26, 18, 59, 47]\).
How plotting packages work
To understand the method, we need to understand how a plotting package will render a set of integers on a chart.
Let's take the list of numbers \([1, 2, 3, 5, 7, 11, 13, 17, 19, 23]\) and call them \(y_o\).
When a plotting package renders \(y_o\) on the screen, it will put them into a chart with screen x-y coordinates. It's helpful to think about the chart on the screen as a viewport with x and y screen dimensions. Because we only care about the y dimensions, that's what I'll talk about. On the screen, the viewport might go from 963 pixels to 30 pixels on the y-axis, a total range of 933 y-pixels.
Here's how the numbers \(y_o\) might appear on the screen and how they map to the viewport y-coordinates. Note the origin is top left, not bottom left. I'll "correct" for the different origin.
The plotting package will translate the numbers \(y_o\) to a set of screen coordinates I'll call \(y_s\). Assuming our viewport starts from 0, we have:
\[y_s = my_o\]
Let's just look at the longest bar that corresponds to the number 23. My measurements of the start and end are 563 and 27, which gives a length of 536. \(m\) in this case is 536/23, or 23.3.
There are three things to bear in mind:
The set of numbers \(y_o\) are integers
The set of numbers \(y_s\) are integers - we can't have half a pixel for example.
The scalar \(m\) is a real number
Integer only solutions for \(m\)
In Evan Miller's original post, he only considered integer values of \(m\). If we restrict ourselves to integers, then most of the time:
\[m = gcd(y_s)\]
where gcd is the greatest common divisor.
To see how this works, let's take:
\[y_o = [1 , 2, 3]\]
\[m = 8\]
These numbers give us:
\[y_s = [8, 16, 24]\]
To find the gcd in Python:
import numpy as np
np.gcd.reduce([8, 16, 24])
which gives \(m = 8\), which is correct.
If we could guarantee \(m\) was an integer, we'd have an answer; we'd be able to reconstruct the original data just using the gcd function. But we can't do that in practice for three reasons:
\(m\) isn't always an integer.
There are measurement errors which means there will be some uncertainty in our \(y_s\) values.
It's possible the original data set \(y_o\) has a gcd which is not 1.
In practice, we gather screen coordinates using a manual process that will introduce errors. At most, we're likely to be off by a few pixels for each measurement; however, even the smallest error means the gcd method won't work. For example, if the value on the screen should be 500 but we incorrectly measure it as 499, this small error means the method fails (there is a way around this failure that works for small measurement errors).
If our original data set has a gcd greater than 1, the method won't work. Let's say our data was:
\[y_o = [2, 4, 6] \]
\[m=8\]
we would have:
\[y_s = [16, 32, 48]\]
which has a gcd of 16, which is an incorrect estimate of \(m\). In practice, the odds of the original data set \(y_o\) having a gcd > 1 are low.
The real killer for this approach is the fact that \(m\) is highly likely in practice to be a real number.
Real solutions for \(m\)
The only way I've found for solving for \(m\) is to try different values for \(m\) to see what succeeds. To get this to work, we have to constrain \(m\) because otherwise there would be an infinite number of values to try. Here's how I constrain \(m\):
I limit the steps for different \(m\) values to 0.01.
I start my \(m\) values from just over 1 and I stop at a maximum \(m\) value. I get the maximum \(m\) value by assuming the smallest value I measure on the screen corresponds to a data value of 1; for example, if the smallest measurement is 24 pixels, the smallest possible original data value is 1, so the maximum value for \(m\) is 24.
Now we've constrained \(m\), how do we evaluate \(y_s = my_o\)? First off, we define an error function. We want our estimates of the original data \(y_o\) to be integers, so the further away we are from an integer, the worse the error. For the \(i\)th element of our estimate of \(y_o\), the error estimate is:
\[round \left ( \frac{y_{si}}{m_{estimate}} \right ) - \frac{y_{si}}{m_{estimate}}\]
We're choosing the least square error, which means minimizing:
\[ \frac{1}{n} \sum \left ( round \left ( \frac{y_{si}}{m_{estimate}} \right ) - \frac{y_{si}}{m_{estimate}} \right )^2 \]
in code, this comes out as:
sum([(round(_y/div) - _y/div)**2 for _y in y])/len(y)
Our goal is to try different values of \(m\) and choose the solution that yields the lowest error estimate.
The solution in practice
Before I show you how this works, there are two practicalities. The first is that \(m=1\) is always a solution and will always give a zero error, but it's probably not the right solution, so we're going to ignore \(m=1\). Secondly, there will be an error in our measurements due to human error. I'm going to assume the maximum error is 3 pixels for any measurement. To calculate a length, we take a measurement of the start and end of the bar (if it's a bar chart), which means our maximum uncertainty is 2*3. That's why I set my maximum \(m\) to be min(y) + 2*MAX_ERROR.
To show you how this works, I'll talk you through an example.
The first step is measurement. We need to measure the screen y-coordinates of the plot borders and the top of the bars (or the position of the points on a scatter chart). If the plot doesn't have borders, just measure the position of the bottom of the bars and the coordinate of the highest bar. Here are some measurements I took.
Here are the measurements of the top of the bars (_y_measured): \([482, 500, 489, 541, 489, 523, 571, 329, 399]\)
Here are the start and stop coordinates of the plot borders (_start, _stop): \(677, 187\)
To convert these to lengths, the code is just: [_start - _y_m for _y_m in _y_measured]
The length of the screen from the top to the bottom is: _start - _stop = \(490\)
This gives us measured length (y_measured): \([195, 177, 188, 136, 188, 154, 106, 348, 278]\)
Now we run this code:
import numpy as np
import pandas as pd

MAX_ERROR = 3           # maximum measurement error in pixels
STEP = 0.01             # step size for candidate m values
ERROR_THRESHOLD = 0.01  # largest acceptable mean square error

def mse(y, div):
    """Mean square error calculation."""
    return sum([(round(_y/div) - _y/div)**2 for _y in y])/len(y)

def find_divider(y):
    """Return the non-integer m that minimizes the error function."""
    error_list = []
    for _div in np.arange(1 + STEP,
                          min(y) + 2*MAX_ERROR,
                          STEP):
        error_list.append({"divider": _div,
                           "error": mse(y, _div)})
    df_error = pd.DataFrame(error_list)
    df_error.plot(x='divider', y='error', kind='scatter')
    _slice = df_error[df_error['error'] == df_error['error'].min()]
    divider = _slice['divider'].to_list()[0]
    error = _slice['error'].to_list()[0]
    if error > ERROR_THRESHOLD:
        raise ValueError('The estimated error is {0} which is '
                         'too large for a reliable result.'.format(error))
    return divider

def find_estimate(y, y_extent):
    """Make an estimate of the underlying data."""
    if (max(y) - min(y))/y_extent < 0.1:
        raise ValueError('Too little range in the data to make an estimate.')
    m = find_divider(y)
    return [round(_e/m) for _e in y], m

# Screen measurements described above
_y_measured = [482, 500, 489, 541, 489, 523, 571, 329, 399]
_start, _stop = 677, 187
y_measured = [_start - _y_m for _y_m in _y_measured]  # bar lengths in pixels
y_extent = _start - _stop                             # height of the plot area

estimate, m = find_estimate(y_measured, y_extent)
This gives us this output:
Original numbers: [33, 30, 32, 23, 32, 26, 18, 59, 47]
Measured y values: [195, 177, 188, 136, 188, 154, 106, 348, 278]
Divider (m) estimate: 5.900000000000004
Estimated original numbers: [33, 30, 32, 23, 32, 26, 18, 59, 47]
Which is correct.
Limitations of this approach
Here's when it won't work:
If there's little variation in the numbers on the chart, then measurement errors tend to overwhelm the calculations and the results aren't good.
In a similar vein, if the numbers are all close to the top or the bottom of the chart, measurement errors lead to poor results.
When \(m < 1\); because the maximum y viewport range is usually 500-900 pixels, the method won't work for underlying numbers greater than about 500.
I've found in practice that if \(m < 3\) the results can be unreliable. Arbitrarily, I call any error greater than 0.01 too high to protect against poor results. Maybe, I should limit the results to \(m > 3\).
I'm not entirely convinced my error function is correct; I'd like an error function that better discriminates between values. I tried a couple of alternatives, but they didn't give good results. Perhaps you can do better.
Notice that the error function is 'denser' closer to 1, suggesting I should use a variable step size or a different algorithm. It might be that the closer you get to 1, the more the measurement errors and the effects of rounding overwhelm the calculation. I've played around with smaller step sizes and not had much luck.
If the data is Poisson distributed, there's an easier approach you can take. In a future blog post, I'll talk you through it.
Where to get the code
I've put the code on my Github page here: https://github.com/MikeWoodward/CodeExamples/blob/master/UnlabeledChart/approxrealgcd.py
Labels: analytics, data visualization
Unknown Pleasures: pulsars, pop, and plotting
The echoes of history
Sometimes, there are weird connections between very different cultural areas and we see the echoes of history playing out. I'm going to tell you how pulsars, Nobel Prizes, an iconic album cover, Nazi atrocities, and software chart plotting all came to be connected.
The discovery of pulsars
In 1967, Jocelyn Bell was working on her Ph.D. as a post-graduate researcher at the Mullard Radio Astronomy Observatory, near Cambridge in the UK. She had helped build a new radio telescope and now she was operating it. On November 28, 1967, she saw a strikingly unusual and regular signal, which the team nicknamed "little green men". The signal turned out to be a pulsar, a type of star new to science.
This was an outstanding discovery that shook up astronomy. The team published a paper in Nature, but that wasn't the end of it. In 1974, the Nobel committee awarded the Nobel Physics Prize to the team. To everyone on the team except Jocelyn Bell.
Over the years, there's been a lot of controversy over the decision, with many people thinking she was robbed of her share of the prize, either because she was a Ph.D. student or because she was a woman. Bell herself has been very gracious about the whole thing; she is indeed a very classy lady.
The pulsar and early computer graphics
In the late 1960s, a group of Ph.D. students from Cornell University were analyzing data from the pulsar Bell discovered. Among them was Harold Craft, who used early computer systems to visualize the results. Here's what he said to the Scientific American in 2015: "I found that it was just too confusing. So then, I wrote the program so that I would block out when a hill here was high enough, then the stuff behind it would stay hidden."
Here are three pages from Craft's Ph.D. thesis. Take a close look at the center plot. If Craft had made every line visible, it would have been very difficult to see what was going on. Craft re-imagined the data as if he were looking at it at an angle, for example, as if it were a mountain range ridgeline he was looking down on. With a mountain ridgeline, the taller peaks hide what's behind them. It was a simple idea, but very effective.
(Credit: JEN CHRISTIANSEN/HAROLD D. CRAFT)
The center plot is very striking. So striking in fact, that it found its way into the Cambridge Encyclopaedia of Astronomy (1977 edition):
(Cambridge Encyclopedia of Astronomy, 1977 edition, via Tim O'Riley)
England in the 1970s was not a happy place, especially in the de-industrialized north. Four young men in Manchester had formed a band and recorded an album. The story goes that one of them, Bernard Sumner, was working in central Manchester and took a break in the city library. He came across the pulsar image in the encyclopedia and liked it a lot.
The band needed an image for their debut album, so they selected this one. They gave it to a recently graduated designer called Peter Saville, with the instructions it was to be a black-on-white image. Saville felt the image would look better white-on-black, so he designed this cover.
This is the iconic Unknown Pleasures album from Joy Division.
The starkness of the cover, without the band's name or the album's name, set it apart. The album itself was critically acclaimed, but it never rose high in the charts at the time. However, over time, the iconic status of the band and the album cover grew. In 1980, the lead singer, Ian Curtis, committed suicide. The remaining band members formed a new band, New Order, that went on to massive international fame.
By the 21st century, versions of the album cover were on beach towels, shoes, and tattoos.
Joy plots
In 2017, Claus Wilke created a new charting library for R, ggjoy. His package enabled developers to create plots like the famous Unknown Pleasures album cover. In honor of the album cover, he called these plots joy plots.
Ridgeline plots
This story has a final twist to it. Although joy plots sound great, there's a problem.
Joy Division took their name from a real Nazi atrocity fictionalized in a book called House of Dolls. In some of their concentration camps, the Nazis forced women into prostitution. The camp brothels were called Joy Division.
The name joy plots was meant to be fun and a callback to an iconic data visualization, but there's little joy in evil. Given this history, Wilke renamed his package ggridges and the plots ridgeline plots.
Here's an example of the great visualizations you can produce with it.
If you search around online, you can find people who've re-created the pulsar image using ggridges.
(Andrew B. Collier via Twitter)
It's not just R programmers who are playing with Unknown Pleasures; Python programmers have got into the act too. Nicolas P. Rougier created a great animation based on the pulsar data set using the venerable Matplotlib plotting package - you can see the animation here.
If you liked this post
You might like these ones:
Soldiers on the moon: Project Horizon
Why we have runs on toilet paper
A masterclass in information visualization: the tube map
The London Underground tube map is a master class in information visualization. It's been described in detail in many, many places, so I'm just going to give you a summary of why it's so special and what we can learn from it. Some of the lessons are about good visual design principles, some are about the limitations of design, and some of them are about wealth and poverty and the unintended consequences of abstraction.
(London Underground map.)
From its start in 1863, the underground train system in London grew in a haphazard fashion. With different railway companies building different lines, there was no sense of creating a coherent system.
Despite the disorder, when it was first built it was viewed as a marvel and had a cultural impact beyond just transport; Conan Doyle wove it into Sherlock Holmes stories, H.G. Wells created science fiction involving it, and Virginia Woolf and others wrote of it too.
After various financial problems, the system was unified under government control. The government authority running it wanted to promote its use to reduce street-level congestion but the problem was, there were many different lines that only served part of the capital. Making it easy to use the system was hard.
Here's an early map of the system so you can see the problem.
(1908 tube map. Image source: Wikimedia Commons.)
The map's hard to read and it's hard to follow. It's visually very cluttered and there are lots of distracting details; it's not clear why some things are marked on the map at all (why is ARMY & NAVY AND AUXILLARY STORES marked so prominently?). The font is hard to read, the text orientation is inconsistent, and the contrast of station names with the background isn't high enough.
The problem gets even worse when you zoom out to look at the entire system. Bear in mind, stations in central London are close together but they get further apart as you go into the suburbs. Here's an early map of the entire system, do you think you could navigate it?
(1931 whole system tube map.)
Of course, the printing technology of the time was more limited than it is now, which made information representation harder.
Design ideas in culture
To understand how the tube map as we know it was created, we have to understand a little of the design culture of the time (the early 1930s).
Electrical engineering was starting as a discipline and engineers were creating circuit diagrams for new electrical devices. These circuit diagrams showed the connection between electrical components, not how they were laid out on a circuit board. Circuit diagrams are examples of topological maps.
(Example circuit diagram. Show electrical connections between components, not how they're laid out on a circuit board. Image source: Wikimedia Commons, License: Public domain.)
The Bauhaus school in Germany was emphasizing art and design in mass-produced items, bringing high-quality design aesthetics into everyday goods. Ludwig Mies van der Rohe, the last director of the Bauhaus school, used a key aphorism that summarized much of their design philosophy: "less is more".
(Bauhaus kitchen design 1928 - they invented much of the modern design world. Image source: Wikimedia Commons, License: Public domain)
The modern art movement was in full swing, with the principles of abstraction coming very much to the fore. Artists were abstracting from reality in an attempt to represent an underlying truth about their subjects or about the world.
(Piet Mondrian, Composition 10. Image source: Wikimedia Commons, License: Public Domain.)
To put it simply, the early 1930s were a heyday of design that created much of our modern visual design language.
Harry Beck's solution - form follows function
In 1931, Harry Beck, a draughtsman for London Underground, proposed a new underground map. Beck's map was clearly based on circuit diagrams: it removed unnecessary detail to focus on what was necessary. In Beck's view, what was necessary for the tube was just the stations and the lines, plus a single underlying geographical detail, the river Thames.
Here's his original map. There's a lot here that's very, very different from the early geographical maps.
The design grammar of the tube map
The modern tube map is a much more complex beast, but it still retains the ideas Harry Beck created. For simplicity, I'm going to use the modern tube map to explain Beck's design innovations. There is one underlying and unifying idea behind everything I'm going to describe: consistency.
Topological not geographical. This is the key abstraction and it was key to the success of Beck's original map. On the ground, tube lines snake around and follow paths determined by geography and the urban landscape. This makes the relationship between tube lines confusing. Beck redrew the tube lines as straight lines without attempting to preserve the geographic relations of tube lines to one another. He made the stations more or less equidistant from each other, whereas, on the ground, the distance between stations varies widely.
The two images below show the tube map and a geographical representation of the same map. Note how the tube map substantially distorts the underlying geography.
(The tube map. Image source: BBC.)
(A geographical view of the same system. Image source: Wikimedia Commons.)
Removal of almost all underlying geographical features. The only geographical feature on tube maps is the river Thames. Some versions of the tube map removed it, but the public wanted it put back in, so it's been a consistent feature for years now.
(The river Thames, in blue, is the only geographic feature on the map.)
A single consistent font. Station names are written with the same orientation. Using the same font and the same text orientation makes reading the map easier. The tube has its own font, New Johnston, to give a sense of corporate identity.
(Same text orientation, same font everywhere.)
High contrast. This is something that's become easier with modern printing technology and good quality white paper. But there are problems. The tube uses a system of fare zones which are often added to the map (you can see them in the first two maps in this section, they're the gray and white bands). Although this is important information if you're paying for your tube ticket, it does add visual clutter. Because of the number of stations on the system, many modern maps add a grid so you can locate stations. Gridlines are another cluttering feature.
Consistent symbols. The map uses a small set of symbols consistently. The symbol for a station is a 'tick' (for example, Goodge Street or Russell Square). The symbol for a station that connects two or more lines is a circle (for example, Warren Street or Holborn).
Graphical rules. Angles and curves are consistent throughout the map, with few exceptions - clearly, the map was constructed using a consistent set of layout rules. For example, tube lines are shown as horizontal, vertical, or 45-degree lines in almost all cases.
The challenge for the future
The demand for mass transit in London has been growing for very many years which means London Underground is likely to have more development over time (new lines, new stations). This poses challenges for map makers.
The latest underground maps are much more complicated than Harry Beck's original. Newer maps incorporate the south London tram system, some overground trains, and of course the new Elizabeth Line. At some point, a system becomes so complex that even an abstract simplification becomes too complex. Perhaps we'll need a map for the map.
A trap for the unwary
The tube map is topological, not geographical. On the map, tube stations are roughly the same distance apart, something that's very much not the case on the ground.
Let's imagine you had to go from Warren Street to Great Portland Street. How would you do it? Maybe you would get the Victoria Line southbound to Oxford Circus, change to the Bakerloo Line northbound, change again at Baker Street, and get the Circle Line eastbound to Great Portland Street. That's a lot of changes and trains. Why not just walk from Warren Street to Great Portland Street? They're less than 500m apart and you can do the walk in under 5 minutes. The tube map misleads people into doing stuff like this all the time.
Let's imagine it's a lovely spring day and you're traveling to Chesham on the Metropolitan Line. If Great Portland Street and Warren Street are only 482m apart, then it must be a nice walk between Chalfont & Latimer and Chesham, especially as they're out in the leafy suburbs. Is this a good idea? Maybe not. These stations are 6.19km apart.
Abstractions are great, but you need to understand that's exactly what they are and how they can mislead you.
Using the map to represent data
The tube map is an icon, not just of the tube system, but of London itself. Because of its iconic status, researchers have used it as a vehicle to represent different data about the city.
James Cheshire of University College London mapped life expectancy data to tube stations, the idea being, you can spot health disparities between different parts of the city. He produced a great map you can visit at tubecreature.com. Here's a screenshot of part of his map.
You go from a life expectancy of 78 at Stockwell to 89 at Green Park, but the two stations are just 4 stops apart. His map shows how disparities occur across very short distances.
Mark Green of the University of Sheffield had a similar idea, but this time using a more generic deprivation score. Here's his take on deprivation and the tube map, the bigger circles representing higher deprivation.
Once again, we see the same thing, big differences in deprivation over short distances.
What the tube map hides
Let me show you a geographical layout of the modern tube system courtesy of Wikimedia. Do you spot what's odd about it?
(Geographical arrangement of tube lines. Image source: Wikimedia Commons, License: Creative Commons.)
Look at the tube system in southeast London. What tube system? There are no tube trains in southeast London. North London has lots of tube trains, southwest London has some, and southeast London has none at all. What part of London do you think is the poorest?
The tube map was never designed to indicate wealth and poverty, but it does that. It clearly shows which parts of London were wealthy enough to warrant underground construction and which were not. Of course, not every area in London has a tube station, even outside the southeast of London. Cricklewood (population 80,000) in northwest London doesn't have a tube station and is nowhere to be seen on the tube map.
The tube map leaves off underserved areas entirely; it's as if southeast London (and Cricklewood and other places) don't exist. An abstraction meant to aid the user makes whole communities invisible.
Now look back at the previous section and the use of the tube map to indicate poverty and inequality in London. If the tube map is an iconic representation of London, what does that say about the areas that aren't even on the map? Perhaps it's a case of 'out of sight, out of mind'.
This is a clear reminder that information design is a deeply human endeavor. A value-neutral expression of information doesn't exist, and maybe we shouldn't expect it to.
Takeaways for the data scientist
As data scientists, we have to visualize data, not just for our fellow data scientists, but more importantly for the businesses we serve. We have to make it easy to understand and easy to interpret data. The London Underground tube map shows how ideas from outside science (circuit diagrams, Bauhaus, modernism) can help - information representation is, after all, a human endeavor. But the map shows the limits to abstraction and how we can be unintentionally led astray.
The map also shows the hidden effects of wealth inequality and the power of exclusion - what we do does not exist in a cultural vacuum, and that's true for the tube map and for the charts we produce.
Labels: data science, data visualization, london underground, tube map
3D plotting: how hard can it be?
Why aren't 2D plots good enough?
Most data visualization problems involve some form of two-dimensional plotting, for example plotting sales by month. Over the last two hundred years, analysts have developed several different types of 2D plots, including scatter charts, line charts, and bar charts, so we have all the chart types we need for 2D data. But what happens if we have a 3D dataset?
The dataset I'm looking at is English Premier League (EPL) results. I want to know how the full-time scores are distributed, for example, are there more 1-1 results than 2-1 results? I have three numbers: the full-time home goals (FTHG), the full-time away goals (FTAG), and the number of games that had that score. How can I present this 3D data in a meaningful way?
(You can't rely on 3D glasses to visualize 3D data. Image source: Wikimedia Commons, License: Creative Commons, Author: Oliver Olschewski)
Just the text
The easiest way to view the data is to create a table, so here it is. The columns are the away goals, the rows are the home goals, and the cell values are the number of matches with that result, so 778 is the number of matches with a score of 0-1.
This presentation is easy to do, and relatively easy to interpret. I can see 1-1 is the most popular score, followed by 1-0. You can also see that some scores just don't occur (9-9) and results with more than a handful of goals are very uncommon.
This is OK for a smallish dataset like this, but if there are hundreds of rows and/or columns, it's not really viable. So what can we do?
Heatmap
A heatmap is a 2D map where the 3rd dimension is represented as color: the more intense (or lighter) the color, the higher the value. For this kind of plot to work, you do have to be careful about your color map. Usually, it's best to vary the intensity of just one color (e.g. shades of blue). In a few cases, multiple colors can work (colors for political parties), but those are the exceptions.
Here's the same data plotted as a heatmap using the Brewer color palette "RdPu" (red-purple).
The plot does clearly show the structure. It's obvious there's a diagonal line beyond which no results occur. It's also obvious which scores are the most common. On the other hand, it's hard to get a sense of how quickly the frequency falls off because the human eye just isn't that sensitive to variations in color, but we could probably play around with the color scale to make the most important color variation occur over the range we're interested in.
This is an easy plot to make because geom_tile is part of R's ggplot2 package. Here's my code:
plt_goal_heatmap <- goal_distribution %>%
  ggplot(aes(FTHG, FTAG, fill = Matches)) +
  geom_tile() +
  scale_fill_distiller(palette = "RdPu") +
  ggtitle("Home/Away goal heatmap")
Perspective scatter plot
Another alternative is the perspective plot, which, in R, you can create using the 'persp' function. This is a surface plot, as you can see below.
You can change your perspective on the plot and view it from other angles, but even from this perspective, it's easy to see the very rapid falloff in frequency as the scores increase.
However, I found this plot harder to use than the simple heatmap, and I found changing my viewing angle was awkward and time-consuming.
Here's my code in case it's useful to you:
# spread() comes from the tidyr package (loaded with the tidyverse)
persp(x = seq(0, max(goal_distribution$FTHG)),
      y = seq(0, max(goal_distribution$FTAG)),
      z = as.matrix(
            unname(
              spread(
                goal_distribution, FTAG, Matches, fill = 0)[, -1])),
      xlab = "FTHG", ylab = "FTAG", zlab = "Matches",
      main = "Distribution of matches by score",
      theta = 60, phi = 20,
      expand = 1,
      col = "lightblue")
3D scatter plot
We can go one stage further and create a 3D scatter chart. On this chart, I've plotted the x, y, and z values and color-coded them so you get a sense of the magnitude of the z values. I've also connected the points to the axis (the zero plane if you like) to emphasize the data structure a bit more.
As with the persp function, you can change your perspective on the plot and view it from another angle.
The downside with this approach is it requires the 'plot3D' library in R and it requires you to install a new graphics server (XQuartz). It's a chunk of work to get to a visualization. The function to draw the plot is 'scatter3D'. Here's my code:
# scatter3D() comes from the plot3D package
scatter3D(x = goal_distribution$FTHG,
          y = goal_distribution$FTAG,
          z = goal_distribution$Matches,
          phi = 5,
          theta = 40,
          bty = "g",
          type = "h",
          pch = 19,
          main = "Distribution of matches by score",
          cex = 0.5)
What's my choice?
My goal was to understand the distribution of goals in the EPL, so what presentations of the data were most useful to me?
The simple table worked well and was the most informative, followed by the heatmap. I found both persp and scatter3D to be awkward to use and both consumed way more time than they were worth. The nice thing about the heatmap is that it's available as part of the wonderful ggplot library.
Bottom line: keep it simple.
Labels: data analytics, data science, data visualization
Solving Problems with Dynamic Programming
Dynamic Programming is a programming technique used to solve large, complex problems that have overlapping sub-problems. It does this by computing each sub-problem only once, ensuring non-exponential behavior. It differs from memoization (no, that's not a typo) in that instead of computing the result of a function and then storing it in a cache, it generates each result before calculating any other result that relies on it.
Dynamic Programming problems often require a different way of thinking than typical problems. Most often, dynamic programming becomes useful when you want to reverse engineer a tree recursive function. By that I mean you want to start off with the base case(s), then use those to generate the next few cases, and repeat this process until all points of interest have been calculated. Most often, at least in imperative languages, these dynamic programming algorithms are iterative, meaning they don't have recursive calls. That being said, it's still possible to have a recursive dynamic programming algorithm, it's just not as common from what I've seen.
I will discuss a particular problem that is a good example of when and how to use dynamic programming. I programmed my solution with Java, but I will also show a few code bits from Haskell just because of how elegant it is. The problem is this:
Using only the denominations $\{50, 25, 10, 5, 1\}$, how many ways can you make change for $100$?
The above problem is not very large so it's easy enough to solve without dynamic programming. That being said, however, as the problem scales up (i.e., more denominations or bigger starting amount of money), dynamic programming soon becomes necessary.
The first thing to do is try to think up a way to count how many possibilities there are, ensuring that you cover every case and never cover the same case more than once. This can be done with a carefully thought out recurrence relation. Let $D$ be the denomination array and let $C(a,b)$ be the number of ways to make change for $a$ using only denominations less than or equal to $D[b]$. Then
$$ C(a,b) = \left\{ \begin{array}{ll} 0 & a < 0 \: or \: b < 0 \\ 1 & a = 0 \: and \: b = 0 \\ C(a-D[b], b) + C(a, b-1) & otherwise \end{array} \right. $$

The first two piece-wise definitions in the recurrence relation are base cases, but the true magic occurs in the general case. It is basically saying that if you count the number of ways to make change for $a - D[b]$ using denominations less than or equal to $D[b]$, and add that to the number of ways to make change for $a$ using denominations less than or equal to $D[b-1]$, you will get the number of ways to make change for $a$ using denominations less than or equal to $D[b]$. I will leave it to the reader to be convinced of its correctness.
So how would we go about coding this? Well here is a Haskell implementation:
-- count a ds = number of ways to make amount a from the denominations ds
count :: Int -> [Int] -> Int
count 0 _  = 1
count _ [] = 0
count a (d:ds)
  | a < 0     = 0
  | otherwise = count (a - d) (d:ds) + count a ds
I slightly modified the above recurrence relation but the general essence is the same. The natural question to ask next is: How well does this solution scale? This implementation does not scale well because you end up doing a lot of duplicate work. For instance,
$$ C(50, 2) = C(40, 2) + C(50, 1) $$
$$ C(40, 3) = C(35, 3) + C(40, 2) $$

As you can see, both of these rely on the same result, namely $C(40,2)$. When this re-computation accumulates down the execution tree, you end up with exponential time, which is never good.
We can improve the running time considerably by adding in memoization. That is, whenever we calculate $C(a, b)$, we store it in a cache and look it up when we need it again. While there are ways to achieve memoization with Haskell, it requires a bit of extra work because functions cannot produce side effects by definition. As a result, I am going to move to Java, because these types of things become trivial in imperative languages. Here is the non-memoized Java code, which has the same asymptotic time complexity as the Haskell code from earlier.
static int[] D = { 1, 5, 10, 25, 50 };

static long C(int a, int b) {
    if(a < 0 || b < 0)
        return 0;
    if(a == 0 && b == 0)
        return 1;
    return C(a-D[b], b) + C(a, b-1);
}
Now incorporating a cache is very simple, and we've achieved memoization:
static long[][] cache = new long[10000][5];
static { cache[0][0] = 1; }
static long C(int a, int b) {
    if(a < 0 || b < 0)
        return 0;
    else if(cache[a][b] == 0)
        cache[a][b] = C(a-D[b], b) + C(a, b-1);
    return cache[a][b];
}
This improved algorithm runs much faster but it requires extra space, so now memory becomes the main bottleneck rather than time. An alternate solution is to use dynamic programming; it requires a lot more thought to implement than straightforward memoization, but I get a lot more satisfaction from implementing it. Asymptotically, the time complexity is no different, but in practice dynamic programming algorithms outperform algorithms that use recursion with memoization. Here are some questions we need to answer before we can write a dynamic programming algorithm:
How can we reverse engineer the recurrence relation?
How do we implement tree recursion iteratively?
In what order should we calculate small values in order to guarantee the correct result?
Reverse engineering the recurrence relation is the most satisfying part about solving these problems with dynamic programming. Here is the original recurrence relation that we derived, with the first condition omitted because the function is only well defined if $a$ and $b$ are non-negative integers:
$$ C(a,b) = \left\{ \begin{array}{ll} 1 & a = 0 \: and \: b = 0\\ C(a-D[b], b) + C(a, b-1) & otherwise \end{array} \right. $$ The idea behind the dynamic programming algorithm is to update related values on the fly and hope that by the time we reach any given configuration, it's been fully evaluated. If we choose our order carefully, we should be able to achieve this. Assume that we know $C(a,b)$. What does that say about other values of $C$? Well, from the recurrence relation above, exactly 2 other functions rely on this function as a partial result. Namely, $C(a+D[b],b)$ relies on the result of $C(a,b)$ and $C(a,b+1)$ also relies on the result of $C(a,b)$. Thus, whenever we get to cell $(a, b)$, we can update these 2 dependent cells.
What order should we evaluate these partial counts in? Naturally, it makes sense to start at the single known value, $C(0,0)=1$, but where should we go after that? Well, it turns out that the correct ordering is to evaluate all the denominations (in consecutive order) for low values then iteratively work your way up. Here is a Java snippet demonstrating this idea:
static long C(int n, int[] D) {
    long[][] C = new long[n+1][D.length+1];
    C[0][0] = 1;
    for(int a = 0; a <= n; a++) {
        for(int b = 0; b < D.length; b++) {
            if(a + D[b] <= n) // safety check for IndexOutOfBounds
                C[a+D[b]][b] += C[a][b];
            C[a][b+1] += C[a][b];
        }
    }
    return C[n][D.length-1];
}
The time complexity of this algorithm is $O(ab)$, because in order to calculate $C(a,b)$, we need to calculate $C(m,n)$ for all $m \leq a$ and $n \leq b$, and each of these calculations takes constant time. The space complexity is also $O(ab)$ because we allocated a matrix to store partial results in. Overall, this has much better performance than the naive tree recursive function described earlier. However, it is still possible to improve the space complexity of this algorithm further (a single one-dimensional array suffices), but I will leave that as an exercise for the reader.
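In case you want to check your own answer to that exercise, here is a minimal sketch of the usual one-row trick, written in Python for brevity rather than the Java used above: keep a single array indexed by amount and fold in one denomination at a time, which brings the space down to $O(a)$. The function and variable names are illustrative, not part of the original post.

def count_change(n, denominations):
    # ways[a] = number of ways to make amount a using the denominations seen so far
    ways = [1] + [0] * n
    for d in denominations:          # fold in one denomination at a time
        for a in range(d, n + 1):
            ways[a] += ways[a - d]   # same recurrence, but overwriting a single row
    return ways[n]

print(count_change(100, [50, 25, 10, 5, 1]))   # prints 292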
coding dynamic programming
Research article | Open | Published: 18 October 2018
Functional EEG connectivity during competition
Michela Balconi (ORCID: orcid.org/0000-0002-8634-1951)1,2 & Maria Elide Vanutelli1,2
BMC Neuroscience, volume 19, Article number: 63 (2018)
Social behavior and interactions pervasively shape and influence our lives and relationships. Competition, in particular, has become a core topic in social neuroscience since it stresses the relevance and salience of social comparison processes between the inter-agents that are involved in a common task. The majority of studies, however, investigated such kind of social interaction via one-person individual paradigms, thus not taking into account relevant information concerning interdependent participants' behavioral and neural responses. In the present study, dyads of volunteers participated in a hyperscanning paradigm and competed in a computerized attention task while their electrophysiological (EEG) activity and performance were monitored and recorded. Behavioral data and inter-brain coupling measures based on EEG frequency data were then computed and compared across different experimental conditions: a control condition (individual task, t0), a first competitive condition (pre-feedback condition, t1), and a second competitive condition following a positive reinforcing feedback (post-feedback condition, t2).
Results showed that during competitive tasks participants' performance was improved with respect to control condition (reduced response times and error rates), with a further specific improvement after receiving a reinforcing feedback. Concurrently, we observed a reduction of inter-brain functional connectivity (primarily involving bilateral prefrontal areas) for slower EEG frequency bands (delta and theta). Finally, correlation analyses highlighted a significant association between cognitive performance and inter-brain connectivity measures.
The present results may help identify specific patterns of behavioral and inter-brain coupling measures associated with competition and the processing of social reinforcement.
Social behavior and social interactions pervasively shape and influence our lives and relationships; it is then not surprising that investigation of the so-called "social brain" and of the neural bases of human social skills is attracting more and more attention [1]. Within this scenario, cooperation and competition are the primary (and opposite) interaction dynamics that define different ways to jointly execute a common task.
Previous studies underlined the importance of exploring cooperative interactions since, considering human social organization, they constitute a source of positive social feedback. In fact, driven by empathic and prosocial concern, the satisfaction of affiliative, shared needs can often become a social reward per se [2]. Competition, on the other hand, stresses the relevance and salience of social comparison processes between the inter-agents that are involved in the task, and includes other psycho-social issues related, for example, to the adoption of social hierarchies as a landmark. Thus, it is possible to imagine that the behavioral and neural effects corresponding to these two mechanisms are reflected by different and specific cognitive, neural, and behavioral patterns [3]. Few previous works directly compared these two conditions. A previous fMRI study [4] showed that, although the two conditions share some neural correlates related to social cognition, they are nonetheless associated with different networks. In detail, cooperative actions seem to recruit orbitofrontal areas, while prefrontal and more posterior (parietal) cortices are involved during competition. The authors interpreted this result in light of evolutionary and developmental psychology and stressed the highly rewarding effect of cooperation and a sort of merging of the two partners. Conversely, competition seems to involve less inclusion and a clear separation between the self and the other. Interestingly, a recent hyperscanning study seems to be in line with such evidence, since it revealed that two cooperative partners show greater behavioral and neural synchrony than competitive ones during a joint task [5]. This result was interpreted as a sort of disengagement between the members of the couple, and a similar effect was also observed in the case of inefficient joint interactions [6,7,8,9]. Thus, although it is important to explore cooperation as a highly gratifying, positive, and rewarding condition, the effects related to disengagement, social exclusion, social differentiation and hierarchic mechanisms deserve greater attention.
From an experimental point of view, given the intrinsic complexity of the phenomenon, recent theoretical advances in social neuroscience have led to a change in perspective and underlined the importance of considering interacting agents as inter-dependent parts of a system in order to properly understand social behavior [10, 11]. Nonetheless, the majority of studies on social interaction skills are based on "one-person paradigms" where an individual participant performs actions addressed to human or non-human agents, or where two participants are asked to participate in the same task but actually act just one at a time, following turn-taking rules [4, 12].
For example, previous studies [12, 13] required subjects to perform the task while their performance was compared to that of a peer group. Of course, these competitors did not exist, and specific fake feedbacks were displayed about subjects' performance compared to the others. In this case, results showed that the social manipulation in terms of both performance and ranking position was able to modulate subjects' behavioral and neural responses. In detail, a better performance with a left frontal lateralized pattern emerged in the case of a positive and proficient self-perception (win condition), while a worse performance with a right asymmetry was revealed in the lose condition, connoted by negative emotions and poor self-perception.
As for turn-based paradigms, a recent functional near-infrared spectroscopy (fNIRS) study compared cooperative and competitive dynamics during a game. Participants were assigned to two different roles: game builder or partner. Results showed that the builder's activation in the right inferior frontal gyrus (IFG) was increased or reduced while interacting with a cooperative or competitive partner, respectively.
However, given the turn-based structure of classical investigation paradigms, even when different participants take part in the same experimental procedure, neural activation data related to their thoughts, choices and behaviors refer to different phases of the interaction and then cannot be analyzed to explore proper inter-brain synchronization associated to social interaction dynamics and to simultaneous adaptation of participants' behaviors.
Consequently, in order to investigate social exchanges between competing agents and related neural activities, we moved towards a "two-person perspective" [14,15,16] and implemented a hyperscanning paradigm [17, 18], where bodily activities of two interacting agents are simultaneously recorded, matched, and analyzed together. We therefore decided to explore brain-to-brain coupling in terms of functional connectivity, understood as the temporal correlation between neurophysiological events that are spatially remote and measured as simultaneous coupling between two time series of biosignal data collected from different inter-agents. Connectivity analyses based on electroencephalographic (EEG) data have the advantage, over methods based on functional imaging data, of being characterized by higher temporal resolution and, thus, of being able to mirror swift modulations of moment-by-moment interactions. Such features make EEG-based hyperscanning a valuable tool to explore social interaction dynamics, as suggested by the first evidence in the literature concerning different interaction situations [19,20,21]. This advantage of hyperscanning techniques over conventional paradigms also emerged in a previous study [22] comparing cooperation and competition between a joint condition, where both subjects played together, a solo condition, where both subjects were asked to complete the task individually, and a condition against a PC. The comparison between the joint and PC conditions, as well as between the joint and individual tasks, revealed significant differences in terms of inter-brain functional causal relations.
Further, electrophysiological recordings allow for assessing modulation of oscillatory activity associated with cognitive load. For example, Babiloni et al. [23], in a study where participants were asked to play a card game, found larger activity in prefrontal and anterior cingulate cortex across different frequency bands in the player that led the game with respect to the others. A successive study with the same paradigm [24] integrated such results with functional connectivity analyses and found that the pattern of inter-brain connectivity in the cooperation condition was denser than in the defect one. In fact, the individualistic choice could have produced a lower synchronization between brains. On the other hand, a cooperative act elicited weaker brain activity, but a denser synchronization between the two brains. In addition, another work by Sinha et al. [25] showed that the competitive condition was characterized by significantly lower synchronization as compared to cooperation.
Consistently, it was observed that competition led to increased cognitive load and cortico-cortical communication, likely due to higher efforts linked to strategy planning, as mirrored by modulations of alpha frequency power. In fact, a previous experiment revealed decreased left alpha activity (increased brain response) after a competitive reinforcement [26].
Strategy planning, in particular, is a critical cognitive skill and a crucial aspect for inter-personal regulation during competition. Relevantly, such skill shapes inter-agents' actions on the basis of self-perception and attribution of efficacy and of information on one's own and other's performance. Moving in that direction, in previous investigations of competition (or cooperation) dynamics, the presence of an external feedback informing participants on their performance—in particular when it is positive—proved to be able to modulate their behavioral responses [17, 26,27,28]. While it has been suggested that even such modulation may be mediated by dorsolateral prefrontal structures [29], potential effects of processing information conveyed by performance feedbacks on inter-brain neural synchronicity and inter-agents synergies are yet to be explored.
The present study aims at investigating inter-personal synchronization during a competitive task by exploring inter-brain coupling of EEG activities. Further, we will explore the effect on performance and EEG synchronization of receiving an external positive feedback about individual performance. Moreover, since the task involves social and affective components, we were particularly interested in exploring the presence of lateralized patterns to better interpret results in light of subjects' emotional experience. Going down to specifics, we expected that: (1) participants will make fewer errors and show faster reaction times after receiving positive feedback on their performance, as a function of the perception of increased efficacy; (2) inter-brain coupling will decrease as the competitive task goes on—and in particular after receiving the positive feedback on individual performance—following reduced interpersonal engagement and implementation of individual strategies instead of joint action plans; (3) the modulation of inter-brain coupling could be primarily observed in prefrontal areas, given their critical role for higher social skills necessary for inter-personal tasks [12, 13, 30] and, in particular, competition [26].
Three different steps of analysis were conducted. First, behavioral data were analyzed. Then, EEG connectivity indices were computed. Finally, correlational analyses were run between these two. To avoid the presence of confounding factors such as a learning effect during the task due to the repeated conditions, a preliminary check was performed to compare the first four blocks (1–4) and blocks 5–8 for all the dependent variables of interest (RTs, ERs, EEG). Since the analyses did not reveal significant differences between the two sets, this factor was not further included in the three formal steps.
ERs and RTs
Two repeated measure ANOVAs were performed with ERs and RTs as dependent measures. The independent factor was Condition (Cond, 3 levels: control; pre; post-feedback). Considering ERs, ANOVA models highlighted the significant effect of the Cond factor (F[2, 23] = 9.78, p ≤ .001, η2 = .40), with decreased ERs in post-feedback with respect to pre-feedback sessions (F[2, 29] = 9.15, p ≤ .001, η2 = .39). Similarly, ANOVA models highlighted the significant effect of Condition on RTs (F[2, 29] = 8.75, p ≤ .001, η2 = .38), with decreased RTs during post-feedback sessions with respect to the control task (F[2, 29] = 8.18, p ≤ .001, η2 = .37) and pre-feedback sessions (F[2, 29] = 9.05, p ≤ .001, η2 = .39). Pre-feedback RTs were also significantly lower than those collected during the control task (F[2, 29] = 7.91, p ≤ .001, η2 = .35) (Fig. 1a, b).
Histograms (a) and EEG inter-brain functional connectivity patterns (b) as a function of Condition and Localization for the theta band, Π values. Bars represent ± 1 SE around group means. Asterisks mark statistically significant differences (p < .05). Colored lines represent the strength of the relation, ranging from 0 (yellow) to 1 (red)
Inter-brain connectivity
The second set of ANOVA models was applied to inter-brain connectivity data, with Condition (Cond, 3 levels: control; pre; post-feedback), Localization (Loc, 4 levels: AF; F; C; P), and Lateralization (Lat, 2 levels: Left; Right) as fixed factors. Greenhouse–Geisser correction of degrees-of-freedom was applied to ANOVA outcomes when needed. Simple effects for significant interactions were further checked via pair-wise comparisons, and Bonferroni correction was used to reduce potential biases from multiple comparisons. Furthermore, the normality of the data distribution was preliminarily assessed by checking kurtosis and asymmetry indices.
As for delta activity, the ANOVA model applied to inter-brain connectivity values showed significant Cond (F[2, 28] = 9.12, p ≤ .001, η2 = .39) and Cond × Localization (F[6, 82] = 9.11, p ≤ .001, η2 = .38) effects. As for the main effect, lower inter-brain connectivity was observed in post-feedback than pre-feedback (F[1, 14] = 8.56, p ≤ .001, η2 = .36) and control (F[1, 14] = 8.45, p ≤ .001, η2 = .35) condition. As for the significant interaction effect, pair-wise analyses highlighted that—within F recording channels—inter-brain connectivity decreased during post-feedback with respect to pre-feedback sessions (F[1, 14] = 9.77, p ≤ .001, η2 = .39) and control condition (F[1, 14] = 10.01, p ≤ .001, η2 = .41). In addition, in F localization, inter-brain connectivity was lower during pre-feedback sessions than during the control condition (F[1, 14] = 9.12, p ≤ .001, η2 = .40) (Fig. 2a, b). No other effect was found to be statistically significant.
Correlation analyses. RTs revealed significant correlations with inter-brain connectivity measures within right and left prefrontal areas during post-feedback session: (a) delta frequency band; (b) theta frequency band
As for the theta frequency band, significant effects were observed for Cond (F[2, 28] = 9.32, p ≤ .001, η2 = .39) and Cond × Localization (F[6, 62] = 8.44, p ≤ .001, η2 = .37) effects. As for the significant main effect, lower inter-brain connectivity was observed in post-feedback than pre-feedback condition (F[1, 11] = 9.03, p ≤ .001, η2 = .37). Moving to the interaction effect, inter-brain connectivity decreased in post-feedback than in pre-feedback (F[1, 14] = 8.16, p ≤ .001, η2 = .36); and control (F[1, 14] = 8.70, p ≤ .001, η2 = .37) conditions over F recording channels. Finally, over frontal areas, inter-brain connectivity was lower during pre-feedback than control conditions (F[1, 12] = 8.23, p ≤ .001, η2 = .35) (Fig. 3a, b). No other effect was statistically significant.
Experimental design and task/trial structures
Alpha and beta band data did not show statistically significant differences.
Correlation analysis
Correlation analyses (Pearson correlation coefficients) between behavioral (RTs and ERs) and neurophysiological (inter-brain EEG connectivity) measures were computed in order to investigate potential reciprocal associations across those levels.
As shown by Pearson correlation coefficients, delta band values concerning left and right frontal areas and RTs proved to be positively associated during the post-feedback session (respectively r = .543, p ≤ .001; r = -.513, p ≤ .001). Namely, lower right/left DLPFC connectivity was related to reduced RTs values in post-feedback condition. Similarly, as for theta activity, significant positive correlations were found between RTs and inter-brain connectivity within left and right F localization in post-feedback condition (respectively r = .514, p ≤ .001; r = -.498, p ≤ .001) (Fig. 4a, b). No other association was statistically significant.
EEG montage. Electrodes located on left and right anterior frontal (AFF1h, AFF2h), frontal (FFC3h, FFC4h), central (C3, C4), and parietal (P3, P4) sites (dashed contour) have been included in connectivity and statistical analyses
The present study explored cognitive and neural correlates of inter-personal synchronization associated to competitive social dynamics by using a hyperscanning approach. Primary findings highlighted: (1) the effect of competition on cognitive performance, with increased performances during competitive with respect to control tasks, and the salience of an external reinforcing feedback concerning performance levels; (2) a downward modulation of inter-brain connectivity associated to competition; and (3) a significant relationship between brain and behavioral measures.
Firstly, competitive situations were found to produce better behavioral performance when compared to individual (control) conditions. We indeed observed decreased RTs and ERs when subjects had to compete. Specifically, as compared to the individual task condition (t0), the presence of a clear competitive connotation (t1) led to better behavioral performance. That main result is in line with previous evidence, which pointed out the role of competitive contexts, when compared to cooperative ones, in inducing improved cognitive outcomes [31]. As suggested by a previous study by Balconi and Vanutelli [26], such a phenomenon might be even more evident when we perceive a positive feedback coming from a win situation. However, in that study the absence of a control condition limited the general extent of conclusions about the significance of competition and perceived superiority effects.
The present critical contrast between an individual and a competitive condition may lead to more stringent conclusions on the role of competition in improving cognitive performance. Further, it also helps in interpreting the further increase of cognitive performances after participants received the global reinforcing feedback related to their performance level (t2), which strengthened their self-perception as better performer with respect to their competitor.
Secondly, we even observed a gradual decrease of inter-brain coupling measures related to prefrontal areas moving from control condition to competitive tasks. Such finding may be explained by taking into account the competition frame and the actual task instructions, which clearly defined participants as co-acting competitors and likely lead them to act as individual agents. Indeed, within such frame, even if participants were involved in the same task, participants would have benefit more from individual and self-focused strategies than from joint action plans. In fact, in this case, the neutral condition without a reinforced competitive instruction showed a "baseline" higher connectivity between two persons during a standard joint action. On the contrary, when participants are required to compete, a sort of disengagement of the joint dynamic occurred.
Previous evidence, in fact, underlined that, if compared to cooperative tasks, competition is associated with decreased inter-brain connectivity. In fact, competitive dynamics seem to involve less inclusion mechanisms than cooperative ones, and a clear separation between the self and the other [4]. Cooperation, instead, creates a bond, an overlapping, between the two inter-agents, which leads to increased connectivity patterns [5, 24, 25]. Interestingly, a similar effect was also observed in the case of inefficient joint interactions [6,7,8,9].
Moreover, the decrease of inter-brain coupling was particularly evident over bilateral prefrontal areas, and that is in line with both previous EEG-based hyperscanning evidences [32] and literature concerning the involvement of prefrontal structures in the neural network supporting co-regulation of joint actions, strategic planning in social tasks, social exchanges, perspective-taking and mentalization [33,34,35,36,37]. Moreover, an involvement of prefrontal regions already emerged in a previous fMRI study [4] during competitive conditions, while cooperation was more associated with orbitofrontal activity.
While the localization of observed effects is consistent with previous literature on neural signature of social interaction, it is however worth noting that the modulation of inter-brain connectivity associated to our experimental manipulation was present in specific low-frequency components of EEG—namely delta and theta oscillations—suggesting that they might mirror social regulation and emotional engagement processes. Strategic control and conflict monitoring in social situations have indeed been associated to the increase of frontal theta oscillations [38, 39]. Again, the amplitude of alpha/theta bands proved to be correlated to behavioral synchronization of speech rhythms in an hyperscanning EEG investigation of verbal interactions [19], and to mirror even empathy for pain [40]. Furthermore, synchronization of theta and delta oscillations is stronger in response to high-arousal and emotionally-connoted stimuli with respect to neutral ones, and tend to be greater in individuals experiencing deeper emotional engagement [41,42,43]. We then suggest that the specific modulation of theta and delta activities might be linked to the motivational and attentional value of ongoing social dynamics and to processing of relevant social-affective cues [44,45,46]. By inducing participants to compete, we indeed created a moderately stressful situation where they engaged with the task, felt to be affectively involved, and enacted individual strategies (instead of synchronized action plans) in order to perform better than their competitor, thus mainly focusing on themselves and reducing inter-personal tuning.
Thirdly, we also observed significant correlations between behavioral performance and inter-brain functional connectivity measures related to prefrontal areas, which were associated with systematic brain-to-brain coupling modulation. Going down to specifics, we noted a systematic convergence of increased cognitive performance and reduction of inter-brain connectivity between the two inter-agents. Thus, while on the one hand we may speak about a general individual "cognitive gain" stimulated by competition and by the presence of the reinforcing feedback, on the other hand this effect occurred at the expense of the joint dynamics. In fact, competition might have triggered a decreasing trend in inter-brain functional connectivity following induced individualistic strategies. In sum, we may suggest that both behavioral and electrophysiological measures were effective in mirroring the effect of competition and of social reinforcement, and that these levels might similarly offer markers of the impact of external conditions which stress individual instead of inter-subjective goals.
To conclude, future research might try to get a better sketch of competition dynamics and their correlates by implementing competitive task in even more realistic social contexts able to ingenerate competitive intentions in a more ecological way. Secondly, future research may benefit from wider samples, so to better specify and qualify the brain-to-brain coupling phenomenon even taking into account other potentially relevant mediators (such as gender and some psychological constructs, e.g. empathy and social skills). Moreover, future analyses should better explore the effects related to both positive and negative feedbacks, in order to provide a complete scenario of the competitive dynamics. Finally, considering previous imaging studies revealing specific neural networks for cooperation and competition, further development should also consider a multi-method approach with combined techniques that can provide both temporal and spatial information of the joint interactions.
Fifteen couples of young volunteers took part in the study (Mage = 24.13, SDage = 1.05, 14 women; age range 20–25, identical for women and men). Couples were made up of same-sex, age-matched participants who were not acquainted before meeting at the experimental session. All participants were right-handed and had normal or corrected-to-normal vision. None of them reported a history of neurological or psychiatric disorders, and none showed pathological scores during an additional initial screening procedure (State-Trait Anxiety Inventory—STAI-Y [47]; Beck Depression Inventory—BDI-II [48]). All participants gave their written informed consent to participate in the research. The study and experimental procedures were conducted in accordance with the Declaration of Helsinki and were preliminarily approved by the Ethics Committee of the Department of Psychology, Catholic University of the Sacred Heart, Milan.
Participants arrived separately in the lab. Then, they were welcomed jointly by two researchers, one each, who drove them to two different experimental locations. In fact, in order to prevent eye contact or other forms of parallel communication, participants were separated by a black panel. They were seated next to each other in a moderately darkened room in front of two PC monitors, one each. During the experiment, each participant was assisted by a researcher for instructions or help. After stressing the competitive connotation of the task, participants were introduced to a selective attention task [13, 26, 28], where they had to detect and respond to target versus non-target stimuli in a sequence of similar stimuli (blue or green circles or triangles). A new target was presented at the beginning of each block. They were required to memorize the target and then to recognize it among other simple geometric figures by making a two-alternative forced-choice with left/right buttons. Each trial was made up of three stimuli, which were shown for 500 ms and separated by a 300- ms Inter-Stimulus Interval (ISI).
Compared to previous versions of the task, the present version also included a control condition where subjects were not asked to compete, but were simply required to complete the task on their own (t0; 100 trials). The control condition was then followed by two other experimental sessions (t1 and t2; 100 trials each) where participants had to compete and try to perform at their best. Between the two competition sessions participants received a global feedback concerning their overall individual performance up to that moment. Conditions (control and feedback) were counterbalanced across subjects and presented in a within-subject design. During the tasks, instead, participants received additional real-time feedbacks every three trials: two upward-directed arrows (good trial-specific performance), a dash (mean trial-specific performance), or two downward-directed arrows (bad trial-specific performance). Trial feedbacks lasted for 5000 ms. The EEG activity within this time frame was averaged and used to compute participants' response to each trial and the synchronization between the two members of the dyad (see also "Connectivity Analysis" section). Then, another 5000 ms elapsed as the Inter-Trial Interval (ITI). Both the trial-specific and the general feedback were manipulated by the experimenter. As for the between-sessions feedback, all participants were told that their performance was "well above" their competitor's one and were encouraged to keep the same performance level during the following session ("Measures recorded till now reveal that your performance is very good. Your response profile is well superior to your competitor's one—about 78% for RTs and 68% for ERs. Keep going like this in the following part"). As for trial-specific feedbacks, participants received systematic reinforcement across the task by being presented with positive feedbacks in 75% of cases (dash and down-arrows appeared only in 25% of cases and mainly at the beginning of the task so as to make the task more credible) (Fig. 5). The experiment lasted about 75 min. Additional information on the task can be found in the above-cited published works.
Behavioral results. ERs (a) and RTs (b) modulation as a function of Condition (control vs. pre-feedback vs. post-feedback). Increased performance was observed in post-feedback with respect to pre-feedback and control conditions. Bars represent ± 1 SE around group means. Asterisks mark statistically significant differences (p < .05)
According to qualitative debriefing interviews carried out at the end of the experimental sessions, participants reported that they were strongly engaged in the competitive task (96%), that they deemed the feedback veridical (95%), and that their performance at the task was relevant for perceived self-efficacy (97%), as was the perception of having performed better than the other participant (96%).
Reaction times (RTs) were collected from the stimulus onset, and error rates (ERs) were computed as the total number of incorrect target/non-target detections out of the total number of trials (higher values corresponded to increased incorrect responses).
EEG recording and reduction
Electrophysiological activities were recorded via two EEG systems (V-Amp, Brain Products GmbH, Gilching; Truscan RS, Deymed Diagnostic sro, Hronov) with a 15-channel montage (AFF1h, AFF2h, FFC3h, Fz, FFC4h, C3, Cz, C4, P3, Pz, P4, O1, O2, T7, and T8). Ag/AgCl electrodes were placed according to the 5% International System [49] and referred to earlobes. The sampling rate was set to 500 Hz and electrode impedance was always kept below 5 kΩ. A 50 Hz notch and a 0.01–250 Hz bandpass were set as input filters. The electrooculogram was collected by placing two additional electrodes above and below the left eye.
EEG data were then analyzed by Vision Analyzer2 Software (Brain Products, Gilching, Germany). Data were filtered offline (0.1-50 Hz bandpass filter, 48 db/oct) and re-referenced to common average, which makes data reference-free.
A regression-based ocular correction algorithm suitable for low density montages was applied to data so as to reduce artifacts due to saccades and eye-blinks [50]. Signals were then segmented and visually checked so as to reject any residual ocular, movement or muscular artifacts. Only artifact-free segments were included in subsequent processing steps. All subjects were included in the analysis, since we defined a cut-off of 95 trials for each condition. After the visual check, frequency power spectra were computed starting from cleaned waveforms by applying the Fast Fourier Transform. Individual average EEG power values (standard frequency bands: delta—0.5 to 3.5 Hz, theta—4 to 7.5 Hz, alpha—8 to 12.5 Hz, beta—13 to 30 Hz) were finally computed for each recording channel and experimental condition. When performing statistical analyses, we only focused on lateralized activities over anterior frontal—AF (AFF1h, AFF2h), frontal—F (FFC3h, FFC4h), central—C (C3, C4), and parietal—P (P3, P4) areas (Fig. 6).
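As a concrete illustration of the FFT band-power step described above (and only as an illustration — the published analysis was run in Vision Analyzer2, not in code shown here), a minimal Python/NumPy sketch might look as follows; the band limits and the 500 Hz sampling rate are taken from the text, while the function name and the synthetic 5-s segment are assumptions.

import numpy as np

# Frequency bands as reported in the text (Hz)
BANDS = {"delta": (0.5, 3.5), "theta": (4.0, 7.5), "alpha": (8.0, 12.5), "beta": (13.0, 30.0)}

def band_power(segment, fs, bands=BANDS):
    """Average FFT power of a 1-D EEG segment within each frequency band."""
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2 / segment.size
    return {name: power[(freqs >= lo) & (freqs <= hi)].mean() for name, (lo, hi) in bands.items()}

# Example: one 5-second feedback window sampled at 500 Hz (synthetic data)
rng = np.random.default_rng(0)
print(band_power(rng.standard_normal(5 * 500), fs=500))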
Histograms (a) and EEG inter-brain functional connectivity patterns (b) as a function of Condition and Localization for the delta frequency band, Π values. Bars represent ± 1 SE around group means. Asterisks mark statistically significant differences (p < .05). Colored lines represent the strength of the relation, ranging from 0 (yellow) to 1 (red)
Connectivity analysis
EEG connectivity data were obtained by computing partial correlation coefficients (Πij) on subjects' response to the 300 trial feedbacks (averages of the subsequent 5 s), for each pair of channels, each dyad, and each frequency band. Coefficients were calculated by normalizing the inverse of the covariance matrix Γ = Σ−1:
$$\begin{aligned} \varGamma &= \left( \varGamma_{ij} \right) = \varSigma^{-1} \quad \text{(inverse of the covariance matrix)} \\ \varPi_{ij} &= \frac{-\varGamma_{ij}}{\sqrt{\varGamma_{ii}\,\varGamma_{jj}}} \quad \text{(partial correlation matrix)} \end{aligned}$$
Correlational analyses have been previously used to assess intra-brain connectivity, especially between frontal areas, with other techniques, such as fNIRS (see for example [51, 52]). In particular, unlike simple correlations, partial correlation quantifies the relationship between two signals (in our case, channels i and j) given, i.e. net of, the values of all the other variables that could be directly connected to the model. It is applied in all those cases where the strength of the relationship between two variables is a matter of interest, ranging from computational models [53] to neuroscience. Indeed, the same statistical model was applied in previous work to assess inter-brain synchrony with EEG [54] during failing cooperative interactions. In fact, similarly to the present paradigm, specific averaged values in response to the feedback were used instead of time series.
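To make the formula above concrete, here is a minimal Python/NumPy sketch of how a partial correlation matrix can be obtained from trial-wise values; this is an illustration, not the authors' implementation, and the input layout (one row per trial feedback, one column per channel, with more trials than channels) is an assumption.

import numpy as np

def partial_correlation(values):
    """Partial correlation matrix Pi computed from an (n_trials, n_channels) array."""
    sigma = np.cov(values, rowvar=False)   # covariance matrix Sigma
    gamma = np.linalg.inv(sigma)           # Gamma = inverse of the covariance matrix
    d = np.sqrt(np.diag(gamma))
    pi = -gamma / np.outer(d, d)           # Pi_ij = -Gamma_ij / sqrt(Gamma_ii * Gamma_jj)
    np.fill_diagonal(pi, 1.0)
    return pi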
After computing partial correlation values, they were entered into ANOVA models as dependent variables.
Toppi J, Borghini G, Petti M, He EJ, De Giusti V, He B, et al. Investigating cooperative behavior in ecological settings: an EEG hyperscanning study. PLoS ONE. 2016;11:e0154236. https://doi.org/10.1371/journal.pone.0154236.
Vanutelli ME, Nandrino J-L, Balconi M. The boundaries of cooperation: sharing and coupling from ethology to neuroscience. Neuropsychol Trends. 2016;19:83–104.
Balconi M, Vanutelli ME. Cooperation and competition with hyperscanning methods: review and future application to emotion domain. Front Comput Neurosci. 2017;11:86.
Decety J, Jackson PL, Sommerville JA, Chaminade T, Meltzoff AN. The neural bases of cooperation and competition: an fMRI investigation. Neuroimage. 2004;23:744–51.
Balconi M, Crivelli D, Vanutelli ME. Why to cooperate is better than to compete: brain and personality components. BMC Neurosci. 2017;18.
Balconi M, Vanutelli ME. When cooperation was efficient or inefficient. Functional near-infrared spectroscopy evidence. Front Syst Neurosci. 2017;11:26.
Balconi M, Gatti L, Vanutelli ME. When cooperation goes wrong: brain and behavioural correlates of ineffective joint strategies in dyads. Int J Neurosci. 2017;128:155–66.
Balconi M, Gatti L, Vanutelli ME. Cooperate or not cooperate EEG, autonomic, and behavioral correlates of ineffective joint strategies. Brain Behav. 2018;8:e00902.
Balconi M, Vanutelli ME, Gatti L. Functional brain connectivity when cooperation fails. Brain Cognit. 2018;123:65–73.
Hasson U, Ghazanfar AA, Galantucci B, Garrod S, Keysers C. Brain-to-brain coupling: a mechanism for creating and sharing a social world. Trends Cognit Sci. 2012;16:114–21. https://doi.org/10.1016/j.tics.2011.12.007.
Crivelli D, Balconi M. Agency and inter-agency, action and joint action: theoretical and neuropsychological evidences. In: Balconi M, editor. Neuropsychology of the sense of agency. From consciousness to action. New York: Springer-Verlag; 2010. p. 107–22.
Balconi M, Pagani S. Social hierarchies and emotions: cortical prefrontal activity, facial feedback (EMG), and cognitive performance in a dynamic interaction. Soc Neurosci. 2015;10:166–78. https://doi.org/10.1080/17470919.2014.977403.
Balconi M, Pagani S. Personality correlates (BAS-BIS), self-perception of social ranking, and cortical (alpha frequency band) modulation in peer-group comparison. Physiol Behav. 2014;133C:207–15.
Schilbach L. A second-person approach to other minds. Nat Rev Neurosci. 2010;11:449.
Konvalinka I, Roepstorff A. The two-brain approach: how can mutually interacting brains teach us something about social interaction? Front Hum Neurosci. 2012;6:215. https://doi.org/10.3389/fnhum.2012.00215.
Koike T, Tanabe HC, Sadato N. Hyperscanning neuroimaging technique to reveal the "two-in-one" system in social interactions. Neurosci Res. 2015;90:25–32.
Montague PR, Berns GS, Cohen JD, McClure SM, Pagnoni G, Dhamala M, et al. Hyperscanning: simultaneous fMRI during linked social interactions. Neuroimage. 2002;16:1159–64.
Holper L, Scholkmann F, Wolf M. Between-brain connectivity during imitation measured by fNIRS. Neuroimage. 2012;63:212–22. https://doi.org/10.1016/j.neuroimage.2012.06.028.
Kawasaki M, Yamada Y, Ushiku Y, Miyauchi E, Yamaguchi Y. Inter-brain synchronization during coordination of speech rhythm in human-to-human social interaction. Sci Rep. 2013;3:1–8. https://doi.org/10.1038/srep01692.
Lindenberger U, Li S-C, Gruber W, Müller V. Brains swinging in concert: cortical phase synchronization while playing guitar. BMC Neurosci. 2009;10:22.
Sänger J, Müller V, Lindenberger U. Intra- and interbrain synchronization and network properties when playing guitar in duets. Front Hum Neurosci. 2012;6:312. https://doi.org/10.3389/fnhum.2012.00312.
Astolfi L, Toppi J, Vogel P, Mattia D, Babiloni F, Ciaramidaro A, et al. Investigating the neural basis of cooperative joint action. An EEG hyperscanning study. In: 2014 36th annual international conference of the IEEE on engineering in medicine and biology society (EMBC). IEEE; 2014. p. 4896–9.
Babiloni F, Cincotti F, Mattia D, De Vico Fallani F, Tocci A, Bianchi L, et al. High resolution EEG hyperscanning during a card game. In: 29th annual international conference of the IEEE on engineering in medicine and biology society 2007. EMBS 2007. 2007; p. 4957–60.
Astolfi L, Toppi J, De Vico Fallani F, Vecchiato G, Cincotti F, Wilke CT, et al. Imaging the social brain by simultaneous hyperscanning during subject interaction. IEEE Intell Syst. 2011;26:38–45.
Sinha N, Maszczyk T, Wanxuan Z, Tan J, Dauwels J. EEG hyperscanning study of inter-brain synchrony during cooperative and competitive interaction. 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2016; p. 4813–8.
Balconi M, Vanutelli ME. Competition in the brain. The contribution of EEG and fNIRS modulation and personality effects in social ranking. Front Psychol. 2016;7:1587. https://doi.org/10.3389/fpsyg.2016.01587.
Monterosso J, Ainslie G, Toppi Mullen P, Gault B. The fragility of cooperation: a false feedback study of a sequential iterated prisoner's dilemma. J Econ Psychol. 2002;23:437–48.
Balconi M, Vanutelli ME. Interbrains cooperation: hyperscanning and self-perception in joint actions. J Clin Exp Neuropsychol. 2017;39:607–20.
Lukinova E, Myagkov M. Impact of short social training on prosocial behaviors: an fMRI study. Front Syst Neurosci. 2016;10:60. https://doi.org/10.3389/fnsys.2016.00060.
Chiao JY, Adams RB, Tse PU, Lowenthal L, Richeson JA, Ambady N. Knowing who's boss: fMRI and ERP investigations of social dominance perception. Group Process Intergroup Relat. 2008;11:201–14.
Tauer JM, Harackiewicz JM. The effects of cooperation and competition on intrinsic motivation and performance. J Person Soc Psychol. 2004;86:849–61.
De Vico Fallani F, Nicosia V, Sinatra R, Astolfi L, Cincotti F, Mattia D, et al. Defecting or not defecting: how to "read" human behavior during cooperative games by EEG measurements. PLoS ONE. 2010;5:e14187. https://doi.org/10.1371/journal.pone.0014187.
Weiland S, Hewig J, Hecht H, Mussel P, Miltner WHR. Neural correlates of fair behavior in interpersonal bargaining. Soc Neurosci. 2012;7:537–51.
Van Overwalle F, Baetens K. Understanding others' actions and goals by mirror and mentalizing systems: a meta-analysis. Neuroimage. 2009;48:564–84.
Kalbe E, Schlegel M, Sack AT, Nowak DA, Dafotakis M, Bangard C, et al. Dissociating cognitive from affective theory of mind: a TMS study. Cortex. 2010;46:769–80.
Karafin MS, Tranel D, Adolphs R. Dominance attributions following damage to the ventromedial prefrontal cortex. J Cognit Neurosci. 2004;16:1796–804.
Suzuki S, Niki K, Fujisaki S, Akiyama E. Neural basis of conditional cooperation. Soc Cognit Affect Neurosci. 2011;6:338–47.
Billeke P, Zamorano F, Cosmelli D, Aboitiz F. Oscillatory brain activity correlates with risk perception and predicts social decisions. Cereb Cortex. 2013;23:2872–83.
Cristofori I, Moretti L, Harquel S, Posada A, Deiana G, Isnard J, et al. Theta signal as the neural signature of social exclusion. Cereb Cortex. 2013;23:2437–47.
Mu Y, Fan Y, Mao L, Han S. Event-related theta and alpha oscillations mediate empathy for pain. Brain Res. 2008;1234:128–36.
Balconi M, Grippa E, Vanutelli ME. What hemodynamic (fNIRS), electrophysiological (EEG) and autonomic integrated measures can tell us about emotional processing. Brain Cognit. 2015;95:67–76.
Knyazev GG, Slobodskoj-Plusnin JY, Bocharov AV. Event-related delta and theta synchronization during explicit and implicit emotion processing. Neuroscience. 2009;164:1588–600.
Paré D. Role of the basolateral amygdala in memory consolidation. Prog Neurobiol. 2003;70:409–20.
Balconi M, Falbo L, Conte VA. BIS and BAS correlates with psychophysiological and cortical response systems during aversive and appetitive emotional stimuli processing. Motiv Emot. 2012;36:218–31.
Balconi M, Pozzoli U. Arousal effect on emotional face comprehension. Frequency band changes in different time intervals. Physiol Behav. 2009;97:455–62. https://doi.org/10.1016/j.physbeh.2009.03.023.
Başar E. Brain function and oscillations: integrative brain function neurophysiology and cognitive processes. Berlin: Springer; 1999.
Pedrabissi L, Santinello M, editors. State-Trait Anxiety Inventory—Forma Y. Firenze: Giunti OS; 1989.
Ghisi M, Flebus GB, Montano A, Sanavio E, Sica C, editors. Beck Depression Inventory—II. Firenze: Giunti OS; 2006.
Oostenveld R, Praamstra P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin Neurophysiol. 2001;112:713–9.
Gratton G, Coles MGH, Donchin E. A new method for off-line removal of ocular artifact. Electroencephalogr Clin Neurophysiol. 1983;55:468–84.
Chaudhary U, Hall M, DeCerce J, Rey G, Godavarty A. Frontal activation and connectivity using near-infrared spectroscopy: verbal fluency language study. Brain Res Bull. 2011;84:197–205. https://doi.org/10.1016/j.brainresbull.2011.01.002.
Balconi M, Pezard L, Nandrino J-L, Vanutelli ME. Two is better than one: the effects of strategic cooperation on intra- and inter-brain connectivity by fNIRS. PLoS ONE. 2017;12:e0187652.
Herbsleb JD, Mockus A. An empirical study of speed and communication in globally distributed software development. IEEE Trans Softw Eng. 2003;29:481–94.
Balconi M, Gatti L, Vanutelli ME. EEG functional connectivity and brain-to-brain coupling in failing cognitive strategies. Conscious Cognit. 2018;60:86–97.
MB planned the research, supervised data acquisition and statistical analysis, wrote the paper. MEV realized the empirical research, applied the statistical analysis, wrote the paper. Both authors read and approved the final manuscript.
The authors acknowledge Davide Crivelli for his contribution to the research.
The datasets generated and/or analyzed during the current study are not publicly available due to the biometrics nature of the data but are available from the corresponding author on reasonable request.
The study and experimental procedures were conducted in accordance with the Declaration of Helsinki and were approved by the Ethics Committee of the Department of Psychology, Catholic University of the Sacred Heart, Milan, in 2016. All participants gave their written informed consent to participate in the research.
Research Unit in Affective and Social Neuroscience, Catholic University of the Sacred Heart, Milan, Italy
Michela Balconi
& Maria Elide Vanutelli
Department of Psychology, Catholic University of the Sacred Heart, Largo Gemelli 1, 20123, Milan, Italy
Correspondence to Michela Balconi.
Hyperscanning
Functional connectivity
Reinforcing feedback
Proportionality Mathematics
Get help with your Proportionality (mathematics) homework. Access the answers to hundreds of Proportionality (mathematics) questions that are explained in a way that's easy for you to understand. Can't find the question you're looking for? Go ahead and submit it to our experts to be answered.
Proportionality Mathematics Questions and Answers
Test your understanding with practice problems and step-by-step solutions. Browse through all study tools.
If y varies inversely as x and y equals 18 when x equals 5, find y when x equals 3.
How high is a tree that casts a 22-ft shadow at the same time a 4-ft post casts a shadow that is 6-ft long?
Determine if the statement is true or false. |3 \sin 6x| \leq 3 for all x.
Levi must paint the inside of his house, the inside of his house has a paintable surface area of 3500 ft^2. Levi went to the European "Home Depot" to buy his paint when he returned home, he realized each bucket of paint covers 5 m^2, Levi has 140 buckets
Farmer Brown had ducks and cows. One day he noticed that the animals had a total of 16 heads and 42 feet. How many of the animals were cows?
The temperature, T, of a given mass of gas, varies inversely with its volume, V. The temperature of 20 cm^3 of a certain gas is 15^\circ C. What will the temperature be when it is compressed to a volu
If y varies jointly with x and z, and y = 24 when x = 3 and z = 2, then the value of x when y = 6 and z = 0.5 is (blank).
Given y varies directly with x. When y = 20, x = 50. Find x when y = 36.
A sample of 144 firecrackers contained 8 duds. How many duds would you expect in a sample of 2016 firecracker?
The time required to do a job varies inversely as the number of people working. It takes 5 hours for 7 bricklayers to build a park well. How long will it take 10 bricklayers to complete the job?
A particular hybrid car travels approximately 288 mi on 6 gal of gas. Find the amount of gas required for a 912-mi trip.
The number of calculators Mrs. Hopkins can buy for the classroom varies inversely as the cost of each calculator. She can buy 24 calculators that cost $60 each. How many calculators can she buy if the
The number of kilograms of water in a human body varies directly as the mass of the body. An 87 kg person contains 58 kg of water. How many kilograms of water are in a 72 kg person?
Does the equation 7x - 4y = 0 represent a direct variation? If so, find the constant of variation. A) yes; k = \frac{7}{4} B) yes; k = -4 C) no D) yes; k = \frac{-7}{4}
The heat generated by a stove element varies directly as the square of the voltage and inversely as the resistance. If the voltage remains constant, what needs to be done to triple the amount of heat
Determine whether the equation represents direct, inverse, joint, or combined variation. y = \frac{31x}{wz}.
Solve using variation of parameters. A) y'' + y = 3 \sec x - x^2 + 1 B) 2y'' + y = \tan x + e^{2x} - 2 Differential equations.
The U-Drive Rent-A-Truck company plans to spend $16 million on 320 new vehicles. Each commercial van will cost $25,000, each small truck $80,000, and each large truck $70,000. Past experience shows
The indicated functions are known linearly independent solutions of the associated homogeneous differential equation on (0, infinity). Find the general solution of the given non homogeneous equation.
x varies directly as the square of y and x = 12 when y = 4. What is the value of x when y = 6?
Frank runs 10 miles in 75 minutes. At the same rate, how many miles would he run in 69 minutes?
Which of the following proportions will allow you to correctly compute the answer to the question: 51 in. = _ ft.? 1. \frac{51}{x} = \frac{1}{12} 2. \frac{x}{51} = \frac{12}{1} 3. \frac{51}{12} = \fra
Solve the proportion. \frac{2}{7} = \frac{x}{42} A.) \frac{1}{2} B.) 12 C.) \frac{2}{7} D.) 6
Determine whether y varies directly with x if so, solve for the constant of variation k. 3y= -7x-18
The variable z varies jointly with y and the square of x. If x= -2 when y= 7 and z= -84, find x when z= -96 and y= 2.
You rollerblade at an average speed of 8 miles per hour. The number of miles m you rollerblade during h hours is modeled by m= 8h. Do these two quantities have a direct variation?
Suppose that y is directly proportional to x and that y= 10 when x= 3. Find the constant of proportionality k.
The American Association of Individual Investors (AAII) polls its subscribers on a weekly basis to determine the number who are bullish, bearish, or neutral on the short-term prospects for the stock m
The variables x and y are directly proportional, and y = 2 when x = 3. What is the value of y when x = 9?
Three quantities R, S, and T are such that R varies directly as S and inversely as the square of T. (a) Given that R = 480 when S = 150 and T = 5, write an equation connecting R, S, and T. (b) (i) Find the value of R when S = 360 and T = 1.5. (ii) Find th
The amount of taxes a city collect is proportional to the population of the city. If the city collects $6 billion in 1982 when the population was 2 million people, how much did the city collect in 1990 when the population was 3 million people?
Solve: If y is proportional to x and x is 4 when y is 22, then what is y when: (g)x = \frac{1}{w}? y =
Solve the following proportion. 2 / 15 = 13 / x.
Translate the statement of variation into an equation; use k as the constant of variation. V varies jointly as s and the fourth power of u.
A three quarter inch wire has 12 ohms resistance. How much resistance has the same length of half-inch wire, if resistance varies inversely as the square of the diameter?
You pay $1 to rent a movie plus an additional $0.50 per day until you return to the movie. Your friend pays $1.25 per day to rent a movie. a. Make tables showing the costs to rent a movie for up to 5
A walker's speed, v, is proportional to the ratio of his leg length, L, and the period of the repeating motion of his legs, T, that is, v is proportional to L/T. If the period is measured to be propor
In a Harris poll, adults were asked if they are in favor of abolishing the penny. Among the responses, 1290 answered "no" , 454 answered "yes", and 400 had no opinion. What is the sample portion of ye
If y varies directly as x, and y = 9 when x= 5, find y when x = 10.
Ben bought 2 sandwiches for $5.00. Let x represent the number of sandwiches purchased and let y represent the total cost. Graph this proportional relationship.
Find a mathematical model that represents the statement. (Determine the constant of proportionality.) y is inversely proportional to x^3. (y = 5 when x = 2.)
The number of ants in Bob's kitchen increase at a rate that is proportional to the number of ants present each day. If there are 20 ants on day 0 and 60 ants on day 4, how many ants will there be on day 14? Round to the nearest ant.
Suppose r varies directly as the square of m, and inversely as s. If r = 15 when m = 15 and s = 9, find r when m = 60 and s = 9.
If burning one mole releases 1310.0 kJ, then how many moles will release 227.30 kJ?
What is the solution of the proportion? 3y - 8/12 = y/5
If y varies directly as the square root of x and y = 8 when x = 81, find y if x = 6561. (Round off your answer to the nearest hundredth.)
A recipe calls for of a cup of milk for 11 cookies. How many cups of milk are needed to make 132 cookies?
The total population of animals is directly proportional to the size of the habitat (in acres) polled. Write an equation using only one variable that could be used to solve for the constant of variation k .
Find the value of y for a given value of x, if y varies directly with x. If y = 39 when x = -117, what is y when x = -132?
Solve for n. \frac{3}{7 - n}= \frac{1}{n}
Solve for x. \frac{3}{x - 7} = \frac{2}{2x+1}
If z varies inversely as w, and z = 40 when w = .06, find z when w = 20.
The quantity y varies directly with the square of x. If y = 3 when x = 9, find y when x is 11. Round answer to the nearest hundredth.
Suppose that y varies directly with x and inversely with z. If y= 25 when x= 35 and z= 7, write the equation that models the relationship. Then find y when x= 12 and z= 4.
An example of a useful somaclonal variation is: Shikonin dye production; short crop duration in sugarcane; male sterility; white rust re
Suppose that S varies directly as the 2/5 power of T, and that S=8 when T=32. Find S when T=243.
Solve the proportion u / 4 = 11 / 17.
Solve and find the value of x. \frac{2}{3}=\frac{1.2}{x}
If y is inversely proportional to x, and y = 8 when x = 2, find y when x = 5.
Are 84/105 and 128/160 proportional?
Why is it impossible to measure position and momentum at the same time with arbitrary precision?
I'm aware of the uncertainty principle that doesn't allow $\Delta x$ and $\Delta p$ to both be arbitrarily close to zero. I understand this by looking at the wave function and seeing that if one is sharply peaked, its Fourier transform will be wide.
But how does this stop one from measuring both position and momentum at the same time? I've googled this question, but all I found were explanations using the 'observer effect'. I'm not sure, but I think this effect is very different from the intrinsic uncertainty principle.
So what stops us from measuring both position and momentum with arbitrary precision? Does a quantum system always have to change when observed? Or does it have to do with the uncertainty principle?
EDIT: I'm getting a lot of answers telling me where the uncertainty principle comes from. I greatly appreciate it, but I feel I already have a strong understanding of what it means.
My question really was why it is impossible to measure both position and momentum at the same time with infinite precision. I understand that, for a given wavefunction, if $\Delta x$ is small then $\Delta p$ will be big, and how this arises from Fourier transforms. But I fail to see how this prevents anyone from performing a simultaneous measurement of both $x$ and $p$ with infinite precision.
Sorry if I'm misunderstanding the previous answers.
quantum-mechanics wavefunction heisenberg-uncertainty-principle quantum-measurements
catmousedog
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$
– rob ♦
$\begingroup$ Since it was deleted, here's 3blue1brown's explanation of the general uncertainty principle. The tl;dw is that the "uncertainty principle" is actually a property of all waves. $\endgroup$
– BlueRaja - Danny Pflughoeft
When someone asks "Is it really impossible to simultaneously measure position and momentum in quantum theory?", the best preliminary answer one can give is another question: "what do you exactly mean by measurement? and by position and momentum?". Those words have several meanings each in quantum theory, reflected in literature and experimental practice. There is a sense in which simultaneous measurement of position and momentum is not only possible, but also routinely made in many quantum labs, for example quantum-optics labs. Such measurement is indeed at the core of modern quantum applications such as quantum-key distribution.
So I think it's best first of all to make clear what the different meanings of measurement, position, momentum are in actual applications and in the literature, and to give examples of the different experimental procedures that are called "measurement of position" etc. What's important is to understand what's being done; the rest is just semantics.
Let me get there step by step. The answer below summarizes what you can find in current articles published in scientific journals and current textbooks, works and results which I have experienced myself as a researcher in quantum optics. All references are given throughout the answer and at the end, and I strongly recommend that you go and read them. Also, this answer is meant to discuss the uncertainty principle and simultaneous measurement within quantum theory. Maybe in the future we'll all use an alternative theory in which the same experimental facts are given a different meaning; there are such alternative theories proposed at present, and many researchers indeed are working on alternatives. Finally, this answer tries to avoid terminological debates, explaining the experimental, laboratory side of the matter. Warnings about terminology will be given throughout. (I don't mean that terminology isn't important, though: different terminologies can inspire different research directions.)
We must be careful, because our understanding of the uncertainty principle today is very different from how people saw it in the 1930–50s. The modern understanding is also borne out in modern experimental practice. There are two main points to clarify.
1. What do we exactly mean by "measurement" and by "$\Delta x$"?
The general picture is this:
We can prepare one copy of a physical system according to some specific protocol. We say that the system has been prepared in a specific state (generally represented by a density matrix $\pmb{\rho}$). Then we perform a specific operation that yields an outcome. We say that we have performed one instance of a measurement on the system (generally represented by a so-called positive-operator-valued measure $\{\pmb{O}_i\}$, where $i$ labels the possible outcomes).
We can repeat the procedure above anew – new copy of the system – as many times as we please, according to the same specific protocols. We are thus making many instances of the same kind of measurement, on copies of the system prepared in the same state. We thus obtain a collection of measurement results, from which we can build a frequency distribution and statistics. Throughout this answer, when I say "repetition of a measurement" I mean it in this specific sense.
There's also the question of what happens when we make two or more measurements in succession, on the same system. But I'm not going to discuss that here; see the references at the end.
This is why the general empirical statements of quantum theory have this form: "If we prepare the system in state $\pmb{\rho}$, and perform the measurement $\{\pmb{O}_i\}$, we have a probability $p_1$ of observing outcome $i=1$, a probability $p_2$ of observing outcome $i=2$, ..." and so on (with appropriate continuous limits for continuous outcomes).
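As a minimal concrete instance of a statement of this form (my own illustration, not part of the cited references), here is the rule for a two-outcome measurement on a qubit: prepare a state $\pmb{\rho}$, pick positive operators $\pmb{O}_1, \pmb{O}_2$ summing to the identity, and the outcome probabilities are $p_i = \mathrm{Tr}(\pmb{\rho}\,\pmb{O}_i)$. The particular matrices below are arbitrary examples.

```python
import numpy as np

# State rho: an arbitrary qubit density matrix (positive, unit trace).
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)

# A two-outcome measurement {O_1, O_2}: positive operators summing to the identity.
O1 = np.diag([0.9, 0.2]).astype(complex)
O2 = np.eye(2) - O1

# "If we prepare rho and perform {O_i}, outcome i occurs with probability Tr(rho O_i)."
for i, O in enumerate((O1, O2), start=1):
    p = np.real(np.trace(rho @ O))
    print(f"p_{i} = {p:.3f}")   # p_1 = 0.725, p_2 = 0.275
```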
Now, there's a measurement precision/error associated with each single instance of the measurement, and also a variability of the outcomes across repetitions of the measurement. The first kind of error can be made as small as we please. The variability across repetitions, however, generally appears not to be reducible below some nonzero amount which depends on the specific state and the specific measurement. This latter variability is what we call "$\Delta x$".
So when we say "cannot be measured with arbitrary precision", what we mean more exactly is that "its variability across measurement repetitions cannot be made arbitrarily low". The fundamental mystery of quantum mechanics is the lack – in a systematic way – of reproducibility across measurement instances. But the error in the outcome of each single instance has no theoretical lower bound.
Of course this situation affects our predictive abilities, because whenever we repeat the same kind of measurement on a system prepared on the same kind of state, we don't really know what to expect, within $\Delta x$.
This important distinction between single and multiple measurement instances was first pointed out by Ballentine in 1970:
Ballentine: The Statistical Interpretation of Quantum Mechanics, Rev. Mod. Phys. 42 (1970) 358 (other copy)
see especially the very explanatory Fig. 2 there. And it's not a matter of "interpretation", as the title might today suggest. It's an experimental fact. Clear experimental examples of this distinction are given for example in
Leonhardt: Measuring the Quantum State of Light (Cambridge 1997)
see for example Fig. 2.1 there and its explanation. Also the more advanced
Mandel, Wolf: Optical Coherence and Quantum Optics (Cambridge 2008).
See also the textbooks given below.
The distinction between error of one measurement instance and variability across measurement instances is also evident if you think about a Stern-Gerlach experiment. Suppose we prepare a spin in the state $x+$ and we measure it in the direction $y$. The measurement yields only one of two clearly distinct spots, corresponding to either the outcome $+\hbar/2$ or $-\hbar/2$ in the $y$ direction. This outcome may have some error in practice, but we can in principle clearly distinguish whether it is $+\hbar/2$ or $-\hbar/2$. However, if we prepare a new spin in the state $x+$ and measure $y$ again, we can very well find the opposite outcome – again very precisely measured. Over many measurements we observe these $+$ and $-$ outcomes roughly 50% each. The standard deviation is $\hbar/2$, and that's indeed the "$\Delta S_y$" given by the quantum formulae: they refer to measurement repetitions, not to one single instance in which you send a single electron through the apparatus.
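A quick numerical check of this spin example, assuming the standard Pauli-matrix representation of $S_y$ and the $x+$ state (this sketch is my own addition, not part of the cited references): the predicted mean of $S_y$ is 0 and the predicted spread across repetitions is $\hbar/2$, even though every single run returns exactly $\pm\hbar/2$.

```python
import numpy as np

hbar = 1.0                                        # work in units where hbar = 1
Sy = (hbar / 2) * np.array([[0, -1j],
                            [1j,  0]])            # spin operator S_y
x_plus = np.array([1, 1]) / np.sqrt(2)            # the state "x+"

mean = np.real(x_plus.conj() @ Sy @ x_plus)             # <S_y> = 0
mean_sq = np.real(x_plus.conj() @ Sy @ Sy @ x_plus)     # <S_y^2> = (hbar/2)^2
print(mean, np.sqrt(mean_sq - mean**2))                 # 0.0, 0.5  (= hbar/2)

# Each single run gives exactly +hbar/2 or -hbar/2 (50/50 for this state);
# the spread hbar/2 only shows up across many repetitions.
rng = np.random.default_rng(1)
outcomes = rng.choice([hbar / 2, -hbar / 2], size=10_000)
print(outcomes.mean(), outcomes.std())                  # ~0.0, ~0.5
```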
It must be stressed that some authors (for example Leonhardt above) use the term "measurement result" to mean, not the result of a single experiment, but the average value $\bar{x}$ found in several repetitions of an experiment. Of course this average value has uncertainty $\Delta x$. There's no contradiction here, just a different terminology. You can call "measurement" what you please – just be precise in explaining what your experimental protocol is. Some authors use the term "one-shot measurement" to make the distinction clear; as an example, check these titles:
Pyshkin et al: Ground-state cooling of quantum systems via a one-shot measurement, Phys. Rev. A 93 (2016) 032120 (arXiv)
Yung et al: One-shot detection limits of quantum illumination with discrete signals, npj Quantum Inf. 6 (2020) 75 (arXiv).
The fact that, even though the predictive uncertainty $\Delta x$ is finite, we can have infinite precision in a single (one-shot) measurement, is not worthless, but very important in applications such as quantum key distribution. In many key-distribution protocols the two key-sharing parties compare the precise values $x$ they obtained in single-instance measurements of their entangled states. These values will be correlated to within their single-instance measurement error, which is much smaller than the predictive uncertainty $\Delta x$. The presence of an eavesdropper would destroy this correlation. The two parties can therefore know that there's an eavesdropper if they see that their measured values only agree to within $\Delta x$, rather than to within the much smaller single-instance measurement error. This scheme wouldn't work if the single-instance measurement error were $\Delta x$. See for example
Reid: Quantum cryptography with a predetermined key, using continuous-variable Einstein-Podolsky-Rosen correlations, Phys. Rev. A 62 (2000) 062308 (arXiv)
Grosshans et al: Quantum key distribution using gaussian-modulated coherent states, Nature 421 (2003) 238 (arXiv). In Figure 2 one can see very well the difference between single-instance measurement error and the variability $\Delta x$ across measurements.
Madsen et al: Continuous variable quantum key distribution with modulated entangled states (free access), Nat. Comm. 3 (2012) 1083. See especially Fig. 4 and its explanation.
2. What is exactly a "measurement of position" or of "momentum"?
In classical mechanics there's only one measurement (even if it can be realized by different technological means) of any specific quantity $Q$, such as position or spin or momentum. And classical mechanics says that the error in one measurement instance and the variability across instances can both be made as low as we please.
In quantum theory there are many different experimental protocols that we can interpret, for different reasons, as "measurements" of that quantity $Q$. Usually they all yield the same mean value across repetitions (for a given state), but differ in other statistical properties such as variance. Because of this, and of the variability explained above, Bell (of the famous Bell's theorem) protested that we actually shouldn't call these experimental procedures "measurements":
Bell: Against "measurement" (other copy), in Miller, ed.: Sixty-Two Years of Uncertainty: Historical, Philosophical, and Physical Inquiries into the Foundations of Quantum Mechanics (Plenum 1990).
In particular, in classical physics there's one joint, simultaneous measurement of position and momentum. In quantum theory there are several measurement protocols that can be interpreted as joint, simultaneous measurements of position and momentum, in the sense that each instance of such measurement yields two values, the one is position, the other is momentum. In the classical limit they become the classical simultaneous measurement of $x$ and $p$. This possibility was first pointed out by Arthurs & Kelly in 1965:
Arthurs, Kelly: On the simultaneous measurement of a pair of conjugate observables, Bell Syst. Tech. J. 44 (1965) 725 (other copy).
This simultaneous measurement is not represented by $\hat{x}$ and $\hat{p}$, but by a pair of commuting operators $(\hat{X}, \hat{P})$ satisfying $\hat{X}+\hat{x}=\hat{a}$, $\hat{P}+\hat{p}=\hat{b}$, for specially chosen $\hat{a}, \hat{b}$. The point is that the joint operator $(\hat{X}, \hat{P})$ can rightfully be called a simultaneous measurement of position and momentum, because it reduces to that measurement in the classical limit (and obviously we have $\bar{X}=\bar{x}, \bar{P}=\bar{p}$). In fact, from the equations above we could very well say that $\hat{x},\hat{p}$ are defined in terms of $\hat{X},\hat{P}$, rather than vice versa.
This kind of simultaneous measurement – which is possible for any pairs of conjugate variables, not just position and momentum – is not a theoretical quirk, but is a daily routine measurement in quantum-optics labs for example. It is used to do quantum tomography, among other applications. You can find detailed theoretical and experimental descriptions of it in Leonhardt's book above, chapter 6, entitled "Simultaneous measurement of position and momentum".
But as I said, there are several different protocols that may be said to be a simultaneous measurement of conjugate observables, corresponding to different choices of $\hat{a},\hat{b}$. What's interesting is the way in which these measurements differ. They can be seen as forming a continuum between two extremes:
– At one extreme, the variability across measurement repetitions of $X$ has a lower bound (which depends on the state of the system), while the variability of $P$ is infinite. Basically it's as if we were measuring $X$ without measuring $P$. This corresponds to the traditional $\hat{x}$.
– At the other extreme, the variability across measurement repetitions of $P$ has a lower bound, while the variability for $X$ is infinite. So it's as if we were measuring $P$ without measuring $X$. This corresponds to the traditional $\hat{p}$.
– In between, there are measurement protocols which have more and more variability for $X$ across measurement instances, and less and less variability for $P$. This "continuum" of measurement protocols interpolates between the two extremes above. There is a "sweet spot" in between in which we have a simultaneous measurement of both quantities with a finite variability for each. The product of their variabilities, $\Delta X\ \Delta P$, for this "sweet-spot measurement protocol" satisfies an inequality similar to the well-known one for conjugate variables, but with a lower bound slightly larger than the traditional $\hbar/2$ (just twice as much, see eqn (12) in Arthurs & Kelly). So there's a price to pay for the ability to measure them simultaneously.
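To see the factor-of-two price numerically, one can use the simple Gaussian noise model often invoked for such joint measurements: the recorded pair $(X, P)$ behaves like the ideal quadrature values plus independent apparatus noise of at least vacuum size in each channel. The Monte Carlo sketch below (my own addition) assumes a minimum-uncertainty Gaussian state and that noise model; it only illustrates the scaling, it is not a derivation of the Arthurs & Kelly bound.

```python
import numpy as np

hbar = 1.0
rng = np.random.default_rng(42)
n = 200_000

# Minimum-uncertainty Gaussian state: var(x) = var(p) = hbar/2, so dx*dp = hbar/2.
x_true = rng.normal(0.0, np.sqrt(hbar / 2), n)
p_true = rng.normal(0.0, np.sqrt(hbar / 2), n)
print(x_true.std() * p_true.std())    # ~ hbar/2: the usual single-observable limit

# Joint ("sweet spot") measurement: the apparatus adds independent noise of at
# least the same size to each record -- the Arthurs & Kelly-type penalty.
X = x_true + rng.normal(0.0, np.sqrt(hbar / 2), n)
P = p_true + rng.normal(0.0, np.sqrt(hbar / 2), n)
print(X.std() * P.std())              # ~ hbar: twice the hbar/2 bound
```

For a Gaussian state this added-noise statistics is essentially what heterodyne or double-homodyne detection produces in the quantum-optics setups described in Leonhardt's chapter 6.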
This kind of "continuum" of simultaneous measurements is also possible for the famous double-slit experiment. It's realized by using "noisy" detectors at the slits. There are setups in which we can observe a weak interference beyond the two-slit screen, and at the same time have some certainty about the slit at which a photon could be detected. See for example:
Wootters, Zurek: Complementarity in the double-slit experiment: Quantum nonseparability and a quantitative statement of Bohr's principle, Phys. Rev. D 192 (1979) 473
Banaszek et al: Quantum mechanical which-way experiment with an internal degree of freedom, Nat. Comm. 4 (2013) 2594 (arXiv)
Chiao et al: Quantum non-locality in two-photon experiments at Berkeley, Quant. Semiclass. Opt. 73 (1995) 259 (arXiv), for variations of this experiment.
We might be tempted to ask "OK but what's the real measurement of position an momentum, among all these?". But within quantum theory this is a meaningless question, similar to asking "In which frame of reference are these two events really simultaneous?" within relativity theory. The classical notions and quantities of position and momentum simply don't exist in quantum theory. We have several other notions and quantities that have some similarities to the classical ones. Which to consider? it depends, on the context and application. The situation indeed has some similarities with that for "simultaneity" in relativity: there are "different simultaneities" dependent on the frame of reference; which we choose depends on the problem and application.
In quantum theory we can't really say "the system has these values", or "these are the actual values". All we can say is that when we do such-and-such to the system, then so-and-so happens. For this reason many quantum physicists (check eg Busch et al. below) prefer to speak of "intervention on a system" rather than "measurement of a system" (I personally avoid the term "measurement" too).
Summing up: we can also say that simultaneous measurement of position and momentum is possible – and in fact a routine.
So the answer to your question is that in a single measurement instance we actually can (and do!) measure position and momentum simultaneously and both with arbitrary precision. This fact is important in applications such as quantum-key distribution, mentioned above.
But we also observe an unavoidable variability upon identical repetitions of such measurement. This variability makes the arbitrary single-measurement precision unimportant in other applications, where consistency through repetitions is required instead.
Moreover, we must specify which of the simultaneous measurements of momentum and position we're performing: there isn't just one, as in classical physics.
To form a picture of this, you can imagine two quantum scientists having this chat:
– "Yesterday I made a simultaneous measurement of position and momentum using the experimental procedure $M$ and preparing the system in state $S$."
– "Which values did you expect to find, before making the measurement?"
– "The probability density of obtaining values $x,p$ was, according to quantum theory, $P(x,p)=\dotso$. Its mean was $(\bar{x},\bar{p}) = (30\cdot 10^{-17}\ \mathrm{m},\ 893\cdot 10^{-17}\ \mathrm{kg\ m/s})$ and its standard deviations were $(\Delta x, \Delta p)=(1\cdot 10^{-17}\ \textrm{m},\ 1\cdot 10^{-17}\ \mathrm{kg\ m/s})$, the quantum limit. So I was expecting the $x$ result to land somewhere between $29 \cdot 10^{-17}\ \mathrm{m}$ and $31 \cdot 10^{-17}\ \mathrm{m}$; and the $p$ result somewhere between $892 \cdot 10^{-17}\ \mathrm{kg\ m/s}$ and $894 \cdot 10^{-17}\ \mathrm{kg\ m/s}$." (Note how the product of the standard deviations is $\hbar\approx 10^{-34}\ \mathrm{J\ s}$.)
– "And which result did the measurement give?"
– "I found $x=(31.029\pm 0.00001)\cdot 10^{-17}\ \textrm{m}$ and $p=(893.476 \pm 0.00005)\cdot 10^{-17}\ \mathrm{kg\ m/s}$, to within the widths of the dials. They agree with the predictive ranges given by the theory."
– "So are you going to use this setup in your application?"
– "No. I need to be able to predict $x$ with some more precision, even if that means that my prediction of $p$ worsens a little. So I'll use a setup that has variances $(\Delta x, \Delta p)=(0.1\cdot 10^{-17}\ \textrm{m},\ 10\cdot 10^{-17}\ \mathrm{kg\ m/s})$ instead."
Even if the answer to your question is positive, we must stress that: (1) Heisenberg's principle is not violated, because it refers to the variability across measurement repetitions, not the the error in a single measurement. (2) It's still true that the operators $\hat{x}$ and $\hat{p}$ cannot be measured simultaneously. What we're measuring is a slightly different operator; but this operator can be rightfully called a joint measurement of position and momentum, because it reduces to that measurement in the classical limit.
Old-fashioned statements about the uncertainty principle must therefore be taken with a grain of salt. When we make more precise what we mean by "uncertainty" and "measurement", they turn out to have new, unexpected, and very exciting faces.
Here are several good books discussing these matters with clarity, precision, and experimental evidence:
de Muynck: Foundations of Quantum Mechanics, an Empiricist Approach (Kluwer 2004)
Peres: Quantum Theory: Concepts and Methods (Kluwer 2002) (other copy)
Holevo: Probabilistic and Statistical Aspects of Quantum Theory (2nd ed. Edizioni della Normale, Pisa, 2011)
Busch, Grabowski, Lahti: Operational Quantum Physics (Springer 1995)
Nielsen, Chuang: Quantum Computation and Quantum Information (Cambridge 2010) (other copy)
Bengtsson, Życzkowski: Geometry of Quantum States: An Introduction to Quantum Entanglement (2nd ed. Cambridge 2017).
pglpm
$\begingroup$ I am curious about this: how far does the non-uniqueness of measurement of simultaneous $(x, p)$ go, exactly? Is it simply of the form that it is sensitive to what kind of "region" in the phase space we imagine as bounding the precision asked for (e.g. within a circle, versus within a square, etc. in the $(x, p)$-space), or is there actually no unique assignment of a probability $P$ for "the" measurement of $(x, p)$ to be within a phase-space region $R \subseteq \mathbb{R}^2$, even of nontrivial area, for a given quantum state $\rho$? $\endgroup$
– The_Sympathizer
$\begingroup$ The reason I'm asking is because I had a question thread here for a bit asking if it were possible to make a "phase space wave function" $\psi_{xp}(x, p)$ that applies even when $[\hat{x}, \hat{p}] \ne 0$, but that was "ever fuzzy" in the sense it could never be localized to a point, thus implying perpetual variance in both measurements. However, I retracted this when I started to wonder if it was based on a mistaken assumption that a unique assignment of a classical probability (norm-squared of $\psi_{xp}$) to each phase region $R$ in the canonical $(x, p)$ variables is even possible. $\endgroup$
$\begingroup$ (Note this is not quite the same as the Wigner function [though presumably would be related by some transformation] - the Wigner function is real-valued, this would be complex-valued, and would follow a Born-rule analogue, only being in the full $(x, p)$ space instead of either $x$ or $p$ individually) $\endgroup$
$\begingroup$ @The_Sympathizer Quantum theory can be interpreted as a "classical" theory with restrictions on the kind of allowable measurements (this is what Bohmian mechanics does, for example), but not as a "classical" theory with restrictions on the possible states. The reasons are purely geometrical. See Holevo's book cited above, §1.5 and arxiv.org/abs/1105.3238. Of course we must first clarify what "classical" means here (see Holevo again). This discussion lies a bit outside the scope of the present question and answer, though. $\endgroup$
– pglpm
You can't measure precise values at the same time because precise values for both don't exist at the same time.
All the properties of, say, an electron can be inferred from the electron's wave function, $\Psi(\vec x)$. The wave function is a mathematical object that covers all of space. It has a complex value at each point.
The electron doesn't have a precise position. Instead, it has a probability of being found at each point, $\vec x$, in space on being measured. That probability is $\Psi(\vec x)^*\Psi(\vec x)$. (That is a little loose. Really the probability of being found in a small region $d \vec x$ is $\int \Psi(\vec x)^*\Psi(\vec x) d \vec x$.)
The probability of being found somewhere is $1$, and so $\int\Psi(\vec x)^*\Psi(\vec x)dx = 1$. A function like this must approach $0$ everywhere except in some finite region.
There is a limiting case where it is $0$ everywhere except at one point, where it is infinite. In that case, it has a definite position.
You can also get the momentum from $\Psi(\vec x)$. Again, a definite momentum doesn't exist, except in a limiting case.
In general, $p = h/\lambda$. That means an electron with a definite momentum would have a constant-amplitude sinusoidal wave function with a definite wavelength. Such a wave function would cover all of space: $\Psi(\vec x) = A e^{i \vec p \cdot \vec x/\hbar}$. This isn't possible, except as a limiting case where the amplitude approaches $0$. But in this limiting case, the wave function has the same (infinitesimal) amplitude everywhere. The electron has no location at all. It is spread over all space.
These limiting cases are at opposite ends of a range of possibilities. Most wave functions are non-zero over some finite region. Or at least, given any small number $\epsilon$, $|\Psi(\vec x)| > \epsilon$ only over a finite region.
The electron will be found in that finite region, but it doesn't have a precise location. Just a region where it will be found.
Likewise it doesn't have a definite momentum. You can use Fourier analysis to break a function up into a sum of functions of the form $A e^{i \vec p \cdot \vec x/\hbar}$: $\Psi(\vec x) = \sum A(\vec p) e^{i \vec p \cdot \vec x/\hbar}$. For a non-periodic function like we have here, it is an infinite sum of infinitesimal contributions, so it is expressed as an integral rather than a sum: $\Psi(\vec x) = \int A(\vec p) e^{i \vec p \cdot \vec x/\hbar} \, d\vec p$
You can think of $A(\vec p)$ as another way of expressing the wave function. This is another mathematical function, defined over that set of all possible momenta. It is useful for describing the momentum of the electron.
It can be shown that $A(\vec p)$ has lots of the same kinds of properties that $\Psi(\vec x)$ does. For example, the probability of finding that the electron has momentum $\vec p$ is (again loosely) $A(\vec p)^*A(\vec p)$.
It can be shown $\int A(\vec p)^*A(\vec p)d\vec p = 1$. That is, the probability of finding the electron with some momentum is $1$. It can be shown the function can only be non-zero for a finite range of $\vec p$'s.
There is a limiting case where where $A(\vec p)$ is $0$ everywhere except for one value of $\vec p$. In this limiting case, the electron has a definite $\vec p$.
But the usual case is that the electron has neither a definite $\vec x$, nor a definite $\vec p$. That is, when the wave function is expressed as $\Psi(\vec x)$, it has a finite region where $\Psi(\vec x) > 0$. In this case, it turns out that when the wave function is expressed as $A(\vec p)$, there is a finite range of $\vec p$'s where $A(\vec p) > 0$.
The Uncertainty Principle is an important relation between the sizes of these two finite regions: $\Delta x \, \Delta p \geq \hbar/2$ (for each component).
This video from 3blue1brown illustrates the idea. In particular, it shows how the Uncertainty Principle comes from wave properties.
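For readers who want to check the Fourier relationship numerically, here is a small sketch of my own (with $\hbar$ set to 1): it builds a normalized Gaussian wave packet on a grid, obtains the momentum amplitudes with an FFT, and verifies that the product of the two spreads comes out close to $\hbar/2$.

```python
import numpy as np

hbar = 1.0
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Normalized Gaussian wave packet of width sigma in position space.
sigma = 1.7
psi = np.exp(-x**2 / (4 * sigma**2)) / (2 * np.pi * sigma**2) ** 0.25

# Momentum grid and momentum-space amplitudes (FFT conventions; the overall
# normalization is irrelevant because spread() renormalizes the weights).
p = hbar * 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)

def spread(values, amplitudes):
    """Standard deviation of `values` weighted by |amplitude|^2."""
    w = np.abs(amplitudes) ** 2
    w = w / w.sum()
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean) ** 2 * w))

dx_spread = spread(x, psi)      # ~ sigma
dp_spread = spread(p, phi)      # ~ hbar / (2 sigma)
print(dx_spread * dp_spread)    # ~ 0.5, i.e. hbar/2
```

Shrinking `sigma` makes the momentum distribution proportionally wider, so the product stays pinned near $\hbar/2$ for a Gaussian and sits above it for other packet shapes.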
Addendum - I didn't address an area where pglpm's answer really shines. I thought I would add my 2 cents.
Suppose you have an electron prepared in a state given by a particular wave function, $\Psi(\vec x)$. The position and momentum can be calculated to be particular values $\vec x$ and $\vec p$, with uncertainties $\Delta \vec x$ and $\Delta \vec p$. Note that uncertainties are often expressed as standard deviations of expected outcomes. This means position and momentum can be predicted to be $\vec x \pm \Delta x$ and $\vec p \pm \Delta p$.
Suppose the electron is just arriving at a free standing thin film surface containing many atoms.
If $\Delta \vec x$ is large, it is not possible to predict which atom the electron will hit in advance. Nevertheless, the electron will hit a particular atom. It may be that the atom is affected in some permanent way, say by being ejected and leaving a hole. In that case, it is possible to go back afterward and find out very precisely what the position of the electron was.
If $\Delta \vec p$ is large, it is not possible to predict in advance what the electron's momentum will be measured to be. But if it ejects an atom, it may be possible to measure time of flight of the scattered electron and atom to detectors with high spatial resolution and get a very precise value for what the electron's initial momentum turned out to be.
The Uncertainty Principle does not limit how precisely we can determine the outcomes of these measurements. It limits how precisely we can predict them in advance. If you have many electrons in the same state, it limits how repeatable multiple measurements will be.
Immediately after the collision, the electron and atom will be in new states. Both states will have a $\Delta \vec x$ and $\Delta \vec p$. It is not possible to predict in advance when and where either will hit their detectors. But it is possible to say that the combined outcomes of the position and momentum measurements of the scattered electron and atom will add up to a momentum consistent with the electron's initial momentum and uncertainty.
mmesser314
$\begingroup$ There are various uncertainty relations. 1) Robertson's: $\sigma_A \sigma_B \ge |\langle[A,B] \rangle|/2$ 2) Ozawa's: $\epsilon_A \eta_B + \epsilon_A \sigma_B + \sigma_A \eta_B \ge |\langle[A,B] \rangle|/2$ 3) Heisenberg's noise-distrubance uncertainty: $\epsilon_A \eta_B \ge |\langle[A,B] \rangle|/2$ 4) Heisenberg's joint measurements uncertainty: $\epsilon_A \epsilon_B \ge |\langle[A,B] \rangle|/2$. $\sigma_{A (B)} $ has the standard meaning while $\epsilon_A$ denotes the error/noise of the measuring apparatus for a single instance of measurment of $A$... $\endgroup$
– Omar Nagib
$\begingroup$ $\eta_B$ denotes the subsequent disturbance/change in observable $B$ after measuring $A$. The OP is asking about uncertainty for single instances of measurments, i.e., relations 2, 3, and 4 NOT 1 which is your answer. Relations 2, 3, and 4 are due to both the observer effect and quantum mechanics. Ozawa's relation for example is universally valid (as well as Robertson's), while relation 3 was strongly violated recently multiple times [1] [2] [3] [1]:doi.org/10.1038/nphys2194 [2]: arxiv.org/abs/1208.0034 [3]: doi.org/10.1038/s41534-019-0183-6 $\endgroup$
$\begingroup$ In fact, Ozawa's relation implies $\eta_B$ is finite even when $\epsilon_A=0$ and vice versa. This is in contrast to relation 3 which says $\eta_B=\infty$ when $\epsilon_A=0$ and vice versa. In a neutron-spin experiment (see the first paper in the previous answer), the authors achieve $\epsilon_A=0$ with finite $\eta_B$ violating relation 3 and confirming relation 2. $\endgroup$
$\begingroup$ Alternatively, if one uses an alternative definition for $\epsilon_A$ and $\epsilon_B$ than the one used in previous papers, one can prove relations of form 3 and 4 [1]. I won't get into technicalities, but this topic is very subtle and currently an active research field, and your answer does not address these subtleties. [1] doi.org/10.1103/PhysRevLett.111.160405. $\endgroup$
$\begingroup$ @OmarNagib - You are right to point this out. You obviously know more about it than I. And I have not defined $\Delta \vec x$ or $\Delta \vec p$ well enough to be quantitative about them. I just quoted the Uncertainty Principle as in, for example, the Wikipedia article. Nevertheless, the argument I laid out does show why there is a single measurement Uncertainty Principle. $\Delta \vec x$ and $\Delta \vec p$ have precise meanings in the Uncertainty Principle I quoted and the others you mentioned. But the point I intended to make is just that there is an Uncertainty Principle, and this is why. $\endgroup$
– mmesser314
It is possible to measure both the position and the momentum of a particle to arbitrary precision "at the same time", if you take that phrase to mean "within such quick succession that you can be confident that the probability distribution for the first measured quantity has not changed via Schrodinger evolution between the two measurements".
But doing so isn't very useful, because there will always be some infinitesimal delay between the two measurements, and whichever one comes second will effectively erase the information gained from the first measurement. For example, if you measure position and then momentum immediately afterwards, then you can get a very precise value for both measurements, but the process of getting a precise momentum reading will change the wavefunction such that its position after the momentum measurement now has large uncertainty with respect to a subsequent measurement. So the momentum measurement "nullifies" the information from the prior position measurement, in the sense of rendering it unrepeatable.
So it's better to talk about the inability to "know" the position and momentum at the time than about the inability to "measure" both (which actually is possible). Fully understanding why requires understanding both the "state collapse" behavior of measurements, and the "wide <-> narrow" relation between non-commuting observables (e.g. via the Fourier transform) that you mention.
That's for measurements in extremely quick succession. You could ask about measurements that take place at exactly the same time, but that gets into philosophical waters as to whether two events ever occur at exactly the same time even in classical physics. In practice, if you try to do both measurements at once, then you'll always find that the particle comes out with either very tightly bounded position or momentum, and with large uncertainty in the other quantity.
tparker
That is a very deep question, and it takes years to understand. I will try my best to answer it.
"what stops us from measuring both position and momentum with arbitrary precision?"
Level 1: nature. That is how nature works.
Level 2: particles are neither waves nor particles. They behave differently from everything that we see in everyday life. You can watch the Feynman lectures on YouTube where he explains this concept. In some experiments they behave like particles and in some experiments they behave like waves.
Level 3: to our best understanding, they behave like fields which exist everywhere. How can you measure the position and momentum of a field with infinite certainty? This way of looking at particles explains everything except gravity.
"Does a quantum system always have to change when observerd?"
The quantum system is observed in one of its states. Before the observation, the system is in all of the possibilities at the same time, with different weights. You can learn more about this when you study path integrals.
Kian Maleki
Others have already said this, but here is the succinct version: You cannot determine the electron's position and momentum at the same time for the exact same reason that you cannot determine the electron's favorite flavor of ice cream, namely this: The electron does not have a favorite flavor of ice cream. Likewise, most electrons, most of the time, do not have a definite position or a definite momentum. You can force it to have one of those (or at least a very good approximation thereto), but then it surely does not have the other.
WillO
$\begingroup$ But isn't that true for anything? Does the earth have a definite position and momentum? $\endgroup$
– Bill Alsept
$\begingroup$ @billalsept: In classical mechanics it does. $\endgroup$
– WillO
$\begingroup$ That's my point. The electron has a definite position and momentum also, we just don't know what they are yet. But indirectly we could determine them if you completely correlate two electrons. $\endgroup$
$\begingroup$ I don't think this is a good answer. It assumes the Copenhagen Interpretation of QM, which states that particles do not have a position or momentum until measured. However, even if that interpretation is incorrect, the uncertainty principle still applies, because it's tied to the wave nature of QM, not any one interpretation. $\endgroup$
The correct form of the uncertainty principle is that the product of $\Delta x$ and $\Delta p$ is always greater than or equal to $\hbar/2$. Among other things, this means that the more precisely we know $x$, the less precisely we can know $p$.
This has nothing to do with the so-called observer effect; it has to do with the wave-like behavior of quantum particles.
niels nielsen
To measure speed, you measure time between two positions. Once you have a speed determined, which of the two positions would you associate with it simultaneously? You can't rightly do it with either position. To associate it with the average of the two positions would require that you assume constant velocity but you only measured the average velocity between the two positions and have no way to know if it was constant between them.
hodop smith
$\begingroup$ If this were the right analysis, it would apply equally well to classical mechanics. $\endgroup$
Your statement that the uncertainty relation comes from the Fourier transform is quite glib. Physics is not just a collection of math results. QM developed as a theory to account for experimental observations that did not respect classical mechanics, Newtonian or Relativistic. Really they pointed to the fact that our paradigm regarding the nature of matter was wrong. In QM every observable quantity is represented by an operator acting on a linear function space. The eigenvalues of those operators represent the only allowed values of that quantity that can be observed. For example, position ($x$), momentum ($p_x$), energy ($E$), etc. are all operators. The eigenfunctions of these operators represent the "state" that the system will be prepared in once a measurement is made.
When one measures $x$, which perhaps one can do with arbitrary precision and accuracy, and gets a specific value, the particle is left in an eigenstate of the operator $x$, which is a Dirac delta function. Now if one tries to measure $p_x$ immediately after there is an equal probability to get any value of $p_x$. Once you measure $p_x$ your previous measurement of $x$ is completely ruined. You are not at all justified in making the claim that you know the value of $x$. If you try to measure $x$ again you will get a different answer. This is what the uncertainty relation is describing. To say that it has to do with measurement precision is a red herring.
The OP wrote:
I understand that for a given wavefunction, if $\Delta x$ is small, $\Delta p$ will be big, and how this arises from Fourier transformations. But I fail to see how this prevents anyone from doing a simultaneous measurement of both $x$ and $p$ with infinite precision.
This seems to boil down to the issue of what "simultaneous measurement" means. What it means to simultaneously measure two observables $A, B$ is to perform a single measurement on the system, obtaining the values $a$ and $b$, such that, immediately after the measurement, the system is in a state where the value of $A$ is certainly $a$ and the value of $B$ is certainly $b$.
In other words, the result of the measurement is that the system is in a simultaneous eigenstate of $A$ and $B$. Since there are no simultaneous eigenstates of $x$ and $p$ (as the OP already understands), this isn't possible for this particular pair of observables.
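For completeness, the standard one-line argument for that last claim (assuming the canonical commutator $[\hat x,\hat p]=i\hbar$): if $|\psi\rangle$ were a simultaneous eigenstate with $\hat x|\psi\rangle = a|\psi\rangle$ and $\hat p|\psi\rangle = b|\psi\rangle$, then
$$[\hat x,\hat p]\,|\psi\rangle = (\hat x\hat p - \hat p\hat x)\,|\psi\rangle = (ab - ba)\,|\psi\rangle = 0,$$
which contradicts $[\hat x,\hat p]\,|\psi\rangle = i\hbar\,|\psi\rangle \neq 0$. Hence no such state exists.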
Brian Bi
The uncertainty principle is not a limitation on "measurements", but rather expresses a fundamental limitation of the Universe regarding its information capacity. In a rough sense, the Universe only "allocates", so to speak, so many bits to each particle, and thus there are only so many bits available that can, at any given time, realize its actual position and momentum together. This is most evident when it is written in the form - and this is a more accurate form! - involving the informational entropy of the position and momentum:
$$H_x + H_p \ge \lg(e\pi \hbar)$$
which quite simply says that there is always going to be entropy - a lack of information, compared to their classical counterpart - in either one, the other, or both.
There isn't a lot of magic about that. The Universe simply is economical and doesn't splurge an infinite number of bits to detail its particles' parameters.
That's why, as the other answer mentions, if you now bring in measurement, which most properly understood is the transaction of information between a system and an agent, and try to measure them both to higher precision than the amount above (about 170.18 bits jointly, if taken relative to a scale of 1 m and 1 N·s), you will not be able to repeat the measurement immediately afterward and obtain the same values. Obtaining the same values would require that the information be in the particle so it could be retrieved again, but there isn't storage room for that. Hence what you get is junk.
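As a quick sanity check of the quoted bound (using the standard differential entropies of a Gaussian state, in bits): with position spread $\sigma_x$ and momentum spread $\sigma_p$,
$$H_x = \tfrac{1}{2}\lg\!\left(2\pi e\,\sigma_x^2\right),\qquad H_p = \tfrac{1}{2}\lg\!\left(2\pi e\,\sigma_p^2\right),\qquad H_x + H_p = \lg\!\left(2\pi e\,\sigma_x\sigma_p\right),$$
and inserting the minimum-uncertainty value $\sigma_x\sigma_p = \hbar/2$ gives exactly $\lg(e\pi\hbar)$, so the Gaussian saturates the entropic bound.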
The_Sympathizer
$\begingroup$ How do you define $H_x$, $H_p$ and $e$? $\endgroup$
– lalala
The Uncertainty Principle has almost nothing to do with measurement. It's intrinsic to wave phenomena that if a composition of waves has a definite frequency, it has great uncertainty in duration and vice versa. Technically, a pure sine wave of a single frequency that lasts only temporarily isn't so pure. When expanded into its Fourier representation, one finds a density distribution of multiple frequencies.
The frequency and duration are conjugate quantities of the wave. So are wave number and position.
De Broglie's Principle allows us to associate a wavelength to a material particle based on its momentum. Classical electromagnetism establishes a relationship between wavelength and momentum for light waves.
Together we have that "Matter Waves" have momentum and position as conjugate quantities. A particle with a narrow band of momenta must have a broad distribution in space. The Born interpretation of the particle wave associates the modulus of its value with the probability of being located in that position.
Uncertainty in position is the standard deviation of the position given by its wave function. The (inverse) Fourier transform of the position-space wave function is the wave function in momentum space.
It can then be proven that having a definite position, i.e. a high concentration of likelihood to be located about a specific point, requires that we have a broad distribution of momenta, that is, the particle has a probability of being in multiple momentum states that are far apart.
It all comes down to the intrinsic nature of waves and waves being associated with momentum and position.
This manifests itself through measurement in various ways.
As it happens, the Uncertainty Principle tells us we can't simultaneously know with arbitrary accuracy any two components of the Spin Angular Momentum of a quantum particle having spin 1/2.
Measure $L_x$ and we might get $\hbar/2$. Measure it a bunch more times, and you get the same answer, repeating. Now measure $L_x$ and suppose you get $\hbar/2$, then measure $L_y$. There's only a 50/50 chance of getting either of the two allowed values. Measure $L_x$ again, you have only a 50/50 chance of once again obtaining $\hbar/2$.
Quantum Mechanically, to be in a specific state of $L_x$ is necessarily to be in multiple states of $L_y$ or $L_z$. One gets a definite answer only if a particle is in a pure state of a quantum observable.
In more complicated quantum systems one finds that there are only certain allowed quantum states. Measurement will yield only certain specific results regardless of how the kinematic value is measured. We only observe the Uncertainty Principle in play when we make observations, i.e. execute measurements, but the behavior is indicative of fundamental attributes of the system and not of the measurement apparatus.
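To make the spin example quantitative (a standard Robertson-type bound, using the commutator $[L_x,L_y]=i\hbar L_z$):
$$\sigma_{L_x}\,\sigma_{L_y} \;\ge\; \tfrac{1}{2}\bigl|\langle[L_x,L_y]\rangle\bigr| \;=\; \tfrac{\hbar}{2}\bigl|\langle L_z\rangle\bigr|,$$
so in a state with definite $L_z = \hbar/2$ the right-hand side is $\hbar^2/4$, and neither $L_x$ nor $L_y$ can have zero spread - consistent with the 50/50 outcomes described above.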
R. Romero
As a layman, I think my way through this one without invoking the quantum world, but simply using the definitions of "position" and "velocity".
If you want to measure momentum, you need to measure velocity.
If you want to measure velocity, you need to measure distance and the time taken across that distance.
To measure distance, you need to measure 2 positions.
If you measure 2 positions, then which is "the" position of the object?
I imagine this problem the same as looking at a perpendicularly travelling car through binoculars. If you've ever tried this, you find you have to keep moving your binoculars over to follow the moving car. Then someone asks you, "Where is it now?"
Stewart
I guess if you can gauge the precise form of the wavefunction, in either the $x$ or $p$ domain (you can certainly calculate it), and consider that quantum state to be the real description of its position and momentum, then you know both. It is just that to say you "know the particle position precisely" usually means the wavefunction is forced (i.e. collapsed) into a very narrow peak in the $x$ domain, which implies a wide wavefunction in the $p$ domain (or vice versa). So it boils down to the semantics of "knowing precisely".
lvella
For waves, $\Delta \nu \Delta t$ and $\Delta k \Delta x$ are limited to be larger than some function of the amplitude. This holds for sound, light, water waves, etc. In the case of a Schrödinger wave function the amplitude cannot be arbitrarily small, as the wave must be normalised to 1. Because of this there is a minimum value of $\hbar/2$.
Once you understand this, the question becomes 'why should we use quantum mechanics?'. To this the answer is that it is 100% accurate, so far. Why this is so, no one knows.
my2cts
|
CommonCrawl
|
Titles are hyperlinked to pdf copies of the final project write-up. Course coordinator: Barry Balof
TITLE: Baire One Functions
AUTHOR: Johnny Hu
ABSTRACT: This paper gives a general overview of Baire one functions, including examples as well as several interesting properties involving bounds, uniform convergence, continuity, and $F_\sigma$ sets. We conclude with a result on a characterization of Baire one functions in terms of the notion of first return recoverability, which is a topic of current research in analysis.
TITLE: Upper bounds on the $L(2;1)$-labeling Number of Graphs with Maximum Degree $\Delta$
AUTHOR: Andrew Lum
ABSTRACT: $L(2;1)$-labeling was first defined by Jerrold Griggs [Gr, 1992] as a way to use graphs to model the channel assignment problem proposed by Fred Roberts [Ro, 1988]. An $L(2;1)$-labeling of a simple graph $G$ is a nonnegative integer-valued function $f : V(G)\rightarrow \{0,1,2,\ldots\}$ such that, whenever $x$ and $y$ are two adjacent vertices in $V(G)$, then $|f(x)-f(y)|\geq 2$, and, whenever the distance between $x$ and $y$ is 2, then $|f(x)-f(y)|\geq 1$. The $L(2;1)$-labeling number of $G$, denoted $\lambda(G)$, is the smallest number $m$ such that $G$ has an $L(2;1)$-labeling with no label greater than $m$. Much work has been done to bound $\lambda(G)$ with respect to the maximum degree $\Delta$ of $G$ ([Cha, 1996], [Go, 2004], [Gr, 1992], [Kr, 2003], [Jo, 1993]). Griggs and Yeh [Gr, 1992] conjectured that $\lambda \leq \Delta^2$ when $\Delta \geq 2$.
In §1, we review the basics of graph theory. This section is intended for those with little or no background in graph theory and may be skipped as needed. In §2, we introduce the notion of $L(2;1)$-labeling. In §3, we give the labeling numbers for special classes of graphs. In §4, we use the greedy labeling algorithm to establish an upper bound for $\lambda$ in terms of $\Delta$. In §5, we use the Chang-Kuo algorithm to improve our bound. In §6, we prove the best known bound for general graphs.
TITLE: Bijections on Riordan Objects (voted outstanding senior project)
AUTHOR: Jacob Menashe
ABSTRACT: The Riordan Numbers are an integer sequence closely related to the well-known Catalan Numbers [2]. They count many mathematical objects and concepts. Among these objects are the Riordan Paths, Catalan Partitions, Interesting Semiorders, Specialized Dyck Paths, and Riordan Trees. That these objects have been shown combinatorially to be counted by the same sequence implies that a bijection exists between each pair. In this paper we introduce algorithmic bijections between each object and the Riordan Paths. Through function composition, we thus construct 10 explicit bijections: one for each pair of objects.
TITLE: The Problem of Redistricting: the Use of Centroidal Voronoi Diagrams to Build Unbiased Congressional Districts
AUTHOR: Stacy Miller
ABSTRACT: This paper is a development of the use of MacQueen's method to draw centroidal Voronoi diagrams as a part of the redistricting process. We will use Washington State as an example of this method. Since centroidal Voronoi diagrams are inherently compact and can be created by an unbiased process, they could create congressional districts that are not only free from political gerrymandering but also appear to the general public as such.
TITLE: Signal Analysis
AUTHOR: David Ozog
ABSTRACT: Signal processing is the analysis, interpretation, and manipulation of any time varying quantity [1]. Signals of interest include sound files, images, radar, and biological signals. Potential applications in this area are vast, and they include compression, noise reduction, signal classification, and detection of obscure patterns. Perhaps the most popular tool for signal processing is Fourier analysis, which decomposes a function into a sum of sinusoidal basis functions. For signals whose frequencies change in time, Fourier analysis has disadvantages which can be overcome by using a windowing process called the Short Term Fourier Transform. The windowing process can be improved further using wavelet analysis. This paper will describe each of these processes in detail, and will apply a wavelet analysis to Pasco weather data. This application will attempt to localize temperature fluctuations and how they have changed since 1970.
|
CommonCrawl
|
Physical Interpretation of the Integrand of the Feynman Path Integral
In quantum mechanics, we think of the Feynman Path Integral $\int{D[x] e^{\frac{i}{\hbar}S}}$ (where $S$ is the classical action) as a probability amplitude (propagator) for getting from $x_1$ to $x_2$ in some time $T$. We interpret the expression $\int{D[x] e^{\frac{i}{\hbar}S}}$ as a sum over histories, weighted by $e^{\frac{i}{\hbar}S}$.
Is there a physical interpretation for the weight $e^{\frac{i}{\hbar}S}$? It's certainly not a probability amplitude of any sort because its modulus squared is one. My motivation for asking this question is that I'm trying to physically interpret the expression $\langle T \{ \phi(x_1)...\phi(x_n) \} \rangle = \frac{\int{D[x] e^{\frac{i}{\hbar}S}\phi(x_1)...\phi(x_n)}}{\int{D[x] e^{\frac{i}{\hbar}S}}}$.
quantum-mechanics quantum-field-theory path-integral
ChickenGod
$\begingroup$ If you like this question you may also enjoy reading this Phys.SE post. $\endgroup$ – Qmechanic♦ Apr 15 '13 at 5:17
"It's certainly not a probability amplitude of any sort because it's modulus squared is one." This does not follow... Anyway, an (infinite) normalisation factor is hidden away in the measure. The exponential has the interpretation of an unnormalised probability amplitude. Typically you don't have to worry about the normalisation explicitly because you compute ratios of path integrals, as your example shows. The book about the physical interpretation of path integrals is the original, and very readable, one by Feynman and Hibbs, which now has a very inexpensive Dover edition. I heartily recommend it. :) (Though make sure you get the emended edition as the original had numerous typos.)
Michael Brown
$\begingroup$ Ah! I forgot about the factor hidden in the measure! Does that mean that the modulus of the weight of each path is equal and the only difference between each path is the phase? $\endgroup$ – ChickenGod Apr 15 '13 at 1:55
$\begingroup$ @ChickenGod In the simplest and most common by far cases, yes - the weight of all paths are equal and the only difference is the phase. Exceptions exist, such as the nonlinear sigma model where the measure contains a determinant which basically gives the invariant measure on the (curved) space of paths, like the $r^2 \sin(\theta)$ Jacobian to go from cartesian to spherical coordinates. You can find more about these models in Zinn-Justin. $\endgroup$ – Michael Brown Apr 15 '13 at 2:12
$\begingroup$ The overall normalization of the path integral is irrelevant for virtually all physical purposes and the integrand's constancy of the absolute value does indeed imply that the integrand cannot be interpreted as any sort of wave function because that wouldn't converge after squaring the absolute value. This answer is just plain wrong. $\endgroup$ – Luboš Motl Jul 3 '17 at 16:14
Up to a universal normalization factor, $\exp(iS_{\rm history}/\hbar)$ is the probability amplitude for the physical system to evolve along the particular history. All the complex numbers in quantum mechanics are "probability amplitudes of a sort".
This is particularly clear if we consider the sum over histories in a short time interval $(t,t+dt)$. In that case, all the intermediate histories may be interpreted as "linear fields" – for example, a uniform motion $x(t)$ – which is why only the straight line contributes and its meaning is nothing else than the matrix elements of the evolution operator.
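To make the "matrix elements of the evolution operator" remark concrete, here is the standard free-particle short-time amplitude (a textbook result, quoted as an illustration):
$$\langle x_2|\,e^{-iH\,dt/\hbar}\,|x_1\rangle \;=\; \sqrt{\frac{m}{2\pi i\hbar\,dt}}\;\exp\!\left[\frac{i}{\hbar}\,\frac{m\,(x_2-x_1)^2}{2\,dt}\right],$$
whose phase is exactly $e^{iS_{\rm cl}/\hbar}$ for the straight-line path across the interval $dt$.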
It may be puzzling why all the histories have probability amplitudes with the same absolute value. But it's true – in the sense of Feynman's path integral – and it's how Nature operates. At the end, some histories (e.g. coarse-grained histories in the sense of the consistent history interpretation of quantum mechanics) are more likely or much more likely than others. But within quantum mechanics, all these differences between the likelihood of different coarse-grained histories are explained by constructive or destructive interference of the amplitudes (and/or from different "sizes of the ensemble of histories")! That's just the quantum mechanics' universal explanation for why some histories are more likely than others. Histories that are not coarse-grained are equally likely!
Luboš Motl
$\begingroup$ When you write 'coarsegrained histories', do you mean "macroscopically distinct" histories in the thermodynamic sense? So for one such history, the path integral really will sum over a huge number of microscopically distinct histories which then accumulates into a probability amplitude which then can have any magnitude (including complete destructive interference)? $\endgroup$ – BjornW Jul 2 '17 at 10:32
$\begingroup$ Which regions of the path integral you have to resum depends on the precise quantity you're interested in. Only the total sum over all histories has a truly physical meaning. But what was relevant here is that even though the absolute value of the integrand is constant, this property disappears due to interference if you clump the histories into nearby "families". I was meaning families large enough to change $S$ by $O(1)$ or so - these families may still be much smaller than ensembles you could use in thermodynamics for other purposes. But they're larger than the minimum from quantum mechanics. $\endgroup$ – Luboš Motl Jul 3 '17 at 16:18
$\begingroup$ Yes that makes sense, thanks. Reason I asked was because I'm trying to show (to myself) how things like a field history with couplings eventually end up in that (1/137)^n lower-probability bin.. It's another thing that's not intuitively clear with the abs mag integrand, but it's not obvious how it appears out of interference either. I guess food for another question :) $\endgroup$ – BjornW Sep 15 '17 at 9:15
$\begingroup$ Dear Bjorn, to get the expansion in terms of Feynman diagrams, you need to analyze the path integral perturbatively, expand it to a power law or Taylor series. You surely can't get the individual terms with different powers of 1/137 - you can't even extract the exponents - by making just an order of magnitude estimate of the whole integral. One point is that every new term in the 1/137 expansion measures a correction to a phase of the previous one. It's mostly phases, and not the absolute values, that are being expanded. $i$ is almost always in the expansion parameter. $\endgroup$ – Luboš Motl Sep 16 '17 at 4:01
One of the numerical values of the weight $\exp{\frac{i S}{\hbar}}$ is going to have a maximum contribution to the Feynman path integral. You've probably seen a probability density plot in 2D or 1D. The classical path is going to be the one that minimizes the action. Think of it as a maximum probability density moving from one most probable position to another. The action along the classical path is going to contribute the most to the path integral.
John M
One interesting interpretation comes from the Wick rotation, where you interpret $it/\hbar=\beta=1/(k_BT)$ - an imaginary time turns many quantum equations into similar equations of thermodynamics / statistical mechanics.
Since $S = \int L\,dt = \int (T-V)\,dt$ has the dimension of energy times time, this means the exponent can be interpreted as energy times $\beta$, i.e.
$$e^{\frac i\hbar S} \approx e^{-\beta E}$$
which is the summand of the partition function, from which you can then derive other thermodynamic quantities like entropy, heat capacity or the Helmholtz free energy.
Note I've been very hand-wavy about the transition from $L$ to $-E$; you'd have to insert the actual Legendre transformation $L=p\dot q-H$ and really evaluate the path integration. I think we've done that more exactly in lectures at one point, but I can't remember for sure...
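A minimal worked instance of that rotation (a sketch for a free particle, assuming Euclidean time $\tau$ with $t=-i\tau$): substituting into the action,
$$\frac{i}{\hbar}S \;=\; \frac{i}{\hbar}\int dt\,\frac{m}{2}\Bigl(\frac{dx}{dt}\Bigr)^{2} \;\longrightarrow\; -\frac{1}{\hbar}\int d\tau\,\frac{m}{2}\Bigl(\frac{dx}{d\tau}\Bigr)^{2} \;=\; -\frac{S_E}{\hbar},$$
so the oscillatory weight $e^{iS/\hbar}$ becomes a Boltzmann-like damping $e^{-S_E/\hbar}$, and identifying the total Euclidean time interval with $\hbar\beta$ reproduces the $e^{-\beta E}$ structure of the partition function.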
Tobias Kienzler
At a time when my hammer was the Dirac delta distribution, I conjectured that the answer was that the Feynman integral is a generalization of a Dirac delta, the use of this delta being to find the extreme of the action.
Given a function $f(x)$, find a Dirac measure $\delta_f$ concentrated in the critical points of $f$. The answer is obviously $ \delta(f'(x))$, and using that $\delta(w)=\frac 1{2\pi}\int e^{iwt} dt$ we can write
$$ < \delta(f'(x)) | g(x)> = \int \int e^{i z f'(x)} g(x) dz dx = \int \int \lim_{y\to x} e^{i z {f(y)-f(x)\over y-x} } g(x) dz dx $$
and substituting $\epsilon= {y - x \over z}$
$$ <\delta_f | g> = \int \int \lim_{\epsilon\to 0} e^{i {1 \over \epsilon} (f(y) - f(x))} g(x) dx {dy \over \epsilon} $$ And if we know that the extreme is unique, we can work with the "halved" expression
$$<\delta_f^{1/2} | O> = \lim_{\epsilon\to 0} {1 \over \epsilon^{1/2}} \int e^{i {1\over\epsilon} f(x)} O(x) dx$$ from which, by taking modulus square, $ <\delta_f | g> = <\delta_f^{1/2} | O> <\delta_f^{1/2} | O>^* $
But this is only a zero dimensional static argument. It is not even a D=0+1 theory, it is D=0+0. The same argument for Quantum or Classical Mechanics (D=0+1) or for Field Theory (D=3+1, say) should involve to control $$<\delta_L^{h,\epsilon'} | O[\phi] > = \int ... \int {1\over (h \epsilon')^{n/2}} e^{i {1\over h} L^{\epsilon'}_n[\phi_0,x_1,...x_n,\phi_1]} O[\phi] (\Pi dx_i) $$ with some technique similar to a brownian bridge for a Wiener measure, or at least my note of 1998 says that.
The funny thing about this idea was to start from the Lagrangian without any postulate of quantum mechanics, not even the propagator, which is the cornerstone of the answers in Why is the contribution of a path in Feynmans path integral formalism $\sim e^{(i/\hbar)S[x(t)]}$. I think that pursuing this way I had tripped against the non-differentiable paths mentioned in Once a quantum partition function is in path integral form, does it contain any operators? and What type of non-differentiable continuous paths contribute for the path integral in quantum mechanics.
The connection of path integral to classical mechanics is discussed also here What type of non-differentiable continuous paths contribute for the path integral in quantum mechanics and in the paper from Dirac quoted in this answer https://physics.stackexchange.com/a/134215/1335
arivero
$\begingroup$ just now I do not remember what I had in mind to control the general case, but it seems I thought that Gell-Mann RG transformation could surface during the process for QFT, while for QM I hoped to use the "Tangent Groupoid" of Alain Connes. $\endgroup$ – arivero Aug 24 '15 at 18:33
|
CommonCrawl
|
M.R. Yegan
Limit of two real variables function $f(x,y)=|x|^{1/|y|}$ at the origin
limits multivariable-calculus
Mar 29 at 20:57 EagleToLearn 60
Are there any proofs for (ir)rationality of the numbers $\sin(e)$, $\cos(e)$?
rationality-testing
Apr 3 '18 at 3:03 Community♦ 1
Calculate $\lim_{n\to\infty} \sum_{k=1}^{n}\frac{(-1)^{k-1}}{{k^2}\binom{n}{k}}$ [closed]
calculus sequences-and-series limits
Sep 13 '17 at 11:09 Gabriel Romon 18.9k
Divergence of the sequence $\sin(n!)$
Aug 9 '17 at 18:37 Michael Hardy 1
Find the sum of series $\sum_{n=1}^\infty \ln(1+1/n^2)$
Sep 10 '16 at 8:06 E.H.E 18.6k
Convergent subsequence of $\sin(n)$
calculus analysis
Dec 18 '15 at 8:16 Robert Israel 343k
Differentiability of the function$f(x)= x^2 \sin(1/x)$, $f(0)=0$ at the origin
calculus real-analysis
Mar 2 '15 at 13:54 dannum 1,603
Divergence-Convergence of the sequence $\sin(n!{\pi}\theta)$
Aug 16 '14 at 11:38 RE60K 14.1k
Inequality $k!\pi+\frac{\pi}{6}\le{m!}\le{k!}\pi+\frac{5\pi}{6}$
analytic-number-theory
Aug 8 '14 at 10:03 mathlove 93.8k
Does convergence of $S_{n!}=\sum_{k=1}^{n!}a_{k}$ imply that $a_{k}$ approaches zero? [closed]
Jul 10 '14 at 15:16 user98187609 198
Does the sequence $\sin(n!\pi^2)$ converge or diverge?
May 18 '14 at 1:50 hot_queen 6,052
|
CommonCrawl
|
Exercises - Probability Distributions
Determine which of the following represent valid probability mass functions.
$\displaystyle{\begin{array}{c|c|c|c|c}x & 0 & 1 & 2 & 3 \\\hline P(x) & 1/8 & 3/8 & 3/8 & 1/8\end{array}}$
$\displaystyle{\begin{array}{c|c|c|c|c}x & 0 & 1 & 2 & 3 \\\hline P(x) & 1/8 & -3/8 & 3/4 & 1/2\end{array}}$
(a) this IS a valid probability mass function as the probabilities listed are always between 0 and 1, inclusive, and the probabilities sum to 1; (b) NOT a valid probability mass function, as $P(1)$ is not between 0 and 1, inclusive; (c) NOT a valid probability mass function as the sum of the probabilities is not equal to 1.
Is the function described below a valid probability mass function? Explain.
$$f(x)=\frac{x+1}{10} \textrm{ for } x = 0, 1, 2, 3$$
Note $0 \le f(x) \le 1$ for $x = 0, 1, 2, 3$ and $f(0) + f(1) + f(2) + f(3) = 1$, so $f(x)$ does indeed describe a probability mass function.
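A quick numerical check (an added sketch using only base R, in the same style as the R snippets later in these exercises; the comments show the expected output):

In R:

x <- 0:3
f <- (x + 1)/10        # the given function
all(f >= 0 & f <= 1)   # TRUE: each value lies between 0 and 1
sum(f)                 # 1: the probabilities sum to one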
Find the mean, variance, and standard deviation for the following distribution
$$\displaystyle{\begin{array}{c|c|c|c|c}x & 0 & 1 & 2 & 3 \\\hline P(x) & 1/8 & 3/8 & 3/8 & 1/8\end{array}}$$
mean: $\mu = (0)(1/8) + (1)(3/8) + (2)(3/8) + (3)(1/8) = 12/8 = 3/2$;
variance: $\sigma^2 = (0^2)(1/8) + (1^2)(3/8) + (2^2)(3/8) + (3^2)(1/8) - \mu^2 = 3/4$;
standard deviation: $\sigma = \sqrt{\sigma^2} \doteq 0.866$
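These values can be verified directly (an added sketch using base R; the comments show the expected output):

In R:

x <- 0:3
p <- c(1/8, 3/8, 3/8, 1/8)
mu <- sum(x * p)               # 1.5
sigma2 <- sum(x^2 * p) - mu^2  # 0.75
sqrt(sigma2)                   # 0.8660254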
A game at a local fair will give $\$$100 to anyone that can break a balloon by throwing a dart at it. The game costs $\$$5 to throw a single dart, and you're willing to spend $\$$30 trying to win. Assuming that you have a 10% chance of hitting the balloon on any given throw, find the expected number of darts you will throw.
Considering the possible outcomes, we could throw a dart 1, 2, 3, 4, 5, or 6 times. Letting these be the possible values for $x$, we find $P(x)$, the probability that we throw exactly $x$ darts, for each such value.
We throw exactly 1 dart precisely when we break the balloon on our first attempt. Thus, $P(1) = 0.10$. We throw exactly 2 darts when we fail to break the balloon on the first attempt, but succeed on the second. So $P(2) = (0.90)(0.10)$, recalling that if we have a 10% chance of hitting the balloon on any given throw, we have a 90% chance of missing it. Similarly, we find $P(3) = (0.90)^2(0.10)$, $P(4) = (0.90)^3(0.10)$, and $P(5) = (0.90)^4(0.10)$.
However, we find the probability of throwing 6 darts a little differently, as one throws exactly 6 darts when one misses the previous 5. It doesn't matter whether we hit the balloon on the sixth try or not, as we are only willing to spend $\$30$ trying to win, and will stop throwing darts after the sixth try regardless. Consequently $P(6) = (0.90)^5$.
Collecting these results in a table, we have:
$$\begin{array}{c|c|c|c|c|c|c} x & 1 & 2 & 3 & 4 & 5 & 6\\\hline P(x) & 0.10 & (0.90)(0.10) & (0.90)^2(0.10) & (0.90)^3(0.10) & (0.90)^4(0.10) & (0.90)^5 \end{array}$$
Now, calculating the expected value $\mu$, we have
$$\begin{array}{rcl} \mu &=& (1)(0.10) + (2)(0.90)(0.10) + (3)(0.90)^2(0.10)\\ & & + \, (4)(0.90)^3(0.10) + (5)(0.90)^4(0.10) + (6)(0.90)^5\\ &=& 4.68559 \end{array}$$
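The same expected value can be computed numerically (an added sketch using base R; the comments show the expected output):

In R:

p <- c(0.10, 0.90*0.10, 0.90^2*0.10, 0.90^3*0.10, 0.90^4*0.10, 0.90^5)
sum(p)           # 1, so the table is a valid probability distribution
sum((1:6) * p)   # 4.68559, the expected number of darts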
Based on previous information, it has been determined that 90% of the population brush their teeth once a day. Answer the following for a sample of 20 people:
What is the probability that exactly 18 people brush their teeth once a day?
What is the probability that at least 18 people brush their teeth once a day?
Treat as a binomial distribution, with $n = 20, p = 0.90,$ and $q = 0.10$:
(a) $$P(18) = {}_{20}C_{18} (0.90)^{18}(0.10)^2$$
In R:
dbinom(18,size=20,prob=0.90)
[1] 0.2851798
In Excel:
=BINOM.DIST(18,20,0.9,FALSE)
(b) $$\begin{array}{rcl} P(18) + P(19) + P(20) &=& {}_{20}C_{18} (0.90)^{18}(0.10)^2\\ && + \, {}_{20}C_{19} (0.90)^{19}(0.10)^1 + \, {}_{20}C_{20} (0.90)^{20}(0.10)^0\\ &\approx& 0.6769 \end{array}$$
1-pbinom(17,size=20,prob=0.90)
=1-BINOM.DIST(17,20,0.9,TRUE)
A fair coin is tossed eight times. Let the random variable be the number of heads that appear.
What is the probability that exactly 4 heads appear?
What is the probability that you get either all heads or all tails?
This is binomial with $n = 8$, and $p = 1/2$. (a) $70/256 \approx 0.2734$; (b) $1/256 + 1/256 = 1/128 \approx 0.0078$
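Both parts can be checked with dbinom (an added sketch, following the R style used above; comments show the expected output):

In R:

dbinom(4, size=8, prob=0.5)                                 # 0.2734375, part (a)
dbinom(0, size=8, prob=0.5) + dbinom(8, size=8, prob=0.5)   # 0.0078125, part (b)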
Toss a coin 16 times. Let X be the number of heads that appear.
Find the probability that there will be more than 13 heads using a Binomial distribution.
This is binomial with $n = 16$ and $p = 1/2$. (a) $P(14) + P(15) + P(16) \approx 0.0021$;
Roll a standard die 8 times. Let $X$ be the number of 2's rolled. Find the following:
$P(X=3)$, the probability of rolling exactly 3 two's
$P(X \lt 3)$, the probability of rolling less than 3 two's
$P(X \ge 3)$, the probability of rolling at least 3 two's
(a) $P(3) \approx 0.1042$; (b) $P(0) + P(1) + P(2) \approx 0.8652$; (c) $1 - P(X \lt 3) \approx 0.1348$
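The same numbers come out of dbinom/pbinom (an added sketch; comments show the expected output):

In R:

dbinom(3, size=8, prob=1/6)        # 0.1042, part (a)
pbinom(2, size=8, prob=1/6)        # 0.8652, part (b)
1 - pbinom(2, size=8, prob=1/6)    # 0.1348, part (c)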
A shipment of 25 computers contains 10 computers with a defective DVD burner. What is the probability, if a random sample of 6 computers is selected and then tested, that the sample will contain at least 1 defective computer?
$$P( \textrm{ at least 1 defective} ) = 1 - P(0 \textrm{ defective }) = 1 - \frac{({}_{15}C_6)({}_{10}C_0)}{({}_{25}C_6)} \approx 0.9717$$
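R also has a built-in hypergeometric distribution that can check this (an added sketch; dhyper takes the number of defectives drawn, the counts of defective and non-defective computers in the shipment, and the sample size; the comment shows the expected output):

In R:

1 - dhyper(0, m=10, n=15, k=6)   # 0.9717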
In a club of 25 members there are 10 married men and 15 unmarried men. What is the probability that a subcommittee of 6 will have at least 1 married man?
$$P( \textrm{ at least 1 married man} ) = 1 - P(0 \textrm{ married men }) = 1 - \frac{({}_{10}C_0)({}_{15}C_6)}{({}_{25}C_6)} \approx 0.9717$$
It is known that 5% of all tax returns contain at least one error. For a random selection of 10 tax returns, what is the probability that at most 2 of them contain errors?
This is a binomial with $n = 10$ and $p = 0.05$. $P(0) + P(1) + P(2) = 0.9885$ approximately.
pbinom(2,size=10,prob=0.05)
=BINOM.DIST(2,10,0.05,TRUE)
Assume 75% of Americans wear seatbelts. If 200 Americans are selected at random, find the expected number of people in this group that wear their seatbelts.
This is binomial with $n = 200$ and $p = 0.75$. Recall for a binomial distribution, $\mu = np$, so the expected number of people in this group that wear their seatbelts is $(200)(0.75) = 150$.
Toss a fair coin 100 times. Let X be the number of heads showing. Give the mean and the standard deviation for this experiment.
This is binomial with $n = 100$ and $p = 0.50$. $\mu = np = 50$ and $\sigma = \sqrt{npq} = \sqrt{(100)(0.50)(0.50)} = 5$.
The probabilities that a game of chance results in a win, loss, or tie for the player to go first is 0.48, 0.46, and 0.06, respectively. If the game is played 8 times, find the probability that there will be 3 wins, 4 losses and 1 tie.
Multinomial distribution. $$P( \textrm{3 wins, 4 losses, 1 tie} ) = \frac{8!}{3! 4! 1!} (0.48)^3 (0.46)^4 (0.06)^1 \approx 0.0831$$
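In R, the built-in multinomial density gives the same result (an added sketch; the comment shows the expected output):

dmultinom(c(3, 4, 1), prob=c(0.48, 0.46, 0.06))   # 0.0831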
A DVD has a defect on average every 2 inches along its track. What is the probability of seeing less than 3 defects within a 5 inch section of its track?
Poisson distribution with $\lambda = 5/2$. $$\begin{array}{rcl} P(x \lt 3) &=& P(0) + P(1) + P(2)\\ &=& \displaystyle{\frac{e^{-5/2} (5/2)^0}{0!} + \frac{e^{-5/2} (5/2)^1}{1!} + \frac{e^{-5/2} (5/2)^2}{2!}}\\ &=& \displaystyle{e^{-5/2}\left(1 + \frac{5}{2} + \frac{25}{8}\right)}\\ &\approx& 0.5438 \end{array}$$
ppois(2,lambda=5/2)
=POISSON.DIST(2,5/2,TRUE)
Usually 50 potential jurors are held to compose a jury of 12. Suppose that this group of 50 has 15 females and 35 males.
What is the probability that the jury will be made up of all the same sex?
What is the probability that the jury will be made up of 4 females and 8 males?
Hypergeometric. (a) $P(\textrm{all females}) + P(\textrm{all males}) \approx 0.00687$; (b) $\displaystyle{\frac{({}_{15}C_4)({}_{35}C_8)}{({}_{50}C_{12})} \approx 0.2646}$
Compare and contrast the Poisson distribution with the Binomial distribution.
Check notes online.
The probability that a worker will become disabled in a one-year period is 0.0045. If there are 500 workers on an assembly line, find the probability that more than 4 workers will become disabled. (Use the Poisson distribution to approximate the probability.)
Poisson distribution. $\lambda \approx (500)(0.0045) = 2.25$. Use complement: $P(\textrm{more than 4}) = 1 - P(4 \textrm{ or less}) = 1 - (P(0)+P(1)+P(2)+P(3)+P(4))$, which is approximately $1 - 0.92198 \approx 0.078$
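In R (an added sketch using the complement, as above; the comment shows the expected output):

1 - ppois(4, lambda=2.25)   # 0.078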
Assume that customers enter a large store at the rate of 60 per hour (one per minute). Find the probability that during a five-minute interval no one will enter the store.
Poisson distribution. $\lambda = 5$ (for the five minute period). $P(0) \approx 0.0067$ that no one arrives in the five minute period.
Given the function defined below,
$$f(x) = \frac{x+1}{6} \textrm{ for } x=0,1,2$$
Explain clearly why this is a valid probability mass function.
Find the mean and standard deviation of this function.
(a) Both conditions are met: (i) The sum of the probabilities equals one: $1/6 + 2/6 + 3/6 = 1$, and (ii) $0 \le f(x) \le 1$ for each possible value of $x$; (b) $\mu = 8/6, \sigma = \sqrt{5/9} \approx 0.745$.
You have a box with 25 different colored balls as follows: 4 red, 6 blue, 10 white, and 5 green.
What is the probability that there will be exactly one of each color if four balls are drawn?
What is the probability that there will be two white balls if two balls are drawn?
Hypergeometric.
$\displaystyle{\textrm{(a) } \frac{({}_{4}C_1)({}_{6}C_1)({}_{10}C_1)({}_{5}C_1)}{{}_{25}C_4} = \frac{1200}{12650} \approx 0.095}$
$\displaystyle{\textrm{(b) } \frac{({}_{10}C_2)({}_{15}C_0)}{{}_{25}C_2} = \frac{45}{300} = 0.15}$
Assume that the probability of a college student having a car on campus is 0.30. A random sample of 12 students is taken. What is the probability that at least 4 will have a car on campus?
Work the problem using the Binomial distribution.
(a) Working as a binomial, we use the complement. $1 - (P(0)+P(1)+P(2)+P(3)) \approx 1 - 0.49251 \approx 0.51$;
A bridge hand contains 13 cards. What is the probability that a bridge hand will contain 9 spades, all four aces, and one non-spade, non-ace.
Hypergeometric. $2.8 \times 10^{-8}$
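One way to reproduce this number (an added sketch: such a hand must contain the ace of spades, 8 of the remaining 12 spades, the other 3 aces, and 1 of the 36 cards that are neither spades nor aces; the comment shows the expected output):

In R:

choose(12, 8) * choose(36, 1) / choose(52, 13)   # 2.8e-08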
The switchboard receives an average of 3 calls per minute. For a randomly selected minute, what is the probability that there will be at least 4 calls?
Poisson. $1 - (P(0)+P(1)+P(2)+P(3)) \approx 1 - 0.0647 = 0.353$
|
CommonCrawl
|
Results for 'A. D. Pruszenski'
Catalogue des Manuscrits alchémiques grecs, publié sous la direction de Bidez, F. Cumont, A. Delatte, O. Lagercrantz, et J. Ruska. Vol. V. 1. Les manuscrits d'Espagne décrits par C. O. Zuretti. 2. Les manuscrits d'Athènes décrits par A. Severeyns. Vol. VI. Michel Psellus, Epître sur la Chrysopée: Opuscules et extraits sur la météorologie et la démonologie publiés par Joseph Bidez. En appendice Proclus sur l'art hiératique; Psellus, Choix de dissertations inédites. Pp. v + 175 and xiv + 246. Brussels: Lamartin, 1928. [REVIEW] D. N. A. - 1929 - Journal of Hellenic Studies 49 (1):117-117.
La Faune Marine dans la décoration des Plats à Poissons. Étude sur la céramique grecque d'Italie Méridionale. By Léon Lacroix. Pp. 69; 40 plates. Verviers: Lacroix, 1937. 50 frs. belg. [REVIEW] D. T. A. - 1937 - Journal of Hellenic Studies 57 (2):268-269.
Some Leisure Hours of a Long Life. By H. Montague Butler, D.D., Master of Trinity College, Cambridge. Cambridge: Bowes and Bowes. [REVIEW] D. G. A. - 1914 - The Classical Review 28 (8):279-280.
Sermo Latinus: A Short Guide to Latin Prose Composition. By J. P. Postgate, Litt.D. New Edition, Revised and Greatly Augmented. Pp. Vi + 186. Macmillan and Co. [REVIEW] D. G. A. - 1913 - The Classical Review 27 (6):214-215.
Kjellberg and Säflund Greek and Roman Art, 3000 B.C. To A.D. 550. Trans. P. Fraser. London: Faber and Faber. 1968. Pp. 250. 250 Illus. £3 10s. [REVIEW] M. S. A. - 1969 - Journal of Hellenic Studies 89:181-181.
Landmarks in the Struggle Between Science and Religion. By James Y. Simpson, M.A., D.Sc., F.R.S.E., Professor of Natural Science, New College, Edinburgh. [REVIEW] E. E. A. - 1926 - Philosophy 1 (3):388.
Science and Religion in Philosophy of Religion
Retroactive Inhibition in the A-B, A-D Paradigm as Measured by a Multiple-Choice Test. Coleman T. Merryman - 1971 - Journal of Experimental Psychology 91 (2):212-214.
Philosophy of Psychology in Philosophy of Cognitive Science
Tour d'horizon des principaux programmes et dispositifs de soutien à l'insertion professionnelle en enseignement / Overview of key measures to support teacher inductions. Stéphane Martineau & Joséphine Mukamurera - 2012 - Revue Phronesis 1 (2):45-62.
This text provides a brief portrait of teacher induction programs in place in the school boards in Quebec and an analysis of three major support systems used. The analysis is based both on the descriptive documents of the integration programs posted on the website of the Carrefour national de l'insertion professionnelle en enseignement (CNIPE) and on a review of research. Ce texte présente un bref portrait des programmes d'insertion professionnelle mis en place dans les commissions scolaires québécoises ainsi qu'une analyse de trois des principaux dispositifs de soutien utilisés. L'analyse se base à la fois sur les documents descriptifs des programmes d'insertion mis en ligne sur le site web du Carrefour national de l'insertion professionnelle en enseignement (CNIPE) ainsi que sur une recension des recherches.
Education in Professional Areas
Tour d'horizon des principaux programmes et dispositifs de soutien à l'insertion professionnelle en enseignement. Stéphane Martineau & Joséphine Mukamurera - 2012 - Revue Phronesis 1 (2):45-62.
Qumran: Correnti Del Pensiero Giudaico (III A.C.-I D.C.). Giovanni Ibba - 2007 - Carocci.
Judaism in Philosophy of Religion
Otto's Criticisms of Schleiermacher. A. D. Smith - 2009 - Religious Studies 45 (2):187-204.
An assessment is made of Rudolf Otto's criticisms of Friedrich Schleiermacher's claim that religious feeling is to be interpreted as essentially involving a feeling of absolute dependence. Otto's criticisms are divided into two kinds. The first suggests that a feeling of dependence, even an absolute one, is the wrong sort of feeling to locate at the heart of religious consciousness. It is argued that this criticism is based on misinterpretations of Schleiermacher's view, which is in fact much closer to Otto's than the latter appreciated. The second kind of criticism suggests that the feeling of absolute dependence cannot play the foundational role assigned to it by Schleiermacher, since it is itself a secondary response. It is argued not only that Otto provides no justification for this criticism, but that Otto's own position is incoherent unless Schleiermacher's view is accepted.
Religious Experience in Philosophy of Religion
Virtue and Character. A. D. M. Walker - 1989 - Philosophy 64 (249):349-362.
Moral theories which, like those of Plato, Aristotle and Aquinas, give a central place to the virtues, tend to assume that as traits of character the virtues are mutually compatible so that it is possible for one and the same person to possess them all. This assumption—let us call it the compatibility thesis—does not deny the existence of painful moral dilemmas: it allows that the virtues may conflict in particular situations when considerations associated with different virtues favour incompatible courses of action, but holds that these conflicts occur only at the level of individual actions. Thus while it may not always be possible to do both what would be just and what would be kind or to act both loyally and honestly, it is possible to be both a kind and a just person and to have both the virtue of loyalty and the virtue of honesty.
Ethics in Value Theory, Miscellaneous
Berkeley on Action. A. D. Woozley - 1985 - Philosophy 60 (233):293-307.
At the risk of proving myself such a caviller, I want to ask a question which I have seldom heard raised, and which I have never seen discussed in anything that I have read about Berkeley. If I am right, it poses a problem for his immaterialism, not only different, but coming from a different direction, from those objections that are commonly levelled against him. If I am wrong, it will show how right Berkeley was to stress the difficulty of using for one purpose our language which has become fashioned for another. At least, I hope that I shall not fail to be the 'fair and ingenuous reader' for whom he asked.
Berkeley: Philosophy of Action in 17th/18th Century Philosophy
Negligence and Ignorance. A. D. Woozley - 1978 - Philosophy 53 (205):293-306.
The purpose of this paper is to discuss and to relate to each other two topics: the admissibility of ignorance and mistake of fact as defences against negligence in crime; and the inadmissibility of ignorance and mistake of law as defences against criminal charges. I am not concerned at all with torts negligence, only with criminal offences which can be committed negligently, where negligence suffices for liability, as in the law of homicide. This produces an untidy classification of elements, one or other of which is needed to provide the required mens rea: intention, knowledge, recklessness and negligence. It is untidy, because the last does not belong on the same list as the other three, each of which can appropriately be called a state of mind in what we might say to be a positive sense, for each of them includes some degree of awareness of and/or attitude to relevant facts. If negligence is to be called a state of mind, it is so in a very stretched and negative way: to be told that a person was not attending to, thinking of or noticing something that he should have been is to be given some information, of a negative sort, about his state of mind, but it tells us very little, for it eliminates only one of an unlimited range of states of mind. His not attending, noticing, etc., is equally compatible with his daydreaming and with his concentrating hard on something else. would not require it; the definition runs, 'a person is negligent if he fails to exercise such care, skill or foresight as a reasonable man in his situation would exercise'. However, that is only a proposal; at present advertent negligence is rare in criminal law, although common in torts.) On this view, the questions are whether his performance fell below scratch, what are to be the excusing conditions for such a performance, and, if the answer is yes, whether his performance was covered by the excusing conditions.
Control and Responsibility in Meta-Ethics
Torts in Philosophy of Law
The Settlement of 26 June A.D. 4 and its Aftermath. R. A. Birch - 1981 - Classical Quarterly 31 (2):443-.
In a recently published article I have suggested an amendment of the textual crux in Suetonius, Tiberius 21. 4 and an interpretation of the passage as providing direct evidence that the arrangement of the marriages of Germanicus and the younger Drusus was integral to Augustus' settlement of 26 June a.d. 4, even if they were not celebrated until early 5. This view differs from the more usual assumption that while the marriages took place in 5, the date of their arrangement was not particularly significant, or from the possibility implied by Levick that Germanicus' marriage may have been arranged to placate the 'faction' of the elder Julia after the consolidation in 4 of the position of Livia's descendants. The more precise hypothesis that the marriages were intended as part of the settlement may help us to bring into sharper focus some of the political events of the next few years, and this article attempts to do so; in particular it looks at the internal balance of the settlement; the anomalous separate adoption of Agrippa Postumus; and the decline and fall of Agrippa Postumus and the younger Julia. First, however, some further observations on the hypothesis in my earlier article.
Social and Political Philosophy, Misc in Social and Political Philosophy
Did Galba Visit Britain In A.D. 43? A. A. Barrett - 1983 - Classical Quarterly 33 (1):243-.
In his Vita Galbae Suetonius informs us that after Gaius' assassination Galba was urged by some to attempt to seize power but declined to do so. Consequently he was much favoured by Claudius, and held in such high regard that when Galba was smitten by a sudden, though mild, illness, the emperor postponed the expedition launched against Britain in A.D. 43: 'ut cum subita ei [sc. Galbae] valetudo nec adeo gravis incidisset, dilatus sit expeditionis Britannicae dies'. The reference to the postponement is clear and unequivocal, and contains nothing scurrilous or titillating that might have persuaded Suetonius to fabricate it. It does not, accordingly, seem unreasonable that modern commentators should, on the basis of this passage, record that Galba was present in Britain in A.D. 43.
"Doctor Knows Best"? – Eine Analyse der Arzt-Patient-Beziehung in der TV-KrankenhausserieDr. House"Doctor Knows Best"?—A Critical Analysis of the Physician-Patient Relationship in the TV seriesHouse M.D. [REVIEW]Uta Bittner, Sebastian Armbrust & Franziska Krause - 2013 - Ethik in der Medizin 25 (1):33-45.details
Against the background that the images of physicians presented and discussed in the media and in public always also shape public opinion and people's ideas about doctors, the article pursues the question of which image of the physician the American TV hospital series Dr. House conveys and what form the depicted physician-patient relationship takes. The medical-ethical reflections are framed and supported by a detailed genre classification from media studies and a dramaturgical analysis. In addition, the four models of the physician-patient relationship according to Emanuel/Emanuel are used as an analytical framework. This interdisciplinary research approach shows that the series' main character, the physician Dr. Gregory House, is indeed conceived and presented as a counter-model to a modern physician who acts caringly, non-directively and always in the spirit of informed consent. It does not follow, however, that the figure of Dr. House is drawn one-sidedly as a paternalist. Rather, the categorization and classification of Dr. House and of the medical virtues he represents must be carried out along all elements of the societal ideal of the physician, which include appropriate scientific-medical competence, an orientation toward objectifiable evidence, a commitment to the scientific, evidence-based ideal, pleasant communication with the patient, and the necessary, required disclosure to the patient. The analysis thus makes clear that the portrayal of Dr. House is more multilayered and repeatedly points to ethical dilemma situations in medicine. This powerful dramatic-cinematic depiction of conflict situations in the diagnostic-therapeutic context should therefore also be explored in greater depth in communication research on media effects.
Good Kid, M.A.A.D City: Kendrick Lamar's Autoethnographic Method. James B. Haile III - 2018 - Journal of Speculative Philosophy 32 (3): 488-498.
So much of Africana philosophical research and scholarship has focused on personal, anecdotal experiences to tell/disclose larger intellectual narratives of race, nation, history, time, and space.1 Yet the personal nature in which Africana philosophy articulates itself has often been seen as particular and not yet universal—in other words, not rightly or properly "philosophical." But understood methodologically, the sort of introspection inherent in Africana philosophy becomes not only one way of "doing" philosophy but the grounding for philosophical insight.2 Kendrick Lamar's album (...) good kid, m.A.A.d. city provides for us an example of such methodological insight, proper for "doing" philosophy.3In characterizing... (shrink)
A Note on the Mutiny of the Pannonian Legions in A.D. 14. J. J. Wilkes - 1963 - Classical Quarterly 13 (2): 268-271.
The origins of the unrest among the Pannonian legions in A.D. 14 are easily discerned. The great war in Illyricum of A.D. 6–9 involved the legions in a series of extremely arduous campaigns extending across the western half of the Balkan Peninsula, in particular the impenetrable forests of Bosnia and the rugged karst of Dalmatia. The nearness of this area to Italy made the war a great crisis in the reign of Augustus: conquest of Illyricum was the keystone of Augustus' (...) northern frontier policy and no efforts were spared to achieve this. Advances in Germany could be determined from expediency but the subjugation of the Bosnian tribes was a necessity. During the war the need for men was so great that conscription was introduced in Italy and even freedmen were enlisted when ordinary citizen volunteers were not forthcoming. Cassius Dio speaks of the low morale and outbreaks of mutiny in the army of Tiberius during the last season of the war. (shrink)
Reinterpreting Sellars in the Light of Brandom, McDowell, and A. D. Smith. Niels Skovgaard Olsen - 2010 - European Journal of Philosophy 18 (4): 510-538.
Abstract: The intent of this paper is to indicate a development in Sellars' writings which points in another direction than the interpretations offered by Brandom, McDowell, and A. D. Smith. Brandom and McDowell have long claimed to preserve central insights of Sellars's theory of perception; however, they disagree over what exactly these insights are. A. D. Smith has launched a critique of Sellars in chapter 2 of his book The Problem of Perception which is so penetrating that it would tear Sellars' philosophy of perception apart if it were adequate. However, I try to show, firstly, that Brandom's and McDowell's interpretations are unsatisfying when Sellars' late writings are taken into consideration, and, secondly, that we can give another interpretation of Sellars that is not vulnerable to some of the problems of which Smith accuses Sellars.
A History of Women Philosophers, Volume 1: Ancient Women Philosophers, 600 B.C. - 500 A.D. Mary Ellen Waithe - 1989 - Hypatia 4 (1): 155-159.
A History of Women Philosophers, Volume I: Ancient Women Philosophers, 600 B.C. - 500 A.D., edited by Mary Ellen Waithe, is an important but somewhat frustrating book. It is filled with tantalizing glimpses into the lives and thoughts of some of our earliest philosophical foremothers. Yet it lacks a clear unifying theme, and the abrupt transitions from one philosopher and period to the next are sometimes disconcerting. The overall effect is not unlike that of viewing an expansive landscape, illuminated only by a few tiny spotlights.
On A. D. Smith's Constancy Based Defence of Direct Realism. Phillip John Meadows - 2013 - Philosophical Studies 163 (2): 513-525.
This paper presents an argument against A D Smith's Direct Realist theory of perception, which attempts to defend Direct Realism against the argument from illusion by appealing to conscious perceptual states that are structured by the perceptual constancies. Smith's contention is that the immediate objects of perceptual awareness are characterised by these constancies, which removes any difficulty there may be in identifying them with the external, or normal, objects of awareness. It is here argued that Smith's theory does not provide (...) an adequate defence of Direct Realism because it does not adequately deal with the difficulties posed by the possibility of perceptual illusion. It is argued that there remain possible illusory experiences where the immediate objects of awareness, which in Smith's account are those characterised by perceptual constancies, cannot be identified with the external objects of awareness, contrary to Direct Realism. A further argument is offered to extend this conclusion to all non-illusory cases, by adapting an argument of Smith's own for the generalising step of the Argument from Illusion. The result is that Smith's theory does not provide an adequate Direct Realist account of the possibility of perceptual illusion. (shrink)
The Emergence of Contextualism in Rousseau's Political Thought: The Case of Parisian Theatre in the Lettre à d'Alembert. F. Forman-Barzilai - 2003 - History of Political Thought 24 (3): 435-464.
In this article, I address Rousseau's evolution as a political thinker between the years 1750 and 1753, during which time his critics challenged him to square the radical implications of his Discours sur les sciences et les arts with the realities of eighteenth-century European life. It was in the course of replying to his critics that Rousseau first adopted what I refer to as a more contextual orientation to political institutions. I argue that Rousseau's ostensibly Montesquieuian turn in the replies sustained his claim in the Lettre à d'Alembert that theatre, the scourge of Geneva's republican simplicity, might nevertheless serve a useful function in Paris, where mœurs, in Rousseau's estimation, had lapsed already to a point of irreversible corruption. I conclude that this contextual orientation to institutions guided much of Rousseau's subsequent thought about political reform in the modern republic.
The Oxford Classical Dictionary. Ed. M. Cary, J. D. Denniston, J. Wight Duff, A. D. Nock, W. D. Ross and H. H. Scullard. Pp. xix + 971. Oxford: Clarendon Press, 1949. 50s. [REVIEW] J. O. Thomson, M. L. Clarke, M. Cary, J. D. Denniston, J. Wight Duff, A. D. Nock, W. D. Ross & H. H. Scullard - 1949 - Journal of Hellenic Studies 69 (1): 95-96.
Effects of Contextual Similarity on Unlearning in the A-B, D, E, F and B, D, E, F Paradigms. Richard K. Landon & James H. Crouse - 1970 - Journal of Experimental Psychology 83 (1p1): 186.
La Fenomenologia Dels Estats d'Ànim En El Llegat Inèdit de Husserl. Notes Per a la Seva Sistematització [The Phenomenology of Moods in Husserl's Unpublished Legacy: Notes Toward Its Systematization]. Ignacio Quepons - 2016 - Enrahonar: Quaderns de Filosofía 57: 79-97.
https://revistes.uab.cat/enrahonar/article/view/v57-quepons.
Lectures Delivered in Connection with the Dedication of the Graduate College of Princeton University in October, 1913, by Émile Boutroux, Alois Riehl, A. D. Godley, Arthur Shipley. [REVIEW] Emile Boutroux, A. D. Godley, Alois Riehl & A. E. Sir Shipley - unknown
Greek Precedents for Claudius' Actions in A.D. 48 and Later. M. S. Smith - 1963 - Classical Quarterly 13 (1): 139-144.
The aim of the following note is to draw attention to certain links between the marriage of Claudius and Agrippina and that of Nero and Octavia. Previous writers have not, so far as I know, dealt fully with the implications of these marriages, especially when they are seen in the light of the Silanus affair of A.D. 48–49. In each case, it will be argued here, Claudius may have been impressed by Greek precedents.
The Religious Uses of History: Judaism in First-Century A.D. Palestine and Third-Century Babylonia. Jacob Neusner - 1966 - History and Theory 5 (2): 153-171.
The development of Talmudic Judaism from the first to the fifth century A.D. is marked by a decline of interest in the knowledge and explanation of historical events. Neither the destruction of the Temple in 70 A.D. nor the advent of the Sasanians in Babylonia in 226 A.D. provoked reflection on history among the Talmudic rabbis. In Jerusalem in the first century, Yohanan ben Zakkai stressed an interim ethic and policy for survival and redemption; Rav and Samuel, in third century Babylonia, converted messianic speculation and scriptural exegesis into a policy of passive acceptance of political events. But lack of interest in political history masked belief that the covenant of Sinai would win redemption not through the course of historical events but apart from it.
National Education. H. E. Armstrong, H. W. Eve, Joshua Fitch, W. A. Hewins, John C. Medd, T. A. Organ, A. D. Provand, B. Reynolds, Francis Stoves, Laurie Magnus. [REVIEW] A. D. Sanger - 1903 - International Journal of Ethics 13 (3): 395-398.
The Fasti for A.D. 70–96. Paul Gallivan - 1981 - Classical Quarterly 31 (1): 186-.
The political and administrative requirements of the Roman state during the early years of the Principate demanded an increase in the annual number of consulars. When Augustus finally acted to remedy this situation in 5 b.c., he introduced a system of suffect consuls and thereby increased the number of consuls from the two per annum of the Republic to four. A regular practice became established whereby one or both of the ordinary consuls retired at the end of June to be (...) replaced in office for theremainder of the year by a suffect consul. For the reigns of Gaius and Claudius additional suffects were included in many years and a new pattern can be seen to have emerged. It was usual now for each ordinarius to hold office for the first six months of the year except in some special cases where the ordinarii resigned at the end of two months and their place was taken by a pair of suffects who remained in office for the next four months to serve out the more regular tenure of the ordinary consuls. Under Nero, the innovation of this two-month ordinary consulship was not perpetuated and ordinarii usually remained in office for the full six months. Suffect consulships throughout the period a.d. 38–68 were held for periods of either two, four or six months. The Civil War of a.d. 68/69 and the consequent changes of emperor broke the above pattern. For 69, there are no fewer than sixteen consuls known to have held office during the year. Such confusion, however, would not be unexpected given the startling events of this year. Of considerable importance to students of the early Empire, therefore, is the question of what happened to the system of allocating consulships during a particular year when the state had once again settled itself down to running in routine under the victorious Flavian emperors. The answer to this question will be of particular importance for prosopographers of the early Empire for whom chronology is the backbone of their investigations, since the fasti for the reigns of Vespasian and Titus are notable for the number of years in which the complete list of consuls is lacking. (shrink)
Book Review: National Education. H. E. Armstrong, H. W. Eve, Joshua Fitch, W. A. Hewins, John C. Medd, T. A. Organ, A. D. Provand, B. Reynolds, Francis Stoves, Laurie Magnus. [REVIEW] A. D. Sanger - 1903 - Ethics 13 (3): 395-.
Frühitaliotische Vasen. By A. D. Trendall. Pp. 44; 32 Plates. Leipzig: H. Keller, 1938. Export Price, RM 21. F. N. Pryce, A. D. Trendall, J. D. Beazley & P. Jacobsthal - 1938 - Journal of Hellenic Studies 58 (2): 269-269.
Hermès Trismégiste: The Corpus Hermeticum. Text Edited by A. D. Nock, Translation by A. J. Festugière. Vol. I Poimandrès, Traités II–XII; Vol. II Traités XIII–XVIII, Asclepius. Pp. 404. Paris: Assn. G. Budé, 1945. [REVIEW] Wilfred L. Knox, A. D. Nock, A. J. Festugière & Hermès Trismégiste - 1947 - Journal of Hellenic Studies 67: 145-145.
Lucretius: Poetry, Philosophy, Science / Edited by Daryn Lehoux, A. D. Morrison, and Alison Sharrock. Daryn Lehoux, A. D. Morrison & Alison Sharrock (eds.) - 2013 - Oxford University Press.
Hermès Trismégiste. Vol. III, Fragments extraits de Stobée, I–XXII. Ed. and trans. A.-J. Festugière. Vol. IV, Fragments extraits de Stobée, XXIII–XXIX. Ed. and trans. A.-J. Festugière; Fragments divers, ed. A. D. Nock, trans. A.-J. Festugière. Pp. ccxxviii + 93, and 150. Paris: Société d'Edition 'Les Belles Lettres', 1954. Price not stated. H. J. Rose, A.-J. Festugière & A. D. Nock - 1955
National Education, by H. E. Armstrong, H. W. Eve, Joshua Fitch, W. A. Hewins, John C. Medd, T. A. Organe, A. D. Provand, B. Reynolds, Francis Stoves and Laurie Magnus. [REVIEW] A. D. Sanger - 1902 - Ethics 13: 395.
On a Class of M.A.D. Families. Yi Zhang - 1999 - Journal of Symbolic Logic 64 (2): 737-746.
We compare several closely related continuum invariants, i.e., $\mathfrak{a}$, $\mathfrak{a}_\mathfrak{e}$, $\mathfrak{a}_\mathfrak{p}$ in two forcing models. And we shall ask some open questions in this field.
Sallust's Jugurtha - A. D. Leeman: Aufbau und Absicht von Sallusts Bellum Jugurthinum. Pp. 33. Amsterdam: Noord-Hollandsche Uitgevers Mij., 1957. Paper, fl. 2. [REVIEW] D. A. Malcolm - 1959 - The Classical Review 9 (2): 140-142.
A Decomposition of the Rogers Semilattice of a Family of D.C.E. Sets. Serikzhan A. Badaev & Steffen Lempp - 2009 - Journal of Symbolic Logic 74 (2): 618-640.
Khutoretskii's Theorem states that the Rogers semilattice of any family of c.e. sets has either at most one or infinitely many elements. A lemma in the inductive step of the proof shows that no Rogers semilattice can be partitioned into a principal ideal and a principal filter. We show that such a partitioning is possible for some family of d.c.e. sets. In fact, we construct a family of c.e. sets which, when viewed as a family of d.c.e. sets, has (up to equivalence) exactly two computable Friedberg numberings μ and ν, and μ reduces to any computable numbering not equivalent to ν. The question of whether the full statement of Khutoretskii's Theorem fails for families of d.c.e. sets remains open.
The Long-Term Sustenance of Sustainability Practices in MNCs: A Dynamic Capabilities Perspective of the Role of R&D and Internationalization. [REVIEW] Subrata Chakrabarty & Liang Wang - 2012 - Journal of Business Ethics 110 (2): 205-217.
What allows MNCs to maintain their sustainability practices over the long-term? This is an important but under-examined question. To address this question, we investigate both the development and sustenance of sustainability practices. We use the dynamic capabilities perspective, rooted in resource-based view literature, as the theoretical basis. We argue that MNCs that simultaneously pursue both higher R&D intensity and higher internationalization are more capable of developing and maintaining sustainability practices. We test our hypotheses using longitudinal panel data from 1989 to (...) 2009. Results suggest that MNCs that have a combination of both high R&D intensity and high internationalization are (i) likely to develop more sustainability practices and (ii) are likely to maintain more of those practices over a long-term. As a corollary, MNCs that have a combination of both low R&D and low internationalization usually (i) end up developing little or no sustainability practices and (ii) find it difficult to sustain whatever little sustainability practices they might have developed. (shrink)
Analyse contrastée des attentes et des représentations d'étudiants en formation initiale à l'enseignement secondaire en fonction de leur engagement ou non dans un établissement scolaire [Comparative Analysis of Students' Expectations and Representations in Pre-Service Secondary Teacher Training Depending on Whether They Have a Student Teaching Placement or Not]. Sandra Pellanda Dieci, Laura Weise & Anne Monnier - 2012 - Revue Phronesis 1 (2): 63-81.
In Geneva, since the beginning of pre-service secondary teacher training at university, two different types of students in teacher preparation coexist: some of them have part-time classes, others have no teaching assignment. In an introduction to the teaching profession, students from different disciplines of the two types take a course on the same sources of professional knowledge. By analyzing the representations of the teaching profession, we find that the process of construction of their professional identity varies according to whether they have a student teaching placement or not.
Socrates on Reason, Appetite and Passion: A Response to Thomas C. Brickhouse and Nicholas D. Smith, Socratic Moral Psychology. [REVIEW] Christopher Rowe - 2012 - The Journal of Ethics 16 (3): 305-324.
Section 1 of this essay distinguishes between four interpretations of Socratic intellectualism, which are, very roughly: a version in which on any given occasion desire, and then action, is determined by what we think will turn out best for us, that being what we all, always, really desire; a version in which on any given occasion action is determined by what we think will best satisfy our permanent desire for what is really best for us; a version formed by the assimilation of the first of these to the second, labelled the 'standard' version by Thomas C. Brickhouse and Nicholas D. Smith, and treated by them as a single alternative to their own interpretation; and Brickhouse and Smith's own version. Section 2 considers, in particular, Brickhouse and Smith's handling of the 'appetites and passions', which is the most distinctive feature of their interpretation. Section 3 discusses Brickhouse and Smith's defence of 'Socratic studies' in its historical context, and assesses the contribution made by their distinctive interpretation of 'the philosophy of Socrates'. One question raised in this section, and one that is clearly fundamental to the existence of 'Socratic studies', is how different Brickhouse and Smith's Socrates turns out to be from Plato himself, i.e., the Plato of the post-'Socratic' dialogues; to which the answer offered is that on Brickhouse and Smith's interpretation Socratic moral psychology becomes rather less distinguishable from its 'Platonic' counterpart—as that is currently understood—than it is on the interpretation they oppose.
Du développement durable à la transition : des démarches d'anticipation territoriales en recomposition [From Sustainable Development to Transition: Territorial Anticipation Approaches in Recomposition]. Rémi Le Fur - 2018 - Temporalités 28.
Anticipation initiatives, like sustainable development, are oriented toward the future. The growing weight of sustainable-development concerns has led to a proliferation of anticipation exercises conducted by local and regional authorities. For public action, this translates into a twofold widening of the spatial and temporal scales of the issues it must address, moving from local, short-term issues to planetary, long-term ones. While anticipation initiatives are receivers of these changes, they are also vectors of them, contributing through adaptations of their practice to a better consideration of sustainable-development issues. Although the work of improving these approaches is still open, with many questions still unresolved, it would appear that the mobilization of stakeholders is an indispensable prerequisite for transition dynamics, provided it is maintained over time in a flexible and adaptable way.
Une perspective bachelardienne pour lire et comprendre les situations d'apprentissage professionnel de la formation à l'enseignement [A Bachelardian Perspective for Reading and Understanding Professional Learning Situations in Teacher Education]. Lucie Roger, Philippe Maubant & Bernard Mercier - 2012 - Revue Phronesis 1 (1): 92-101.
This text presents a few preliminary results of research currently being conducted at the Université de Sherbrooke's Research Institute on Educational Practices. The study seeks to understand how situations presented in teacher education can support the functioning and success of trainee teachers' professional learning. The article's aim is to identify the points of convergence between situations of professional activity, situations of professional learning, and training situations. The text will attempt to analyze the role that can be played by certain training structures that seek to support a professionalization process that we define in terms of a general aim: the construction of professional knowledge. The originality of this research lies in that it draws on the work of Gaston Bachelard to develop a theoretical frame enabling a reading and interpretation of the results.
A Critique of R.D. Alexander's Views on Group Selection. David Sloan Wilson - 1999 - Biology and Philosophy 14 (3): 431-449.
Group selection is increasingly being viewed as an important force in human evolution. This paper examines the views of R.D. Alexander, one of the most influential thinkers about human behavior from an evolutionary perspective, on the subject of group selection. Alexander's general conception of evolution is based on the gene-centered approach of G.C. Williams, but he has also emphasized a potential role for group selection in the evolution of individual genomes and in human evolution. Alexander's views are internally inconsistent and (...) underestimate the importance of group selection. Specific themes that Alexander has developed in his account of human evolution are important but are best understood within the framework of multilevel selection theory. From this perspective, Alexander's views on moral systems are not the radical departure from conventional views that he claims, but remain radical in another way more compatible with conventional views. (shrink)
|
CommonCrawl
|
Predicting potential adverse events using safety data from marketed drugs
Chathuri Daluwatte1,
Peter Schotland2,
David G. Strauss1,
Keith K. Burkhart1 &
Rebecca Racz ORCID: orcid.org/0000-0002-5487-56921
While clinical trials are considered the gold standard for detecting adverse events, these trials are often not sufficiently powered to detect difficult-to-observe adverse events. We developed a preliminary approach to predict 135 adverse events using post-market safety data from marketed drugs. Adverse event information available from FDA product labels and scientific literature for drugs that have the same activity at one or more of the same targets, structural and target similarities, and the duration of post-market experience were used as features for a classifier algorithm. The proposed method was studied using 54 drugs and a probabilistic approach to performance evaluation based on bootstrapping with 10,000 iterations.
Out of 135 adverse events, 53 had a high probability of having a high positive predictive value. Cross-validation showed that 32% of the model-predicted safety label changes occurred within four to nine years of approval (median: six years).
This approach predicts 53 serious adverse events with high positive predictive values where well-characterized target-event relationships exist. Adverse events with well-defined target-event associations were predicted better than adverse events that may be idiosyncratic or related to secondary target effects poorly captured by known target data. Further enhancement of this model with additional features, such as target prediction and drug binding data, may increase accuracy.
The Food and Drug Administration's (FDA) proposed process modernization to support new drug development involves establishing a unified post-market safety surveillance framework to monitor the benefits and risks of drugs across their lifecycles [1]. While clinical trials are considered the gold standard for detecting and labeling adverse events, these trials are not sufficiently powered to detect less common adverse events. Additionally, some adverse events emerge when a drug is used in clinical practice outside of the specified inclusion/exclusion criteria. Some adverse events may have high prevalence in specific subpopulations who were not enrolled in the clinical trials or subgroups who cannot be identified based on information collected from patients in the trials. For example, a substantially increased risk of Stevens-Johnson syndrome in patients positive for the HLA-B*1502 allele taking carbamazepine was not identified until decades after approval [2]. In addition, concomitant medications (drug-drug interactions) and comorbidities may also contribute to adverse events, and these interactions are not always adequately present or captured in clinical trials. Therefore, post-market safety surveillance is crucial.
FDA uses the FDA Adverse Event Reporting System (FAERS) [3] and the Sentinel Initiative [4] to obtain information about adverse events occurring after drug approval. In 2017, over 1.8 million adverse event cases were reported to the FDA, including nearly 907,000 serious reports and over 164,000 fatal cases [5]. While traditional pharmacovigilance relies on data mining systems, these methods have reporting biases and require manual review of cases to determine reporting accuracy. Recently, there has been a strong interest in developing prediction algorithms to assist in post-market surveillance to overcome such weaknesses and make post-market pharmacovigilance more efficient.
Adverse event information from a variety of sources such as FAERS, literature, genomic data, and social media has been used to both evaluate adverse events and make predictions. For example, FAERS and similar post-market databases have demonstrated utility in adverse event prediction; Xu and Wang showed FAERS, combined with literature, had great utility in detecting safety signals [6]. Others have used chemical structure as the basis for adverse event predictions. Vilar and colleagues used molecular fingerprint similarity to drugs with a known association with rhabdomyolysis to further support and prioritize rhabdomyolysis signals found in FAERS [7]. Another unique option has been to use social media reports to identify new adverse events for drugs before they are reported to regulatory agencies or in peer-reviewed literature; Yang and colleagues used a partially supervised classification method to identify reports of adverse events on the discussion forum for Medhelp [3]. Other sources of information for adverse event prediction and detection include electronic health records, drug labels and even bioassay data [8,9,10]. Additionally, a wide variety of algorithms have been used to make adverse event predictions, including logistic regression models, support vector machine, and ensemble methods [8, 11, 12]. Many of these models have experienced varying degrees of success but overall demonstrate the great potential of developing an adverse event prediction model using a classifier.
However, many of these methodologies have focused on predicting a specific adverse event (e.g. cardiovascular events) or drug class (e.g. oncology drugs) [12,13,14]. Algorithms that can predict a wide variety of adverse events for multiple drug classes are important to enhance post-market safety surveillance. We have previously developed a genetic algorithm to predict approximately 900 adverse events using FDA product labels and FAERS data [15]. In this study, we build on this algorithm to predict 135 adverse events of high priority to regulatory review using safety data from marketed drugs with one or more shared molecular targets. We hypothesize that drugs that have similar modes of action at the same targets will have a similar adverse event profile because of shared structural features and likely target binding characteristics. We additionally expect adverse events that are more closely associated with drug targets (such as serotonin syndrome) to be well-predicted via this methodology. Some idiosyncratic reactions may also be captured well because the shared structural features likely play a role in these reactions where the targets and actions have not yet been fully characterized.
Inclusion and exclusion criteria resulted in 54 test drugs and 213 unique comparator drugs, leading to 287 test-comparator drug combinations. The 54 test drugs used in this study had one to 37 comparator drugs, with one and two comparators being most frequent, as identified by DrugBank (Fig. 1a), and were on the market four to nine years (Fig. 1b). Tanimoto similarity scores between test drugs and comparator drugs ranged between 0.02 and 1, with 0.51 being the mean and 0.5 being the mode. Eighteen test drug-comparator associations included a biologic, as defined by a − 1 Tanimoto score (Fig. 1c). Target cosine similarity scores between test drugs and comparator drugs ranged between 0 and 1, with 0.45 being the mean and 1 being the mode (Fig. 1d). Seventy-nine comparator drugs were approved before 1982, while the most recently approved comparator drug had five years of time in market (Fig. 1e). The 54 test drugs are known to bind to 126 targets based on DrugBank data (summarized in Supplemental Table 1).
Characteristics of test drugs, comparator drugs and test-comparator drug combinations. a) Distribution of number of comparator drugs for test drug. b) Distribution of time on market for test drugs. c) Tanimoto score distribution for test-comparator drug combinations. d) Target similarity score distribution for test-comparator drug combinations. e) Distribution of time on market for comparator drugs
The prevalence of the 135 adverse events considered in this study is summarized in Fig. 2. The overall prevalence of adverse events was higher in the comparator drugs.
Prevalence of adverse events within comparator drugs and test drugs
Prediction models were not made for 26 adverse events that were not observed or observed only in one test drug label (accident, anaphylactoid reaction, aplastic anaemia, apnoea, atrioventricular block, azotaemia, cardiomyopathy, cerebral infarction, coagulopathy, colitis, colitis ulcerative, Crohn's disease, dermatitis bullous, dermatitis exfoliative, gastric ulcer, granulocytopenia, hepatic necrosis, hypokinesia, injury, myopathy, oliguria, respiratory depression, road traffic accident, skin ulcer, thrombosis, and ulcer).
Table 1 summarizes, at varying thresholds (the minimum percentage of comparator drugs that must be predicted positive for an adverse event in order to yield a positive prediction for the test drug), the safety label change evaluation and the number of adverse events with a left-skewed positive predictive value distribution, which indicates a high probability of a high positive predictive value. Based on these results, we selected 70% as the optimum threshold. This threshold yielded the highest number of adverse events with high positive predictive values along with a high percentage of predicted safety label changes that were also issued by FDA (32%). All performance histograms at the 70% threshold for each adverse event are provided in the supplementary materials. Positive predictive value histograms for two well-predicted (i.e. left-skewed) adverse events (febrile neutropenia and hypertension) and two poorly-predicted (i.e. right-skewed) adverse events (bacterial infection and haemorrhage) are shown in Fig. 3.
Table 1 Performance of the algorithm when the threshold to make a positive prediction was varied
Left-skewed positive predictive value histograms demonstrated well-predicted adverse events, as shown in a) Febrile Neutropenia and b) Hypertension. Right-skewed positive predictive value histograms demonstrated poorly-predicted adverse events, as shown in c) Bacterial Infection and d) Haemorrhage
Fifty-three adverse events had a positive predictive value mode of 100%, with the median between 50 and 100%, the 25th percentile between 0 and 100%, and the 75th percentile at 100%, which indicates left-skewed distributions. Because their positive predictive value distributions were left-skewed, these adverse events were considered well-predicted, i.e., they have a high probability of a high positive predictive value (Table 2). Additionally, these adverse events had a sensitivity mode between 0 and 100%, a specificity mode of 100%, and a negative predictive value mode of 50–100%.
Table 2 Performance and prevalence of adverse events that were well-predicted by the algorithm
Fifty-six adverse events had a positive predictive value mode between 0 and 33%, which indicates a right-skewed distribution; these were considered poorly-predicted (Table 3). While the positive predictive value was low, all of these adverse events had high specificity (mode: 76–100%) and negative predictive value (mode: 55–91%). Two adverse events, bacterial infection and fungal infection, additionally had high sensitivity (mode: 100%) (Table 3).
Table 3 Performance and prevalence of adverse events that were poorly-predicted by the algorithm
In this study we developed a preliminary approach to predict 135 adverse events of high priority to regulatory review using post-market safety data from marketed drugs that have the same activity at one or more of the same targets. We identified 53 adverse events that were well-predicted with this approach and chose a threshold that optimizes positive predictive value. These adverse events had varying sensitivity but high specificity and negative predictive value. A model with high positive predictive value but low sensitivity will miss some true adverse events; this was deemed acceptable here because it was considered more important to identify adverse events that are most likely to be true, and to save the time and effort of sifting through false positives, than to maximize sensitivity. In practice, a balance between sensitivity and positive predictive value, in conjunction with a manual review of predictions, would likely be optimal.
Adverse event predictions based on molecular targets have multiple applications. They may help identify difficult-to-observe events that are not seen at statistical significance in clinical trials. Predicted adverse events may augment post-marketing surveillance activities by providing a list of adverse events to monitor. If an adverse event is discovered during pre-market evaluation or post-market use, examination of other drugs with a similar pharmacologic mechanism and activity may help evaluate the causality of the event and determine whether further studies are necessary, based on information from all comparators rather than only those with the same indication. In particular, examination of secondary targets may be useful, as this may explain the emergence of an adverse event or why a particular drug is at lower risk for adverse events traditionally labeled as class effects. While the preliminary approach presented here is considered a tool for hypothesis generation, further evaluation and refinement will determine whether it is useful in regulatory safety review.
The method reported in this study matches safety data based on drug activity at one or more of the same known targets. This may limit the predictive ability, as some adverse events may be idiosyncratic or be associated with unknown secondary targets, and thus the mechanisms responsible for the event have not yet been identified. Associations may still be identified, however, if overlapping structural features capture this unknown shared idiosyncratic activity. This method can be expanded to match a drug not only based on drug activity at one or more of the same targets, but also considering other features which characterize the drug activity, such as Anatomical Therapeutic Chemical (ATC) codes or binding strength (Ki). ATC codes, developed by the World Health Organization, may provide insight into drugs that are related by mechanism or therapeutic use [16]. Binding strength to targets of interest, which may be obtained from literature or databases such as the Psychoactive Drug Screening Program [17] or ChEMBL [18], may provide further classification of target similarity by identifying comparator drugs that bind to targets of interest at a similar order of magnitude. The model also does not capture drug dose that may be needed to produce the required target activity.
Fifty-six adverse events were predicted with low positive predictive value. Therefore, a positive prediction for these adverse events should be carefully reviewed by experts before reaching a conclusion. In practice, expert review augments this by assessment of FDA Adverse Event Reporting System (FAERS) reports, literature, and more recently evaluations using insurance claims and electronic health data. Reviewers may examine predictions made by this algorithm by reviewing literature and other databases to identify plausible mechanisms for the drug eliciting the reaction, or review cases in FAERS and electronic health records. More detail about evaluation of safety signals at the FDA can be found in Szarfman et al. [19]. Analysis of the poor-performing adverse events in this study identified several clinical patterns: hemorrhage (including "haemorrhage", "haematoma", and "rectal haemorrhage"), infection (including "cellulitis", "fungal infection", and "bacterial infection"), and psychiatric (including "paranoia", "delirium", and "hallucination") adverse events were among the worst-performing events by positive predictive value. Many of these adverse events may be idiosyncratic or related to unknown secondary target effects, and therefore it is difficult to predict an adverse event based on the known drug targets. This study may have been limited by the known targets that are available in DrugBank, as DrugBank may not contain all known secondary targets for all drugs. To better capture adverse events that may be related to secondary drug targets, target prediction for the test drugs and comparator drugs may be incorporated to better match comparator drugs to test drugs. DrugBank contains limited target predictions, so another source would be used.
This study had several limitations. First, the current version of Embase only allows users to extract manually curated adverse events by date for one drug at a time, which makes this process time-intensive for a large set of test drugs and their comparators and thus limited the number of drugs used in this study. We tried to address this limitation by using a probabilistic approach of performance evaluation using bootstrapping. Creating a tool to automate extraction of these adverse events may alleviate the manual burden. Additionally, text-mining FDA labels for adverse events is most accurate when used on a structured document, and thus we elected to use test drugs that had labels available in SPL format. While an assessment of the text-mining for 20 labels showed positive predictive value, sensitivity, and F-score at approximately 90% (unpublished data, Racz et al., 2018), we anticipate larger text-mining errors. This assessment identified patterns in the text-mining algorithm that may lead to errors, and the query is currently being updated to improve performance. Finally, several adverse events were not observed or observed with low prevalence in the test drug set. Further analysis of these adverse events identified some events that may be associated with targets that were not substantially analyzed. This includes events such as "respiratory depression", which is particularly associated with drugs such as benzodiazepines and opioids and their related receptors [20], and "hypokinesia", which may be associated with dopamine receptors [21]. Other adverse events, such as "anaphylactoid reaction" and "apnea", may be reported interchangeably with other MedDRA Preferred Terms, such as "anaphylactic reaction" and "sleep apnea", respectively; therefore, these terms may be reported in lower frequency. To better capture this, we may consider alternative groupings or adding additional terms to complete a mechanistically-related grouping.
This classifier algorithm predicts significant adverse events that are of high priority for regulatory monitoring, some of which may be difficult to observe in clinical trials. The prediction algorithm uses evidence of adverse events available through FDA product labels and scientific literature for drugs that have the same activity at one or more of the same targets along with structural and target similarities and the duration of post-market experience. For this study, we prioritized achieving high positive predictive value for the adverse event prediction. The model achieved high positive predictive value on 53 out of 135 adverse events, including several adverse events with well-characterized target relationships. We found that 32% of the model predicted safety label changes were FDA-issued within four to nine years after approval.
Selection of adverse events for evaluation
This methodology predicts 135 adverse events identified by FDA medical experts and reviewers to be of high priority to regulatory review and the pharmacovigilance efforts of the Office of Surveillance and Epidemiology. High priority was determined by FDA pharmacovigilance experts as events that are serious, may be life-threatening or debilitating, or represent frequent events that result in the need for safety label changes. These 135 adverse events were derived using 167 MedDRA Preferred Terms, grouped by mechanistic similarity according to FDA medical experts. For example, "pancreatitis" and "pancreatitis acute" are mechanistically similar and may be reported interchangeably, thus they were captured as one adverse event, "pancreatitis". The 135 adverse events and the 167 MedDRA Preferred Terms used to define them are listed in Table 4. MedDRA is the Medical Dictionary for Regulatory Activities and is the international medical terminology developed under the auspices of the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use [22]. MedDRA Preferred Terms are medical concepts for symptoms, signs, diagnoses, indications, investigations, procedures, and medical, social, or family history. The FDA Adverse Event Reporting System (FAERS) currently codes reported adverse events as MedDRA Preferred Terms, and all terms from other sources were converted to MedDRA Preferred Terms as described below.
Table 4 Adverse events defined using MedDRA Preferred Terms. The bolded MedDRA Preferred Term is used to name the adverse event, while all MedDRA Preferred Terms grouped together were used to define that adverse event
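To make the grouping concrete, the sketch below shows one way such a mapping from MedDRA Preferred Terms to grouped adverse events could be represented and applied in R. The term grouping shown is the pancreatitis example from the text; the helper name and the returned display are illustrative assumptions, not part of the study's code.

# Minimal sketch: map MedDRA Preferred Terms to grouped adverse events.
# Only the pancreatitis grouping from the text is shown; the full 135-event
# mapping is given in Table 4. The helper name is hypothetical.
pt_to_event <- c(
  "Pancreatitis"       = "Pancreatitis",
  "Pancreatitis acute" = "Pancreatitis"
)

group_preferred_terms <- function(preferred_terms, mapping = pt_to_event) {
  # Terms with no entry in the mapping are dropped.
  unique(as.vector(na.omit(mapping[preferred_terms])))
}

group_preferred_terms(c("Pancreatitis acute", "Hypertension"))
#> [1] "Pancreatitis"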
Drug set selection
Selection of test drugs
Fifty-four drugs approved by FDA between 2008 and 2013 were chosen for this analysis. Analyses were based on available Structured Product Labeling for products and required both an original label and a subsequent version of the label for this assessment. As Structured Product Labeling began in 2006, 2008 was selected to allow time for the requirement to be adequately implemented. The year 2013 was selected as the upper bound to allow at least four years of post-market experience to 2017, which is the median time for a regulatory action on a safety event (e.g. updating a drug label) [23]. Of the drugs approved between 2008 and 2013, drugs were included as long as there was at least one other U.S. marketed drug with the same pharmacological activity at one or more of the same known targets. Additional inclusion criteria were systemic exposure (e.g. not ophthalmic only) and multiple doses (i.e. drugs with single dose administration were excluded) due to an increased likelihood of multiple and significant adverse events.
Selection of comparator drugs
Comparator drugs, defined as drugs that have the same activity (i.e. agonist or antagonist) at one or more of the same targets as the test drug, were chosen using DrugBank [24]. Test and comparator drug targets were identified if the drug had "pharmacological action" at the target (i.e. the column "pharmacological action" in DrugBank must read "yes" as opposed to "no" or "unknown") and must have a defined action column in DrugBank (i.e. "antagonist" or "agonist") at the target. Additionally, the comparator drugs must have been approved in the United States and thus have an FDA product label available.
Features for classifier algorithm
Adverse Events from FDA drug labels
Adverse events were obtained from two versions of the test drug label: the originally-approved FDA product label (between 2008 and 2013) and the drug label as of 2017. The adverse events from the 2017 FDA product label were text-mined using Linguamatics I2E (OnDemand Release, Linguamatics Limited, Cambridge, United Kingdom). Adverse events were extracted as MedDRA Preferred Terms from the Boxed Warnings, Warnings and Precautions, and Adverse Reactions sections. The adverse events from the original product label were manually extracted and translated to MedDRA Preferred Terms by a medical expert from the Boxed Warnings, Warnings and Precautions, and Adverse Reactions sections. Manual curation was employed as Linguamatics OnDemand text-mines the current product label only.
Comparator drug adverse events were text-mined using Linguamatics I2E (Enterprise Release, Linguamatics Limited, Cambridge, United Kingdom). Adverse events were extracted as MedDRA Preferred Terms from Boxed Warnings, Warnings and Precautions, and Adverse Reactions sections. For each comparator drug, the FDA product label in use at the time of the respective test drug approval was used as the source for text-mining (e.g.: if a test drug was approved on November 1, 2010, the comparator drug labels that were in use on November 1, 2010 were mined).
For each drug label and adverse event, the presence or absence of a MedDRA Preferred Term was indicated by "1" or "0", respectively. The classifiers were trained on and performance was analyzed using test drug label data from 2017. To assess the algorithm's ability to predict future safety label changes at the approval date (described in detail in "Classifier" below), the difference between drug label data from 2017 and the label at approval (2008–2013) was used.
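As a concrete illustration of this encoding, the sketch below builds a binary drug-by-adverse-event indicator matrix from mined term lists. The drug names, event names, and mined terms are hypothetical placeholders, not data from the study.

# Minimal sketch: encode presence/absence of each adverse event per label as 1/0.
# Drug names and mined events are hypothetical placeholders.
adverse_events <- c("Hypertension", "Febrile neutropenia", "Pancreatitis")
mined_terms <- list(
  drug_A = c("Hypertension", "Pancreatitis"),
  drug_B = c("Febrile neutropenia")
)

indicator <- t(sapply(mined_terms, function(terms) as.integer(adverse_events %in% terms)))
colnames(indicator) <- adverse_events
indicator
#>        Hypertension Febrile neutropenia Pancreatitis
#> drug_A            1                   0            1
#> drug_B            0                   1            0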
Adverse events from scientific literature
Adverse events from scientific literature were mined using Embase Biomedical Database (Elsevier B. V, Amsterdam, The Netherlands), a biomedical database covering journals and conference abstracts [25]. A team of Embase indexers manually curate all adverse events from all full-text articles and associate each adverse event with the related drug. These drugs and adverse events are documented in Emtree terms, Elsevier's controlled terminology. Therefore, each drug in Embase has hundreds to thousands of adverse events associated with it, and each adverse event-drug association has a curated reference. Adverse events reported for all comparator drugs before their respective test drug's approval date were searched for in Embase. The list of adverse events documented by Elsevier as Emtree terms for each comparator drug was exported and manually matched to MedDRA Preferred Terms.
Comparator drug duration in market
Comparator time in market was included as a feature. The longer a drug has been marketed, the more adverse events, particularly difficult to observe adverse events, are identified and evaluated for labeling. The duration in market for comparator drugs was determined from the Orange Book [26]. Drugs that were approved before 1982 have an approval date listed as "Approved Prior to Jan 1, 1982"; the duration in market for these drugs was imputed to be 36 years (1982 to 2017).
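A minimal sketch of this feature is shown below, assuming the approval dates have already been pulled from the Orange Book into a data frame; the drug rows and reference date are placeholders. Drugs listed as approved before 1982 receive the 36-year imputation described above.

# Minimal sketch: years on market for comparator drugs, with the pre-1982 imputation.
# Assumes approval dates were already extracted from the Orange Book; values are placeholders.
comparators <- data.frame(
  drug          = c("comparator_1", "comparator_2"),
  approval_date = c("Approved Prior to Jan 1, 1982", "2005-06-30"),
  stringsAsFactors = FALSE
)

reference_date <- as.Date("2017-12-31")
pre_1982 <- comparators$approval_date == "Approved Prior to Jan 1, 1982"

comparators$years_on_market <- NA_real_
comparators$years_on_market[pre_1982] <- 36  # imputed duration for pre-1982 approvals
comparators$years_on_market[!pre_1982] <- as.numeric(
  difftime(reference_date, as.Date(comparators$approval_date[!pre_1982]), units = "days")
) / 365.25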
Structural similarity
Structural similarity was included as a feature as it was hypothesized that the more structurally similar a comparator drug was to a test drug, the more likely they were to share pharmacology, including unknown secondary pharmacology that was not included in this analysis and may contribute to similar idiosyncratic reactions. Structural similarities of each test drug to its respective comparator drugs were determined using Tanimoto scores. Simplified Molecular Input Line Entry System (SMILES) structures for all test and comparator drugs were imported into the Tanimoto Matrix workflow in the KNIME Analytics Platform (version 3.3.2) [27]. Structures were then converted to MACCS 166-bit fingerprints, and structural similarity between the test drug and the respective comparator drug was determined. For biologics where similarity score was not available, − 1 was imputed as Tanimoto score.
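The similarity computation itself reduces to a ratio of shared to total "on" bits. The sketch below assumes the fingerprints are already available as 0/1 vectors (the study generated MACCS 166-bit fingerprints in KNIME; the random vectors here are toys), and applies the −1 imputation when no fingerprint exists, as for biologics.

# Minimal sketch: Tanimoto similarity between two binary fingerprints,
# with -1 imputed when either drug has no fingerprint (e.g., a biologic).
# Fingerprints are assumed to be 0/1 vectors of equal length (e.g., MACCS 166 bits).
tanimoto <- function(fp_test, fp_comp) {
  if (is.null(fp_test) || is.null(fp_comp)) return(-1)  # no structure-based score available
  shared <- sum(fp_test == 1 & fp_comp == 1)
  either <- sum(fp_test == 1 | fp_comp == 1)
  if (either == 0) return(0)
  shared / either
}

set.seed(1)
fp_a <- rbinom(166, 1, 0.3)  # toy MACCS-length fingerprints
fp_b <- rbinom(166, 1, 0.3)
tanimoto(fp_a, fp_b)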
Target similarity
Target similarity, or how closely the target profile of each comparator aligned with that of the test drug, was included as a feature as it was hypothesized that the more targets a comparator shares with a test drug, the more likely it is that a comparator and test drug share adverse events. The set of known pharmacological targets for each test drug and corresponding comparator drugs was extracted from DrugBank [24]. Target similarities of each test drug with its comparator drugs were determined using target-based cosine similarity scores. A trivalent drug-by-target matrix was then constructed such that for each drug-target pair an entry of "1" indicates drug-target activation, an entry of "-1" indicates drug-target inhibition, and an entry of "0" indicates no pharmacological activity. Cosine similarities the test drug has with its comparator drugs were then computed as follows:
$$ \mathrm{cosine}\big([\text{Test Drug}],\,[\text{Comparator Drug}]\big) = \frac{[\text{Test Drug}] \cdot [\text{Comparator Drug}]}{\lVert [\text{Test Drug}] \rVert \, \lVert [\text{Comparator Drug}] \rVert} $$
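A minimal sketch of this computation over the trivalent drug-by-target vectors is shown below; the target names and activity values are placeholders, not entries from DrugBank.

# Minimal sketch: cosine similarity between trivalent target-activity vectors
# (1 = activation, -1 = inhibition, 0 = no pharmacological activity).
# Target names and values are placeholders.
cosine_similarity <- function(x, y) {
  denom <- sqrt(sum(x^2)) * sqrt(sum(y^2))
  if (denom == 0) return(0)
  sum(x * y) / denom
}

targets    <- c("DRD2", "HTR2A", "ADRA1A")
test_drug  <- c(-1, -1,  0)  # e.g., antagonist at DRD2 and HTR2A
comparator <- c(-1,  0, -1)  # e.g., antagonist at DRD2 and ADRA1A
cosine_similarity(test_drug, comparator)
#> [1] 0.5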
Classifier
Five features were defined for each comparator drug-test drug-adverse event association: 1) presence or absence of the adverse event in the FDA drug label for the comparator drug; 2) presence or absence of the adverse event in the scientific literature for the comparator drug; 3) structural similarity between the comparator drug and the test drug; 4) target similarity between the comparator drug and the test drug; and 5) duration the comparator drug was on the market (Fig. 4), all of which are independent of each other. These features were used to train a Naïve Bayes classifier, using the presence or absence of the adverse event in the 2017 FDA drug label for the test drug as the training label (see section Adverse Events from FDA Drug Labels for details). Given the wide range of adverse event prevalence, we anticipated that prevalence would contribute strongly to model prediction. A Naïve Bayes classifier was therefore chosen because it takes into account both the prior probability (i.e., the prevalence of an adverse event) and the likelihood of the presence of an adverse event. All statistical calculations were conducted in R version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria) and the Naïve Bayes classifier from package e1071 was used [28] (see supplemental materials for code).
Flow diagram of experimental methods
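The paper states that the e1071 Naïve Bayes implementation in R was used; the sketch below shows one plausible way to assemble the five features and train such a classifier for a single adverse event. The data frame layout, column names, and example values are illustrative assumptions, not the authors' actual code.

# Minimal sketch: train a Naive Bayes classifier for one adverse event using the
# five features described above. Rows and column names are illustrative placeholders.
library(e1071)

train_df <- data.frame(
  ae_on_comparator_label = factor(c(1, 0, 1, 1, 0)),      # feature 1: comparator FDA label
  ae_in_literature       = factor(c(1, 0, 1, 0, 0)),      # feature 2: Embase literature
  structural_similarity  = c(0.62, 0.40, 0.85, -1, 0.51), # feature 3: Tanimoto (-1 = biologic)
  target_similarity      = c(0.90, 0.30, 1.00, 0.70, 0.45), # feature 4: cosine similarity
  years_on_market        = c(36, 12, 8, 20, 5),           # feature 5: comparator time on market
  label                  = factor(c(1, 0, 1, 1, 0))       # adverse event on test drug's 2017 label
)

model <- naiveBayes(label ~ ., data = train_df)

# Predict for a new comparator-test drug pair (values are placeholders).
new_pair <- data.frame(
  ae_on_comparator_label = factor(1, levels = c(0, 1)),
  ae_in_literature       = factor(0, levels = c(0, 1)),
  structural_similarity  = 0.7,
  target_similarity      = 0.8,
  years_on_market        = 15
)
predict(model, new_pair, type = "class")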
Due to the limited number of drugs available for testing and the high dimensionality of prediction (135 adverse events), 10,000 bootstrapping steps were conducted: at each iteration a random set of 44 drugs was selected to train the Naïve Bayes classifier, leaving 10 drugs for testing (i.e., 10,000 of the \( \binom{54}{44} \) possible train/test splits). A prediction was made by each comparator drug-test drug association for the adverse event of interest. Since a single test drug can have multiple comparator drugs, there may therefore be multiple predictions for one test drug for each adverse event. To reconcile these, the adverse event was considered a positive prediction for the test drug if the percentage of comparator drug-test drug combinations predicting the adverse event exceeded a predefined threshold. Performance was calculated while varying this threshold (0, 10, 30, 50, 60, 70, 90%) to identify the optimum value.
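A minimal sketch of one bootstrap split and the vote-threshold aggregation is shown below; the aggregation helper, its name, and the example votes are illustrative assumptions rather than the study's code.

# Minimal sketch: one bootstrap train/test split and the vote-threshold rule that
# turns per-comparator predictions into a single call per test drug.
set.seed(123)
n_drugs   <- 54
train_idx <- sample(n_drugs, 44)                    # 44 drugs used for training
test_idx  <- setdiff(seq_len(n_drugs), train_idx)   # 10 held-out test drugs
# ... train the Naive Bayes model on pairs whose test drug is in train_idx ...

aggregate_votes <- function(pair_predictions, threshold = 0.70) {
  # pair_predictions: 0/1 predictions from each comparator pair of one test drug
  as.integer(mean(pair_predictions) >= threshold)
}

# Example: 5 comparator pairs, 4 of which predict the adverse event (80% >= 70%).
aggregate_votes(c(1, 1, 0, 1, 1))
#> [1] 1
# Repeating the split-train-predict cycle 10,000 times yields the bootstrap
# distribution of each performance metric.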
As 10,000 bootstrapping steps were performed, the most frequent value (mode), median, 25th and 75th quantiles for each of the performance metrics (sensitivity, specificity, positive predictive value and negative predictive value) were calculated to assess the predictive ability for each adverse event. Performance metric histograms for each adverse event are provided in the supplemental materials. We chose to optimize positive predictive value, as false positives may be more costly in terms of additional studies and regulatory review compared to false negatives. Adverse events with a distribution for positive predictive value that was left-skewed (defined as a mode positive predictive value > 75%) were considered well-predicted.
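The sketch below shows one way to summarize the bootstrap distribution of a metric and apply the left-skew rule (mode positive predictive value > 75%); the 10,000 values are simulated placeholders, not study results.

# Minimal sketch: summarize bootstrapped positive predictive values for one adverse
# event and flag it as well-predicted if the mode exceeds 75%. Values are simulated.
set.seed(42)
ppv <- sample(c(100, 50, 0), size = 10000, replace = TRUE, prob = c(0.6, 0.25, 0.15))

metric_mode <- function(x) {
  tab <- table(x)
  as.numeric(names(tab)[which.max(tab)])  # most frequent value
}

summary_stats <- c(
  mode   = metric_mode(ppv),
  median = median(ppv),
  q25    = quantile(ppv, 0.25),
  q75    = quantile(ppv, 0.75)
)
well_predicted <- metric_mode(ppv) > 75   # left-skew rule used in the study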
Leave-one-out cross validation was performed to evaluate safety label changes. Predictions were evaluated as follows:
$$ \%\ \text{of FDA-issued safety label changes that were predicted}=\frac{\#\ \text{of drug-AE combos that changed from negative to positive between approval and 2017 that were predicted positive}}{\#\ \text{of drug-AE combos that changed from negative to positive between approval and 2017}} $$
$$ \%\ \text{of predicted safety label changes that were also FDA-issued}=\frac{\#\ \text{of drug-AE combos that changed from negative to positive between approval and 2017 that were predicted positive}}{\#\ \text{of drug-AE combos that were negative at approval that were predicted positive}} $$
Evaluation of false positive predictions
Positive predictions that were made by the Naïve Bayes classifier that were not on the respective 2017 drug label were classified as "false positives". To further evaluate if these predictions may be early signals not yet on the label, the case count and Proportional Reporting Ratio (PRR) were identified for each drug-adverse event pair from the FDA Adverse Event Reporting System using OpenFDA [29, 30]. Data from June 30, 1989 to January 1, 2018 was used in this analysis.
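The PRR itself follows the standard contingency-table definition of Evans et al.; a minimal sketch with invented report counts:

```python
def prr(a, b, c, d):
    """Proportional Reporting Ratio for a drug-adverse event pair.
    a: reports with the drug and the event      b: reports with the drug, other events
    c: reports of the event for all other drugs d: all other reports
    """
    return (a / (a + b)) / (c / (c + d))

print(prr(a=20, b=980, c=200, d=98800))  # ~9.9: the event is reported ~10x more often with this drug
```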
The datasets supporting the conclusions of this article are included within the article (and its additional files).
ATC:
Anatomical Therapeutic Chemical
MedDRA:
Medical Dictionary for Regulatory Activities
FAERS:
FDA Adverse Event Reporting System
PRR:
Proportional Reporting Ratio
FDA:
Food and Drug Administration
SMILES:
Simplified Molecular Input Line Entry System
Woodcock J. FDA Voice [Internet]2018. Available from: https://blogs.fda.gov/fdavoice/index.php/2018/06/fda-proposes-process-modernization-to-support-new-drug-development/ [cited 2018].
Ferrell PB Jr, McLeod HL. Carbamazepine, HLA-B*1502 and risk of Stevens-Johnson syndrome and toxic epidermal necrolysis: US FDA recommendations. Pharmacogenomics. 2008;9(10):1543–6.
Yang M, Kiang M, Shang W. Filtering big data from social media--building an early warning system for adverse drug reactions. J Biomed Inform. 2015;54:230–40.
Ball R, Robb M, Anderson SA, Dal PG. The FDA's sentinel initiative--a comprehensive approach to medical product surveillance. Clin Pharmacol Ther. 2016;99(3):265–8.
FDA. FAERS Public Dashboard [Available from: https://www.fda.gov/Drugs/GuidanceComplianceRegulatoryInformation/Surveillance/AdverseDrugEffects/ucm070093.htm].
Zhang L, Zhang J, Shea K, Xu L, Tobin G, Knapton A, et al. Autophagy in pancreatic acinar cells in caerulein-treated mice: immunolocalization of related proteins and their potential as markers of pancreatitis. Toxicol Pathol. 2014;42(2):435–57.
Vilar S, Harpaz R, Chase HS, Costanzi S, Rabadan R, Friedman C. Facilitating adverse drug event detection in pharmacovigilance databases using molecular structure similarity: application to rhabdomyolysis. J Am Med Inform Assoc. 2011;18(Suppl 1):i73–80.
Pouliot Y, Chiang AP, Butte AJ. Predicting adverse drug reactions using publicly available PubChem BioAssay data. Clin Pharmacol Ther. 2011;90(1):90–9.
Gurulingappa H, Toldo L, Rajput AM, Kors JA, Taweel A, Tayrouz Y. Automatic detection of adverse events to predict drug label changes using text and data mining techniques. Pharmacoepidemiol Drug Saf. 2013;22(11):1189–94.
Zhao J, Henriksson A, Asker L, Bostrom H. Predictive modeling of structured electronic health records for adverse drug event detection. BMC Med Inform Decision Making. 2015;15(Suppl 4):S1.
Schuemie MJ, Coloma PM, Straatman H, Herings RM, Trifiro G, Matthews JN, et al. Using electronic health care records for drug safety signal detection: a comparative evaluation of statistical methods. Med Care. 2012;50(10):890–7.
Strickland J, Zang Q, Paris M, Lehmann DM, Allen D, Choksi N, et al. Multivariate models for prediction of human skin sensitization hazard. J Appl Toxicol. 2017;37(3):347–60.
Xu R, Wang Q. Automatic signal extraction, prioritizing and filtering approaches in detecting post-marketing cardiovascular events associated with targeted cancer drugs from the FDA adverse event reporting system (FAERS). J Biomed Inform. 2014;47:171–7.
Frid AA, Matthews EJ. Prediction of drug-related cardiac adverse effects in humans--B: use of QSAR programs for early detection of drug-induced cardiac toxicities. Regul Toxicol Pharmacol. 2010;56(3):276–89.
Schotland P, Racz R, Jackson D, Levin R, Strauss DG, Burkhart K. Target-adverse event profiles to augment Pharmacovigilance: a pilot study with six new molecular entities. CPT Pharmacometrics Syst Pharmacol. 2018;7(12):809–17.
ATC/DDD Index 2018: World Health Organization. [Available from: https://www.whocc.no/atc_ddd_index/].
Roth BL, Lopez E, Patel S, Kroeze WK. The Multiplicity of Serotonin Receptors: Uselessly Diverse Molecules or an Embarrassment of Riches? Neuroscientist. 2000;6(4):252–62.
Gaulton A, Hersey A, Nowotka M, Bento AP, Chambers J, Mendez D, et al. The ChEMBL database in 2017. Nucleic Acids Res. 2017;45(D1):D945–D54.
Szarfman A, Tonning JM, Doraiswamy PM. Pharmacovigilance in the 21st century: new systematic tools for an old problem. Pharmacotherapy. 2004;24(9):1099–104.
Horsfall JT, Sprague JE. The pharmacology and toxicology of the 'Holy Trinity'. Basic Clin Pharmacol Toxicol. 2017;120(2):115–9.
Lemos JC, Friend DM, Kaplan AR, Shin JH, Rubinstein M, Kravitz AV, et al. Enhanced GABA transmission drives Bradykinesia following loss of dopamine D2 receptor signaling. Neuron. 2016;90(4):824–38.
MedDRA: Medical Dictionary for Regulatory Activities [Available from: https://www.meddra.org/].
Downing NS, Shah ND, Aminawung JA, Pease AM, Zeitoun JD, Krumholz HM, et al. Postmarket safety events among novel therapeutics approved by the US Food and Drug Administration between 2001 and 2010. JAMA. 2017;317(18):1854–63.
Wishart DS, Feunang YD, Guo AC, Lo EJ, Marcu A, Grant JR, et al. DrugBank 5.0: a major update to the DrugBank database for 2018. Nucleic Acids Res. 2018;46(D1):D1074–D82.
Elsevier. [Available from: https://www.embase.com/#search].
FDA. Approved Drug Products with Therapeutic Equivalence Evaluations. 38th ed; 2018.
Berthold MRCN, Dill F, Gabriel TR, Kotter T, Meinl T, Ohl P, Sieb C, Thiel K, Wiswedel B, KNIME. The Konstanz Information Miner. In: BH PC, Schmidt-Thieme L, Decker R, editors. Data Analysis, Machine Learning and Applications. Berlin: Springer; 2008. p. 319–26.
Meyer D, Hornik K, Weingessel A, Leisch F. e1071: Misc Functions of the Department of Statistics, Probability Theory Group (Formerly E1071). R package version 1.6–8 ed2017. https://cran.r-project.org/web/packages/e1071/index.html.
Kass-Hout TA, Xu Z, Mohebbi M, Nelsen H, Baker A, Levine J, et al. OpenFDA: an innovative platform providing access to a wealth of FDA's publicly available data. J Am Med Inform Assoc. 2016;23(3):596–600.
Evans SJ, Waller PC, Davis S. Use of proportional reporting ratios (PRRs) for signal generation from spontaneous adverse drug reaction reports. Pharmacoepidemiol Drug Saf. 2001;10(6):483–6.
The authors thank Jeffry Florian and Anuradha Ramamoorthy for their helpful feedback.
This article reflects the views of the authors and should not be construed to represent the FDA's views or policies. The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services.
This project was supported in part by a research fellowship from the Oak Ridge Institute for Science and Education through an interagency agreement between the Department of Energy and the Food and Drug Administration (FDA). The funding body played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
Division of Applied Regulatory Science, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD, 20993, USA
Chathuri Daluwatte, David G. Strauss, Keith K. Burkhart & Rebecca Racz
Office of New Drugs, Food and Drug Administration, Silver Spring, MD, USA
Peter Schotland
Chathuri Daluwatte
David G. Strauss
Keith K. Burkhart
Rebecca Racz
CD led classifier design and implementation. CD, PS, and RR contributed data. CD and RR led data analysis and interpretation. CD, PS, DGS, KKB, and RR participated in study design and in the writing and editing of this manuscript. The author(s) read and approved the final manuscript.
Correspondence to Rebecca Racz.
RR's spouse is an employee of AstraZeneca. All other authors have no competing interests to declare.
"Supplemental Materials" contains histograms of the performance for each adverse event; "Supplemental Table 1" contains all targets represented in this study.
Contains Naïve Bayes code, files necessary to run code, and output files obtained to perform analysis described in the paper.
Daluwatte, C., Schotland, P., Strauss, D.G. et al. Predicting potential adverse events using safety data from marketed drugs. BMC Bioinformatics 21, 163 (2020). https://doi.org/10.1186/s12859-020-3509-7
Accepted: 22 April 2020
Adverse reaction
Machine Learning and Artificial Intelligence in Bioinformatics
|
CommonCrawl
|
Could a human beat light in a footrace?
Is there anything preventing the following experiment from being done right now?
Imagine that a human ran from point 'a' to point 'b' while light particles that reflected off a clock moved through a special medium from point 'a' to point 'b' as well. Could a human arrive at point 'b' before light? (As for the special medium, I'm imagining something like a complex maze of mirrors submerged in a very dense material.)
If so, if the human waited at the finish line to view light arriving in second place, would they see the time that the race began on the clock?
optics speed-of-light refraction
Qmechanic♦
Josh
$\begingroup$ It takes roughly a little over 8 minutes for light to travel from the sun to the earth. So, if the light has to travel through a maze of length equivalent to 150 million kilometres then, obviously, it would be quite plausible to devise a winnable race. $\endgroup$ – Strawberry Jan 11 '18 at 22:55
$\begingroup$ Note that light has been slowed down to almost human-running speeds: abcnews.go.com/Technology/story?id=99111&page=1. I can't go that fast on a bike, but there are many who can, and it's not too far away from the top speed of 100m sprinters like Usain Bolt. $\endgroup$ – Arthur Jan 12 '18 at 12:10
$\begingroup$ My first thought in reading this was that of course the human would win. Light doesn't have feet. $\endgroup$ – Engineer Toast Jan 12 '18 at 16:28
$\begingroup$ "Scientists 'freeze' light for an entire minute" $\endgroup$ – Richard Jan 12 '18 at 19:50
$\begingroup$ @EngineerToast - If that's the case, then how can someone be "light on their feet"? $\endgroup$ – Richard Jan 13 '18 at 12:30
No physical laws are being broken in this thought experiment. If you are concerned with the relativistic requirement "nothing can go faster than the speed of light", that only applies to the speed light goes in a vacuum: $c = 3 \times 10^8$ m/s. The reference to light in that relativity postulate makes it sound like if you could only find a situation where you slowed light down, you could break the laws of physics; not so. A better statement of the postulate would be "nothing can go faster than $3 \times 10^8$ m/s, which happens to also be the speed light travels at in a vacuum." I don't see anyone going faster than $3 \times 10^8$ m/s in this thought experiment, so no physics violations.
As for what the human at the end of the race sees:
He sees a blinding blue light from all the Cherenkov radiation produced by even the slightest charged particle passing through the medium. And perhaps the time at the start of the race. It's exactly what you would imagine, since we are talking non-relativistic speeds. What an anti-climactic answer, eh?
stafusa
cms
$\begingroup$ How does the Cherenkov radiation come into this experiment? No charged particles are mentioned. Any why would this radiation be blinding? $\endgroup$ – sammy gerbil Jan 12 '18 at 1:00
$\begingroup$ @sammygerbil Cherenkov radiation happens when charged particles (which are all over the place) travel faster than the speed of light in the medium. It's a characteristic of nuclear power plants because there are a lot of charged particles there, moving quite close to c, and there are a lot of media that are used to slow these particles. The reason the radiation would be blinding is that the more you lower the speed of light in the medium, the more radiation you're going to get (because more particles "fit"), and lowering the speed of light to ~10 m/s; well, that's lowering it a lot. $\endgroup$ – Williham Totland Jan 12 '18 at 1:49
$\begingroup$ @WillihamTotland, from the diagram and the description, I wasn't imagining at all that the human had to race through the same medium as the light. That seems to be answering a different question. $\endgroup$ – Wildcard Jan 12 '18 at 3:03
$\begingroup$ @Wildcard where does WillihamTotland imply that the human would also be in the medium? The Cherenkov radiation will be emitted in the medium, eventually reach the edge, and then travel through the air as usual in a big bright flash, reaching any human observers. $\endgroup$ – OrangeDog Jan 12 '18 at 11:39
$\begingroup$ @cms: Are you sure the light would be visible as blue outside the medium? And would it be that bright? Consider conservation of momentum for this: A particle which causes Cherenkov light to be created, loses some of its energy. If it is very slow, it doesn't have much energy to begin with, so the photons created cannot have much energy, or there are only a few of them. In effect, not much will be seen outside. $\endgroup$ – M.Herzkamp Jan 15 '18 at 11:19
There is a concept of "slow light" which is looking at light pulses whose group velocity propagates very slowly. This is slightly different than your clock example, but close enough that you might be interested in it.
In 1998, Danish physicist Lene Vestergaard Hau led a combined team from Harvard University and the Rowland Institute for Science which succeeded in slowing a beam of light to about 17 meters per second...
Usain Bolt can run roughly 12m/s, so we are not all that far off.
(Of course, we are playing some tremendous games with the light beams while doing these sorts of slow light games. Whether this is actually applicable to your specific thought experiment involving a clock depends on what aspects of the experiment you felt were important)
Cort Ammon
$\begingroup$ You should have read the rest of that sentence and the next in that Wikipedia article: "researchers at UC Berkeley slowed the speed of light traveling through a semiconductor to 9.7 kilometers per second in 2004. Hau and her colleagues later succeeded in stopping light completely...". Even I can beat light that is standing still. $\endgroup$ – Paul Sinclair Jan 12 '18 at 15:01
$\begingroup$ @PaulSinclair I chose not to reference stopping light completely. I felt the slowed light experiment was more likely to lead the OP to research what it meant to slow light than stopping light (which was more likely to be deemed "cheating," even if it used the same techniques) $\endgroup$ – Cort Ammon Jan 12 '18 at 15:21
$\begingroup$ 2005 - phys.org/news/2005-04-optical-frozen.html $\endgroup$ – WernerCD Jan 12 '18 at 15:23
Yes, it is theoretically possible.
For example, you could use two parallel, perfectly reflecting mirrors of length $L$, where $L$ is the distance between point A and point B. Let the distance between the two mirrors be $d$.
Assuming that the ray of light enters the two mirror by hitting one of them close to point A at an angle $\theta$, it will be reflected
$$N \approx \left ( \frac{L}{d \tan \theta} \right)$$
times before arriving at point B, covering a distance
$$l = N \frac d {\cos \theta} \approx \left ( \frac{L}{d \tan \theta} \right) \frac d {\cos \theta} = \frac L {\sin \theta}$$
in the process. Notice that the result does not (somewhat surprisingly) depend on $d$.
Therefore, the time needed to go from point A to point B for the ray of light is
$$t=\frac l c \approx \frac L {c \sin \theta}$$
You can therefore define an "effective speed" $v_e$ for the ray,
$$v_e \equiv c \sin \theta$$
If the speed of a human is $v_h$, the human will be faster than the light ray if
$$v_h > v_e \ \Rightarrow \ \sin \theta < \frac{v_h} c$$
The record speed for a running human (*) is $44.72$ km/h (Usain Bolt, 2009). The speed of light in a vacuum is $1.08 \cdot 10^9$ km/h. You get therefore the condition
$$\sin \theta < 4.14 \cdot 10^{-8}$$
You can see therefore that this is not very easy to realize in practice (and we are neglecting refraction, absorption, scattering, surface roughness etc.).
You also can repeat the calculation assuming that there is a material with refractive index $n$ between the two mirrors.
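A rough numerical check of these estimates (plain Python; the running speed is the record quoted above, and the last line follows the suggestion of filling the gap between the mirrors with a medium of refractive index n):

```python
import math

c   = 2.998e8        # speed of light in vacuum, m/s
v_h = 44.72 / 3.6    # record human running speed, ~12.4 m/s

def winning_angle(n=1.0):
    """Largest mirror angle theta for which the runner still wins,
    from the condition v_h > (c/n) * sin(theta)."""
    return math.asin(v_h * n / c)

print(winning_angle())      # ~4.1e-8 rad with vacuum (or air) between the mirrors
print(winning_angle(1.5))   # ~6.2e-8 rad with glass: a medium helps, but not much
```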
(*) I want to be generous.
valerio
$\begingroup$ what if we make a complicated pipe and make light travel in it many times , then it can be done $\endgroup$ – Sahil Chadha Jan 11 '18 at 18:21
$\begingroup$ A light second that fits in a shoebox has been done with coiled fiber optic cable. $\endgroup$ – Joshua Jan 11 '18 at 22:47
$\begingroup$ @SahilChadha What if we make a wall, so the light can't finish the race? $\endgroup$ – Chris♦ Jan 13 '18 at 0:10
$\begingroup$ @Joshua Less than a light millisecond, not a full light second: the IEX magic shoe box $\endgroup$ – Miles Jan 14 '18 at 5:50
It is of course possible not just for a human, but for anything "moving" to beat light moving in a given medium. It is impossible to beat $c=3\cdot10^8\ \mathrm{m/s}$, but not the velocity of light in a given medium.
In the proposed experiment, the man arriving at the end will see time exactly the same way as an outside observer; it does not change the notion of Newtonian time.
There is a (natural) misunderstanding about the velocity of light (or of electromagnetic waves): it is always $c=3\cdot10^8\ \mathrm{m/s}$. What happens in other media is that light propagating among the atoms/molecules is constantly absorbed and re-emitted by the electrons, giving the effect of "slowing down" the net velocity of the light in that medium. But between those absorption and emission events, the light still travels at $c=3\cdot10^8\ \mathrm{m/s}$, as required by relativity.
Rodrigo Fontana
$\begingroup$ Look again. The runner doesn't travel through the same medium as the light does. $\endgroup$ – Wildcard Jan 12 '18 at 3:04
$\begingroup$ @Wildcard And he never claims that the runner does? $\endgroup$ – Graipher Jan 12 '18 at 7:30
The dense material would diffuse the light so if you got it dense enough you probably could not read the clock.
In theory if you had enough perfect mirrors you could see the start time.
paparazzo
$\begingroup$ No, refraction is a function of face curvature and/or nonuniform index of refraction (see "GRIN" lens). $\endgroup$ – Carl Witthoft Jan 12 '18 at 13:46
$\begingroup$ One could imagine a complex maze of mirrors simply in air: if the mirrors are ideal, there does not seem to be any fundamental problem with getting a perfect image of the source at the destination. Using a dense material is only "convenient" in that it lets you construct a maze for which the total distance traveled by the light is less than the required distance in air alone. $\endgroup$ – wchargin Jan 15 '18 at 8:51
To focus on just one aspect of the answer:
The medium would have to do the majority of slowing down - after too many mirrors you increase the distance such that the clock appears too small to be legible. If you used slightly convex mirrors to magnify it you'd end up shooting nearly all the light outside the medium, and it would be too dim to see. If you increased the brightness to accommodate that loss, you'd vaporize everything within the initial beam of light.
So while the rest of your thought experiment doesn't pose a problem, there are certain physics limitations which would make it so the runner couldn't see the clock from the past.
Adam Davis
I identify 3 questions. The answers are:
First is yes, a human could beat "light" in a timed competition, if the "right" conditions are met. If the light is deflected and sent on a "long enough" path, the light will arrive later than the human at the finish line.
Second is also yes, if instead of the clock face, a flash of laser light is used, then it is doable (see wikipedia Lunar Laser Ranging experiment).
And Third, if a 1 second "flash" is used (instead of clock face), then the human will see a 1 second flash, some time after getting to the finish line.
As an example, let's use: a 10 m distance from start to finish, a human who can run 10 m/s, and a laser flash that is "bounced" off the moon. The human starts running when he sees the 1 second flash. In 1 second he will get to the finish line (t = d/v), and about 1.5 seconds later (2.5 - 1), he will see the "same" 1 second flash again.
Guill
|
CommonCrawl
|
Inflows, Outflows, and a Giant Donor in the Remarkable Recurrent Nova M31N 2008-12a? - Hubble Space Telescope Photometry of the 2015 Eruption
Darnley, M. J. and Hounsell, R. and Godon, P. and Perley, D. A. and Henze, M. and Kuin, N. P. M. and Williams, B. F. and Williams, S. C. and Bode, M. F. and Harman, D. J. and Hornoch, K. and Link, M. and Ness, J. -U. and Ribeiro, V. A. R. M. and Sion, E. M. and Shafter, A. W. and Shara, M. M. (2017) Inflows, Outflows, and a Giant Donor in the Remarkable Recurrent Nova M31N 2008-12a? - Hubble Space Telescope Photometry of the 2015 Eruption. The Astrophysical Journal, 849 (2). ISSN 0004-637X
PDF (1709 (1).10145v1)
1709_1_.10145v1.pdf - Accepted Version
Official URL: https://doi.org/10.3847/1538-4357/aa9062
The recurrent nova M31N 2008-12a experiences annual eruptions, contains a near-Chandrasekhar mass white dwarf, and has the largest mass accretion rate in any nova system. In this paper, we present Hubble Space Telescope (HST) WFC3/UVIS photometry of the late decline of the 2015 eruption. We couple these new data with archival HST observations of the quiescent system and Keck spectroscopy of the 2014 eruption. The late-time photometry reveals a rapid decline to a minimum luminosity state, before a possible recovery / re-brightening in the run-up to the next eruption. Comparison with accretion disk models supports the survival of the accretion disk during the eruptions, and uncovers a quiescent disk mass accretion rate of the order of $10^{-6}\,M_\odot\,\mathrm{yr}^{-1}$, which may rise beyond $10^{-5}\,M_\odot\,\mathrm{yr}^{-1}$ during the super-soft source phase - both of which could be problematic for a number of well-established nova eruption models. Such large accretion rates, close to the Eddington limit, might be expected to be accompanied by additional mass loss from the disk through a wind and even collimated outflows. The archival HST observations, combined with the disk modeling, provide the first constraints on the mass donor; $L_\mathrm{donor}=103^{+12}_{-11}\,L_\odot$, $R_\mathrm{donor}=14.14^{+0.46}_{-0.47}\,R_\odot$, and $T_\mathrm{eff, donor}=4890\pm110$ K, which may be consistent with an irradiated M31 red-clump star. Such a donor would require a system orbital period $\gtrsim5$ days. Our updated analysis predicts that the M31N 2008-12a WD could reach the Chandrasekhar mass in
The Astrophysical Journal
This is an author-created, un-copyedited version of an article accepted for publication/published in The Astrophysical Journal. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at doi: 10.3847/1538-4357/aa9062
|
CommonCrawl
|
Lecture 36 - Second Law of Thermodynamics
Stony Brook Physics phy141kk:lectures
The second law of thermodynamics
The first law relates heat energy, work and the internal thermal energy of a system, and is essentially a statement of conservation of energy.
The second law of thermodynamics adds a restriction on the direction of thermodynamic processes.
One of the earliest statements of the second law, due to Rudolf Clausius is that:
Heat cannot spontaneously flow from a cold object to a hot one, whereas the reverse, a spontaneous flow of heat from a hot object to a cold one, is possible.
We should note that the first law
$\Delta E_{int}=Q-W$
would not prohibit such a process, so the second law adds something fundamentally new to our understanding of thermodynamics.
Heat Engine
A heat engine is a system for turning the temperature difference between two thermal reservoirs into mechanical work. We will consider heat engines that operate in a continuous cycle, which means that the system always returns to its initial state at the end of the cycle and there is no change in the internal energy.
The first law tells us that in this case
$Q_{H}=W+Q_{L}$
In writing this equation we have adopted a new sign convention, where all the heats and the work done are positive.
Some of the earliest engines were steam engines, though steam engines should not be thought of as historical relics: about 80% of the world's electricity comes from steam turbines. The earliest "steam engine", the Aeolipile, does not do very much work. To get work out of an engine one needs to design an efficient heat engine cycle.
The efficiency, $e$, of an engine is defined as the ratio of the work we get from the engine $W$ to the input heat $Q_{H}$
$e=\frac{W}{Q_{H}}$
Since $Q_{H}=W+Q_{L}$, this can be written as
$e=\frac{W}{Q_{H}}=\frac{Q_{H}-Q_{L}}{Q_{H}}=1-\frac{Q_{L}}{Q_{H}}$
The lower the waste heat, the more efficient the engine; however, the second law of thermodynamics prevents $Q_{L}$ from being zero. Kelvin in fact stated the second law explicitly in these terms:
No device is possible whose sole effect is to transform a given amount of heat directly in to work.
Carnot Cycle
To find the hypothetical maximum efficiency of a heat engine we can consider a cycle called the Carnot Cycle, first proposed by Sadi Carnot.
The Carnot cycle is based entirely on reversible processes. This is not achievable in reality: it would require each process to be executed infinitely slowly so that it can be considered a continuous progression through equilibrium states. We can, however, consider the Carnot cycle as a theoretical ideal which can be approached.
There are 4 processes in the Carnot cycle, which we will consider as in terms of the expansion and compression of an ideal gas.
From A to B. An isothermal expansion, in which an amount of heat $Q_{H}$ is added to the gas.
From B to C. An adiabatic expansion, in which no heat is exchanged and the temperature of the gas is lowered.
From C to D. An isothermal compression, in which an amount of heat $Q_{L}$ is removed from the gas.
From D to A. An adiabatic compression, returning the system to its original high-temperature state.
Efficiency of the Carnot Cycle
The work done in the first isothermal process is
$W_{AB}=nRT_{H}\ln\frac{V_{B}}{V_{A}}$
and as the process is isothermal this means that the heat added is equal to the work done.
$Q_{H}=nRT_{H}\ln\frac{V_{B}}{V_{A}}$
The heat lost in the second isothermal process will be
$Q_{L}=nRT_{L}\ln\frac{V_{C}}{V_{D}}$
For the adiabatic processes
$P_{B}V_{B}^{\gamma}=P_{C}V_{C}^{\gamma}$ and $P_{D}V_{D}^{\gamma}=P_{A}V_{A}^{\gamma}$
and from the ideal gas law
$\frac{P_{B}V_{B}}{T_{H}}=\frac{P_{C}V_{C}}{T_{L}}$ and $\frac{P_{D}V_{D}}{T_{L}}=\frac{P_{A}V_{A}}{T_{H}}$
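Explicitly, substituting the ideal gas law into the adiabatic relations (the common factor $nR$ cancels) gives
$T_{H}V_{B}^{\gamma - 1}=T_{L}V_{C}^{\gamma - 1}$ and $T_{H}V_{A}^{\gamma - 1}=T_{L}V_{D}^{\gamma - 1}$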
These equations can be used to eliminate the temperatures and show that
$\frac{V_{B}}{V_{A}}=\frac{V_{C}}{V_{D}}$
which can be used with the equations for the isothermal processes to show that
$\frac{Q_{L}}{Q_{H}}=\frac{T_{L}}{T_{H}}$
making the efficiency
$e=1-\frac{T_{L}}{T_{H}}$
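As a quick numerical illustration (the temperatures are invented, but typical of a steam turbine plant):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency for reservoir temperatures in kelvin."""
    return 1 - t_cold / t_hot

# e.g. steam at ~800 K rejecting heat to a ~300 K environment
print(carnot_efficiency(800, 300))  # 0.625 -> no engine between these reservoirs can exceed 62.5%
```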
Carnot's theorem
Carnot's theorem generalizes the result we just derived by stating:
All reversible engines operating between two constant temperatures $T_{H}$ and $T_{L}$ have the same efficiency.
$e_{ideal}=1-\frac{T_{L}}{T_{H}}$
Any irreversible engine operating between the same two fixed temperatures will have a lower efficiency.
Otto Cycle
A four stroke car engine runs on a cycle that can be approximated by the Otto Cycle
In this cycle neither AB nor CD is isothermal; both are adiabatic processes. BC and DA can be considered to be isovolumetric.
As the heat input and exhaust cycles occur at constant volume $Q_{H}=nc_{V,m}(T_{C}-T_{B})$ and $Q_{L}=nc_{V,m}(T_{D}-T_{A})$
and the efficiency of an Otto cycle is
$e=1-\frac{Q_{L}}{Q_{H}}=1-\frac{T_{D}-T_{A}}{T_{C}-T_{B}}$
Using the fact that for an adiabatic process $PV^{\gamma}=\mathrm{constant}$ and for an ideal gas $P=\frac{nRT}{V}$, it can be shown that
$T_{A}V_{A}^{\gamma - 1}=T_{B}V_{B}^{\gamma - 1}$ and $T_{C}V_{C}^{\gamma - 1}=T_{D}V_{D}^{\gamma - 1}$
which combined with the fact that $V_{C}=V_{B}$ and $V_{A}=V_{D}$ gives (after some manipulation!)
$e=1-(\frac{V_{A}}{V_{B}})^{1-\gamma}$
Refrigerators
We can produce refrigeration only by doing work; to do otherwise would violate the second law of thermodynamics. We can achieve refrigeration by going around one of the cycles we discussed earlier in the opposite direction.
The coefficient of performance, $COP$, of a refrigerator is defined as the heat removed $Q_{L}$ divided by the work done $W$. As before we apply the first law $Q_{L}+W=Q_{H}$ so
$COP=\frac{Q_{L}}{W}=\frac{Q_{L}}{Q_{H}-Q_{L}}$
As with a heat engine we can consider the Carnot cycle to be the ideal case, which means that for an ideal refrigerator
$COP=\frac{T_{L}}{T_{H}-T_{L}}$
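A minimal numerical illustration (temperatures invented):

```python
def carnot_cop(t_hot, t_cold):
    """Ideal refrigerator coefficient of performance, temperatures in kelvin."""
    return t_cold / (t_hot - t_cold)

# e.g. a freezer compartment at 255 K in a 295 K kitchen
print(carnot_cop(295, 255))  # ~6.4 J of heat removed per joule of work, at best
```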
Methods of cooling
Most household refrigerators run on a vapor compression cycle. Let's pretend that the steps in this process can be approximated as those in the Carnot cycle, and recall that we are going around the cycle in the reverse direction to when we use it to produce work. In this approximation the stages of the cycle in which heat transfer occurs are isothermal, but in fact this is very much not the case, for a refrigerator to work we actually rely on these stages to convert the refrigerant from liquid to vapor in the evaporator (due to the $Q_{L}$ added to the refrigerant) and from vapor to liquid in the condenser (due to the $Q_{H}$ removed from the refrigerant). In the compressor we do work on the gas, in the expansion valve the gas does work (but less than the compressor does).
The same cycle is the basis of air conditioning, though in this case the heat is removed from inside the house and dumped outside.
|
CommonCrawl
|
Articles / Quantum Computing
Revisions (13)
Posted 26 Jun 2019
quantum-computing
Quantum Computation Primer - Part 2
Daniel Vaughan
26 Jun 2019 · CPOL · 18 min read
Learn the fundamentals of quantum computation. In this part we look at using gates to create quantum states.
$ \newcommand{\bra}[1]{\left< #1 \right|} \newcommand{\ket}[1]{\left| #1 \right>} \newcommand{\bk}[2]{\left< #1 \middle| #2 \right>} \newcommand{\bke}[3]{\left< #1 \middle| #2 \middle| #3 \right>} \newcommand{\mzero}[0]{\begin{bmatrix} 1 \\ 0 \end{bmatrix}} \newcommand{\mone}[0]{\begin{bmatrix} 0 \\ 1 \end{bmatrix}} \newcommand{\ostwo}[0]{\frac{1}{\sqrt{2}}} \newcommand{\mhadamard}[0]{\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}} $
Building Quantum Circuits
Representing Gates as Unitary Matrices
The Importance of Gate Reversibility
Measuring Qubits
Negating a Qubit with the Pauli-X Gate
Using the Hadamard Gate to a Create 50/50 Superposition
Generating Superpositions of all Basis States
Defining an Operator
Using a Controlled Not Gate
Converting a Truth Table to a Matrix
Creating Bell States
Generating a GHZ State
In the previous article, we explored the theory underpinning quantum computation. In this article we delve further and look at quantum gates. These are the nuts and bolts that allow you to build quantum circuits. We explore many of the gates commonly used in quantum computation, and we look at using some of these gates to create various quantum states.
If you haven't already, I recommend reading Part 1 in this series before continuing.
In quantum computing, an algorithm is implemented as a quantum circuit that consists of input and output qubits, and gates that alter the quantum states of the qubits. See Figure 1.
There are one-qubit gates and multi-qubit gates. When a measurement is performed on a qubit, its state collapses to one of its basis states: |0⟩ or |1⟩. It can then be thought of as a classical bit. This is signified by the double lines emerging from the Measurement symbol.
Figure 1. Example quantum circuit (quantum entanglement circuit)
Quantum gates are represented by unitary matrices. A matrix is unitary if its conjugate transpose is also its inverse. That is, U†U = I. In other words, if you multiply a unitary matrix by its conjugate transpose, you end up with the identity matrix. For a revision on conjugate transpose, please refer back to Part 1.
The identity matrix is a diagonal matrix comprised of all zeros apart from a diagonal of 1's, as shown:
$ I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} $
The subscript next to the '\(I\)' denotes the dimension size.
If you multiply a matrix by an identity matrix, it's a no-op. It's the same as multiplying by 1.
When you multiply a matrix by its inverse, you get the identity matrix.
An important feature of unitary matrices is that when you multiply a vector by a unitary matrix, its norm is preserved. This means that when a unitary matrix is applied to a quantum state, the sum of the probabilities remains equal to 1, and the state stays on the surface of the Bloch sphere. The norm is the length of the vector from the center of the sphere.
The dimension of the matrix representing a gate is 2^n × 2^n, where n is the number of inputs. For quantum gates, the number of inputs always equals the number of outputs.
Unlike gates in classical computing, quantum gates have to be reversible. As Gidney (2016) states, quantum gates must be reversible because quantum mechanics is reversible. While classical computers are able to discard accumulated information during processing, doing so in a quantum computer would count as a measurement; collapsing the quantum state.
To maintain the consistency of a superposition, the sum of all probabilities across all states must equal 1. No information can be lost when applying a gate.
Now, because the matrices representing gates are unitary, they are reversible. If you apply a gate U to a state |ψ⟩, you can undo it by applying U's conjugate transpose, which is its inverse, as shown:
$ U^\dagger U \ket{\psi} = \ket{\psi} $
As we learned in the previous section, if you multiply a unitary matrix by its conjugate transpose, the result is the identity matrix; effectively leaving the state unchanged.
When we compare quantum gates to classical gates, we see that gates like the classical AND and OR gates aren't as easy to implement in quantum computing. These classical gates are irreversible. They each have two inputs and only one output, which means you can't reconstruct the initial input bits from the output. Information is lost. See Figure 2.
Figure 2. Classical AND & OR gates
To implement quantum AND and OR gates, we need to make them reversible. We can do that using a controlled swap gate. We see how to do that later in the article.
Conversely, classical gates that are already reversible, such as the NOT gate, can be implemented more easily as quantum gates. See Figure 3. It's easy to see that the input to a NOT gate can be inferred from its output. If the output is 1, the input was 0; if the output is 0, the input was 1.
Figure 3. Classical Not gate
We've learned that a measurement collapses a qubit's state to one of its basis states: |0⟩ or |1⟩, and that quantum gates must be reversible. Since taking a measurement is an irreversible activity, measurements are not technically gates. Though, they are sometimes called Measurement gates.
Measurements are represented by a symbol that features a gauge. See Figure 3. The output of a measurement is a double line indicating a classical bit.
Figure 3. The Measurement circuit symbol
Let's now return to the classical NOT gate and see how it is implemented as a quantum gate.
The quantum counterpart of the classical NOT gate is the Pauli-X gate (a.k.a. NOT gate or X gate), whose matrix representation looks like this:
$ X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $
Just like a classical NOT gate negates 0 to 1, and 1 to 0, the Pauli-X gate flips the basis states as shown:
$ \begin{align} X \ket{0} &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \mzero = \mone = \ket{1} \\ X \ket{1} &= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \mone = \mzero = \ket{0} \end{align} $
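A quick numerical check of this in plain NumPy (not a quantum SDK; column vectors stand in for the kets):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

print(X @ ket0)                                   # [0 1] -> |1>
print(X @ ket1)                                   # [1 0] -> |0>
print(np.allclose(X.conj().T @ X, np.eye(2)))     # True: X is unitary, hence reversible
```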
The Pauli-X gate rotates a qubit on the Bloch sphere around the x-axis by π radians (180°).
The Pauli-X gate quantum circuit symbol is shown in Figure 4.
Figure 4. Circuit Symbol for Pauli-X gate
There are two other Pauli gates: the Pauli-Y gate and the Pauli-Z gate. Each performs an antipodal rotation on the Bloch sphere—around the axis specified in the name. We explore them later.
When starting with a basis state |0⟩ or |1⟩, its common to transform it into an equal probability superposition. That is, to one of the following two states:
$ \begin{align} \ket{+} &= \frac{\ket{0} \color{red}{+} \ket{1}}{\sqrt{2}} \end{align} $
$ \begin{align} \ket{-} &= \frac{\ket{0} \color{red}{-} \ket{1}}{\sqrt{2}} \end{align} $
Notice that the plus and minus symbols correspond to the sign of the amplitude of the |1⟩ state.
|+⟩ and |-⟩ are a common starting point for more complex state manipulations. |+⟩ and |-⟩ give equal probability that the qubit's superposition will collapse to either |0⟩ or |1⟩.
Recall that to calculate the probability of a basis state, we take its coefficient, in this case \(\frac{1}{\sqrt{2}}\), and square its conjugate. Since the number is Real (its imaginary part is 0) we don't need to worry about taking the conjugate. So the probability P(0) is calculated as follows:
$ P(0) = \left \vert \frac{1}{\sqrt{2}} \right \vert^2 = \frac{1}{2} $
and because the sum of the probabilities must equal 1, we can subtract P(0) from 1 to find P(1):
$ P(1) = 1 - P(0) = \frac{1}{2} $
But how do we transform |0⟩ or |1⟩ into |+⟩ or |-⟩? For that we turn to the Hadamard gate (a.k.a., H gate).
The Hadamard gate shows up everywhere in quantum computing because it allows you to transform a qubit's |0⟩ or |1⟩ state into a superposition with equally split probability amplitudes.
The Hadamard gate turns |0⟩ into |+⟩ and |1⟩ into |-⟩, which is just what we need.
The matrix representation of the Hadamard gate is:
$ \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} $
If we apply the Hadamard gate to |0⟩, the result is |+⟩, as shown:
$ H \ket{0} = \ostwo \mhadamard \mzero = \ostwo \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \ostwo \left( \mzero + \mone \right) = \frac{\ket{0} + \ket{1}}{\sqrt{2}} $
Notice how we factor [1, 1]T into [1, 0]T + [0, 1]T. That gives our |0⟩ and |1⟩ basis states.
On the Bloch sphere, |+⟩ and |-⟩ are located on the equator. See figure 5. They reside halfway between the |0⟩ and |1⟩ basis states; having a probability of \(\left \lvert \frac{1}{\sqrt{2}} \right \rvert^2 = \frac{1}{2}\), or 50%.
The Hadamard gate rotates the qubit around the x-axis by π radians (180°) followed by a clockwise rotation around the y-axis by \(\frac{\pi}{2}\) radians (90°). Another way of thinking about it is as a π radians (180°) rotation around the x+z diagonal.
Figure 5. Applying the H operator to |0⟩ on the Bloch sphere
The circuit diagram symbol for the Hadamard gate is shown in Figure 6.
Figure 6. Circuit Symbol for Hadamard gate
We can increase the number of observable states by applying the Hadamard gate to more qubits. Each observable state is composed of a unique set of bits, and the set of all these observable states forms the basis states. For example, applying the Hadamard gate to both qubits of \(\ket{00}\) produces \(\frac{\ket{00} + \ket{01} + \ket{10} + \ket{11}}{2}\).
Starting with a state of n qubits |0...0n⟩, if we apply the Hadamard gate to each qubit it results in the following state:
$ \ket{\psi} = \frac{\ket{0 \ldots 000} + \ket{0 \ldots 001} + \ldots + \ket{1 \ldots 111}}{\sqrt{2^n}} $
We end up with 2n observable states.
A gate can be thought of as function. In quantum mechanics, any function that maps linearly from one value to another value in complex space is called an operator. That's why you sometimes find gates described as operators.
You can tell if a mathematical function is a linear map because it preserves addition and scalar multiplication. That means if you call the function with an argument equal to u + v, then the result would be the same as calling the function twice, once for u and once for v, and summing the results. See the following:
$ f(u + v) = f(u) + f(v) $
Also, if you call the function with an argument of c × u, then the result must equal the same as when calling the function with an argument of u and multiplying the result by c, as shown:
$ f(c \times u) = c \times f(u) $
Stated another way, if the function is a linear map, it doesn't matter whether the function is called before or after the operations of addition and multiplication (Wikipedia).
Despite its name, the Controlled-Not gate (CNOT) is analogous to the XOR gate (Exclusive OR) in classical computing. The classical XOR operation takes two inputs: the output is 1 when exactly one of the inputs is 1, and 0 when the inputs are the same.
The truth table for a classical XOR gate is as follows:
$ \begin{array}{ccc} A & B & A \oplus B \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{array} $
The quantum CNOT gate has two inputs, and thus two outputs. The target input is negated only if the control input is set to 1. If the control input is 0, the gate has no effect. The control qubit is not changed by the gate.
The CNOT gate's circuit symbol is depicted in Figure 7. (minus the text)
Figure 7. Circuit Symbol for the CNOT gate
TIP: The way I remember which input is which on the CNOT is that the target input looks like a reticle.
The CNOT gate is another name for the Controlled Pauli-X gate. We cover the Pauli gates in Part 3 of the series.
The truth table for the CNOT gate is given below. I've omitted the Dirac notation for states. 0 corresponds to |0⟩ and 1 to |1⟩.
$ \begin{array}{cc|cc} \rlap{In} & & \rlap{Out} & \\ \hline Control & Target & Control & Target \\ \hline 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 1 & 0 \end{array} $
Observe how the Target output column matches the A ⊕ B column of the classical XOR gate.
Since the CNOT gate has two inputs and two outputs, it is represented by a 4 × 4 matrix (2^inputCount × 2^outputCount), shown below:
$ CNOT = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} $
Let's look at an example of applying a CNOT gate to a |00⟩ state. The first qubit serves as the control; the second, the target.
Recall that the matrix for |00⟩ is [1, 0]^T ⊗ [1, 0]^T = [1, 0, 0, 0]^T.
$ CNOT \ket{00} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \ket{00} $
Here the first qubit is 0, which means the second, target, qubit is unchanged.
Look at what happens when we apply a CNOT to a |10⟩ state.
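With |10⟩ represented as [0, 0, 1, 0]^T, the multiplication gives:

$ CNOT \ket{10} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} = \ket{11} $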
The second qubit is flipped, just as expected.
As Yanofsky and Mannucci (2008, p. 172) describe, there's a useful technique for converting a truth table to its matrix representation. See figure 8.
You start with enough space to contain your 2^inputCount × 2^inputCount matrix. Starting at row 0 column 0, you label the columns and rows consecutively in binary, from 00 to 11 for example. You then place a 1 in a cell if the input maps to the output; 0 otherwise. Voilà, you're left with a matrix for your gate.
Figure 8. Converting a truth table to a matrix
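The same bookkeeping is easy to automate; here is a small NumPy sketch (columns indexed by the input bit-string, rows by the output, matching the convention above):

```python
import numpy as np

def truth_table_to_matrix(mapping, n_bits):
    """Build a gate matrix from a reversible truth table.
    mapping: dict of input bit-string -> output bit-string."""
    dim = 2 ** n_bits
    m = np.zeros((dim, dim), dtype=int)
    for inp, out in mapping.items():
        m[int(out, 2), int(inp, 2)] = 1   # place a 1 where the input column maps to the output row
    return m

cnot_table = {'00': '00', '01': '01', '10': '11', '11': '10'}
print(truth_table_to_matrix(cnot_table, 2))   # reproduces the CNOT matrix above
```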
When we looked at entanglement in Part 1 of this series, we touched upon the Bell state \(\ket{\Phi^+}\). There are three other Bell states, all of which are shown below:
$ \ket{\Phi^\color{red}{+}} = \frac{\ket{00} \color{red}{+} \ket{11}}{\sqrt{2}} $
$ \ket{\Phi^\color{red}{-}} = \frac{\ket{00} \color{red}{-} \ket{11}}{\sqrt{2}} $
$ \ket{\Psi^\color{red}{+}} = \frac{\ket{01} \color{red}{+} \ket{10}}{\sqrt{2}} $
$ \ket{\Psi^\color{red}{-}} = \frac{\ket{01} \color{red}{-} \ket{10}}{\sqrt{2}} $
Each state is a permutation of the + or - operation and the basis states |00⟩ & |11⟩ and |01⟩ & |10⟩
The Bell states are maximally entangled two qubit states. Maximally entangled means that they're entangled and there is a uniform probability distribution. In other words, there is an equal probability across the observable states. For the Bell states the probability is \(\left \vert \ostwo \right \vert^2 = \frac{1}{2}.\)
You use two gates in combination to create a Bell state: a Hadamard gate and a CNOT gate. The circuit diagram for this gate combination is shown in Figure 9.
The output of the Hadamard gate becomes the control input for the CNOT gate.
Figure 9. Applying H Gate and then a CNOT
Let's see how this works algebraically, starting with a |00⟩ state.
Recall that applying an Hadamard gate to the |0⟩ state places it into the |+⟩ state:
$ H \ket{0} = \frac{\ket{0} + \ket{1}}{\sqrt{2}} $
To begin, we pass the first qubit of the |00⟩ state through a Hadamard gate. The second qubit is unaffected:
$ H \ket{00} = \frac{\ket{0} + \ket{1}}{\sqrt{2}} \otimes \ket{0} $
The two qubits are still separable. We need to apply a CNOT to create entanglement. But before we do, let's use matrices to derive the same result.
$ \begin{align} H \ket{00} &= \left(\ostwo \mhadamard \mzero \right) \otimes \mzero \\[10pt] & = \ostwo \begin{bmatrix} 1 \\ 1 \end{bmatrix} \otimes \mzero = \ostwo \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \end{align} $
We see that H|00⟩ can be represented as \(\ostwo \begin{bmatrix} 1, & 0, & 1, & 0, \end{bmatrix}^T\). We can use this matrix result to give us an understanding of what is happening when we apply the CNOT gate:
$ \begin{align} CNOT(H \ket{00}) &= \ostwo \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix} = \ostwo \begin{bmatrix} 1 \\ 0 \\ 0 \\ 1 \end{bmatrix} \\[10pt] &= \frac{\mzero \otimes \mzero + \mone \otimes \mone}{\sqrt{2}} \\[10pt] &= \frac{\ket{00} + \ket{11}}{\sqrt{2}} \end{align} $
You can see that the scalar multiplier (\(\ostwo\)) commutes with everything else; moving it to the front of the expression does not change the result.
When applying the CNOT gate, the first qubit becomes the control and the second qubit, the target. When the first bit is 1, the second bit is flipped, giving us the desired state.
To generate the other Bell states, we just start with a different basis state, as listed below:
$ \ket{\Phi^+} = CNOT(H \ket{00}) \\ \ket{\Phi^-} = CNOT(H \ket{10}) \\ \ket{\Psi^+} = CNOT(H \ket{01}) \\ \ket{\Psi^-} = CNOT(H \ket{11}) $
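The whole derivation can be checked numerically with NumPy (H is applied to the first qubit via a Kronecker product with the 2 × 2 identity):

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket00 = np.kron([1, 0], [1, 0])            # |00> = [1, 0, 0, 0]^T
phi_plus = CNOT @ np.kron(H, I) @ ket00    # H on the first qubit, then CNOT
print(phi_plus)                            # [0.707 0 0 0.707] ~ (|00> + |11>)/sqrt(2)
```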
The Greenberger–Horne–Zeilinger state (GHZ state) is a three or more qubit entangled state.
The GHZ state resembles the \(\ket{\Phi^+}\) state, but while \(\ket{\Phi^+}\) has only two qubits in the same configuration, the GHZ state has three or more. The simplest form of the GHZ state is:
$ \ket{GHZ} = \frac{\ket{000} + \ket{111}}{\sqrt{2}} $
To create a GHZ state, you apply an Hadamard gate to the first qubit, and then a CNOT to every other qubit; using the first qubit as the control.
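A NumPy sketch of that recipe for three qubits (qubit 0 is taken to be the leftmost bit; the helper builds each CNOT from its truth table, as in the earlier section):

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on an n-qubit register, built from its truth table."""
    dim = 2 ** n
    m = np.zeros((dim, dim))
    for i in range(dim):
        bits = list(format(i, f'0{n}b'))           # qubit 0 = leftmost bit
        if bits[control] == '1':
            bits[target] = '1' if bits[target] == '0' else '0'
        m[int(''.join(bits), 2), i] = 1
    return m

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = np.zeros(8)
state[0] = 1                                       # |000>
state = np.kron(H, np.eye(4)) @ state              # H on qubit 0
state = cnot(3, 0, 2) @ cnot(3, 0, 1) @ state      # CNOTs from qubit 0 to qubits 1 and 2
print(state)   # amplitude ~0.707 on |000> and |111>, zero elsewhere
```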
There are numerous other notable quantum states that you can create using combinations of gates. Figuring out how to create them can be challenging and fun.
TIP: Quirk is a terrific browser-based tool for experimenting with quantum circuits. See Figure 10. It has a drag and drop interface, built-in support for many common gates, and the ability to easily share circuits by URL. Quirk was created by Craig Gidney.
Figure 10. Quirk allows you to visually design and run quantum circuits in your browser.
In this article we looked at some of the most commonly employed quantum gates. We learned that gates are represented as unitary matrices, and of the importance of gate reversibility. We saw how a Measurement is not technically a gate because it isn't reversible. We looked at putting a qubit into superposition with the Hadamard gate, and at entangling qubits with the CNOT gate. Finally, we saw how to create Bell states and the GHZ state using these gates.
In the next part, we continue our exploration of quantum gates. We see how to rotate and swap qubits, and we delve into more advanced controlled gates, which enable you to apply a control qubit to just about any gate. I hope you'll join me.
Thanks for reading and I hope you found this article useful. If so, then I'd appreciate it if you would please rate it and/or leave feedback below.
Yanofsky, Noson S., & Mannucci Mirco A. (2008). Quantum Computing for Computer Scientists
Anton, H., (2000) Elementary Linear Algebra, 8th Edition, Wiley
Nielsen, M. & Chuang, I., (2010) Quantum Computation and Quantum Information, 10th edition, Cambridge University Press, Cambridge UK
Glendinning, Ian. (2005) The Bloch Sphere, accessed on 10 June 2019, retrieved from http://www.vcpc.univie.ac.at/~ian/hotlist/qc/talks/bloch-sphere.pdf
Dutta, S., If quantum gates are reversible how can they possibly perform irreversible classical AND and OR operations? accessed on 9 June 2019, retrieved from https://quantumcomputing.stackexchange.com/questions/131/if-quantum-gates-are-reversible-how-can-they-possibly-perform-irreversible-class
Wolf, R., Quantum Computing: Lecture Notes, accessed on 9 June 2019, retrieved from https://homepages.cwi.nl/~rdewolf/qcnotes.pdf
Wikipedia, Quantum logic gate, accessed on 9 June 2019, retrieved from https://en.wikipedia.org/wiki/Quantum_logic_gate
Wikipedia, Linear map, accessed on 9 June 2019, retrieved from https://en.wikipedia.org/wiki/Linear_map
Glendinning, I., (2010) Rotations on the Bloch Sphere, accessed on 9 June 2019, retrieved from http://www.vcpc.univie.ac.at/~ian/hotlist/qc/talks/bloch-sphere-rotations.pdf
Ramanan, A., Quantum Gates and Circuits: The Crash Course, accessed on 26 June 2019, retrieved from https://blogs.msdn.microsoft.com/uk_faculty_connection/2018/02/26/quantum-gates-and-circuits-the-crash-course/
Gidney, C., Why do quantum gates have to be reversible?, accessed on 26 June 2019, retrieved from https://physics.stackexchange.com/questions/270266/why-do-quantum-gates-have-to-be-reversible
Engineer Microsoft
Daniel is a senior engineer in Technology and Research at the Office of the CTO at Microsoft, working on next generation systems.
Previously Daniel was a nine-time Microsoft MVP and co-founder of Outcoder, a Swiss software and consulting company.
Daniel is the author of Windows Phone 8 Unleashed and Windows Phone 7.5 Unleashed, both published by SAMS.
Daniel is the developer behind several acclaimed mobile apps including Surfy Browser for Android and Windows Phone. Daniel is the creator of a number of popular open-source projects, most notably Codon.
Would you like Daniel to bring value to your organisation? Please contact
Blog | Twitter
Xamarin Experts
Windows 10 Experts
My vote of 5
heuerm21-Nov-20 4:32
heuerm 21-Nov-20 4:32
thank you, but if I hadn't studied Physics this info would be too compressed. An addendum with more prose would be nice, like the no-bullshit version here: http://twistedoakstudios.com/blog/Post8887_what-quantum-computers-do-faster-with-caveats
Missing figures in Part 2
Member 796319128-Jun-19 1:36
Member 7963191 28-Jun-19 1:36
I miss the figures in Part 2, while they are clearly displayed in Part 3.
Re: Missing figures in Part 2
Daniel Vaughan28-Jun-19 6:03
Daniel Vaughan 28-Jun-19 6:03
Are all of them missing? Have you tried refreshing the page?
LinkedIn | Twitter | Blog | Microsoft MVP | Codon | Outcoder
On my phone, the images are also not being displayed. I've reported it in the Site Bugs forum.
Jan Heckman27-Jun-19 1:07
Jan Heckman 27-Jun-19 1:07
Great! Helps me fill a blank hole
Re: My vote of 5
Thanks Jan! Glad you like it.
Article Copyright 2019 by Daniel Vaughan
|
CommonCrawl
|
Spatial–temporal heterogeneity and determinants of HIV prevalence in the Mano River Union countries
Idrissa Laybohr Kamara1,2,
Liang Wang3,
Yaxin Guo2,
Shuting Huo2,
Yuanyuan Guo1,2,
Chengdong Xu4,
Yilan Liao4,
William J. Liu2,
Wei Ma1 &
George F. Gao ORCID: orcid.org/0000-0002-3869-615X1,2,3
Infectious Diseases of Poverty volume 11, Article number: 116 (2022) Cite this article
Utilizing population-based survey data in epidemiological research with a spatial perspective can integrate valuable context into the dynamics of HIV prevalence in West Africa. However, the situation in the Mano River Union (MRU) countries is largely unknown. This research aims to perform an ecological study to determine the HIV prevalence patterns in MRU.
We analyzed Demographic and Health Survey (DHS) and AIDS Indicator Survey (AIS) data on HIV prevalence in MRU from 2005 to 2020. We examined the country-specific, regional-specific and sex-specific ratios of respondents to profile the spatial–temporal heterogeneity of HIV prevalence and determine HIV hot spots. We employed Geodetector to measure the spatial stratified heterogeneity (SSH) of HIV prevalence for adult women and men. We assessed the comprehensive correct knowledge (CCK) about HIV/AIDS and HIV testing uptake by employing the Least Absolute Shrinkage and Selection Operator (LASSO) regression to predict which combinations of CCKs can scale up the ratio of HIV testing uptake with sex-specific needs.
In our analysis, we leveraged data for 158,408 respondents from 11 surveys in the MRU. From 2005–2015, Cote d'Ivoire was the hot spot for HIV prevalence with a Gi_Bin score of 3, Z-Score 8.0–10.1 and P < 0.001. From 2016 to 2020, Guinea and Sierra Leone were hot spots for HIV prevalence with a Gi_Bin score of 2, Z-Score of 3.17 and P < 0.01. The SSH confirmed the significant differences in HIV prevalence at the national level strata, with a higher level for Cote d'Ivoire compared to other countries in both sexes with q-values of 0.61 and 0.40, respectively. Our LASSO model predicted different combinations of CCKs with sex-specific needs to improve HIV testing uptake.
The spatial distribution of HIV prevalence in the MRU is skewed and the CCK about HIV/AIDS and HIV testing uptake are far below the threshold target set by UNAIDS for ending the epidemic in the sub-region. Geodetector detected statistically significant SSH within and between countries in the MRU. Our LASSO model predicted that different emphases should be implemented when popularizing the CCK about HIV/AIDS for adult women and men.
After 40 years of hard work and global cooperation in the fight against AIDS, yet, in 2020, the world recorded 37.7 million people living with HIV including 10.2 million who were not in treatment, and 680,000 deaths from AIDS-related illnesses [1, 2]. Among those not on treatments, approximately 4.1 million did not know their HIV status. On top of this, new HIV infections remain unacceptably high at 1.5 million people despite tremendous efforts by world leaders for reducing AIDS-related death and new HIV infections to less than 500,000 by the end of 2020 [3, 4]. In the African region, 880,000 people acquired HIV, and 460,000 HIV-related deaths occurred in 2020 [4]. Specifically, in the Mano River Union (MRU), there were 6200, 5300, 1400 and 5400 new HIV infections in Cote d'Ivoire, Guinea, Liberia and Sierra Leone respectively in 2020, and the sub-region has one of the lowest comprehensive correct knowledge (CCK) about AIDS and HIV testing uptakes in Africa [5, 6]. The United Nations General Assembly agreed to end the AIDS epidemic by 2030 and assumed that interim targets (90-90-90), should be achieved by the end of 2020. However, the HIV epidemic has slipped such interim assumptions [6]. In 2020, new cases from countries in Western and Central Africa accounted for 37% of new HIV infections worldwide, leading the region far off the track of the 90-90-90 target for ending the epidemic [4].
Narrowing down the fight against HIV to a community-centred approach in low-income countries is vital to adequately leverage scarce resources as international funding declines [7, 8]. Numerous studies have indicated the potential benefits of adjusting HIV programs to focus on the populations and locations with the greatest need for interventions [8, 9]. The array of CCK items about HIV/AIDS in the Demographic and Health Survey (DHS) questionnaires is so broad that it is very difficult for people with lower education to remember them all, especially in post-conflict and fragile nations. These challenges are further compounded by low literacy rates and limited healthcare systems, which make it even harder to improve HIV testing uptake in these countries. CCK about HIV/AIDS is directly correlated with HIV testing uptake, and research has shown that adults with higher CCK have a higher ratio of HIV testing uptake than those with lower CCK [10, 11]. However, not all knowledge about HIV is relevant to every community, owing to varied demographic patterns and social dynamics characterized by dramatic differences in socioeconomic status. Besides, because of differences in human cognition, different people may pay attention to different CCK items. Consequently, when disseminating CCK to different groups of people, different focuses are needed to ensure that the dissemination of CCK significantly increases the rate of voluntary HIV testing.
Geographic Information System (GIS) tools feature prominently in recent epidemiological research [8, 12]. Looking at several types of indicators from a spatial perspective can add valuable context to human activities, with the outstanding visualization benefits that maps provide [8, 13]. For example, a dataset of health indicators containing locations and attributes can decipher patterns of population dynamics and elucidate how interactions between human activities and environmental factors can lead to an increase in disease prevalence [8, 14, 15]. Linked DHS and Global Positioning System data are being used to improve planning for familial interventions, profile the correlation between malaria prevalence and anaemia in children in West Africa, and analyze the effect of environmental factors on early child mortality [16]. The Global Fund's 2017–2022 strategy notes that fine-scale, spatially disaggregated, lower-level estimates are essential for good decision-making and are a prerequisite for the success and long-term impact of HIV and other health programs [17, 18].
Spatial stratified heterogeneity (SSH) describes differences within and between strata in geographic space. Geodetector is a novel statistical tool that measures SSH and its attributes in data [19]. Geodetector can also use q-statistics to test interaction effects among independent factors.
The MRU is a sub-regional community with four countries (Cote d'Ivoire, Guinea, Liberia and Sierra Leone) located on the west coast of Africa with English and French as official languages [20, 21]. While Cote d'Ivoire and Guinea have French as their official language, neighbouring Liberia and Sierra Leone accepted the English language for their official communications. The "Mano River Union" (Fig. 1) was named after the Mano River in West Africa which originates in the Guinea Highlands and forms part of the Liberia-Sierra Leone border [21, 22]. Despite the continued efforts by the MRU governments over the last decades to stabilize the region, it remains unstable as a result of porous borders which facilitate unhindered movement of armed and criminal groups [20, 22].
The geographic locations of countries that constitute the Mano River Union
These countries and areas were selected for this study because of their similarities in social culture, ethnicity and demographic patterns (Fig. 1). Despite the different languages, the four countries have similar geographic and climatic conditions [11, 21, 23, 24]. Because of their contiguity, past epidemics and disasters in one country have typically spilled over into its neighbours in the region. This is illustrated by the outbreak of HIV-2, which was first reported in Cote d'Ivoire and Sierra Leone in the 1980s and later spread to Guinea, Liberia and beyond [21, 25, 26]. The Ebola epidemic was the most recent episode: it originated in Guinea in 2014 and was incubated and transmitted across, within and beyond the union, stretching the MRU's weak healthcare systems and ruining its economies [21, 27]. Given such threatening and perilous experiences in these communities, HIV/AIDS would be no exception; it could mutate, recombine and propagate in the region and quietly transform into an uncontrollable epidemic if left unchecked. Worse still, HIV-positive individuals can harbour a variety of viruses, with resultant virulent variants that can wreak havoc in already strained global public health systems [28]. This chain of events prompted the inception of this study to determine the spatial patterns and temporal heterogeneity of HIV prevalence and hot spots, utilizing DHS data and GIS techniques at the national and sub-national levels. Conditions at the border areas are a potential catalyst for the spread of HIV/AIDS, putting the region and the globe at risk [18]. Given the untrammelled links between the four countries and the fluidity of population movement across borders, the battle against HIV/AIDS cannot be won at the country level alone [20, 22].
Dwyer-Lindgren et al. employed three models (Generalized Additive Model, Boosted Regression Model and Lasso Regression Model) to predict HIV prevalence in countries in Sub-Saharan Africa in 2019 [29]. In 2020, Giguère et al. leveraged mathematical models (Shiny90) to predict knowledge of HIV status and the efficiency of HIV testing in Sub-Saharan Africa [30]. No prediction of combined CCK items about HIV/AIDS has been made for sex-specific interventions to bolster HIV testing uptake in the MRU. This prompted us to fill this knowledge gap by using data from population-based surveys to assess trends in the spatial and temporal distribution of HIV and to predict which combinations of CCK items are correlated with an increase in HIV testing and receiving test results. Several DHS conducted in Sub-Saharan Africa have informed policymakers that comprehensive correct knowledge about HIV/AIDS and HIV testing uptake in many countries are still far below the benchmark of 90% set by UNAIDS for 2020 [4, 31, 32]. Precision public health provides the knowledge to tailor studies using population-specific data to provide the right intervention to the right population at the right time [33]. Comprehensive correct knowledge about HIV/AIDS empowers adults to know about the epidemic in their communities [31, 34] and, by extension, protects pregnant women and children from the disease through voluntary HIV testing and treatment. This research aims to perform an ecological study to determine HIV prevalence patterns and the spatial–temporal heterogeneity of HIV prevalence in the MRU.
Study area and data sources
The "Mano River Union" (Fig. 1) was named after the Mano River in West Africa which originates in the Guinea Highlands and forms part of the Liberia-Sierra Leone border. The MRU comprises Cote d'Ivoire, Guinea, Liberia and Sierra Leone [35,36,37,38]. See Additional file 1 for more information on the method and data.
In this study, we utilized DHS and AIDS Indicator Survey (AIS) data at the DHS STATcompiler database [39], and their corresponding spatial data at the DHS Spatial Data Repository which are available in shapefiles and geodatabase format [40, 41]. The DHS and AIS are nationally representative cross-sectional surveys in which data are collected for a wide range of health indicators [9].
In this study, we performed an ecological study utilizing DHS and AIS data to describe HIV prevalence, HIV testing uptake and CCK about HIV/AIDS at the population level with respect to location, age, sex and socioeconomic status of respondents. In this sense, we compared our parameters within and between countries and over time. Our data are aggregated for groups of adult women and men aged 15–49 years.
The DHS STATcompiler is a tool designed for comparisons across countries and over time. Participating countries' data are recalculated to match standard definitions. These recalculations account for different time frames, different denominators and country-specific definitions so that researchers can compare survey data across countries and over time. However, for researchers interested in a single data point, from one year and one country, the final DHS report is the best source [40].
The DHS Spatial Data Repository (SDR) and STATcompiler provided the information used in creating the shapefiles with DHS data. To maintain respondents' confidentiality, the centres of survey clusters are displaced by up to 2 km for urban clusters and up to 5 km for rural clusters [40]; 10% of all survey clusters are further displaced by up to 10 km (masking). Even though the resulting data are affected by scale and the modifiable areal unit problem (MAUP), linking displaced data to very smooth surfaces is likely to have little impact on analysis results, because covariate values obtained from displaced locations will be very similar to those associated with the true, non-displaced locations.
In DHS, participants were asked about their comprehensive knowledge about HIV, knowledge of prevention methods of HIV, misconceptions about HIV/AIDS, and their accepting attitude towards people living with HIV etc. Respondents were further asked whether they had been tested for HIV, and when the last test was conducted. The outcome of interest was self-reporting of undergoing an HIV test and receiving test results in the last 12 months. Eligible respondents in the subsample of the selected households were then tested for HIV.
Spatial distribution of HIV prevalence by country and region
We constructed choropleth maps in ArcGIS software 10.4 to show temporal variations in the spatial distribution of regional HIV prevalence among adults from 2005 to 2020. ArcGIS version 10.4 is developed by the Environmental Systems Research Institute (ESRI) in California, United States of America (USA). Using the same GIS software, we also built choropleth maps to monitor the spatial distribution patterns of women with secondary education or higher. We further used the Spearman rank correlation (rho) in SPSS version 26 to measure the association between HIV prevalence and the percentage of women with secondary education or higher. The Statistical Package for the Social Sciences (SPSS) was developed by SPSS Incorporated in Chicago, USA and is now owned by International Business Machines (IBM) in New York, USA.
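The correlation step can be reproduced outside SPSS; below is a minimal sketch in R, assuming a data frame of DHS regions with columns for HIV prevalence and the percentage of women with secondary education or higher (the column names and values are illustrative placeholders, not the study's data).

```r
# Hypothetical regional summary table; values are placeholders.
regions <- data.frame(
  hiv_prev      = c(2.9, 1.7, 1.5, 1.6, 2.1, 1.3),
  women_sec_edu = c(38, 21, 25, 27, 33, 18)
)

# Spearman rank correlation (rho) between HIV prevalence and
# women's secondary-or-higher education, mirroring the SPSS analysis.
cor.test(regions$hiv_prev, regions$women_sec_edu, method = "spearman")
```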
Spatial distribution of HIV hot spots and spatial stratified heterogeneity (SSH) analyses
In this study, we used the global Moran's I index to identify spatial autocorrelation of HIV prevalence in the MRU countries. We employed the Getis-Ord Gi* statistic as the index of local spatial autocorrelation to identify statistically significant spatial clusters of high HIV prevalence (hot spots) and of low HIV prevalence (cold spots) in the MRU countries. To achieve this, the DHS data were divided into three groups (2005–2010, 2011–2015 and 2016–2020) and downloaded in shapefile format from the DHS Spatial Data Repository. The Getis-Ord Gi* statistic provides a Z-score, a P-value and a confidence level bin (Gi_Bin). The Z-scores and P-values are measures of statistical significance that indicate whether or not to reject the null hypothesis.
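The study ran these analyses in ArcGIS; as an open-source approximation only, the sketch below shows how global Moran's I and the local Getis-Ord Gi* statistic can be computed in R with the sf and spdep packages. The shapefile name and the hiv_prev attribute are assumptions for illustration.

```r
library(sf)     # read the regional shapefile
library(spdep)  # spatial weights, Moran's I, local Gi*

# Hypothetical shapefile of DHS regions with an HIV prevalence attribute.
regions <- st_read("mru_regions_2005_2010.shp")

# Contiguity neighbours and row-standardised spatial weights.
nb <- poly2nb(regions)
lw <- nb2listw(nb, style = "W", zero.policy = TRUE)

# Global spatial autocorrelation of HIV prevalence (Moran's I).
moran.test(regions$hiv_prev, lw, zero.policy = TRUE)

# Getis-Ord Gi* counts each region as its own neighbour (include.self).
lw_star <- nb2listw(include.self(nb), style = "W", zero.policy = TRUE)
regions$gi_star_z <- as.numeric(localG(regions$hiv_prev, lw_star))
# Large positive Z-scores flag hot spots; large negative ones, cold spots.
```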
Moreover, we used the Geodetector software (www.geodetector.cn) to calculate the q-statistic measuring the SSH of HIV prevalence in the MRU. A q-value of 0 indicates an absence of stratified heterogeneity (SH), whereas a q-value of 1 represents perfect SH between strata. To perform this analysis, the spatial data were divided by stratification factors: country name, region and survey year. The dominant factor stratifying HIV prevalence in the MRU was identified as the factor with the highest significant q-value.
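For readers without access to the Geodetector software, the q-statistic itself is straightforward to compute. The sketch below is a minimal R implementation of q = 1 - SSW/SST following Wang and Xu [19], applied to a hypothetical stratification of regional HIV prevalence by country; the numbers are placeholders, not study data.

```r
# Minimal sketch of the Geodetector q-statistic:
# q = 1 - SSW/SST, where SSW is the within-strata sum of squares and
# SST is the total sum of squares of the outcome (e.g. HIV prevalence).
geodetector_q <- function(y, stratum) {
  sst <- sum((y - mean(y))^2)                                     # total sum of squares
  ssw <- sum(tapply(y, stratum, function(v) sum((v - mean(v))^2)))# within-strata sum of squares
  1 - ssw / sst                                                   # 0 = no SSH, 1 = perfect SSH
}

# Hypothetical example: regional HIV prevalence stratified by country.
hiv  <- c(2.9, 3.1, 1.5, 1.6, 1.7, 1.4, 1.5, 1.7)
ctry <- c("CIV", "CIV", "GIN", "GIN", "LBR", "LBR", "SLE", "SLE")
geodetector_q(hiv, ctry)
```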
We leveraged data on knowledge about HIV/AIDS, accepting attitudes toward people living with HIV, knowledge of prevention methods for HIV, knowledge of prevention of mother-to-child transmission of HIV, and misconceptions about HIV, which we collectively refer to as comprehensive correct knowledge (CCK) about HIV/AIDS in this study for ease of analysis (Additional file 1: Tables S2 and S3). We used the Least Absolute Shrinkage and Selection Operator (LASSO) to predict which combinations of CCK are accompanied by an effective increase in voluntary HIV testing and receipt of test results. We targeted two testing indicators: adults who ever tested for HIV and received test results (T2), and adults receiving an HIV test and receiving test results in the last 12 months preceding the survey (T6) (Additional file 1: Tables S2 and S3). LASSO is a regularized Gaussian linear regression. R software version 4.1.2 and the Generalized Linear Model via Penalized Maximum Likelihood package (glmnet 2.0-18) were employed to automatically select the best combination of CCKs (CCK1–18) that can predict the ratio of men/women performing T2 and T6 (Additional file 1: Tables S2 and S3). The regularization parameter λ was selected to maximize the area under the curve (AUC) with tenfold cross-validation, and the value of alpha (α) was always kept at 1. The largest value of λ within one standard error of the λ with the highest AUC (known as lambda.1se) was selected for the final model, as it has fewer parameters than the best model while its accuracy is comparable. To test the stability and accuracy of the final model, we repeated the above analysis 1000 times. The recurrence rate of a parameter appearing in the final models of these 1000 repetitions was used to assess the stability of that parameter. Only parameters with a recurrence rate > 80% were considered robust parameters in the final model (Table 5). R software was created by Ross Ihaka and Robert Gentleman in Auckland, New Zealand, and is currently developed by the R Development Core Team.
LASSO is a machine learning method [42] that performs both variable selection and regularization to enhance the interpretability of statistical results. glmnet is an R package [43] that fits generalized linear models and similar models via penalized maximum likelihood. It fits linear, logistic, multinomial, Poisson and Cox regression models. The package provides methods for prediction and plotting, as well as functions for cross-validation.
glmnet solves the following optimization problem:
$$\min_{\beta_0,\,\beta}\ \frac{1}{N}\sum_{i=1}^{N} w_i\, l\left(y_i,\ \beta_0 + \beta^{T} x_i\right) + \lambda\left[(1-\alpha)\,\lVert \beta \rVert_2^2/2 + \alpha\,\lVert \beta \rVert_1\right]$$
In the formula above, l(yi, ηi) is the negative log-likelihood contribution of observation i, with ηi = β0 + βTxi; the elastic net penalty is controlled by α, which bridges the gap between LASSO regression (α = 1, the default) and ridge regression (α = 0).
A detailed description of LASSO can be found in Additional file 1 (S1.3).
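As a minimal, illustrative sketch of this selection step (not the study's actual code), the snippet below runs a cross-validated LASSO in glmnet and extracts the coefficients at lambda.1se. The matrix x of CCK indicators and the outcome y are simulated placeholders, and the default Gaussian loss is assumed here even though the study reports selecting λ by AUC.

```r
library(glmnet)

# Simulated placeholder data: 18 CCK predictors and a numeric outcome
# (e.g. the ratio of respondents reporting an HIV test, T2 or T6).
set.seed(1)
x <- matrix(rnorm(100 * 18), ncol = 18,
            dimnames = list(NULL, paste0("CCK", 1:18)))
y <- rnorm(100)

# Ten-fold cross-validated LASSO (alpha = 1 fixes the pure LASSO penalty).
cvfit <- cv.glmnet(x, y, alpha = 1, nfolds = 10)

# lambda.1se: the largest lambda within one standard error of the best fit;
# it yields a sparser model with accuracy comparable to the best model.
coef(cvfit, s = "lambda.1se")

# The stability check described above repeats this selection (1000 times in
# the paper) and keeps only predictors recurring in > 80% of final models.
```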
Distribution of CCK and the regional spatial patterns of HIV prevalence
Urban–rural residence influenced response rates in all the DHS included in this study: rural residents had higher response rates than urban residents for both adult women and men (Additional file 1: Table S1). While CCK about HIV, HIV testing uptake and knowledge of prevention of mother-to-child transmission of HIV are lower in rural residences than in urban ones, HIV prevalence is higher in urban residences than in their rural counterparts (Tables 1, 2).
Table 1 The specific percentages of comprehensive correct knowledge about HIV/AIDS, HIV testing and prevalence by spatial–temporal-population distribution
Table 2 Percentage HIV prevalence and prevalence of women with secondary education or higher among adults aged 15–49 years by regions in the MRU
Among the four countries, Cote d'Ivoire, Guinea, Liberia and Sierra Leone have eleven, eight, six and four DHS regions respectively as their administrative or subnational level 1. In the Cote d'Ivoire 2005 AIS and 2012 DHS, the capital, Abidjan, has the highest HIV prevalence. In the 2005 AIS, HIV prevalence is concentrated in Abidjan and its neighbouring regions (South, East Central, Central and Southwest), while the Northwest has the lowest. In the 2012 DHS, HIV prevalence is highest in the North Central, East Central and Southwest regions. Conversely, the lowest HIV prevalence occurs in the Northwest, North, West Central and Northeast regions (Fig. 2A; g, h).
HIV prevalence in the Mano River Union countries. A Represents choropleth maps for the spatial variation of HIV prevalence in the general population among adults aged 15–49 years by country and region concerning DHS. a–c Guinea 2005, 2012, and 2018 DHS respectively; d–f Sierra Leone 2008, 2013, and 2019 DHS respectively; g Cote d'Ivoire 2005 AIS, h Cote d'Ivoire 2012 DHS; i, j Liberia 2007 and 2013 DHS respectively. The Demographic and Health Survey Program. ICF International. Available from spatialdata.dhsprogram.com (26 November 2020) [41]. B represents choropleth maps for the spatial variation of adult women aged 15–49 years with secondary education or higher by country and region concerning DHS. a–c Guinea 2005, 2012, and 2018 DHS respectively; d–f Sierra Leone 2008, 2013, and 2019 DHS respectively; g–i Liberia 2007, 2013 and 2019 DHS respectively; j Cote d'Ivoire 2005 AIS, k Cote d'Ivoire 2012 DHS; The Demographic and Health Survey Program. ICF International. Available from the spatialdata.dhsprogram.com (26 November 2020) [41]. HIV Human Immunodeficiency Virus, DHS Demographic and Health Surveys, AIS AIDS Indicator Survey, ICF International Classification of Functionality, Disability and Health
Guinea has three DHS from 2005 to 2020 in this analysis. The capital city, Conakry, has the highest HIV prevalence in the two DHS conducted from 2005 to 2015. In the 2005 DHS, Labe, Faranah and N'zerekore have the highest HIV prevalence among the other regions, while Mamou and Kindia have the lowest. In the 2012 DHS, HIV prevalence was more prominent in the Mamou and N'zerekore regions, and less prominent in the Kindia and Faranah regions. The 2018 DHS shows unique features: HIV prevalence was highest in Boke and Kindia, and lowest in the Kankan, Faranah and Mamou regions, with Conakry city overtaken by the Boke region in HIV prevalence (Fig. 2A; a–c).
In Liberia, the regional HIV prevalence follows a definite pattern. In the 2007 DHS, Monrovia, the capital, was treated as a region of its own and had the highest HIV prevalence, followed by the South Eastern B region. In the 2013 DHS, Monrovia was incorporated into the South Central region, which led that region to have the highest HIV prevalence, again followed by South East B. In both surveys, North Central has the lowest HIV prevalence in the country (Fig. 2A; i, j). Notably, the Liberia 2019 HIV prevalence data were not made public and were therefore excluded from our spatial analysis.
The Sierra Leone 2008 and 2013 DHS show regional variations of HIV prevalence over time. In the 2008 DHS, the Western Area, where the capital city Freetown is located, has the highest HIV prevalence, followed by the Eastern Region; in contrast, the Southern Region has the lowest HIV prevalence in that survey year. In the 2013 DHS, the Western region has no available data on HIV prevalence; consequently, the Southern region takes the lead, while the Northern region has the lowest HIV prevalence. In the 2019 DHS, Sierra Leone has five DHS regions, and HIV prevalence is concentrated in the Northwest and Western regions. Conversely, the Eastern region has the lowest HIV prevalence (Fig. 2A; d–f).
The spatial distribution of HIV prevalence and women with secondary education or higher
The spatial distribution of "women with secondary education or higher" follows patterns and dynamics similar to those of HIV prevalence in the general population among adults aged 15–49 years. Our correlation analysis between HIV prevalence and women with secondary education or higher revealed a weak positive correlation (rho = 0.32, P = 0.01), indicating a weak but significant relationship between the two in the MRU (Fig. 2B; a–k). People with secondary education or higher typically have greater comprehensive correct knowledge about HIV/AIDS and higher HIV testing uptake than their less-educated and non-educated peers (Tables 1, 2). Locating such groups in communities and studying their spatial patterns and dynamics can reveal changes in HIV interventions and foster improvements. There is a policy implication for this group of women: where women with secondary education or higher show unacceptably high HIV prevalence among women aged 15–49 years, targeting higher social classes among women in HIV interventions is as important as continuing education to increase HIV knowledge. This is a worrying sign that the gravity of HIV disease and its implications for behaviour have not been fully grasped. HIV prevalence in Africa is higher among women than among men, and the main focus of many HIV programs is on females [44, 45].
HIV hot spots and stratified heterogeneity (SH) by gender and country level
In the first and second 5 years of this study (2005–2010, 2011–2015), the Abidjan, South, Southwest, Central, East Central, North Central, West and West Central regions in Cote d'Ivoire are the hot spots for HIV prevalence, with 99, 95 and 90% confidence, a Gi_Bin score of 3, Z-scores of 3.77–3.93 and P < 0.01 (Fig. 3A, B), while Guinea, Liberia and Sierra Leone are cold spots for HIV with 90, 95 and 99% confidence. In the third 5 years of this study (2016–2020), only Guinea and Sierra Leone have data on HIV prevalence. The Conakry and Kindia regions in Guinea and the Western region in Sierra Leone are hot spots for HIV with 95 and 90% confidence, a Gi_Bin score of 2, a Z-score of 3.12 and P < 0.01 (Table 3). All other regions are either cold spots or not statistically significant (Fig. 3C).
HIV significant spatial clusters of high HIV prevalence (hot spots) and low HIV prevalence (cold spots) in the MRU countries for 15 years (2005–2020). A Represents the HIV hot spots in the general population for the first group of our geostatistical analysis from 2005 to 2010 concerning DHS. B Represents the HIV hot spots in the general population for the second group in our geostatistical analysis from 2011 to 2015 concerning DHS. C Represents the HIV hot spots in the general population for the third group in our geostatistical analysis from 2016 to 2020 concerning DHS. Spatial Data Repository; The Demographic and Health Surveys Program. ICF International. Available from spatialdata.dhsprogram.com (26 November 2020) [41]. HIV Human Immunodeficiency Virus, MRU Mano River Union, DHS Demographic and Health Surveys, ICF International Classification of Functionality, Disability and Health
Table 3 Summary of geospatial peaks scores for HIV hot spot in the MRU
In our Geodetector statistical analysis, the strata in the MRU were assessed for their interactions and risk for HIV prevalence by sex. The q-values and P-values indicate that differences in risk are more significant at the country level than at the regional level. Women in Cote d'Ivoire had the highest risk for HIV prevalence, and additional significant risk is seen in the Cote d'Ivoire 2005 AIS for women (Table 4). Among adult men, there is no significant difference across these four factors.
Table 4 The spatial stratified heterogeneity of HIV prevalence among adult women and men aged 15–49 years
HIV temporal changes
Examining spatial–temporal changes in HIV prevalence over 15 years revealed unique features of the disease in the MRU countries. While some regions saw a percentage decrease in HIV prevalence, others experienced a surge. In Cote d'Ivoire, for example, the proportions of adults in the general population with HIV increased by 0.8, 0.6, 0.1 and 0.1% in the North Central, North West, South West and West regions, respectively (Fig. 4A). From 2005 to 2015, Cote d'Ivoire saw a 1.6% total increase in HIV prevalence across four regions and a 9.8% total decrease across seven regions.
Temporal changes in HIV prevalence in the MRU countries from 2005 to 2020. A The bar charts in the map represent changes in HIV prevalence by region: the green bar represents HIV prevalence in the first DHS (2005–2010), the blue bar represents HIV prevalence in the second DHS (2011–2015), and the red bar represents the change (increase or decrease) in HIV prevalence. B The green bar represents the second DHS (2011–2015), the blue bar represents the third DHS (2016–2020), and the red bar represents the change (increase or decrease) in HIV prevalence. Spatial Data Repository, The Demographic and Health Surveys Program. ICF International. Available from spatialdata.dhsprogram.com (26 November 2020) [41]. HIV Human Immunodeficiency Virus, MRU Mano River Union, DHS Demographic and Health Surveys, ICF International Classification of Functionality, Disability and Health
Guinea has eight DHS regions from 2005 to 2015. During this period, there was a 2.7% increase in HIV prevalence across five regions and a 0.4% decrease across two regions, while the HIV prevalence in one region (Nzerekore) remained at 1.7% from 2005 to 2015 (Fig. 4A). In the 2018 DHS, Boke has the highest HIV prevalence, followed by Kindia and Conakry, at 2%, 1.8% and 1.7% respectively; the lowest HIV prevalence is seen in the Kankan region at 0.7%. Overall, the Guinea 2018 DHS saw a decrease in HIV prevalence in five regions, an increase in two regions (Boke and Kindia), and no change in Labe, which remained at 1.6% (Fig. 4B).
Liberia has six DHS regions in the 2007 DHS and five in the 2013 survey. In the 2007 DHS, the capital city, Monrovia, was treated as a region of its own, while in the 2013 DHS it was incorporated into the South Central region. After adjusting the analysis for these differences in DHS regions, Liberia shows a 0.7% increase in HIV prevalence across three regions and a 0.3% decrease in one region. The HIV prevalence in the South East A region remained at 1.3% from 2005 to 2015 (Fig. 4A).
Sierra Leone has four DHS regions in the 2008 and 2013 DHS. However, the re-demarcation of the country at the admin 1 and admin 2 levels in 2011 altered the regional boundaries in the west of the country; as a result, the Spatial Data Repository and STATcompiler have no HIV prevalence data for the Western Region in the 2013 DHS. Changes in HIV prevalence follow a pattern similar to that of the sister countries in the region: there is a 1.6% increase in HIV prevalence in the Southern Region and a 0.1% decrease in the Northern Region, while the Eastern Region remained at 1.4% from 2005 to 2015 (Fig. 4A). In the Sierra Leone 2019 DHS, the country has five DHS regions, the former Northern region having been divided into Northwest and Northern regions. After adjusting by averaging the two new regions, the Northern region shows a surge of 0.6% in HIV prevalence. Overall, there are decreases in HIV prevalence in the Eastern, Southern and Western regions of 0.2%, 1.0% and 0.4% respectively.

At the national level, Cote d'Ivoire did better than all the other countries in the MRU in reversing the HIV epidemic, having the highest number of DHS regions with a decrease in HIV prevalence (7 out of 11), totalling 9.8% from 2005 to 2015; only four regions had an increase, totalling 1.6%. Despite this, the country remains the hot spot for HIV in the MRU. Guinea was the worst-performing country in reversing the HIV epidemic from 2005 to 2015, with an increase in HIV prevalence in five of its eight regions totalling 2.7%, a decrease in two regions of only 0.4%, and one region unchanged. Liberia was also a low performer, with a surge in HIV prevalence in three regions totalling 0.7% and a decrease in only one region of 0.3% from 2005 to 2015. Sierra Leone had an increase in HIV prevalence in one region (1 out of 4) of 1.6% and a decrease in one region of 0.1% from 2005 to 2015. In the last 5 years of this study (2016–2020), only Guinea and Sierra Leone have data on HIV (Fig. 4B). Ultimately, Guinea made inroads with a decrease in HIV prevalence in six regions and an increase in only two, while Sierra Leone had an increase in one region and a decrease in three.
The combinations of predicted CCK sets
Improving voluntary HIV testing in the MRU can further increase case finding and case identification. In this regard, our model predicted different combinations of CCKs for targeting different populations in this community. From the LASSO result, CCK12 (prevention knowledge of mother-to-child transmission (MTCT): HIV can be prevented by the mother taking special drugs during pregnancy) was the only CCK that positively contributed to the ratio of both men and women "ever tested for HIV and received test results" (T2) and "receiving an HIV test and receiving test results during the last 12 months preceding the survey" (T6) (Table 5), indicating its particular importance in these four countries. Looking at which set of CCKs related to enhancing the ratio of men receiving HIV testing, we found that CCK7–9, CCK12, CCK14, CCK17 and CCK18 all contributed to the ratio of men ever tested for HIV and received test results (T2). However, only CCK10 (no incorrect beliefs about AIDS, a composite of three components) and CCK12 positively contributed to the ratio of men receiving an HIV test and receiving test results during the last 12 months before the survey (T6) (Table 5). This result suggests that the ratio of men receiving an HIV test and receiving test results during the last 12 months preceding the survey could readily be increased by popularizing CCK10 and CCK12 in these four countries. For women, the CCKs that contributed to the ratio of receiving an HIV test were different from those for men: CCK7, CCK12, CCK13 and CCK15 all positively contributed to the ratio of women ever tested for HIV and received test results (T2), while CCK7, CCK12 and CCK13 positively contributed to the ratio of women receiving an HIV test and receiving test results during the last 12 months before the survey (T6) (Table 5). All these results suggest that different emphases are needed when popularizing comprehensive correct knowledge about HIV/AIDS for men and women to increase the ratio of receiving HIV tests and getting the results.
Table 5 Predicted CCK among adults aged 15–49 years in the MRU countries
This study provides a precise quantification of spatial–temporal heterogeneity of HIV prevalence in the MRU nations from 2005 to 2020 and predicts the combinations of CCK that will improve voluntary HIV testing using DHS data. To the best of our knowledge, this is the first study of HIV prevalence in the MRU with geospatial techniques to describe clusters and hot spots of HIV prevalence, and leverage machine learning to predict the combinations of CCKs that will be accompanied by an increase in future HIV testing uptake using DHS data. This is important because the results of this study can be used by policymakers to fine-tune HIV interventions and popularize our predicted CCKs through social and mass media to bolster voluntary HIV testing in the MRU and interrupt the transmission chain to save lives.
Map literacy and map use have increased immensely in the areas of research, project planning, advocacy, and the monitoring and evaluation of programs [29, 46]. HIV prevalence is higher among urban residents than rural residents among adults aged 15–49 years in all the surveys included in our study (Tables 1, 2). The stealth behaviour of HIV should be continuously investigated in communities, as HIV has the potential to spread quietly and unnoticed. For example, the first case of HIV-1 infection in the USSR was detected in the late 1980s, but the rapid expansion of HIV-1 only began once a subtype A strain was introduced into people who inject drugs; in 2016, over 100,000 new cases of HIV were reported, partly because infrequent surveillance allowed the epidemic to grow rapidly, unknown to the authorities [14, 26]. Another striking instance is the VB HIV-1 variant, highly aggressive and contagious, which spread silently in the Netherlands for decades. Researchers discovered that people infected with the VB variant have higher viral loads, making them more likely to transmit the virus to others [28].
Geodetector is a statistical tool that uses q-statistics to measure stratified heterogeneity, detect dominant driving forces and investigate interaction relationships. It involves five functions: the risk detector, interaction detector, ecological detector, factor detector and an auxiliary Geodetector [19]. In this study, the country level is the dominant factor stratifying HIV prevalence for adult women and men, and Cote d'Ivoire is the highest-risk point among MRU countries (Table 4). Additionally, the survey year shows an enhancing interaction effect on HIV prevalence for adult women and men aged 15–49 years. However, the ecological detector indicates no significant association between the strata factors and HIV prevalence for adult men, and the q-value of the country stratum for adult men aged 15–49 years is lower than that for adult women. This may reflect the fact that adult men in the MRU were better placed to reach out for CCK and to implement more preventive methods against HIV than women (Additional files 2, 3).
In the MRU countries, HIV prevalence is higher among women than among men, and the illiteracy rate is also higher among women than among men (Tables 1, 2) [34]. Narrowing CCKs about HIV/AIDS down to the most important items best suited to particular nations in surveys and programs can mitigate the fear and stigma that deter adults from receiving an HIV test and getting test results. HIV testing uptake is very low in the MRU countries and is decreasing because of COVID-19 [14, 47]. Research on HIV prevalence conducted on suspected Ebola patients during the 2014 Ebola epidemic at the Sierra Leone-China Friendship Biosafety Level 3 Laboratory revealed higher percentages of HIV-positive persons, indicating that HIV prevalence in these nations might be underestimated owing to low testing uptake [48]. HIV testing is the first 95 of the United Nations General Assembly Political Declaration on HIV/AIDS, and it is a prerequisite for determining HIV prevalence and status. This study informs policymakers and program managers that, to increase HIV testing uptake among adults aged 15–49 years for "ever tested for HIV and received test results" (T2) among women in the MRU, CCKs 7, 12, 13 and 15 are sufficient to promote on radio, television, in meetings and elsewhere, with 100% recurrence rates (Table 5). Conversely, our model predicted that CCKs 7, 8, 9, 10, 12, 14, 17 and 18 can bolster the proportion of men who ever tested for HIV and received test results (T2), with recurrence rates of up to 90% (Table 5). To improve the ratio of adult women "receiving an HIV test and receiving test results in the last 12 months preceding the survey" (T6), CCKs 7, 12 and 13 are quite sufficient to popularize in antenatal clinics (ANC) and beyond, with a 100% recurrence rate. On the other hand, CCKs 10 and 12 are adequate to inform adult men aged 15–49 years to increase the ratio of "receiving an HIV test and receiving test results in the last 12 months before the survey" (T6), with a 100% recurrence rate (Table 5). Finally, CCK12 (prevention knowledge of mother-to-child transmission (MTCT): HIV can be prevented by the mother taking special drugs during pregnancy) is the only CCK that positively contributed to the ratio for both men and women and for both T2 and T6 (Table 5). Policymakers and program managers should pay great attention to popularizing this CCK in the MRU, because adults in this sub-region are very conscious of the health of pregnancies and of their newborns.
Previous studies have shown that secondary education or higher is an important marker of health status in countries and communities. It is assumed that people with secondary education or higher typically have lower odds of contracting HIV/AIDS, especially women, with resultant better health outcomes and economic development than those with lower or no education [44]. Socioeconomic status, such as education, employment and wealth, has been reported to be associated with a decrease in HIV prevalence in communities.
HIV has long been known to be the disease of the poor. Illiteracy and low levels of education among women are the vulnerabilities that cause higher HIV prevalence among women than among men in Africa. In a situation like this, early and continuous testing of all social classes with a stamp-out strategy could significantly disrupt the transmission chain in the MRU. The most effective way to reverse the HIV epidemic is to offer HIV testing services at locations and among populations with the highest HIV burden to initiate treatments.
Both the data used in this analysis and the methods are subject to limitations. First, HIV testing uptake in surveys relies on self-reported data, which are subject to recall bias. Second, DHS HIV prevalence surveys target adult men aged 15–59 years and women aged 15–49 years; however, our analysis was limited to adults aged 15–49 years for both sexes for proper comparison and ease of analysis. This might have excluded some HIV-positive cases outside this age range and consequently biased our results. Third, the masking of cluster locations may affect our spatial analyses. Fourth, self-reporting on CCK about HIV/AIDS and HIV testing depends on human cognition and memory, and the elaborate nature of the CCK items in DHS conducted in post-conflict and fragile nations like the MRU might yield misleading information during the surveys. Notably, our LASSO model might overpredict CCK combinations in the MRU.
Cote d'Ivoire has the highest number of regions with a decrease in HIV prevalence, yet it remained a hot spot for HIV in the region from 2005 to 2015. At the national level, Guinea, Liberia and Sierra Leone show substantial regional increases in HIV prevalence despite being relative cold spots. HIV testing uptake and CCK are far below the benchmarks set by UNAIDS for ending the epidemic, irrespective of residence, sex, age and socioeconomic status. The elaborate CCK items about HIV/AIDS, compounded by the lower literacy rate and instability in the sub-region, might have translated into lower knowledge about HIV and lower HIV testing uptake among adults aged 15–49 years. Further research is needed among individuals outside our age range, and to determine why HIV prevalence is higher among adults with higher socioeconomic status than among their peers of lower status. In addition, more work on HIV education is needed in this sub-region to scale up HIV testing uptake for early ART interventions and to disrupt the transmission chain.
The datasets analyzed in our study are available from the DHS website www.DHSprogram.com; public access to the database is restricted, and administrative permission to access the data was obtained.
AIS:
AIDS Indicator Survey
CCK:
Comprehensive correct knowledge
DHS:
Demographic and Health Surveys
ESRI:
Environmental Systems Research Institute
GPS:
Global Positioning System
IBM:
International Business Machines
ICF:
International Classification of Functionality, Disability and Health
LASSO:
Least Absolute Shrinkage and Selection Operator
MRU:
Mano River Union
MTCT:
Mother-to-child transmission
USAID:
United States Agency for International Development
SDR:
Spatial Data Repository
INS:
Institute of National Statistics
SSL:
Statistics Sierra Leone
MEASURE:
Monitoring and evaluation to assess and use results
SSH:
Spatial stratified heterogeneity
UNAIDS. 2021 UNAIDS global AIDS update: confronting inequalities, lessons for pandemic responses from 40 years of AIDS. Geneva: Joint United Nations Programme on HIV/AIDS. 2021. www.unaids.org. Accessed 5 July 2021.
Fauci AS. Victories against AIDS have lessons for COVID-19. World view world's AIDS day. Nature. 2021;600:9.
UNAIDS. Global AIDS update 2020: seizing the moment: tackling entrenched inequalities to end epidemics. Geneva: Joint United Nations Programme on HIV/AIDS; 2020.
WHO. Global progress report on HIV, viral hepatitis and sexually transmitted infections, accountability for the global health sector strategies 2016–2021: actions for impact. Geneva: World Health Organization; 2021.
UNAIDS. The response to HIV in western and central Africa. Geneva: Joint United Nations Programme on HIV/AIDS; 2021.
WHO. Consultation on the global health sector strategies on HIV, viral hepatitis and sexually transmitted infections (STIs), 2022–2030. Copenhagen, Denmark and online 2021. In: Virtual meeting report. 2021. Geneva: World Health Organization; 2022.
UNAIDS. UNAIDS data, 2020. Geneva: Joint United Nations Programme on HIV/AIDS. www.unaids.org. Accessed 10 Sept 2021.
Li Z, Gao GF. Infectious disease trends in China since the SARS outbreak. Lancet Infect Dis. 2017;17(11):1113–5.
Mayala BK, Dontamsetti T, Fish TD, Croft TN. Interpolation of DHS survey data at subnational administrative level 2. In: DHS spatial analysis reports no 17. Rockville: ICF; 2019.
Statistics Sierra Leone, Stats SL, ICF. Sierra Leone demographic and health survey 2019, Freetown, Sierra Leone, and Rockville, Maryland, USA: Stats SL and ICF; 2020.
Institut National de la Statistique, (INS) et ICF International: Enquête démographique et de santé et à indicateurs multiples de Côte d'Ivoire 2011–2012. Calverton, Maryland, USA, INS et ICF International; 2012 (in French).
Cuadros DF. Assessing the role of geographical HIV hotspots in the spread of the epidemic. Department of geography and GIS, University of Cincinnati; 2019. https://www.pangea-hiv.org/files/190820-diego-cuadros.pdf.
Mosser JF, Gagne-Maynard W, Rao PC, Osgood-Zimmerman A, Fullman N, Graetz N, et al. Mapping diphtheria-pertussis-tetanus vaccine coverage in Africa, 2000–2016: a spatial and temporal modelling study. Lancet. 2019;393(10183):1843–55.
Hlongwa M, Mashamba-Thompson T, Makhunga S, Hlongwana K. Mapping evidence of intervention strategies to improving men's uptake to HIV testing services in sub-Saharan Africa: a systematic scoping review. BMC Infect Dis. 2019;19(1):496.
Janocha B, Donohue RE, Fish TD, Mayala BK, Croft TN. Guidance and recommendations for the use of indicator estimates at the subnational administrative level 2. In: DHS spatial analysis report 20. Rockville: ICF; 2021.
Gething PW, Casey DC, Weiss DJ, Bisanzio D, Bhatt S, Cameron E, et al. Mapping Plasmodium falciparum mortality in Africa between 1990 and 2015. N Engl J Med. 2016;375(25):2435–45.
Benjamin M, Thomas DF, David E, Trinadh D. The DHS program geospatial covariate datasets manual, 2nd edition). Rockville; 2018.
Angad S, Singh SK. Covariates of HIV/AIDS prevalence among migrants and non-migrants in India. Migration. 2019.
Wang J, Xu C. Geodetector: principle and prospective. Acta Geogr Sin. 2017;72:116–34.
Martins R, Cerdeira J, Fonseca M, Barrie M. Foreign direct investment determinants in Mano River Union countries: micro and macro evidence. S Afr J Econ. 2021. https://doi.org/10.1111/saje.12301.
Gao GF, Feng Y. On the ground in Sierra Leone. Science. 2014;346(6209):666.
Sue N, Bassene M. Mano river union conflict assessment and peacebuilding results framework. Basic education and policy support (BEPS) activity. The George Washington University GroundWork: USAID; 2003.
Liberia Institute of Statistics and Geo-Information Services (LISGIS) MoHaSWL, National AIDS Control Program [Liberia], and ICF International. Liberia demographic and health survey 2013. Monrovia: Liberia Institute of Statistics and GeoInformation Services (LISGIS) and ICF International; 2014.
Institut National de la Statistique (INS): Guinée: enquête démographique et de santé et à indicateurs multiples (EDS-MICS) 2012. MEASURE DHS, ICF International; 2013 (in French).
Campbell-Yesufu OT, Gandhi RT. Update on human immunodeficiency virus (HIV)-2 infection. Clin Infect Dis. 2011;52(6):780–7.
Dukhovlinova E, Masharsky A, Vasileva A, Porrello A, Zhou S, Toussova O, et al. Characterization of the transmitted virus in an ongoing HIV-1 epidemic driven by injecting drug use. AIDS Res Hum Retrovir. 2018;34(10):867–78.
Bah SM, Aljoudi AS. Taking a religious perspective to contain Ebola. Lancet. 2014;384(9947):951.
Wymant C, Bezemer D, Blanquart F, Ferretti L, Gall A, Hall M, et al. A highly virulent variant of HIV-1 circulating in the Netherlands. Science. 2022;375(6580):540–5.
Dwyer-Lindgren L, Cork MA, Sligar A, Steuben KM, Wilson KF, Provost NR, et al. Mapping HIV prevalence in sub-Saharan Africa between 2000 and 2017. Nature. 2019;570(7760):189–93.
Giguère K, Eaton JW, Marsh K, Johnson LF, Johnson CC, Ehui E, et al. Trends in knowledge of HIV status and efficiency of HIV testing services in sub-Saharan Africa, 2000–20: a modelling study using survey and HIV testing programme data. Lancet HIV. 2021;8(5):e284–93.
Sharma B, Nam EW. Role of knowledge, sociodemographic, and behavioral factors on lifetime HIV testing among adult population in Nepal: evidence from a cross-sectional national survey. Int J Environ Res Public Health. 2019;16(18):3311.
Wang W, Alva S, Wang S. HIV-related knowledge and behavior among people living with HIV in eight high HIV prevalence countries in sub-Saharan Africa. Calverton: ICF International; 2012. USAID, DHS analytical studies no. 29.
Velmovitsky PE, Bevilacqua T, Alencar P, Cowan D, Morita PP. Convergence of precision medicine and public health into precision public health: toward a big data perspective. Front Public Health. 2021;9: 561873.
Gebregziabher M, Dai L, Vrana-Diaz C, Teklehaimanot A, Sweat M. Gender disparities in receipt of HIV testing results in six sub-Saharan African countries. Health Equity. 2018;2(1):384–94.
Liberia Institute of Statistics and Geo-Information Services—LISGIS/Liberia, Ministry of Health and Social Welfare/Liberia: National AIDS Control Program/Liberia, and Macro International. Liberia demographic and health survey 2007. Monrovia: LISGIS and Macro International; 2008. http://dhsprogram.com/publications/Citing-DHS-Publications.cfm.
Statistics Sierra Leone (SSL), and ICF Macro. Sierra Leone demographic and health survey 2008. Calverton: Statistics Sierra Leone (SSL) and ICF Macro; 2009.
Direction Nationale de la Statistique (DNS): Guinée enquête démographique et de santé 2005. Calverton: INS and ORC Macro; 2006 (in French).
Institut National de la Statistique, Ministère de la Lutte contre le Sida: Côte d'Ivoire enquête sur les indicateurs du sida 2005. Calverton: INS and ORC Macro; 2006 (in French).
The DHS Program. The demographic and health survey (DHS) program STATcompiler, funded by USAID. 2015. www.statcompiler.com.
DHS Spatial Interpolation Working Group. Spatial interpolation with demographic and health survey data: key considerations. Rockville: ICF International; 2014. DHS spatial analysis report no. 9.
The DHS Program. Spatial data repository, the demographic and health surveys program. ICF International. 2020. https://spatialdata.dhsprogram.com.
An C, Lim H, Kim D-W, Chang JH, Choi YJ, Kim SW. Machine learning prediction for mortality of patients diagnosed with COVID-19: a nationwide Korean cohort study. Sci Rep. 2020;10(1):18716.
Friedman J, Hastie T, Tibshirani R. Regularization paths for generalized linear models via coordinate descent. J Stat Softw. 2010;33(1):1–22. https://doi.org/10.18637/jss.v033.i01.
Bärnighausen T, Hosegood V, Timaeus IM, Newell ML. The socioeconomic determinants of HIV incidence: evidence from a longitudinal, population-based study in rural South Africa. AIDS. 2007;21(Suppl 7):S29–38.
WHO. A conceptual framework for action on the social determinants of health. Geneva: World Health Organization; 2010. http://www.who.int/sdhconference/resources/ConceptualframeworkforactiononSDH_eng.pdf. Accessed 2 July 2022.
Frank TD, Carter A, Jahagirdar D, Biehl MH, Douwes-Schultz D, Larson SL, et al. Global, regional, and national incidence, prevalence, and mortality of HIV, 1980–2017, and forecasts to 2030, for 195 countries and territories: a systematic analysis for the global burden of diseases, injuries, and risk factors study 2017. Lancet HIV. 2019;6(12):e831–59.
Msomi N, Lessels R, Mlisana K, de Oliveira T. Africa: tackle HIV and COVID-19 together. Nature. 2021;600:33–6.
Liu WJ, Hu HY, Su QD, Zhang Z, Liu Y, Sun YL, et al. HIV prevalence in suspected Ebola cases during the 2014–2016 Ebola epidemic in Sierra Leone. Infect Dis Poverty. 2019;8(1):15.
We thank the DHS for providing us access to the data. We extend our sincere gratitude to the Chinese government and the Department of Epidemiology and Biostatistics, School of Public Health, Cheeloo College of Medicine, Shandong University, for academic support and funding. We also thank the National Natural Science Foundation of China, the National Institute for Viral Disease Control and Prevention, the Chinese Center for Disease Control and Prevention (China CDC), and Beijing for their advice and support throughout this study.
This work was supported by Shandong University and the Project of International Cooperation and Exchanges. Projects of International Cooperation and Exchanges NSFC (82161148008). WJL is supported by the Excellent Young Scientist Program of the NSFC (81822040). Funding was also received from the Ministry of Finance and Commerce (MOFCOM), People's Republic of China.
Department of Epidemiology, School of Public Health, Cheeloo College of Medicine, Shandong University, Jinan, 250012, China
Idrissa Laybohr Kamara, Yuanyuan Guo, Wei Ma & George F. Gao
NHC Key Laboratory of Biosafety, National Institute for Viral Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, 102206, China
Idrissa Laybohr Kamara, Yaxin Guo, Shuting Huo, Yuanyuan Guo, William J. Liu & George F. Gao
CAS Key Laboratory of Pathogen Microbiology and Immunology, Institute of Microbiology, Center for Influenza Research and Early-Warning (CASCIRE), CAS-TWAS Center of Excellence for Emerging Infectious Diseases (CEEID), Chinese Academy of Sciences, Beijing, 100101, China
Liang Wang & George F. Gao
State Key Laboratory of Resources and Environmental Information System, Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing, 100101, China
Chengdong Xu & Yilan Liao
Idrissa Laybohr Kamara
Liang Wang
Yaxin Guo
Shuting Huo
Yuanyuan Guo
Chengdong Xu
Yilan Liao
William J. Liu
Wei Ma
George F. Gao
ILK, GFG, WM and WJL planned and conceived the study. WL developed the LASSO regression model. CX performed the spatial stratified heterogeneity (SSH) analysis, and YL reviewed the spatial maps and geostatistical analysis. SH produced the output tables. ILK, YG and YYG wrote the initial draft. All authors contributed to the interpretation of the results and the final draft. All authors read and approved the final manuscript.
Correspondence to Wei Ma or George F. Gao.
The procedure and questionnaires for DHS and AIS data collections have been reviewed and approved by the ICF International institution review board (IRB) in individual countries. Interviews are conducted only if the respondent provides voluntary informed consent. A written and signed informed consent was obtained from all participants. The ICF International IRB ensures that surveys comply with the ethics of each country and the US Department of Health and Human Services regulations for the protection of human subjects. In this regard, authors analyzing DHS and AIS data do not require ethics approval.
Spatial and statistical analyses for predicted CCKs.
Stratified Heterogeneity (SH) for HIV risk among women determined by Geodetector.
Stratified Heterogeneity (SH) for HIV risk among men determined by Geodetector.
Laybohr Kamara, I., Wang, L., Guo, Y. et al. Spatial–temporal heterogeneity and determinants of HIV prevalence in the Mano River Union countries. Infect Dis Poverty 11, 116 (2022). https://doi.org/10.1186/s40249-022-01036-1
Spatial distribution of HIV prevalence
Geodetector
Scaling-up of a novel, simplified MFC stack based on a self-stratifying urine column
Xavier Alexis Walter1,
Iwona Gajda1,
Samuel Forbes1,
Jonathan Winfield1,
John Greenman2 &
Ioannis Ieropoulos1
Biotechnology for Biofuels volume 9, Article number: 93 (2016)
The microbial fuel cell (MFC) is a technology in which microorganisms employ an electrode (anode) as a solid electron acceptor for anaerobic respiration. This results in the direct transformation of chemical energy into electrical energy, which in essence renders organic wastewater into fuel. Amongst the various types of organic waste, urine is particularly interesting since it is the source of 75 % of the nitrogen present in domestic wastewater despite only accounting for 1 % of the total volume. However, efficient MFC scale-up faces a persistent problem: the higher the electrode surface area to volume ratio, the higher the volumetric power density, which favours small units. Hence, to reach usable power levels for practical applications, a plurality of MFC units could be connected together to produce higher voltage and current outputs; this can be done by combinations of series/parallel connections implemented both horizontally and vertically as a stack. This plurality implies that the units must have a simple design for the whole system to be cost-effective. The goal of this work was to address the built configuration of these multiple MFCs into stacks used for treating human urine.
We report a novel, membraneless stack design using ceramic plates, with fully submerged anodes and partially submerged cathodes in the same urine solution. The cathodes covered the top of each ceramic plate whilst the anodes were on the lower half of each plate; together these constituted a module. The MFC elements within each module (anode, ceramic, and cathode) were connected in parallel, and the different modules were connected in series. This allowed for the self-stratification of the collective environment (urine column) under the natural activity of the microbial consortia thriving in the system. Two different module sizes were investigated: one module (or box) had a footprint of 900 mL and a larger module (or box) had a footprint of 5000 mL. This scaling-up increased power but did not negatively affect power density (≈12 W/m3), a factor that has proven to be an obstacle in previous studies.
The scaling-up approach, with limited power-density losses, was achieved by maintaining a plurality of microenvironments within the module, and resulted in a simple and robust system fuelled by urine. This scaling-up approach, within the tested range, was successful in converting chemical energy in urine into electricity.
The Microbial Fuel Cell (MFC) is a technology in which microorganisms employ an electrode (the anode) as the end-terminal electron acceptor in their electroactive anaerobic respiration. This results in the direct transformation of chemical energy (reduced organic matter) into electrical energy, which in turn means that substrates such as wastewater or urine can be used as fuel. The first working MFC was reported by Potter in 1911 [1], and despite the long history of the technology, it is only in the last 20 years that significant breakthroughs have been reported [2–5], including reactor design [6–10], component optimisation [11–14], and stack configuration [5, 15, 16]. Another major consideration inherent to the advancement of the technology is the need to identify inexpensive materials, and there has been significant progress in this direction, making practical implementation a realistic proposition. In this respect, ceramics have become a cheap yet viable alternative material now used by a number of research groups that are developing MFCs [9, 17–19].
Amongst the various types of organic waste used as fuel for MFCs, urine is of interest since it can be employed undiluted as an energy source to power applications [20]. Moreover, the fact that urine represents 75 % of the nitrogen present in domestic wastewaters—yet accounts for only 1 % of the total volume [21]—supports the widespread idea of source-separated urine for a more efficient treatment of both urine and faecal matter [22, 23]. Most of the nitrogen in fresh urine is in the form of urea. However, urea gets quickly hydrolysed (within hours) by bacterial urease into ammonium and bicarbonate [24]. In this respect, MFCs have been shown to efficiently treat the ammonia present in urine, whilst producing electrical energy by the consumption of organic matter [25, 26]. All these studies were carried out in compartmentalised MFCs, and the ammonium abstraction was mainly driven by the pH difference between the anodic and cathodic compartments. In these reports, the ammonium migrates from the anode to the cathode through the cation exchange membrane, and production of hydroxyl at the cathode increases the pH, resulting in the formation of gaseous ammonia that subsequently escapes the system [25, 27]. In some specifically designed set-ups employing biocathodes, a denitrification process occurs, and the nitrogen escapes as di-nitrogen [28, 29]. Moreover, recent results have demonstrated that anaerobic electroactive ammonium oxidation could occur in the anodic compartment [30] and that such activity results in the accumulation of nitrate.
In order to scale up the MFC technology towards out-of-the-lab applications, a number of approaches are being pursued. One method is the stacking of small-scale MFC units for useful electricity generation [5]. Hence, the implementation of MFC technology implies (1) a plurality of units, and (2) grouping of the units into series and parallel electrical connections. For this to work, each unit needs to have a simple design allowing easy replication and multiplication to build a stack. It has previously been shown with the use of a ceramic plate that an anode and a cathode can share the same liquid electrolyte [31]. Based on this, and to simplify a collective system, the MFC design presented here was developed with the aim of all sub-units within each module being connected in parallel, since they would be subject to the same embodiment and exposed to the same liquid electrolyte. This allows for the production of larger units that would then be electrically connected in series.
In MFCs, electroactive populations develop in biofilms and form exchange interfaces [32] that can also act as a bio-exchange membrane between the anodes and cathodes [33]. The principle behind the present design was to increase the area of these exchange interfaces within a defined volume. At the macro-scale (>250 mL), single units have diffusion limitations at the scale of the half-cell compartment and at the scale of the electrode itself. In the first case, the performance of the MFC is limited by the diffusion of anolyte/catholyte (reservoir) to the anode/cathode (consumption). For the same reason, thick anodic materials (of large MFCs) would limit the diffusion of nutrients towards the centre of the electrode, thereby limiting the levels of performance; this holds true especially for biofilm communities. In general, MFCs with high surface area of exchange to volume ratios are more power dense than MFCs with small surface area to volume ratios [5, 34, 35]. In other words, smaller MFCs are more power dense than larger ones. Hence, the approach in the present study was to enlarge the unit size whilst increasing bio-interfaces at a fixed ratio, in order to maintain a similar power density to that generated by smaller units. Such stack systems naturally comprise a collective of these interfaces, characterised by many microenvironments. Having multiple microenvironments means that increasing unit size does not increase diffusion limitations, since at the electroactive scale, all interfaces and diffusion distances would be kept to a minimum. In the present study, we demonstrate the results of membraneless MFC stacks, in which the anodes and cathodes are exposed to the same electrolyte (urine) as a self-stratifying column (SSC-MFC). This design relied on the self-stratification of a liquid under the natural activity of the microbial consortia thriving in the system. The MFC module size was increased from 900 to 5000 mL via compartmentalised multiplication, increasing total power output whilst maintaining similar power densities.
Two different configurations were employed in this study; the first was a tubular reactor configuration that was used to investigate whether anodes and cathodes could operate in the same liquid medium, and the second was a boxed reactor configuration designed to optimise the surface area to volume ratio and power density.
Tubular reactor configuration
A preliminary test was performed using a tubular reactor configuration to investigate the effects of two MFCs and all their electrodes being submerged in the same liquid (Fig. 1). This experiment used an MFC that consisted of two anodes and two cathodes surrounding a single ceramic support. Two self-stratifying urine column MFCs (SSC-MFC) were fitted into a PVC tube of 48-mm diameter. Each SSC-MFC consisted of a 2-mm thick ceramic support (3.5 × 12 cm; Fig. 1a) surrounded by two cathodes on the upper-half and two anodes on the bottom-half; a perforated acrylic sheet (2 mm thick) was used to separate the anode from the cathode. The perforations were 2 mm wide, 120 mm long and spaced 5 mm apart. The main function of the acrylic separator was to prevent any physical contact between anode and cathode, thus avoiding any electrical short-circuit between them. Nonetheless, the perforations in the separator allowed the electrolyte to circulate. The cathodes were made from a mixture of activated charcoal powder (G.Baldwin & Co, London, UK) and polytetrafluoroethylene (60 wt% dispersed in H2O, Sigma-Aldrich, UK) spread over a 30 % PTFE treated carbon veil support (1.5 × 12 cm; PRF Composite Materials Poole, Dorset, UK) [36]. The displacement volume of urine was 60 mL and the liquid level reached 75 % of the cathode height (i.e. 75 % was submerged and the remaining 25 % was exposed to air). The anode consisted of 20 g m−2 carbon veil (PRF Composite Materials Poole, Dorset, UK): each unit comprised a 250 cm2 surface area folded into a 1.5 cm × 12 cm projected surface area. Each MFC had two of these 250 cm2 anodes, one on each side of the ceramic support (Fig. 1a). The two MFCs had independent electrical connections, but shared the same electrolyte. The applied load was initially fixed at 600 Ω, which was consistent with previous set-ups of similar electrode surface area.
Illustration of the self-stratifying column membraneless microbial fuel cells (SSC-MFC). a schematic of a cross section of a tubular SSC-MFC stack comprising two elongated ceramic supports. b schematic of a rectangular SSC-MFCs with one out of two ceramic supports removed. In both a, b, the "C" and "A" stand for cathode and anode, respectively. c top view of the mounted stack comprising 20 MFCs and 21 ceramic supports inserted in a 5 L box (M5–21)
Boxed reactors configuration
Small rectangular stack (M0.25–6)
Following the results of the tubular test-MFC, a small rectangular box was constructed in order to optimise the exchange surface in a stack comprising flat MFCs (Fig. 1c). The performance of this stack was also used to compare power density with the larger stack described in the next section. This stack was built with the objective of having a scalable design suited to practical implementation: simple to build and configure (electrically and hydraulically) and requiring minimal maintenance (one inlet and one outlet per stack).
This small stack consisted of six SSC-MFCs (M0.25–6), all connected in parallel. These test units comprised six elongated supports (12 cm long, 9 cm high and 2 mm thick) with an exchange volume of 250 mL (M0.25–6), in an attempt to optimise fuel percolation between anodic and cathodic depths, and to maintain homogeneous redox potential gradients at the biofilms' level [37, 38]. As in the tubular reactor, the six anodes were on the lower half and the six cathodes on the upper half of the supports. Each anode was made from a 600 cm2 carbon fibre veil electrode (20 g m−2; PRF Composite Materials Poole, Dorset, UK), folded into a 10 × 4.5 cm projected surface area. The cathode consisted of a stainless steel mesh (10 × 3.5 cm), serving as the current collector, on which a layer of activated-carbon charcoal was spread (8 ± 0.1 g). This paste was a mix of 5 ± 0.05 g activated carbon powder, 1.9 ± 0.1 mL of polytetrafluoroethylene solution (PTFE; 60 wt %, Sigma-Aldrich, UK) and 10 mL of water. Each cathode had a final size of 10 × 4 cm.
Large stacks: 5 L, 21 separators (M5–21) or 11 separators (M5-11)
In order to create a system more suitable for powering real-life applications, a larger unit was designed to accommodate the volume of urine produced daily by 1–3 individuals (2.4 ≤ Vol ≤ 6 L d−1). This larger unit (M5–21) was a scale-up, in the x and y dimensions, of the M0.25–6. The M5–21 stack consisted of 20 MFCs connected electrically in parallel and held by 21 vertical ceramic supports: each MFC had the same height and thickness as in the M0.25–6 but twice the length (Fig. 1c).
To this end, 5-L rectangular plastic "euro stacking" containers (12 × 17 × 27 cm) were chosen as the housing for each MFC stack (Plastor Limited, Berkshire, UK). The first box was assembled with 21 separators making up 20 MFCs (as defined in the previous paragraph; the first and last supports did not have an anode on the side facing the box), each of 21-cm length (Fig. 1c). All anodes (n = 20) and cathodes (n = 20) of the MFCs contained in the same box were electrically connected in parallel. Each anode consisted of a 1500 cm2 carbon fibre veil electrode (20 g m−2; PRF Composite Materials Poole, Dorset, UK), folded into a 20 cm × 4.5 cm projected surface area. Each cathode consisted of a pair of stainless steel meshes (21 × 3.5 cm), serving as the current collector, on which a layer of activated carbon was spread (16 ± 0.1 g), prepared in the same way as for the M0.25–6 box. Each cathode sub-unit had a final size of 21 × 3.8 cm and 2-mm thickness. All anodes and cathodes were configured in pairs around each ceramic support, and interconnected with stainless steel wire. Each ceramic support consisted of a pair of 9 × 10.5 cm terracotta sheets (2 mm thick). The displacement volume was 1.5 L for the M5–21 box. Based on the results, the units employed in the cascade were assembled with only 11 ceramic supports (M5–11; Fig. 1b) instead of 21 (M5–21). Although the displacement volume of the M5–11 was higher (1.75 L), the two types of box had the same total surface areas of anode and cathode, i.e. 30,000 and 3612 cm2, respectively.
MFC operation and monitoring
Feedstock and MFC maturation
The MFCs were inoculated with anolyte effluent removed from an established MFC that had been running under long-term continuous flow conditions of urine. Urine (pH 6.4–7.4) was collected anonymously from healthy individuals with no known previous medical conditions, on a daily basis, and pooled together. Following the inoculation period, the subject MFCs were fed with urine from a collector tank. The urine was between 1 and 3 days old when fed to the MFCs. The MFCs were operated at ambient temperature (22.0 ± 0.5 °C). All MFC designs were pulse-fed, either by the use of a pump (tubular reactor and M0.25–6 box), or by the use of a syphon (M5–21/M5–11), which allowed for mixing of the stratified urine column and periodic replenishment of fuel.
Data capture and polarisation experiments
Voltage output was monitored against time using an Agilent LXI 34972A data acquisition/switch unit (Farnell, UK). Measurements were recorded every minute. Recorded raw data were processed and analysed using Sigma Plot v11 software (Systat Software Inc, London, UK). The polarisation scans were performed using an automated resistorstat [39], and the resistive load values ranged from 38,000 to 4 Ω. Each resistor was connected for a period of 10 min. The resistorstat was not accurate at resistances lower than 10 Ω, which corresponded to the optimum load for the M5–21 and M5–11 units. Therefore, for these units, the polarisation experiments were complemented by manually sweeping 57 resistor values covering the range 30,000–1 Ω. The time interval between resistance changes was, as before, 10 min.
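For readers reproducing the polarisation analysis, the sketch below (not the authors' code) shows how voltage readings taken across the swept external loads are typically converted into current, power and volumetric power density; the resistor and voltage values used are illustrative placeholders rather than measured data.

```python
# A minimal sketch (not the authors' code): converting voltages recorded across a
# swept external resistance into polarisation and power-curve values.
# Resistor and voltage values below are illustrative placeholders.

resistors_ohm = [38000, 10000, 3000, 1000, 300, 100, 30, 10, 4]
voltages_v = [0.45, 0.43, 0.40, 0.35, 0.28, 0.20, 0.12, 0.06, 0.03]

displacement_volume_m3 = 1.5e-3  # e.g. the M5-21 box (1.5 L)

for r, v in zip(resistors_ohm, voltages_v):
    current_a = v / r                       # Ohm's law: I = V / R
    power_w = v * current_a                 # P = V * I = V^2 / R
    power_density_w_m3 = power_w / displacement_volume_m3
    print(f"R={r:>6} ohm  I={current_a * 1e3:7.3f} mA  "
          f"P={power_w * 1e3:7.3f} mW  Pd={power_density_w_m3:6.2f} W/m3")
```

Plotting voltage against the derived current gives the polarisation curve, and power (or power density) against current gives the power curve.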
Ammonium measurements
The ammonium concentration profiles were measured at the geometrical centre of the M5–11 stack. Samples of 100 µL were taken at 5 depths (−2, −18, −34, −52 and −64 mm) and directly diluted 10×. These dilutions were then carried further to a final dilution of 1:10,000. Ammonium concentrations were determined immediately after sampling using test kit reagents for ammonium (Lovibond, Amesbury, UK), and analysed using a Camlab MC500 colorimeter. A sample was taken from each of the five different depths and analysed. This was repeated three times so as to obtain triplicate samples from each depth. Ammonium concentration gradients were converted into flux units employing Fick's law of diffusion using the following equation:
$$J_{\mathrm{NH_4}} = D_{\mathrm{NH_4}} \left( \frac{\Delta C}{\Delta Z} \right)$$
where J NH4 is the net flux of the ion in µmol cm−2 s−1; D NH4 is the diffusion coefficient of ammonium (2.09 × 10−5 cm2 s−1) [40]; ∆C is the concentration difference in µmol cm−3; and ∆Z the distance, in cm, between the two points. For ease of calculation, it was assumed that the medium through which ammonium was diffusing was only liquid, and the porosity and tortuosity of the carbon veil were ignored. The total ion flux across the stack was estimated by multiplying the calculated flux by the surface area of the liquid phase of the stack using Eq. (2), defined as follows:
$$\mathrm{SA}_{\mathrm{liquid}} = \mathrm{SA}_{\mathrm{stack}} - \left( \mathrm{SA}_{\mathrm{ceramic}} + \mathrm{SA}_{\mathrm{cathode}} \right)$$
where SAliquid is the surface area of the liquid phase (cm2); SAstack is the total surface area of the stack (cm2); SAceramic is the cumulative surface area of all the ceramic supports (cross section, cm2); and SAcathode is the cumulative surface area of the cathodes (cross section, cm2).
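As an illustration, Eqs. (1) and (2) can be expressed as two small helper functions. This is a minimal sketch rather than the authors' code, and the example surface-area split passed to Eq. (2) is an assumption chosen only so that the result is close to the 245 cm2 liquid-phase area used later in the Results.

```python
# A minimal sketch (not the authors' code) of Eq. (1) and Eq. (2).

D_NH4 = 2.09e-5  # diffusion coefficient of ammonium, cm^2 s^-1 [40]

def ammonium_flux(delta_c_umol_cm3, delta_z_cm, d=D_NH4):
    """Eq. (1), Fick's law: flux in umol cm^-2 s^-1 from a concentration
    difference (umol cm^-3) across a vertical distance (cm)."""
    return d * (delta_c_umol_cm3 / delta_z_cm)

def liquid_surface_area(sa_stack_cm2, sa_ceramic_cm2, sa_cathode_cm2):
    """Eq. (2): cross-sectional area of the liquid phase alone, in cm^2."""
    return sa_stack_cm2 - (sa_ceramic_cm2 + sa_cathode_cm2)

# Illustrative call: the 459 cm^2 stack cross section and the split between
# ceramic and cathode cross sections are assumed values for the example only.
print(liquid_surface_area(459.0, 130.0, 84.0))   # -> 245.0 cm^2
print(ammonium_flux(38.89, 1.6))                 # example flux, umol cm^-2 s^-1
```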
Independent electrical behaviour of MFCs submerged in the same electrolyte
The two MFCs of the tubular reactor were initially connected electrically in parallel, and results showed that up to 500 µW of power was generated (data not shown), despite the anodes and cathodes sharing the same electrolyte and only being 2 mm apart from each other. The question that arose from this was in relation to the 'boundary' that would constitute an individual microbial fuel cell. Thus, the electrical connection of the test-MFC was such that each MFC (Fig. 1a) was electrically independent. The voltage of each MFC was then monitored under separate, variable resistive loads. Results demonstrated that the two MFCs displayed independent behaviours when different loads were applied to them (Fig. 2). It is worth noting that the optimum urine fluid level, i.e. a ¾-submerged cathode, was found empirically after testing different configurations: a ¼-submerged cathode (¾ exposed to air) gave lower but stable performance, and a completely submerged cathode initially gave a higher output but collapsed after only 2 h (eventually decreasing to zero).
Voltage monitoring of two independent MFCs submerged in the same liquid. Each MFC was loaded with a 600-Ω resistor. a Star indicates when only one of the two MFCs was put in open circuit. The blue dotted line is the voltage monitoring when the two MFCs were electrically connected in series (double star, 1200 Ω resistor). b The arrows indicate when the resistive loads were swapped between 600 and 400 Ω. Data show the independent electrical behaviours of both MFCs
Connecting the two MFCs in series resulted in an instant drop in power from one of the MFCs (Fig. 2a), emphasising that in this set-up, the MFCs can only be electrically connected in parallel. Nonetheless, if the definition of an MFC, as stated above, is valid, then this notion can be pushed further. The space between the two MFCs was only 3–4 mm; hence, the distance between their respective neighbouring anodes and cathodes was short. The two MFCs showed nevertheless different electrical behaviours, although not entirely independent. This demonstrates that what was considered as a single MFC unit of two cathodes and two anodes surrounding a single ceramic support was in fact two MFCs electrically connected in parallel, and the series connection behaviour suggests that there is a temporal element connected with fully independent MFC units. Based on this, and considering the dynamic bridging behaviour at the molecular level, which is time- and fluid flow-dependent, it was determined that in this set-up, where a batch of MFCs shares the same electrolyte (i.e. preventing series connection), an individual MFC consisted of a single cathode above a single anode.
Feeding rate of self-stratifying MFC systems
Small rectangular stack: 0.25 L, six separators (M0.25–6)
Normalised to the total displacement volume of 250 mL, this stack of simple design was characterised by a high power density of ≈14 W/m3 at ≈380 mV (Fig. 3). The implementation of a fuel-based biotechnology implies that the feeding rate is adapted so that steady-state conditions are maintained in terms of power. After 1 week, the M0.25–6 reached steady power production levels under semi-continuous feeding (replenishment of media once every day). The M0.25–6 was then subjected to various feeding rates by connecting an automated feeding system (pump + timer). As discussed below, the semi-continuous flow was not well suited to the design of the MFC.
Polarisation and power curves of the stack comprising six MFCs connected in parallel. The resistive load values ranged from 38,000 to 4 Ω. Each resistor was connected for a period of 10 min. The volumetric power densities were obtained by normalising the power by the displacement volume of the whole reactor (250 mL)
In this line of experiments, four feeding regimes were implemented, which were empirically determined and also subject to practical considerations (i.e. continuous pump flow vs a single exchange of liquid volume). Both the 3- and 6-h pulse-feeding regimes were characterised by a stable absolute power output of ~2 mW (Fig. 4). The drop in power observed between days 2 and 4 was in fact due to a mechanical failure in the form of blockage and concomitant "drowning" of the cathode (i.e. fully exposed to the liquid bulk instead of being partially exposed to O2). When the 12-h regime was introduced, power levels dropped, which suggested that the supply rate was not sufficient to support stable output. However, results from the 24-h regime contradicted this because steady-state power output conditions were observed, indicating that daily replenishment of the medium was sufficient (Fig. 4). This suggests that the limitation observed under the 12-h regime was due to some factor other than the pulse-feed rate. The flow rate of the pump could not exceed 20 mL min−1, and the replacement of the whole electrolyte occurred only after 12 min 30 s. Conversely, the 24-h regime was done manually with the removal of all the electrolyte prior to replenishment. Therefore, the results tend to confirm that a flow rate of 20 mL min−1, even if 250 mL was pumped in, was insufficient to obtain a homogeneous replacement of the electrolyte, resulting in a power-output limitation. Because of this heterogeneous mixing, some if not most of the "fresh" added fuel was leaving the system without being consumed, while spent fuel accumulated towards the bottom of the column, as in naturally stratified water bodies (e.g. meromictic lakes). Under the 24-h pulse-feed regime, the system was able to reach a higher absolute power of ~3.8 mW (Fig. 4). These results emphasise that this MFC system requires a pulse-feed regime with a high flow rate (>20 mL min−1) in order to ensure homogeneous replenishment of the electrolyte. However, a better understanding of the feedstock replenishment rate, and of the thorough mixing it appears to require, needs further investigation.
Power produced by a stack of six MFCs connected in parallel. Data illustrate the influence of the feeding regime on the stack behaviour. The resistive load was 40 Ω. Asterisk indicates that the stack was manually replenished twice during the same day. The power drop from day 2 to 3 was due to the drowning of the cathode (mechanical blockage)
Larger stack: M5–21 and M5–11
Based on the above and in order to have a homogeneous mix of the electrolyte, the syphon principle was employed, which provided a fixed quantity in bursts of high flow rate (≈4 L min−1). The syphon mechanism would also allow the whole system to better adapt to variations in inflow rates thus ensuring a consistent supply of fuel (urine).
A polarisation sweep was performed on the stack (Fig. 5a), and the maximum power point was higher (≈20 mW) than the output produced (≈11 mW) during stable running conditions (Fig. 5b) under the equivalent load (9 Ω). In preparation for the polarisation run, the system was allowed to settle under open-circuit conditions as fresh urine was fed into the stack. The polarisation sweep only lasted 4 h, which was half the hydraulic retention time (HRT) of the stack under running conditions. Therefore, it is safe to assume that the nutrients were not depleted by the end of the polarisation experiment. In addition, results from the M0.25–6 stack indicate that the 8-h HRT, under which the M5–21 stack was operated, should have been sufficient for stable current production. Therefore, it was hypothesised that the discrepancy between the power produced under polarisation compared to that under running conditions was due to an insufficient mixing of the fuel.
Electrical characterisation of the M5-21 SSC-MFC unit (a, b) and the M5–11 SSC-MFC unit (c, d). a Polarisation and power curves. The resistive load values ranged from 38,000 to 4 Ω. Each resistor was connected for a period of 10 min. Power normalised by total displacement volume (1.5 L); b Temporal power production under a 2 L 8 h−1 feeding regime. c Polarisation and power curves. The resistive load values ranged from 30,000 to 1 Ω. Each resistor was connected for a period of 10 min. Power normalised by displacement volume (1.7 L). d Temporal power production under two feeding regimes, with an identical daily rate
To test the hypothesis that the stack was too densely packed to allow the fresh fuel to be homogeneously mixed, half of the ceramic supports were removed, leaving a 2-mm channel between one of the two anodes and cathodes. This stack was designated the M5–11 SSC-MFC unit because it now had 11 ceramic dividers as opposed to the 21 used previously. Using this new configuration, a polarisation sweep produced the same maximum power as the previous, more densely packed, design (Fig. 5c).
During the first 18 h, the M5–11 produced more power than the M5–21 design, and reached an absolute power of 15 mW (Fig. 5d). This verifies that the power limitation under running conditions for the M5–21 design was due to heterogeneous mixing during fuel replenishment and not to nutrient depletion. The power decreased over the next 20 h, illustrating that the feeding rate (2 L 8 h−1) was not optimal (Fig. 5d). Following the hypothesis that the nutrients for electroactive anaerobic respiration were not limiting, but the electrolyte mixing was, a new feeding rate of 1 L 4 h−1 was applied, which maintained the same daily feeding rate (6 L d−1). At this feeding rate, the stack was able to reach a maximum power output of 21.5 mW (Fig. 6) during the first 3–4 days. After 8 days, the stack produced approximately 17.5 ± 2 mW. After the 40th day, the stack was connected to other units in cascade, in order to investigate the behaviour of this design under a series electrical connection (Fig. 6).
Power produced by the SSC-MFC stack of 20 MFCs connected in parallel (M5–11). Data illustrate the stability of the stack behaviour over the 80 days of the experiment. The resistive load was 9 Ω during the first 44 days. After that, the unit was connected in series with other MFC stacks under various loads and conditions, and hence, the noticeable variability of the produced power
Ammonium vertical stratification
The anodes and cathodes shared the same electrolyte, with only 5 mm of urine separating them (Fig. 1c). In the MFC described here, it is the chemical stratification that allows both sets of electrodes to function as cathodes and anodes. The electroconductivity profiling of the "water column" (or urine column), performed at the same time as the analysis, indicated two gradients that corresponded to the anode and the cathode. These measurements confirmed that the electrolyte was vertically stratified (Fig. 7), as typically occurs in natural water columns (e.g. lakes). However, the stratification was weak since neither a thermal nor a pH gradient was observed (data not shown). Early measurements of oxygen concentration showed that at a depth of 2–4 mm, the electrolyte was completely anoxic, suggesting that oxygen was not the dominant electron acceptor in the liquid phase. Moreover, since a fully submerged cathode was not sustainable, having part of the electrode open to air was vital for operation (optimally ¼ of the cathode area exposed). The hypothesis emerging from these observations was that part of the cathode operation ultimately lies in the nitrogen redox cycle. To support this, ammonium and nitrate concentration profiles were determined (Fig. 7).
Vertical profiles, within the M5–11 stack, of electroconductivity and of ammonium concentrations. The 0 mm depth indicates the urine level. Green and red circles are two different stratification periods. Green circles show the median of triplicates and error bars indicate the lowest and highest measured values. Red circles are single data points. The hours shown indicate the time elapsed between the last feed and the ammonium analysis
No nitrates were detected, and the ammonium concentration profiles showed a clear distinction between the cathodic and anodic parts (Fig. 7). It is worth noting that the third point of measurement was just in-between the anodic and cathodic parts of the electrolyte. On the one hand, the ammonium concentrations were relatively homogeneous in all the anodic layers of the electrolyte (three bottom points = 9255 ± 271 mg L−1 NH4; green circles, Fig. 7). On the other hand, a depletion of ammonium was occurring in the middle of the cathodic part (8466 ± 251 mg L−1 NH4; green circles, Fig. 7). When the SSC-MFC was left unfed for a period of time exceeding the replenishment period, the ammonium concentration profiles indicated that the gradient extended from the bottom part of the cathode to its upper part (red circles, Fig. 7), with 444 and 300 mmol L−1 NH4, respectively (8000 and 5400 mg L−1). This spatio-temporal behaviour suggests that a process of ammonium abstraction was initiated within the cathodic part of the "urine column" and not at its oxycline, i.e. at the urine/air interface. However, the 2- and 8-h concentration profiles were not measured from the same feed cycle. Therefore, to confirm that the 8-h ammonium profile was the evolution of the 2-h one, the gradients of ammonium concentration were converted into fluxes and compared.
The 2-h concentration profile displayed two gradients (green circles, Fig. 7): one from the surface towards the middle of the cathode and a second one from the bottom of the cathode towards the middle of the cathode, with concentration differences of 38.89 and 33.33 µmol cm−3, respectively. The conversion of these gradients using Fick's law of diffusion (Eq. 1) resulted in two ammonium fluxes of 5.08 × 10−4 µmol cm−2 s−1 and 4.35 × 10−4 µmol cm−2 s−1, respectively. The 8-h gradient (144.44 µmol cm−3), extending from the cathode lower part to its higher part, corresponded to a flux of 9.43 × 10−4 µmol cm−2 s−1. The total daily ion flux across one box (in mmol box−1 d−1) was estimated by multiplying the calculated flux by the surface area of the liquid phase alone (Eq. 2; 245 cm2). The obtained upper and lower fluxes for the 2-h profile were 10.76 and 9.11 mmol d−1 per box, respectively. The combined fluxes gave a daily NH4 depletion of 19.87 mmol d−1 per cascading box. The calculated 8-h daily NH4 flux was 19.97 mmol d−1 per box. As both the 2- and 8-h profiles have similar fluxes, the hypothesis that the 8-h profile is the temporal evolution of the 2-h one is supported, which also implies a stable and continuous NH4 abstraction rate.
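The arithmetic above can be checked with the short sketch below (not the authors' code). The vertical distances ∆Z are assumptions inferred from the sampling depths (1.6 cm between adjacent sampling points and 3.2 cm for the 8-h gradient); with these values the computed fluxes and daily totals come out close to the reported figures, with small differences attributable to rounding.

```python
# A minimal sketch (not the authors' code) checking the flux arithmetic above.
# The delta_z values are assumptions inferred from the sampling depths.

D_NH4 = 2.09e-5      # cm^2 s^-1
SA_LIQUID = 245.0    # cm^2, liquid-phase cross section (Eq. 2)
SECONDS_PER_DAY = 86400

def flux_and_daily_total(delta_c_umol_cm3, delta_z_cm):
    flux = D_NH4 * delta_c_umol_cm3 / delta_z_cm               # Eq. (1), umol cm^-2 s^-1
    daily_mmol = flux * SA_LIQUID * SECONDS_PER_DAY / 1000.0   # mmol box^-1 d^-1
    return flux, daily_mmol

upper_2h = flux_and_daily_total(38.89, 1.6)   # surface -> mid-cathode
lower_2h = flux_and_daily_total(33.33, 1.6)   # bottom of cathode -> mid-cathode
grad_8h = flux_and_daily_total(144.44, 3.2)   # bottom of cathode -> top

print(upper_2h)                    # ~ (5.08e-4, 10.8)
print(lower_2h)                    # ~ (4.35e-4, 9.2)
print(upper_2h[1] + lower_2h[1])   # combined 2-h depletion, ~ 20 mmol d^-1
print(grad_8h)                     # ~ (9.43e-4, 20.0)
```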
Results demonstrate that there is an NH4 abstraction process occurring in the cathodic part of the electrolyte. Moreover, results tend to support the hypothesis that the ammonium abstraction is initiated in the middle of the cathodic layer. Over time, the gradient extends from the bottom of the cathodic layer to the surface of the urine column. As no pH gradient was measured (from top to bottom: 8.85 < pH < 9.04), the stripping of nitrogen as NH3 was difficult to assess [27]. The pH (9.04) of the electrolyte was close to the ammonium pKa (9.25), and therefore ammonia stripping could have occurred. However, because no conclusive evidence was obtained, ammonia stripping cannot be assumed to be the cause of the NH4 abstraction. In any case, as the oxycline was about 2 mm from the surface, aerobic nitrification could not have dominated the anaerobic part where the NH4 abstraction was initiated. Moreover, as no nitrates were detected, and as it was a cathodic ammonium abstraction, ammonium was not the electron donor of an electroactive respiration [30]. Hence, the bio-electrochemical process leading to this NH4 abstraction was not identified in the urine column. Moreover, the cathodes were still open to air (¼) and the presence of oxygen was integral to their operation. This then raises the question of the importance of an alternative electron acceptor and its contribution to power output. Because the functioning of the SSC-MFC design relies on two principles, namely the vertical stratification of the redox state of various elements and the multiplicity of microenvironments maximising local horizontal gradients, any further work aimed at elucidating the process of NH4 abstraction, and/or identifying the cathodic electron acceptor, should be carried out at the biofilm level.
Power production and series connection
The previous sections demonstrated that individual MFC boxes are capable of practical, stable power output and that complex sets of reactions take place during the production of bioelectricity. Another important consideration is how best to configure stacks in order to optimise volume and ultimately power density. Previous studies have shown that when individual reactors are enlarged, there is a loss in power density [5]. This is verified in the current study, where the power density of the small stack (M0.25) was ~14 W/m3, higher than the ~12 W/m3 of the larger stack (M5–11; Figs. 3, 5c; Table 1). However, the power-density decrease due to the scaling-up of the MFCs (from 250 to 1700 mL) was not as pronounced as in other studies [5, 34]. For example, while the size was increased by a factor of 6, the drop in power density was only 9 % (Table 1), which differs from other studies where, for example, a 4.7-fold size increase resulted in a 14 % power-density drop (Table 1 [5]).
Table 1 Relationship between the size increase of MFC and the subsequent power-density decrease
The scale-up in the current study focused on increasing the overall footprint, while at the same time maintaining microenvironments, so that the distances between electrodes were maintained regardless of overall footprint. This resulted in less-pronounced mass-transfer losses compared to systems where each component has simply been enlarged in relation to the total volume. In addition, the close proximity of the electrodes submerged in the same fluid ensured there were fewer 'dead spaces' compared to previous reports of scale-up. Moreover, compared to previous set-ups with a relatively similar design (benthic MFC), the power density of the M5–11 stack was higher, with 21.1 mW produced (Fig. 8b) for a total footprint volume of 5 dm3, against 36 mW for 30 dm3 [8]. When normalising to the total footprint surface (0.0459 m2), the power density of the M5–11 stack was 460 mW/m2, which is in the high range of existing systems employing more complex architectures [41] and even 1 order of magnitude higher than the most cost-effective single chamber system with similar volume (1.1 L; [28]). This is nevertheless not an accurate comparison, since these different set-ups employed fuels other than urine. Comparison with other MFCs employing urine as fuel indicates that the design developed here generated higher power densities: ~12.6 instead of ~10 [34], or 2.6 W/m3 [42]. This was all performed using inexpensive sustainable materials such as ceramics, and a natural waste product, urine, as fuel.
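The power-density figures quoted in this comparison follow from straightforward normalisations of the measured power by displacement volume or footprint area. The sketch below (not the authors' code) reproduces them from the numbers given in the text, assuming the ~12.6 W/m3 value corresponds to the 21.5 mW maximum normalised by the 1.7-L displacement volume.

```python
# A minimal sketch (not the authors' code) of the power-density normalisations
# used in the comparison above; input values are taken from the text.

def volumetric_density_w_per_m3(power_mw, volume_l):
    # mW -> W and L -> m^3 before dividing
    return (power_mw / 1000.0) / (volume_l / 1000.0)

def areal_density_mw_per_m2(power_mw, footprint_m2):
    return power_mw / footprint_m2

print(volumetric_density_w_per_m3(21.5, 1.7))   # by displacement volume, ~12.6 W/m3
print(areal_density_mw_per_m2(21.1, 0.0459))    # by footprint area, ~460 mW/m2
```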
a Electrical outputs of the serial connections of increasing numbers of units in a single cascade. Resistances were set from the result of the first polarisation sweep. b Polarisation sweep of the cascade comprising four units connected in series. c Polarisation sweep of individual stacks within the cascade. The dotted squares indicate the last points, prior to a non-planned refuelling, that are considered as the maximum power points. Each resistor, ranging from 30 kΩ to 11 Ω, was connected for a period of 10 min
Practical implementation of the MFC technology implies (1) a plurality of units, and (2) configuration of the units into series and parallel electrical connections. For these two reasons, the SSC-MFC design was developed with the aim that all sub-units within each box would be connected in parallel, with all the constituent MFCs sharing the same embodiment and electrolyte. This leads to the production of larger units that can be connected together electrically in series. Therefore, units identical to the M5–11 (M5–11A) were progressively connected in series as they were built (M5–11B, C, D). All these boxes/stacks were assembled in a cascade manner, recycling the same feedstock at a pulse-feed rate of 1 L 4 h−1. There was no direct fluid bridging connection between each of the distinct boxes; in other words, air-gaps were introduced between the outlet of the upstream box and the inlet of the downstream box.
Once the first cascade (M5–11B and M5–11C) reached steady state (Fig. 8a; 100 h), the increase in unit numbers resulted in a corresponding voltage increase, and the current output was stable over time (~45 mA; Fig. 8a). This demonstrates that stacking individual MFC units can be achieved with stable results and without cell reversal. Polarisation experiments were carried out firstly on the four units connected in series (Fig. 8b), where a maximum power density of 14 W/m3 was achieved. To investigate the contribution of each box to the overall stack output, polarisation sweeps were performed on each of the four units at the same time. Figure 8c shows that the four units were comparable irrespective of their position in the cascade. Furthermore, the combined actual power from the four units was equivalent to that produced when they were connected together in series as a stack, which shows the reproducibility and robustness of the system. Interestingly, during the polarisation sweeps, pumping of fresh urine occurred at the point in the sweep when the external resistances were reaching the lower values (21 Ω). This occurred after the points in the dotted boxes shown in Fig. 8. The influx of fresh urine at these points, when the power should have been dropping, resulted in an unexpected increase and hence in an extension of the power curve, and in fact demonstrated how healthy the biofilm was in this system. Further work will investigate the capabilities of the microbes at these lower resistances when they are fed with a steady stream of fresh urine.
This novel MFC stack has all of its anodes and most of its cathodes submerged in the same electrolyte without short-circuiting. The rationale behind this design is that a phenomenon of self-stratification of a liquid feedstock column, i.e. urine, by microbial activity was occurring, similar to what happens in natural environments colonised by microorganisms. Results have demonstrated that this phenomenon could be used to develop a novel and simplified type of membraneless MFC. Moreover, the stability and power density of this type of design are in the high range of existing, more complicated designs. Due to the plurality of microenvironments in its macro structure, the SSC-MFC allows for scale-up with limited losses in terms of power density. This results in simplified stacks where all parallel units, each consisting of a single cathode over a single anode, share the same embodiment. Results have shown that the performance was dependent on the pulse-feeding mechanism. Interestingly, ammonium abstraction was measured in the anoxic part of the submerged cathodic layer. This anaerobic ammonium abstraction gives rise to the hypothesis that, in such a set-up, part of the cathode operation is ultimately in the nitrogen redox cycle. Further investigations are needed to identify the process of electron and NH4 abstraction at the cathode.
M0.25–6: 250 mL displacement volume, 6 × ceramic supports
M5–21: 5000 mL displacement volume, 21 × ceramic supports
MFC: Microbial fuel cell
SSC-MFC: Self-stratifying urine column MFC
Potter MC. Electrical effects accompanying the decomposition of organic compounds. Proc Royal Soc B. 1911;84:260–76.
Bennetto HP. Electricity generation by microorganism. Biotechnol Educ. 1990;1:163–8.
Kim HJ, Park HS, Hyun MS, Chang IS, Kim M, Kim BH. A mediator-less microbial fuel cell using a metal reducing bacterium, Shewanella putrefaciense. Enzyme Microb Tech. 2002;30:145–52.
Rabaey K, Verstraete W. Microbial fuel cells: novel biotechnology for energy generation. Trend Biotechnol. 2005;23:291–8.
Ieropoulos I, Greenman J, Melhuish C. Microbial fuel cells based on carbon veil electrodes: stack configuration and scalability. Int J Energ Res. 2008;32:1228–40.
Park DH, Zeikus JG. Improved fuel cell and electrode designs for producing electricity from microbial degradation. Biotechnol Bioeng. 2003;81:348–55.
Rabaey K, Clauwaert P, Aelterman P, Verstraete W. Tubular microbial fuel cells for efficient electricity generation. Environ Sci Technol. 2005;39:8077–82.
Tender LM, Gray SA, Groveman E, Lowy DA, Kauffman P, Melhado J, Tyce RC, Flynn D, Petrecca R, Dobarro J. The first demonstration of a microbial fuel cell as a viable power supply: powering a meteorological buoy. J Power Source. 2008;179:571–5.
Gajda I, Stinchcombe A, Greenman J, Melhuish C, Ieropoulos I. Ceramic MFCs with internal cathode producing sufficient power for practical applications. Int J Hydrogen Energ. 2015;40:14627–31.
Winfield J, Chambers LD, Rossiter J, Stinchcombe A, Walter XA, Greenman J, Ieropoulos I. Fade to green: a biodegradable stack of microbial fuel cells. ChemSusChem. 2015;8:2705–12.
Pham TH, Jang JK, Moon HS, Chang IS, Kim BH. Improved performance of microbial fuel cell using membrane-electrode assembly. J Microbiol Biotechnol. 2005;15:438–41.
Qiao Y, Li CM, Bao S-J, Bao Q-L. Carbon nanotube/polyaniline composite as anode material for microbial fuel cells. J Power Source. 2007;170:79–84.
Martin E, Tartakovsky B, Savadogo O. Cathode materials evaluation in microbial fuel cells: a comparison of carbon, Mn2O3, Fe2O3 and platinum materials. Electrochim Acta. 2011;58:58–66.
Santoro C, Agrios A, Pasaogullari U, Li B. Effects of gas diffusion layer (GDL) and micro porous layer (MPL) on cathode performance in microbial fuel cells (MFCs). Int J Hydrogen Energ. 2011;36:13096–104.
Aelterman P, Rabaey K, Pham HT, Boon N, Verstraete W. Continuous electricity generation at high voltages and currents using stacked microbial fuel cells. Environ Sci Technol. 2006;40:3388–94.
Ledezma P, Greenman J, Ieropoulos I. MFC-cascade stacks maximise COD reduction and avoid voltage reversal under adverse conditions. Bioresource Technol. 2013;134:158–65.
Behera M, Jana PS, Ghangrekar MM. Performance evaluation of low cost microbial fuel cell fabricated using earthen pot with biotic and abiotic cathode. Bioresource Technol. 2010;101:1183–9.
Ajayi FF, Weigele PR. A terracotta bio-battery. Bioresource Technol. 2012;116:86–91.
Winfield J, Greenman J, Huson D, Ieropoulos I. Comparing terracotta and earthenware for multiple functionalities in microbial fuel cells. Bioproc Biosyst Eng. 2013;36:1913–21.
Ieropoulos IA, Ledezma P, Stinchcombe A, Papaharalabos G, Melhuish C, Greenman J. Waste to real energy: the first MFC powered mobile phone. Phys Chem Chem Phys. 2013;15:15312–6.
Maurer M, Schwegler P, Larsen TA. Nutrients in urine: energetic aspects of removal and recovery. Water Sci Technol. 2003;48:37–46.
Dalemo M. The modelling of an anaerobic digestion plant and a sewage plant in the ORWARE simulation model. Sveriges Lantbruksuniversitet Institutionen for Lantbruksteknik Rapport. 1996;1:1–34.
Ledezma P, Kuntke P, Buisman CJN, Keller J, Freguia S. Source-separated urine opens golden opportunities for microbial electrochemical technologies. Trend Biotechnol. 2015;33:214–20.
Mobley HLT, Hausinger RP. Microbial ureases—significance, regulation, and molecular characterization. Microbiol Rev. 1989;53:85–108.
Kuntke P, Smiech KM, Bruning H, Zeeman G, Saakes M, Sleutels THJA, Hamelers HVM, Buisman CJN. Ammonium recovery and energy production from urine by a microbial fuel cell. Water Res. 2012;46:2627–36.
Jadhav DA, Ghangrekar MM. Effective ammonium removal by anaerobic oxidation in microbial fuel cells. Environ Technol. 2015;36:767–75.
Cord-Ruwisch R, Law Y, Cheng KY. Ammonium as a sustainable proton shuttle in bioelectrochemical systems. Bioresource Technol. 2011;102:9691–6.
Zhang G, Zhang H, Zhang C, Zhang G, Yang F, Yuan G, Gao F. Simultaneous nitrogen and carbon removal in a single chamber microbial fuel cell with a rotating biocathode. Process Biochem. 2013;48:893–900.
Kim J, Kim B, Kim H, Yun Z. Effects of ammonium ions from the anolyte within bio-cathode microbial fuel cells on nitrate reduction and current density. Int Biodeterior Biodegr. 2014;95:122–6.
Qu B, Fan B, Zhu S, Zheng Y. Anaerobic ammonium oxidation with an anode as the electron acceptor. Env Microbiol Rep. 2014;6:100–5.
Gajda I. Self sustainable cathodes for microbial fuel cells. PhD Thesis, University of the West of England; 2016. (in press)
Greenman J, Holland O, Kelly I, Kendall K, McFarland D, Melhuish C. Towards robot autonomy in the natural world: a robot in predator's clothing. Mechatronics. 2003;13:195–228.
Santoro C, Lei Y, Li B, Cristiani P. Power generation from wastewater using single chamber microbial fuel cells (MFCs) with platinum-free cathodes and pre-colonized anodes. Biochem Eng J. 2012;62:8–16.
Ren H, Lee H-S, Chae J. Miniaturizing microbial fuel cells for potential portable power sources: promises and challenges. Microfluid Nanofluid. 2012;13:353–81.
Walter XA, Forbes S, Greenman J, Ieropoulos IA. From single MFC to cascade configuration: the relationship between size, hydraulic retention time and power density. Sustain Energ Technol Assess. 2016;14:74–9.
Gajda I, Greenman J, Melhuish C, Ieropoulos I. Simultaneous electricity generation and microbially-assisted electrosynthesis in ceramic MFCs. Bioelectrochemistry. 2015;104:58–64.
Cole AC, Semmens MJ, LaPara TM. Stratification of activity and bacterial community structure in biofilms grown on membranes transferring oxygen. Appl Environ Microb. 2004;70:1982–9.
Jorgensen BB, Revsbech NP, Cohen Y. Photosynthesis and structure of benthic microbial mats—microelectrode and SEM studies of four cyanobacterial communities. Limnol Oceanogr. 1983;28:1075–93.
Degrenne N, Buret F, Allard B, Bevilacqua P. Electrical energy generation from a large number of microbial fuel cells operating at maximum power point electrical load. J Power Source. 2012;205:188–93.
Donini A, O'Donnell MJ. Analysis of Na(+), Cl(−), K(+), H(+) and NH(4)(+) concentration gradients adjacent to the surface of anal papillae of the mosquito Aedes aegypti: application of self-referencing ion-selective microelectrodes. J Exp Biol. 2005;208:603–10.
Liu B, Weinstein A, Kolln M, Garrett C, Wang L, Bagtzoglou A, Karra U, Li Y, Li B. Distributed multiple-anodes benthic microbial fuel cell as reliable power source for subsea sensors. J Power Source. 2015;286:210–6.
Zang G-L, Sheng G-P, Li W-W, Tong Z-H, Zeng RJ, Shi C, Yu H-Q. Nutrient removal and energy production in a urine treatment process using magnesium ammonium phosphate precipitation and a microbial fuel cell technique. Phys Chem Chem Phys. 2012;14:1978–84.
II developed the original concept of cascading MFCs. XAW and IG developed the original concept of SSC-MFCs. XAW developed the design of the SSC-MFC, and SF built the various set-ups involved in the study. XAW set the experimental designs, conducted the experiments, collected data, produced the figures, and wrote the initial draft manuscript. XAW, JW, JG and II were involved in the data analysis and interpretation, and contributed equally to the improvement of the manuscript. All authors read and approved the manuscript.
This work was funded by the EPSRC under the Grant No. EP/L002132/1.
Bristol BioEnergy Centre (B-BiC), Bristol Robotics Laboratory, T-Block, Frenchay Campus, University of the West of England, Bristol, BS16 1QY, UK
Xavier Alexis Walter, Iwona Gajda, Samuel Forbes, Jonathan Winfield & Ioannis Ieropoulos
Microbiology Research Laboratory, Department of Biological, Biomedical and Analytical Sciences, Faculty of Applied Sciences, Frenchay Campus, University of the West of England, Bristol, BS16 1QY, UK
John Greenman
Correspondence to Xavier Alexis Walter or Ioannis Ieropoulos.
Walter, X.A., Gajda, I., Forbes, S. et al. Scaling-up of a novel, simplified MFC stack based on a self-stratifying urine column. Biotechnol Biofuels 9, 93 (2016). https://doi.org/10.1186/s13068-016-0504-3
Keywords: Partially submerged cathodes, Scaling-up, Ammonium abstraction, Microbial fuel cell stack
Why mothers still deliver at home: understanding factors associated with home deliveries and cultural practices in rural coastal Kenya, a cross-sectional study
Rodgers O. Moindi1,
Moses M. Ngari2,
Venny C. S. Nyambati1 &
Charles Mbakaya3
BMC Public Health volume 16, Article number: 114 (2016)
Maternal mortality declined by 43 % globally between 1990 and 2013, a reduction that was insufficient to achieve the 75 % reduction target of millennium development goal (MDG) five. Kenya recorded a decline of 18 %, from 490 deaths in 1990 to 400 deaths per 100,000 live births in 2013. Delivering at home is associated with a higher risk of maternal death; therefore, reducing the number of home deliveries is important to improve maternal health. In this study, we aimed to establish the proportion of home deliveries and evaluate factors associated with home deliveries in Kilifi County.
The study was conducted among mothers seeking immunization services in selected health facilities within Kilifi County, using semi-structured questionnaires administered through face-to-face oral interviews to collect both quantitative and qualitative data. Six focus group discussions (FGDs) and ten in-depth interviews (IDIs) were used to collect qualitative data. A random sample of 379 mothers was sufficient to answer the study question. A log-binomial regression model was used to identify factors associated with childbirth at home.
A total of 103 (26 %) mothers delivered at home. In the univariate analysis, older age of both the mother and the partner, being in a polygamous marriage, being a mother of at least two children and living ≥5 km from the nearest health facility were associated with a higher risk of delivering at home (crude P < 0.05). Higher education levels of both the mother and the partner were associated with a protective effect on the risk of delivering at home (RR < 1.0 and P < 0.05). In the multivariate regression model, only a long distance (≥10 km) from the nearest health facility was associated with a higher risk of delivering at home (adjusted RR 3.86, 95 % CI 2.13 to 7.02).
In this population, the major reason why mothers still deliver at home is the long distance from the nearest health facility. To reduce maternal mortality, access to health facilities by pregnant mothers needs to be improved.
Kenya has made progress towards reducing maternal mortality, though insufficient to achieve MDG 5 [1]. Nearly all maternal deaths could be prevented if mothers delivered at a health facility under the care of a skilled birth attendant [2]. The presence of a skilled birth attendant during childbirth, in a hygienic environment and with the necessary skills and equipment to recognize and manage any emerging complications, reduces the likelihood of birth complications, infections or death of either the baby or the mother. According to the 2008–09 Kenya Demographic and Health Survey (KDHS) [3], more mothers died during childbirth in 2008–09 compared to 2003 [4]: 488 deaths per 100,000 live births in 2008–09 vs. 412 in 2003. There was not much change in the proportion of women delivering in a health facility under the watch of a skilled birth attendant from 2003 (42 %) to 2008 (44 %); however, in 2014, 62 % of deliveries were attended by a skilled birth attendant in a health facility [5]. In Kilifi County, only 52.3 % of all women who gave birth were attended by a skilled birth attendant at a health facility in 2014 [5].
The Kenyan government has rolled out many interventions and policies to ensure that all childbirth takes place in a health facility attended by skilled birth attendants. These policies and interventions include making child delivery in public health facilities free since 2013, putting up maternity shelters (commonly referred to as 'waiting homes') that are currently underutilized, the Output Based Approach (OBA) and the 'Beyond zero campaign' spearheaded by the first lady to stop preventable maternal deaths by providing fully equipped mobile clinics. OBA projects have been running in Kenya since 2006, targeting subsidies for safe motherhood in many parts of the country, Kilifi County being one of them, but their major weakness has been overreliance on external funding [6]. In addition, the County government and the national government, via constituency development funds, have built new health facilities. The uptake of antenatal care (ANC) services has been impressive in Kilifi County, but it seems that mothers opt to deliver at home after attending ANC and later have their babies receive immunizations in the health facilities [5]. Despite these deliberate interventions by the government, it is still not clear why mothers would opt to deliver at home. Therefore, this study seeks to understand what makes mothers deliver at home, home delivery cultural practices, and predictors of delivery at home after the interventions implemented by the Kenyan government.
Kilifi County is one of the 47 counties in Kenya, located along the Kenyan coastline and covering 12,609.7 km2 of land. In 2012 it had a population of 1,217,892, with more than 68 % of the population living below the poverty line and the main economic activities being subsistence farming (maize and cassava), fishing in the Indian Ocean and tourism [7]. The entire road network covers about 3000 km. Only 30 km of rural roads are tarmacked; the rest are in a poor state and mostly impassable, especially in rainy seasons [3].
The county has nine level 4 public hospitals, 20 level 3 public health centres, 197 level 2 public dispensaries, one mission hospital, two private hospitals, one armed forces hospital, five private nursing homes and 107 private clinics. Level 4 public hospitals are the primary hospitals, level 3 facilities are health centres, maternities or nursing homes, and level 2 facilities are dispensaries or clinics [7].
The study was carried out in three health facilities within Kilifi County: Kilifi County hospital (level 4), Ganze health centre and Bamba sub-district hospital (level 3). The three health facilities were picked because of their geographical locations and high patient volumes, and because they evenly cover the study location.
This was a facility-based cross-sectional study interviewing mothers at the study health facilities who had brought their children for routine immunization services and who had delivered within six months prior to the commencement of the study. The outcome of interest was childbirth either at home or at a health facility.
Size of the study
The sample size n was calculated using the formula of Fisher et al. [8]
$$n = \frac{z_{1-\alpha}^{2}\, P(1-P)}{d^{2}} = \frac{(1.96)^{2}(0.56)(0.44)}{(0.05)^{2}} = 378.63$$
where z = the standard normal distribution value for a 95 % CI, which is 1.96
P = proportion of home deliveries according to the KDHS of 2007/08, which is 0.56
d = absolute precision (0.05)
Attrition of 10 % = 0.1 × 379 ≈ 38
Therefore, a sample size of 417 mothers (379 + 38) was sufficient to answer the study question after adjusting for 10 % attrition.
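The calculation can be reproduced with the short sketch below (not the authors' code), which applies the Fisher et al. formula and then the 10 % attrition adjustment.

```python
# A minimal sketch (not the authors' code) of the sample-size calculation.
import math

def fisher_sample_size(p, d, z=1.96):
    """n = z^2 * P * (1 - P) / d^2 (Fisher et al.)."""
    return (z ** 2) * p * (1 - p) / (d ** 2)

n = fisher_sample_size(p=0.56, d=0.05)       # ~378.63 -> 379 mothers
attrition = math.ceil(0.1 * math.ceil(n))    # 10 % attrition -> 38
total = math.ceil(n) + attrition             # 379 + 38 = 417
print(round(n, 2), attrition, total)
```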
The study population was women of child-bearing age, 18–49 years, attending the study health facilities during the study period. Women attending the study health facilities who had given birth in the six months prior to the study period and who were residents of Kilifi County were screened, and those eligible were asked to provide written consent to participate in the study. Mothers with very sick children were excluded from the study. The population of females of reproductive age (15–49 years) in the County was 257,521 (23 %) in 2009 [7]. In 2008/09, 56.2 % of mothers in the County delivered at home without being attended by a skilled birth attendant, and the maternal mortality rate was 488 per 100,000 live births [1].
Data management and statistical analysis
Trained research assistants collected data from the mothers using structured questionnaires. Every questionnaire was cross-checked for completeness after the interview. After data collection, double entry was done in a password-protected Microsoft Access database and the data were exported to STATA 13.1 (College Station, TX, USA) for statistical analysis. The distance from the household to the nearest health facility was categorized into four groups: <5, 5 to 10, ≥10 km and don't know. Categorical variables were summarized using proportions, and associations were tested using the chi-square or Fisher's exact test where applicable. Continuous variables were summarized using means and standard deviations for normally distributed data, while skewed data were summarized using medians and interquartile ranges. A two-tailed independent t-test was used to test differences of means for normally distributed continuous variables and the Mann–Whitney U test for skewed continuous variables. To identify risk factors for delivering at home, we computed relative risks (RR) using a log-binomial regression model, retaining all variables with a crude P-value < 0.1 (10 %) in the multivariate model. Statistical significance was evaluated using 95 % confidence intervals and a two-tailed p-value of <0.05.
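Although the analysis was carried out in STATA 13.1, the log-binomial model is simply a binomial generalized linear model with a log link, so that exponentiated coefficients are relative risks rather than odds ratios. The sketch below is a hedged illustration in Python with statsmodels, not the authors' code; the variable names and the tiny data frame are assumptions made for the example.

```python
# A minimal sketch (not the authors' code) of relative-risk estimation with a
# log-binomial model: a binomial GLM with a log link, so exp(coefficients) are RRs.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: 1 = delivered at home, 0 = delivered at a health facility;
# distance_10km = 1 if the household is >= 10 km from the nearest facility.
df = pd.DataFrame({
    "home_delivery": [1, 1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0],
    "distance_10km": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

model = smf.glm(
    "home_delivery ~ distance_10km",
    data=df,
    family=sm.families.Binomial(link=sm.families.links.Log()),
).fit()

print(np.exp(model.params))      # relative risks
print(np.exp(model.conf_int()))  # 95 % confidence intervals
# Note: log-binomial models can fail to converge with several covariates;
# Poisson regression with robust standard errors is a common fallback.
```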
The study was approved by the Kenya Ethical Review Committee and conducted in accordance with good clinical practice principles. Permission to conduct the study was also granted by the Kilifi County Director of Health Services and the in-charges of the study health facilities.
We recruited a total of 410 mothers: 100 from Bamba sub-district hospital, 100 from Ganze health centre and 210 from Kilifi County hospital, but only 394 records had data on the outcome of interest and were used in the analysis. Of the 394 mothers, 103 (26 %) reported having delivered at home during their last pregnancy. The mean age in years of the mothers and their partners (standard deviation) was 25.9 (5.6) and 31.5 (7.3), respectively. Mothers who delivered at home were older (P = 0.002). The distribution of marital status and of whether the partner was the household's main breadwinner was not different between mothers who delivered at home and those who delivered at a health facility. The distribution of the other main socio-demographic characteristics (type of marriage, number of children, religion and level of education) was different between mothers who delivered at home and those who delivered at a health facility (P < 0.05) (Table 1).
Table 1 Participants characteristics stratified by either home or health facility delivery
Among the 103 mothers who delivered at home, only 44 (43 %) were massaged on the abdomen during labor. A total of 59 (57 %) of the mothers who delivered at home used new washed/unwashed clothes to wrap the baby, and 42 (41 %) wrapped the baby with old washed/unwashed clothes. Of the babies delivered at home, 46 (44 %) were wrapped within five minutes of delivery and 81 (79 %) were wrapped within the first 30 min. Mother's breast milk was the most common first feed given to the baby (71, 69 %), while 16 (17 %) babies were first fed glucose water; 81 (79 %) of the babies delivered at home were fed within the first hour of life. Cotton and gauze were used to control bleeding by 35 (36 %) mothers, 47 (48 %) used a clean washed/new cloth and 15 (15 %) used an unwashed cloth. Eighty-nine (88 %) of the mothers who delivered at home went to the hospital after delivery: 59 (65 %) visited the hospital after a day of delivery, 70 (78 %) visited the hospital for the baby's immunization, 13 (14 %) for check-up and 4 (4 %) because there was a complication during the home delivery. Among the babies delivered at home and taken to hospital later, 87 (92 %) were vaccinated, and 72 (76 %) of these were vaccinated after a day. The majority of mothers who delivered at home were assisted by their mothers-in-law (28 %), and only 4.9 % were assisted by a skilled birth attendant (Table 2). Despite very few home deliveries being supervised by a skilled birth attendant, 98 (95 %) of mothers who delivered at home used a new or boiled blade to cut the umbilical cord.
Table 2 Some reasons for delivering at home and home delivery practices among mothers who delivered at home
Living far from a health facility was reported as a reason for delivering at home by 54 % of mothers who delivered at home, and 32 % of mothers who delivered at home were not able to get to a health facility because of onset of labor before the expected delivery date. Only 12 % of mothers reported high cost as a reason for not delivering at a health facility (Table 2).
Although only 68 (17 %) mothers reported having planned to deliver at home, 103 (26 %) delivered at home. This is despite the fact that 350 (87 %) of all the interviewed mothers acknowledged knowing the difference between delivering at home and in a health facility and that delivering at home was dangerous. Although only 18/103 (17 %) of mothers who delivered at home reported having been attended by a traditional birth attendant (TBA), 84 (21 %) of all the mothers reported TBA services to be good. A total of 81 (21 %) of all mothers thought TBA services were essential, the majority being mothers who delivered at a health facility (P < 0.0001). However, 210 (53 %) believed TBA services in managing home deliveries were bad. We found that 373 (92 %) of all mothers had attended antenatal care (ANC), although 61 (16 %) started attending ANC in the third trimester and only 183 (49 %) attended ANC at least four times as recommended. The majority of mothers who attended the ANC clinics rated the services as good (253, 68 %), and 305 (82 %) reported having been advised on mode of delivery during the ANC clinic visits (Table 3).
Table 3 Mothers' Antenatal Care (ANC) experience and perceptions towards home delivery
In the univariate analysis, older age of both the mother and the partner, being in a polygamous marriage, being a mother of at least two children and living ≥5 km from the nearest health facility were associated with a higher risk of delivering at home (crude P < 0.05). Higher education levels of both the mother and the partner were protective against delivering at home in the univariate regression model (RR < 1.0 and P < 0.05). In the multivariate regression model, only long distance from the nearest health facility was associated with a higher risk of delivering at home. Living ≥10 km away from the nearest health facility was associated with an adjusted RR of 3.86 (95 % CI 2.13 to 7.02, P < 0.0001; Table 4).
Table 4 Risk factors associated with home delivery
Our study established that 26 % of the mothers attending child welfare clinics for child immunization delivered their last child at home, a prevalence lower than the 2014 national prevalence of 39 % and the Kilifi County prevalence of 51 % [9]. Studies done in other counties in Kenya, such as West Pokot and Nyandarua, report a higher prevalence [10–12]. Our lower proportion of mothers delivering at home could be due to efforts by the Kenyan government to increase hospital access for mothers, especially in rural areas. The Kenyan government, in collaboration with other health stakeholders, has been carrying out high-level campaigns and interventions to reduce maternal mortality in line with Millennium Development Goal 5. In particular, the current First Lady is leading a campaign dubbed the "Beyond Zero campaign" to stop preventable maternal deaths by providing fully equipped mobile clinics that offer medical care during delivery to women who have no access to hospitals. By only including mothers bringing their babies for immunization services, we might have left out mothers who do not take their babies for vaccination or those who lost their infants in the first six months of life.
Previous studies investigating factors associated with place of delivery reported age of the mother as one of the risk factors associated with home delivery [13–16]. In our univariate analysis, both the age of the mother and that of the partner were associated with the risk of home delivery. Older maternal age may reflect previous successful delivery experience and certain cultural norms. Mothers who have previously delivered successfully with no complications are more likely to deliver at home than young, first-time mothers. On the other hand, older women may belong to more traditional cohorts and thus be less likely to use modern facilities than young women [17]. Higher education levels of both the partner and the mother were associated with a protective effect on the risk of home delivery. Educated women tend to give birth to fewer children and to deliver at a health facility compared to women with little or no education [8, 18, 19].
Contrary to our initial hypothesis, the cost of delivery at a health facility was not reported as a major hindrance to accessing hospital delivery services; rather, distance from the nearest health facility was the major risk factor for home delivery. After adjusting for other risk factors, living ≥10 km from the nearest health facility was associated with a nearly 4-fold risk of home delivery. This key finding concurs with what other researchers in developing countries have reported [20–26]. Most pregnant women are not able to access transport services when they go into labor, mostly because of the poor road network and infrastructure, especially in rural and poor urban regions in Africa [20, 22]. Within rural Kilifi County, health facilities are sparsely distributed, with a very poor road network and an erratic public transport system. Many of the women could have gone into labor at night, when public means of transport are not available. Interventions such as "waiting homes" near health facilities, which accommodate expectant mothers residing far from the nearest health facility in the days before delivery, can be helpful in such scenarios [19]. Government-run health facilities in Kenya offer free maternal health services, but this may not help the targeted mothers if they cannot access the health facilities. The "Beyond Zero" initiative by the First Lady was conceived to reach those mothers who may not be able to access health facilities to deliver; however, the initiative is yet to have a significant impact on reducing maternal mortality. Although Kilifi County is one of the counties that received the mobile clinics, the service has not reached rural Kilifi, where our study population resides [27].
The majority of the women who delivered at home were assisted by untrained birth attendants such as their mothers-in-law or their biological mothers; very few were attended by a TBA or a skilled birth attendant. This poses great danger to the pregnant woman and the baby, since the untrained birth attendants' "delivery services" are informed by some detrimental cultural practices such as massaging of the abdomen before delivery. Massaging is done to align the baby in the right position in readiness for delivery. Most of the massaged women are unaware of the dangers associated with it, which include placenta praevia and abruption, asphyxiation of the fetus, increased chances of trauma to the baby and premature delivery. During focus group sessions, mothers complained of many uncomfortable vaginal examinations, which could have kept some pregnant women away from the maternity ward. Mothers delivering at home do not get professional advice on how to treat the umbilical cord; it is a common practice along the Kenyan coast to apply wood ash after cutting the umbilical cord. These cultural practices, especially by mothers delivering at home, are associated with the excess burden of neonatal admissions with common diagnoses such as neonatal sepsis and neonatal tetanus at Kilifi County Hospital [28, 29].
The major limitation of this study was selection bias, since the study was carried out in health facilities, excluding mothers who do not take their children to child welfare clinics and those whose children died in the first six months. Mothers of deceased infants and mothers who do not take their children for immunization might have different reasons for delivering at home. Therefore, the proportion of home deliveries reported by this study is likely an underestimate.
Despite the government's efforts to offer free maternal services in Kenya, long distance from the nearest health facility, rather than economic and cultural factors, was the major reason why mothers still deliver at home. The government should consider investing in innovative ways of increasing pregnant women's access to health facilities.
Kenya National Bureau of Statistics (KNBS) and ICF. Macro Kenya Demographic and Health Survey 2008–09. Calverton, Maryland: KNBS and ICF Macro; 2010.
National Co-ordinating Agency for Population and Development (NCPDA). Maternal Deaths on the Rise in Kenya: A Call to Save Women's Lives. Policy Brief No. 9 – June 2010. 2010.
Kenya National Bureau of Statistics (KNBS). Kenya Population and Housing Census Report published in August 2010. 2010.
CBS, MOH and ORC Macro. Kenya Demographic and Health Survey,2003. Calverton, Maryland: Central Bureau of Statistics (Kenya), Ministry of Health (Kenya) and ORC Macro; 2004.
Kenya Demographic and Health Survey 2014. Available at: http://dhsprogram.com/pubs/pdf/PR55/PR55.pdf [accessed on 07 May 2015].
Njuki R, Abuya T, Kimani J, Kanya L, Korongo A, Mukanya C, et al. Does a voucher program improve reproductive health service delivery and access in Kenya? BMC Health Serv Res. 2015;15:206. doi:10.1186/s12913-015-0860-x.
Kilifi County: Kilifi County Profile Available on: http://www.kilifi.go.ke/index.php?com=1#.VnaoB1In504 [accessed on 20 Dec 2015]
Fisher AA, Laing EJ, Stoeckel EJ, Townsend WJ. Handbook for family planning operations research design. 2nd ed. New York, NY, USA: Population Council; 1998. p. 36.
Kenya National Bureau of Statistics (KNBS). Kenya Population and Housing Census Report published in April 2015. 2014.
Ogolla J. Factors Associated with Home Delivery in West Pokot County of Kenya. Advances in Public Health. 2015, Article ID 493184, 6 pages. doi:10.1155/493184.
Wanjira C, Mwangi M, Mathenge E, Mbugua G, Ng'ang'a Z. Delivery practices and associated factors among mothers seeking child welfare services in selected health facilities in Nyandarua South District Kenya. BMC Public Health. 2011;11:360.
Mokua J. Factors Influencing Delivery Practices among Pregnant Women in Kenya: A Case of Wareng' District in Uasin Gishu County. International Journal of Innovation and Scientific Research. 2014, ISSN 2351–8014, Vol. 10, No. 1, Oct. 2014, pp. 50–58.
Envuladu EA, Agbo HA, Lassa S, Kigbu JH, Zoakah AI. Factors determining the choice of a place of delivery among pregnant women in Russia village of Jos North, Nigeria: achieving the MDGs 4 and 5. Int J Med Biomed Res. 2013;2(1):23–7.
van Eijk A, Bles H, Odhiambo F, Ayisi J, Blokland I, Rosen D, et al. Use of antenatal services and delivery care among women in rural Western Kenya: a community based survey. J Reproductive Health. 2006;3(2):1–9.
Mrisho M, Schellenberg JA, Mushi K, Obrist B, Mshinda H, Tanner M, et al. Factors affecting home delivery in rural Tanzania. Tropical Med Int Health. 2007;12(7):862–72.
Nanang M, Atabila A. Factors predicting home delivery among women in Bosomtwe-Atwima-Kwanwoma district of Ghana: A case control study. Int J Med Public Health. 2014;4:287–91.
Navaneetham K, Dharmalingam A. Utilization of maternal health care services in Southern India. Soc Sci Med. 2006;55:1849–69. doi:10.1016/j.socscimed.2006.11.004.
Mwewa D, Michelo C. Factors associated with home deliveries in a low income rural setting-observations from Nchelenge District Zambia. Med J Zambia. 2010;37(4):234–9.
Abebe F, Berhane Y, Girma B. Factors associated with home delivery in Bahirdar Ethiopia: A case control study. BMC Res Notes. 2012;5:653. doi:10.1186/1756-0500.
Shrestha SK, Banu B, Khanom K, Ali L, Thapa N, Stray-Pedersen B, et al. Changing trends on the place of delivery: why do Nepali women give birth at home? Reprod Health. 2012;9:25.
Kulmala T. Maternal Health and Pregnancy Outcomes in Rural Malawi. Academic Dissertation, University of Tampere, Medical School, Acta Electronica Universitatis Tamperensis 76; 2000. Available on http://uta32-kk.lib.helsinki.fi/bitstream/handle/10024/67088/951-44-4976-2.pdf?sequence=1 [accessed on 10 May 2015]
Thaddeus S, Maine D. Too far to walk: maternal mortality in context. Social Science and Medicine. 1994;38(8):1110.
Doctors of the World USA. Partnership for Maternal and Neonatal Health—West Pokot District Child Survival and Health Program. 2007. Available on: http://pdf.usaid.gov/pdf_docs/Pdack178.pdf [accessed on 30 May 2015]
Gabrysch S, Cousens S, Cox J, Campbell O. The Influence of Distance and Level of Care on Delivery Place in Rural Zambia: A Study of Linked National Data in a Geographic Information System. PLoS Med. 2011;8(1):e1000394. doi:10.1371/journal.pmed.1000394.
Gistane A, Maralign T, Behailu M, Worku A, Wondimagegn T. Prevalence and Associated Factors of Home Delivery in Arbaminch Zuria District, Southern Ethiopia: Community Based Cross Sectional Study. Science J Public Health. 2015;3(1):6–9. doi.10.11648/j.sjph.20150301.12.
Gabrysch S, Campbell OM. Still too far to walk: literature review of the determinants of delivery service use. BMC Pregnancy Childbirth. 2009;9:34.
BeyondZero. Beyond Zero initiative. Available on: http://www.beyondzero.or.ke/ [accessed on 20 Dec 2015]
Mwaniki MK, Gatakaa HW, Mturi FW, Chesaro CR, Chuma JM, Peshu NM, et al. An increase in the burden of neonatal admissions to a rural district hospital in Kenya over 19 years. BMC Public Health. 2010;10:591.
Ibinda F, Bauni E, Kariuki SM, Fegan G, Lewa J, Mwikamba M, et al. Incidence and Risk Factors for Neonatal Tetanus in Admissions to Kilifi County Hospital, Kenya. PLoS One. 2015;10(4):e0122606. doi:10.1371/journal.pone.0122606.
We are grateful to all data collectors and to all study participants for giving us their time during the entire study period.
Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
Rodgers O. Moindi
& Venny C. S. Nyambati
Kenya Medical Research Institute/Wellcome Trust Research Programme, Centre for Geographic Medicine Research-coast (CGMRC), Kilifi, Kenya
Moses M. Ngari
Kenya Medical Research Institute, Nairobi, Kenya
Charles Mbakaya
Correspondence to Rodgers O. Moindi.
Authors' contribution
MRO, conceived and designed the study, collected the data and wrote the first draft. MC and NVCS were involved in giving technical guidance in the design of the study and in the revision of the manuscript. MMN did the statistical analysis and was also involved in drafting the manuscript. All authors read and approved the final manuscript.
Moindi, R.O., Ngari, M.M., Nyambati, V.C.S. et al. Why mothers still deliver at home: understanding factors associated with home deliveries and cultural practices in rural coastal Kenya, a cross-sectional study. BMC Public Health 16, 114 (2016). doi:10.1186/s12889-016-2780-z
Maternal mortality rate
Delivery practices
Cultural factors
Skilled delivery assistance
Alice and Bob play at cards
To pass the time Alice and Bob play a simple game of chance where cards are drawn in pairs from a shuffled standard 52-card deck. If two red cards are drawn Alice wins the round. If two black cards are drawn Bob wins the round. Otherwise the round continues and they draw another pair of cards.
To make things interesting, Alice and Bob wager as follows. Before a pair of cards is drawn each player antes up $1 + n$ dollars, where $n$ is the number of cards drawn in the round so far, with the pot going to the eventual winner of the round. For example, if Bob wins after a sequence black-red,black-black he nets \$4 since each player contributed \$1 before the 1st draw and \$3 before the 2nd.
If the deck is exhausted before a round is won then all cards are gathered and reshuffled and the round continues with the fresh deck (without resetting the count for wagering).
In the interest of moving things along, Alice and Bob are both willing to play multiple rounds without collecting cards and reshuffling as long as they view the expected value of the next round as close to even. Each has their own rough threshold for "close", but Alice is faster to call for reshuffles than Bob.
After starting the game with 3 remarkable rounds, and having reshuffled once due to an exhausted deck, Alice and Bob are both breaking even with \$0 net gain or loss. So far neither player has been close to calling for a reshuffle, but Bob calls for one now before continuing.
How many cards were drawn in each of the 3 rounds, and who won each round?
mathematics game cards
Bob must've won the last round, resulting in fewer black cards in the remaining deck, otherwise he wouldn't call for a reshuffle.
The money won in a round is $\left(\frac{n}{2}\right)^2$, where $n$ is the number of cards drawn in that round. So we have to solve: $\left(\frac{n_{1}}{2}\right)^2 + \left(\frac{n_{2}}{2}\right)^2 = \left(\frac{n_{3}}{2}\right)^2$, where $n_{1},n_{2},n_{3}$ are the numbers of cards drawn in the rounds, but not necessarily in the correct order. The halved values form Pythagorean triples. We know their sum is larger than 26 and smaller than 52. The only valid solutions for this puzzle are (5,12,13) and (8,15,17).
So I'd say: The first solution is unlikely because there would be still 44 cards remaining in the set after the last round, giving Bob a good enough chance to continue. Therefore, I'd say he won round 1 with 16 cards tallied. He felt his chances are still good enough, so he continued. Alice won round 2 with 34 cards tallied, leaving 2 cards (one red and one black). Thus, chances of losing in the next round were zero, so they continued. Bob won third round with 30 cards tallied. He felt 24 cards, where 11 were black and 13 were red were too bad so he asked for a reshuffle.
EDIT: typos for the 3rd round - I repeated 34 but meant 30 and incorrectly calculated the remaining cards.
EDIT2: 17 cards in the first round and 35 in the second is more likely, because it avoids a round with 0% winning chance. In the last round we would then have 20 or 19 cards, making it even less likely for Bob to continue.
Ardweaden
$\begingroup$ nit-picking, but should they really have paid antes after the first card of the third round was drawn? It is not a potentially round-winning draw because they both knew the color of the (non-winning) card that was about to be drawn... $\endgroup$ – Evargalo Aug 5 '19 at 11:55
$\begingroup$ You're right. Also, I completely overlooked the fact that the number of tallied cards could be odd. So there could be 17 cards tallied in the 1st round and 35 in the second (sum of won money is an integer division of played rounds), leaving no cards. Thus, the round in question would be avoided. Cards would have to be shuffled for round 3. $\endgroup$ – Ardweaden Aug 5 '19 at 13:24
$\begingroup$ @Ardweaden Thanks for your reply! Carlos, who happened to be watching the game, confirms that the rounds were won by Bob, Alice, and Bob with 16, 34, and 30 cards drawn respectively. I should add though that there are additional possibilities you didn't discuss (two non-primitive Pythagorean triples). I'll expand on this in a separate answer. $\endgroup$ – 53x15 Aug 5 '19 at 15:27
$\begingroup$ @Evargalo Thanks for pointing this out. I guess it would be better to describe the game as drawing two cards at a time with a required ante before each draw so the "potentially round winning" phrasing can be dropped. $\endgroup$ – 53x15 Aug 5 '19 at 15:30
$\begingroup$ @Ardweaden Two other comments on your answer: (1) while the 5-12-13 solution does leave 44 cards after the last round, and this is more than the number remaining with the other candidate discussed, this doesn't really rule anything out since Bob's threshold for "close" is unspecified. (2) I think you meant "chances of losing in the next two draws were zero" rather than "in the next round". The round can cross a reshuffle boundary when the deck is exhausted. I think rephrasing the problem as suggested above where two cards are drawn at a time can reduce the potential ambiguity here. $\endgroup$ – 53x15 Aug 5 '19 at 15:54
The players ante a sequence of odd numbers, so if a round ends after $x$ pairs of cards are drawn then a player's payoff is $\pm x^2$.
Since the players break even after three rounds we must have $x^2 + y^2 = z^2$, where $x,y,z$ are the number of pairs drawn in each round, but not necessarily in that order. The tuple $(x,y,z)$ is therefore a Pythagorean triple, which we can enumerate with Euclid's formula, taking care to include non-primitive triples.
The deck was reshuffled once due to exhaustion, so we know that $52 < 2x + 2y + 2z < 104$. This limits the possible (x,y,z) triples to:
(5, 12, 13)
(8, 15, 17)
(9, 12, 15)
(12, 16, 20)
Each of these can be further permuted 6 ways giving 24 possibilities.
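A quick brute-force check of that enumeration (my own sketch, not part of the original answer), using only the constraint that the three rounds together consume more than one 52-card deck but fewer than two:
```python
# Pythagorean triples (x, y, z), x <= y < z, whose total card count 2*(x+y+z)
# lies strictly between 52 and 104.
triples = [(x, y, z)
           for z in range(1, 52)
           for y in range(1, z)
           for x in range(1, y + 1)
           if x * x + y * y == z * z and 52 < 2 * (x + y + z) < 104]
print(triples)
# -> [(5, 12, 13), (9, 12, 15), (8, 15, 17), (12, 16, 20)]
```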
We make the following observations:
(1) A round can't cross a reshuffle boundary unless it started with the players even in the number of prior wins (or else one player's advantage would force a win before the reshuffle). The second round can never cross a reshuffle boundary.
(2) Since there was a reshuffle and since no tuple has a subset that consumes exactly 52 cards, some round must have crossed the reshuffle boundary. It could not have been the first since no tuple admits a round that large, so it must have been the third, with the first two rounds including a win for Alice and a win for Bob. We know that Bob won the third round since it's he who requests the shuffle. This means that Alice won just one round, necessarily the longest, and this longest round was not the last.
(3) We don't know either player's reshuffle threshold, but we can rule out cases where there were fewer cards left in the deck after round 1 than are left in the deck after round 3. In such cases if Bob asks for a reshuffle after round 3 then either he or Alice would have asked for one after round 1.
Applying these criteria we can rule out 22 of the 24 tuple permutations, leaving only (9, 15, 12) and (8, 17, 15), which translate to:
(9, 15, 12):
r1(bob): 18 drawn, 34 remaining
r2(alice): 30 drawn, 4 remaining
r3(bob): 24 drawn, 32 remaining after crossing the reshuffle
(8, 17, 15):
r1(bob): 16 drawn, 36 remaining
r2(alice): 34 drawn, 2 remaining
r3(bob): 30 drawn, 24 remaining after crossing the reshuffle
In the first of these cases there's not much difference between 32 and 34 cards, and the problem stated that previously neither player had been close to calling for a reshuffle. If 32 cards is under Bob's threshold then 34 cards would have been close.
So the best answer is 16, 34, and 30 cards were drawn in rounds 1-3, which were won by Bob, Alice, and Bob respectively.
POPULATION GENETICS AND EVOLUTION
Rapid Evolution of Diminished Transformability in Acinetobacter baylyi
Jamie M. Bacher, David Metzgar, Valérie de Crécy-Lagard
Jamie M. Bacher
Skaggs Institute for Chemical Biology and Departments of Molecular Biology and Chemistry, The Scripps Research Institute, 10550 N. Torrey Pines Rd., BCC-379, La Jolla, California 92037
David Metzgar
Department of Defense, Center for Deployment Health Research, Naval Health Research Center, San Diego, California 92186-5122
Valérie de Crécy-Lagard
DOI: 10.1128/JB.00846-06
The reason for genetic exchange remains a crucial question in evolutionary biology. Acinetobacter baylyi strain ADP1 is a highly competent and recombinogenic bacterium. We compared the parallel evolution of wild-type and engineered noncompetent lineages of A. baylyi in the laboratory. If transformability were to result in an evolutionary benefit, it was expected that competent lineages would adapt more rapidly than noncompetent lineages. Instead, regardless of competency, lineages adapted to the same extent under several laboratory conditions. Furthermore, competent lineages repeatedly evolved a much lower level of transformability. The loss of competency may be due to a selective advantage or the irreversible transfer of loss-of-function alleles of genes required for transformation within the competent population.
Bacteria may achieve genetic exchange by several means (38). Conjugation is typically mediated by extrachromosomal elements that direct the transfer of their own genetic material directly from one organism to another. This also results in the occasional transfer of host genetic material. Transduction involves the accidental packaging and genetic transfer of host DNA in phage capsids. Finally, transformation involves the uptake of free DNA from the environment followed by recombination into the genome. A number of bacterial species, such as Bacillus subtilis, Neisseria gonorrhoeae, and Helicobacter pylori, are known to be naturally competent for transformation (17, 21, 26). Conjugation and transduction are caused by extrachromosomal genetic elements acting in a "selfish" manner and are mechanistically independent of the recipient's genotype. However, the evolutionary derivation of active, recipient-driven competence remains obscure.
The widespread distribution of these mechanisms that result in genetic exchange, as well as eukaryotic sexual recombination, suggests that such genetic recombination provides some benefit (4, 32, 38). However, sex involves the pairing of gametes from individuals that will then undergo meiosis, a fundamentally different process from the mechanisms of genetic transfer listed above. Therefore, a different set of benefits and costs may exist when the mechanism of genetic exchange is transformation rather than sex (38). In addition to recombining beneficial mutations (as in sex), possible benefits of transformation include the uptake of DNA as a nutrient source, the repair of DNA damage, the generation of variation within a population, and the reduction of mutational load.
Sources of DNA for transformation include the genomes and extrachromosomal elements of dead cells of the same species or of unrelated organisms and living cells of species that actively release DNA (see reference 42 and references therein). With regard to reducing the mutational load of a population, simulations have suggested that transformation is actually more likely to decrease fitness when the source of the DNA is from closely related cells that died, due to overrepresentation of low-fitness mutants among dead cells compared to their living counterparts (39). It was suggested by simulations that, in mixed populations of competent and noncompetent bacteria, the potential benefit of transformation with regard to reducing the mutation load was exceeded by the risk of transformation to a noncompetent genetic background (40). This would suggest that a noncompetent phenotype should dominate the population via unidirectional genetic transfer of defective competence alleles from noncompetent mutants to competent recipients. In this case, the possibility of reversion to competence is obviated by the nature of the allele being transferred. In addition, transformation may carry a fitness cost simply due to the metabolic demand of synthesizing the proteins involved in the active uptake of DNA from the environment and recombination of DNA fragments into the genome (Table 1), favoring loss of the trait through mutation and subsequent selection.
Identified genes related to transformability
We sought to address the issue of evolutionary costs and benefits of transformation by using Acinetobacter baylyi strain ADP1 (previously Acinetobacter calcoaceticus BD413 and Acinetobacter sp. strain ADP1). A. baylyi is a gram-negative soil bacterium that displays a broad metabolic versatility (3, 45); indeed, one-quarter of its genome contains a majority of genes dedicated to the breakdown of a large variety of compounds, a feature so unique that this region has been termed an "archipelago of catabolic diversity" (3). Importantly, A. baylyi is characteristically highly competent and recombinogenic (27, 43, 45). Furthermore, A. baylyi does not have any sequence requirements for DNA uptake (though genomic homology greatly increases the recombination rate), and the species is maximally competent at the onset of exponential growth phase (35). These features make A. baylyi an attractive target for genetic manipulation (33). Indeed, a number of operons and genes whose functions are associated with competency have been identified by targeted mutagenesis (7, 20, 25, 31, 36) and by genomic sequence analysis (3). Of these genes, knockouts of comB and comC result in noncompetent phenotypes, while knockouts of comE and comF result in 10- and 1,000-fold-diminished transformability, respectively. By comparing rates of evolution in competent and engineered noncompetent A. baylyi strains, we sought to investigate the effect of genetic exchange on laboratory evolution of bacteria and the reciprocal effect of laboratory evolution on genetic exchange mechanisms.
Strain construction. Wild-type Acinetobacter baylyi strain ADP1 (PS8004) was streaked out, and five randomly picked clones were used to initiate five cultures which were stored at −80°C and later used to initiate five wild-type lineages of A. baylyi. These were designated PS8135 through PS8139. (Strains are summarized in Table 2.)
Strains used in this study
Strains were constructed essentially as described previously (33). In order to eliminate the competence of A. baylyi, we made a DNA construct in which regions external to comFECB flanked a cassette that consisted of the kanamycin resistance (Kanr, which allows positive selection) gene and the saccharase B gene (sacB, which allows negative selection due to induced sensitivity to sucrose). This construct was then used to transform A. baylyi and select for Kanr clones, which should eliminate the comFECB operon and result in a noncompetent strain. The regions flanking comFECB were amplified from genomic DNA of PS8004 by using the primers comOpACF (5′-GCACGTCCGCTGATTCCATAAGCAGTGAT) and CO-ACR-KSB (5′-ggttgtaacactggcagagcATGCAAATTCAAAACTGTGGATAAGCCAA) for the 5′ region and CO-BNF-KSB (5′-gagacacaacgtggctttccTTAGTACGCCTCCAGAAACAAACACGTTGTA) and comOpBNR3 (5′-TTAAACAAGTGATTCAGCGTTTACAGGACTGGGGTGCAGAAGC GCC) to amplify the 3′ flank; CO-ACR-KSB and CO-BNF-KSB add regions of homology to the Kanr-sacB cassette (lowercase). The cassette was amplified using the primers SacBKanF (5′-GGAAAGCCACGTTGTGTCTC) and SacBKanR (5′-GCTCTGCCAGTGTTACAACC) with genomic DNA of strain PS6308 (A. baylyi ΔilvC::Kanr-sacB) (33). Conditions for amplification were identical among the three reactions: HiFi Platinum Taq (Invitrogen, Carlsbad, CA) with 0.2 μM of each primer, 200 μM deoxynucleoside triphosphates, and buffer and MgSO4 as supplied by the manufacturer. Reactions were cycled 35 times at 94°C for 30 s, 58°C for 30 s, and 68°C for 2 min and then at 68°C for 5 min. For PCR of flanking regions, outer primers were used at 0.8 μM. Reaction products were mixed at 3.3 μl for each reaction in a new 100-μl reaction mixture and cycled again, without additional primers, as described above. The entire reaction mixture was added to a growing, 0.5-ml culture of PS8004. This culture was grown for an additional 4 h, and then 200 μl was plated on Luria-Bertani medium (LB) with kanamycin (15 μg/ml) (LB + Kan). This resulted in hundreds of kanamycin-resistant colonies, only ∼1% of which were also sucrose sensitive. Of these, two were subjected to PCR with the primers comOpACF and comOp-BNR3, and both yielded amplified products of a size that was predicted given the cassette replacing comFECB. Strain PS8032 (A. baylyi ΔcomFECB::Kanr-sacB) was streaked out on LB + Kan, and five colonies were stored at −80°C and used to initiate noncompetent lineages (PS8130 to PS8134).
A similar strategy was employed to eliminate the mutS gene. The flanks of mutS were amplified with the primers mutSNF (5′GAGCTGGCAATTGGTGATCAAA) and mutS-NR-KSB (5′-gagacacaacgtggctttccGGTCAGCCATTGTTTCTGTGCTAT) for the 5′ flank and mutS-CF-KSB (5′-ggttgtaacactggcagagcCTAATTACGCTCAAACAGTC) and mutSCR (5′-GGTACGAACAATTCCTTTTA) for the 3′ flank. Overlaps to the Kanr-sacB cassette were included (lowercase). Three-way assembly PCR was carried out as described above and used to transform PS8004, generating strain PS8455 (A. baylyi ΔmutS::Kanr-sacB). The 5′ flank was again amplified, replacing mutS-NR-KSB with the primer RCCFmutSNR (5′-gactgtttgagcgtaattagGGTCAGCCATTGTTTCTGTGCTAT), which includes the reverse complement of primer mutSCF on the 5′ end (lowercase). Similarly, the 3′ flank was reamplified, replacing mutS-CF-KSB with mutSCF (5′-CTAATTACGCTCAAACAGTC). These products were then spliced by two-way assembly PCR with external primers mutSNF and mutSCR. This product was used to transform PS8455 and was plated on LB plus 6% sucrose (without NaCl) (LB + Suc) to take advantage of negative selection against sacB. Surviving colonies were tested by PCR with mutSNF and mutSCR and were found to have the products that would indicate the clean deletion of the mutS gene. Five such clones were isolated and used to generate lineages for selection (PS8471 to PS8475).
Serial dilution experiments. Strains PS8130 to PS8139, representing five cultures each of Δcom and wild-type A. baylyi, were inoculated from frozen culture in 1 ml of LB and grown to stationary phase at 30°C and 250 rpm. Cultures were diluted 1:10,000 to a bottleneck size of ∼2 × 10^5, and this process was repeated. Each culture represents ∼13 generations (g), with an effective population size of 2.3 × 10^6 (Ne = N0 × g) (30). Samples were stored at −80°C every five cultures, representing ∼66 generations. Because of the suspicion that combining the separate lineages might enhance the rate of evolution if recombination was a factor (potentially due to increasing the population diversity), after 400 generations, each lineage was severely bottlenecked (∼200 CFU) and mixed to form one lineage per wild-type and com genotype. These new, mixed lineages were propagated for five serial cultures. These cultures were diluted to initiate five new lineages per genotype, which were each serially diluted 10 times. The process of bottlenecking, serial dilution, and restoration of lineages was repeated, with the restored lineages (five per genotype) being propagated for five transfers when the experiment was halted. This resulted in a total of ∼730 generations of adaptation to benign laboratory conditions.
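A small sketch of the arithmetic behind this regime (my own illustration; the dilution factor and bottleneck size are the ones quoted above):
```python
import math

dilution = 1e4        # 1:10,000 dilution at each transfer
bottleneck = 2e5      # CFU carried over at each transfer (N0)
transfers = 55

g = math.log2(dilution)    # generations per transfer, ~13.3
Ne = bottleneck * g        # effective population size, N0 x g
print(f"~{g:.1f} generations/transfer, Ne ~ {Ne:.1e}, total ~ {g * transfers:.0f} generations")
```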
Later experiments were conducted without the severe bottlenecking steps and subsequent mixing of lineages. To compare the evolution of wild-type and mutator strains of A. baylyi, PS8135 to PS8139 and PS8471 to PS8475 were serially diluted for ∼730 generations in 1 ml LB at 30°C and 250 rpm. To compare the adaptation of Δcom and wild-type A. baylyi strains in harsh conditions, PS8130 to PS8139 were similarly propagated for ∼400 generations in 1 ml LB plus 300 mM NaCl at 40°C and 250 rpm.
Growth rate determination. Growth rates were determined essentially as described previously (2). In brief, overnight cultures were diluted 1:1,000 into 250-μl microplate wells. Growth was monitored by absorbance at 595 nm in a PowerWave 200 microplate reader (Bio-Tek, Winooski, VT), with incubation at the appropriate temperature and shaking between readings. Growth curves were fit by nonlinear regression to the logistic growth equation \[ N_{t} = \frac{K}{1 + [(K/N_{0}) - 1]e^{-rt}} \] (1) in which K is the carrying capacity, N is the population size, t is time, and r is the specific growth rate. Rates shown are means and standard deviations (SDs) for five replicate lineages, measured at least twice.
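A minimal sketch of such a fit (my own illustration, not the authors' code; the absorbance readings are hypothetical):
```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, N0, r):
    # Equation (1): N(t) = K / (1 + (K/N0 - 1) * exp(-r*t))
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

t_hours = np.array([0, 1, 2, 3, 4, 5, 6, 8, 10, 12], dtype=float)
od595 = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.30, 0.52, 0.85, 0.97, 1.00])

# Nonlinear least squares; p0 holds rough starting guesses for K, N0 and r.
(K, N0, r), _ = curve_fit(logistic, t_hours, od595, p0=[1.0, 0.01, 0.5])
print(f"specific growth rate r ~ {r:.2f} per hour")
```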
Fitness assays. Fitness was assayed by direct competition at high and low densities. Overnight cultures of lineages to be competed were diluted and mixed into the same culture at either 1:100,000 for low-density competitions (∼10^3 CFU/ml) or 1:10 for high-density competitions (∼10^8 CFU/ml). Low-density competitions lasted for ∼15 generations, but the linear relation between number of generations and time did not change at the densities examined (r^2 was at least 0.98 for all experiments), indicating that culture conditions had not dramatically changed. High-density cultures reached stationary phase quickly and then stayed at high density (∼10^9 CFU/ml) for the remainder of the competition. At various time points (on the scale of hours for low-density competition and ∼12 h apart for high-density competitions), samples from the competition were serially diluted, and titers were determined in duplicate on LB + Kan (specifying Δcom lineages) or LB + Suc (specifying wild-type lineages due to the sensitivity of the Δcom lineage to sucrose). While the Sucs phenotype is lost at a frequency of ∼10^−7, competitions were carried out such that the difference between types was much greater than this frequency and therefore was insensitive to reversion. The change in the ratio of the two types being competed was fit to the equation \[ \ln R_{t} = \ln R_{0} + s \times t \] (2) in which R is the ratio of the two types, t is time, and s is the selection coefficient, i.e., the difference in fitness between the two types (29). Fitness values are reported in terms of hours because in high-density fitness competitions, there is no change in population size. Even though there is turnover within the population, the generation time cannot be measured. Although the actual change in total number of bacteria is known for the low-density cultures, t is reported in terms of hours as well so that the two measures can be directly compared. Calculating the low-density fitness value in terms of generations does not change the conclusions reached here.
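A minimal sketch of fitting equation (2) (my own illustration; the colony counts are hypothetical): the selection coefficient s is simply the slope of ln(ratio) against time.
```python
import numpy as np

t = np.array([0.0, 12.0, 24.0, 36.0])                    # sampling times (h)
count_com = np.array([120, 150, 180, 220], dtype=float)  # e.g. titers on LB + Kan
count_wt = np.array([130, 120, 110, 100], dtype=float)   # e.g. titers on LB + Suc

log_ratio = np.log(count_com / count_wt)
s, ln_R0 = np.polyfit(t, log_ratio, 1)   # slope = selection coefficient (per hour)
print(f"s ~ {s:+.3f} per hour")
```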
Transformability assay. Strains to be tested for transformability were diluted from an overnight culture 1:10 into 250-μl microplate wells. After 2 h of shaking and incubation at 30°C, 1 μg of genomic DNA was added to each well. DNA was prepared as described previously (1) from either PS8025 (= PS6315; A. baylyi ΔilvC::Specr-sacB) or PS8041 (= PS6372; A. baylyi ΔtrpGDC::Kanr-tdk) (33). Microplates were shaken and incubated for an additional 4 h. Cultures were serially diluted, and titers were determined on selective (LB + Kan or LB plus spectinomycin [200 μg/ml] [LB + Spec], as appropriate) and nonselective (LB) media to determine the fraction transformed. It should be noted that this procedure was optimized for throughput rather than absolute transformability.
In order to test the distribution of transformability, the ancestral and evolved lineages were tested as described above, with 1 μg of genomic DNA from each of PS8025 and PS8041. The fractions of single and double transformants were determined by measuring titers on LB, LB + Kan, LB + Spec, and LB + Kan + Spec. The expected number of double transformants was the product of the Kanr and Specr frequencies multiplied by the total titer of the culture.
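One way to turn that comparison into an estimate of the competent fraction (my own sketch with hypothetical frequencies, not study data): if only a fraction f of cells can be transformed, double transformants are enriched over the independence expectation by 1/f, so f is the expected double-transformant frequency divided by the observed one.
```python
freq_kan = 2.0e-3      # fraction of CFU transformed to Kanr (hypothetical)
freq_spec = 1.5e-3     # fraction of CFU transformed to Specr (hypothetical)
freq_double = 3.5e-6   # observed fraction transformed to both (hypothetical)

expected_double = freq_kan * freq_spec        # if every cell were competent
f_competent = expected_double / freq_double   # estimated transformable fraction
print(f"estimated competent fraction ~ {f_competent:.2f}")
```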
DNA release assay. Strains were tested for the amount of free DNA produced in culture at stationary phase. These strains were grown for 16 h at 30°C and 250 rpm. Cultures were spun down at 10,000 × g for 5 min. The supernatant was then filtered through 0.22-μm polyvinylidene difluoride syringe filters. The filtrate was phenol-chloroform extracted, chloroform extracted, and ethanol precipitated. The pellet was resuspended in 10 mM Tris, pH 8.0. A portion of the purified DNA was used in a transformation assay in which PS8025 was transformed with DNA from all tested strains. Transformability was assayed by determining the fraction of the culture that had reverted to ilvC+ by comparing the number of colonies formed on minimal medium plus glucose (MSglc) (41) to the number formed on LB. Because of the linearity of transformability over the DNA ranges found here (data not shown), the amount of transformable DNA released in cultures could be estimated by generating a standard curve and relating the fraction of PS8025 transformed to known concentrations of transforming DNA.
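A sketch of the standard-curve step (my own illustration with hypothetical numbers): known DNA concentrations are related linearly to the fraction of PS8025 transformed, and the fitted line is inverted to estimate the free DNA released by a culture.
```python
import numpy as np

dna_ng_per_ml = np.array([0.0, 10.0, 25.0, 50.0, 100.0])            # known standards
frac_transformed = np.array([0.0, 2.1e-5, 5.0e-5, 9.8e-5, 2.0e-4])  # fraction of ilvC+ revertants

slope, intercept = np.polyfit(dna_ng_per_ml, frac_transformed, 1)   # linear standard curve

frac_sample = 6.2e-5                             # fraction transformed by a culture supernatant
dna_sample = (frac_sample - intercept) / slope   # invert the curve
print(f"estimated free DNA ~ {dna_sample:.0f} ng/ml")
```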
Mutation fixation calculations. A slight fitness differential at high density was observed in parallel fitness assays between the noncompetent and competent variants of A. baylyi. It can be asked whether the measured fitness differential can account for the observed loss of transformability. The number of generations required for a spontaneous loss-of-function mutation with this selective advantage to go to fixation can be calculated by several methods, provided that several assumptions are made (see below). If a difference exists between the calculated number of generations required for the loss of competency to sweep to high frequency and the observed number of generations required to lose competency in the actual experiment (no more than 730 generations), then we can conclude that competitive growth rate differences alone cannot account for the rate at which we observe the evolution of diminished competency in A. baylyi.
All of the assumptions required for the calculations favor a faster sweep of the loss of competency and therefore minimize the number of generations required for selective fixation of com alleles in the following estimates. The result is that these estimates of the number of generations required for the loss of transformability should be taken as the minimal number of generations required, and almost certainly the true number is greater than what is calculated. The assumptions used include that (i) the selection coefficient is converted to per generation, rather than per hour, which effectively results in estimating the doubling time as 1 hour at high density (from the growth rate of the ancestral wild-type clones, the doubling time in logarithmic growth is 49.6 min), and (ii) the high-density fitness benefit is in effect at all growth phases (lag, exponential, and stationary). These assumptions almost certainly cause an overestimate of the total fitness difference between the wild-type and Δcom lineages; any calculated difference is therefore an estimate of the minimal difference between the two genotypes.
In addition, assume that (iii) the upper estimate for mutation frequency (see below) is the mutation rate after adjusting to a per-generation basis and (iv) a point mutation in any gene responsible for either competency or recombination will result in the loss of transformability that was observed. Finally, for simplicity it is assumed that (v) a single mutation was required for the observed loss of transformability and that this mutation swept to fixation in the culture during the last generation of the experiment (at 730 generations).
One method to estimate the time in generations, t, for a mutation to sweep to high frequency can be given as (5) \[ t = 2(1/s) \times \ln R_{0} \] (3) The initial ratio, R0, can be calculated in two ways. The first of these requires an estimate of the mutation rate. This can be estimated as 7.5 × 10^−11 generation^−1 base pair^−1 (from the upper measured limit of 1.5 × 10^−9 spontaneous mutants generated over 20 generations [data not shown]). The number of base pairs is taken from the genes predicted to be involved in competency and recombination in the A. baylyi genome (Table 1). Together, they total 53,166 base pairs. The mutation rate is calculated as μ = 7.5 × 10^−11 generation^−1 base pair^−1 × 53,166 base pairs = 3.99 × 10^−6 generation^−1. Estimates of R0 may now be made as μ/s, the fraction of mutations that remain after selection, which assumes that the mutation was present in the initial population but was initially deleterious: R0 = μ/s = 3.99 × 10^−6 generation^−1/0.012 generation^−1 = 3.3 × 10^−4. Alternatively, assume that the mutation was immediately beneficial as soon as it appeared in the first generation of the experiment. In this case, the initial frequency was 1/Ne, the effective population size was 2.3 × 10^6, and therefore R0 = 3.7 × 10^−7.
Another approach would be to estimate the time for the mutant fraction to become 95% of the population. In this case, we can use the equation (13) \[ t = \log_{2}(R_{f}/\mu)/s \] (4) in which the final ratio Rf = 0.95/0.05 and μ is estimated as described above. Because the selection coefficient is much greater than the mutation rate (s ≫ μ), selection will account for driving the mutations to fixation (13).
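Plugging the values quoted in the text into equations (3) and (4) (my own arithmetic, not figures reported by the authors; for equation (3) I take the magnitude of ln R0 so that the sweep time comes out positive):
```python
import math

s = 0.012                 # selection coefficient per generation (high-density estimate)
mu = 3.99e-6              # per-generation mutation rate over competence/recombination genes
R0_deleterious = 3.3e-4   # mu / s
R0_single_copy = 3.7e-7   # 1 / Ne

t3_a = 2 * (1 / s) * abs(math.log(R0_deleterious))   # equation (3), ~1300 generations
t3_b = 2 * (1 / s) * abs(math.log(R0_single_copy))   # equation (3), ~2500 generations
t4 = math.log2((0.95 / 0.05) / mu) / s               # equation (4), ~1800 generations

# All estimates exceed the ~730 generations of the experiment, consistent with the
# argument that the growth-rate differential alone cannot explain the loss of competence.
print(round(t3_a), round(t3_b), round(t4))
```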
The comFECB operon of A. baylyi strain ADP1 was replaced with a Kanr-sacB cassette, which resulted in a 10-million-fold diminished level of transformability and conferred kanamycin resistance and sucrose sensitivity (strains are listed in Table 2). The growth rates of the isogenic wild-type and ΔcomFECB::Kanr-sacB (Δcom) strains were measured, and a nested analysis of variance (ANOVA) revealed a significant difference in growth rate between the strains (Table 3) (F[1,8] = 16.2; P < 0.01). At low density the Δcom lineage had a nonsignificant apparent fitness advantage in direct competition with the wild type (s = 0.02 ± 0.06 h^−1 [mean ± SD]; P = 0.22 by two-tailed t test; n = 10), while at high density, the Δcom lineage appeared to be significantly more fit (s = 0.012 ± 0.0127 h^−1 [mean ± SD]; P = 0.016 by two-tailed t test; n = 10) (see Materials and Methods for a discussion of units used for fitness assays). An average of 131 colonies were counted at each of four time points for each strain, and competing strains were distinguished by resistance to either kanamycin or sucrose.
Nested ANOVA testing for differences in growth rates between ancestral and evolved Δcom and wild-type clonesa
We recognized that one-way transfer of the Δcom::Kanr-sacB marker to competent wild-type cells (rendering them identical to Δcom cells) would reveal itself in a similar way as would competitive changes in frequency through differential growth rates, and this test cannot directly discriminate between these two possible mechanisms. However, one-way genetic transfer should operate only transiently. Since A. baylyi is most competent at the transition to log-phase growth, genetic transfer is mistimed for operation in the high-density regimen.
It has been argued that competency in bacteria may exist for the sake of the organism-level advantage it may confer, particularly the import of nucleotides for salvage (19, 38). Growth rate analyses of the ancestral strains were carried out in MSglc in the presence of a broad range of concentrations of genomic DNA from A. baylyi ΔilvC::Specr-sacB. Importantly, free DNA conferred no growth benefit to the competent cells; conversely, the growth rates of all strains declined in response to free DNA, with growth rates of the wild-type strains possibly declining at an increased rate relative to those of the Δcom strains (Fig. 1). This supports the previous conclusion that competence is not maintained in A. baylyi for the sake of nutrient acquisition (34).
Free DNA is detrimental to the growth of A. baylyi. Competent and noncompetent lineages were grown in minimal medium in the presence of genomic DNA from A. baylyi ΔilvC::Specr-sacB. Dashed lines indicate the range of concentrations of genomic DNA found in the medium after 16 h of growth of ancestral, competent A. baylyi. Curve fits have r^2 values of 0.85 and 0.67 for Δcom and wild-type lineages, respectively; shown are means and standard deviations for five lineages.
We then sought to test whether competency could be of evolutionary benefit by allowing a higher rate of adaptation. Wild-type and Δcom strains were adapted in parallel in LB at 30°C in a laboratory environment. Five clones of each genotype were selected randomly to initiate lineages. These were thereafter diluted 10,000-fold every 12 h. This allowed the cultures to reach stationary phase, resulting in a maximum population density of ∼2 × 10^9 CFU in 1 ml. Because the effective population (Ne) is the bottleneck size (∼2 × 10^5 CFU) multiplied by the number of generations that the culture is permitted to grow, serial dilution resulted in an Ne of ∼2.7 × 10^6. Every five serial transfers (corresponding to ∼66 generations), aliquots were stored at −80°C for later analysis (see Materials and Methods). After 55 serial transfers (∼730 generations), the growth rate had improved in all evolved populations (Fig. 2A). However, no consistent or significant differences in evolved growth rate were noted between the evolved competent and noncompetent lineages, as confirmed by a nested ANOVA (Table 3). In order to assess the fitness differences between the evolved competent and noncompetent lineages, a fitness assay was conducted in which evolved populations were competed against the ancestral clone of the opposite genotype. Fitness was determined at low density (representing the stage of fresh dilutions in the serial transfer experiment) and at high density (representing the stationary-phase culture that was inevitably achieved) (see Materials and Methods) (Fig. 2B). The fitness increase at high density was considerable and was significantly greater in the competent wild-type lineages (the difference between means is s = 0.057 ± 0.012 h^−1 [mean and standard error of the mean]; P = 0.00012 by two-tailed t test; n = 10; two measurements for each of five lineages). However, the fitness increase at low density was not significantly different for the competent and noncompetent lineages (the difference between the means is s = 0.033 ± 0.021 h^−1 [mean and standard error of the mean]; P = 0.13 by two-tailed t test; n = 10; two measurements for each of five lineages). As can be observed by the similar growth rates over the course of the evolution experiment (Fig. 2A), it seems likely that the population dynamics experienced by each lineage over the course of evolution were similar. In other words, even though serial transfers occurred approximately every 12 hours for the entire experiment (all 10 lineages, 5 wild-type and 5 Δcom, were diluted at the same time), lineages would have spent more time in log phase in the earlier stages of the experiment. As the growth rate increased, more time would have been spent at stationary phase awaiting the next transfer. Nevertheless, it remains unclear what about these culture conditions might have favored the wild-type lineages at high density, since A. baylyi becomes nontransformable in stationary phase. These data, along with the similarly ambiguous and mixed findings related to growth rate (Fig. 2A), suggest that neither strain has a consistent evolutionary advantage under laboratory conditions.
Competent and noncompetent Acinetobacter baylyi strain ADP1 lineages adapt equivalently to laboratory conditions. (A) Growth rates of competent and noncompetent lineages are improved to comparable extents. Shown are means and standard deviations of three measurements each for five lineages of each genotype. (B) The fitness of each evolved lineage was determined by competition with the ancestor of the opposite type at high and low densities. The high-density fitness of the evolved competent lineages had increased by ∼0.06 h^−1 more than that of the noncompetent lineages. Shown are the means and standard deviations for five lineages, with two replicates each.
If the extent of transformability itself were to evolve, this could help reveal the role of that characteristic in evolution (6). In all five evolved competent lineages, the fraction of the population transformed in a standardized assay had decreased by 80 to 85% (Fig. 3A). Furthermore, the level of transformability decreased over the course of the selection (Fig. 3B). In order to disentangle the contributions of lineage and evolution, we performed an ANOVA to determine whether the variation observed was significant (F = 3.3; P < 0.05) (Table 4). The variation was found to be significant, which suggests that some lineages may have evolved in significantly different ways in terms of transformability. In particular, the variation at ∼130 and ∼600 generations (Fig. 3B) may indicate that some lineages were acquiring an evolutionary benefit from transformability. As they adapted to the culture conditions, the evolved populations acquired one or more mutations that resulted in the observed changes in transformability levels. These mutations may have spread either by selection against competence or by the process of unidirectional transformation from competence to noncompetence described above.
A. baylyi evolves a diminished transformability in response to adaptation to laboratory conditions. (A) Transformability in the evolved and ancestral competent lineages was assayed. Transformation was tested with two markers in order to ensure that the transforming marker was not having an effect. Regardless of marker, evolved lineages have transformability that is ∼15 to 20% of that of the ancestral lineages. Shown are means and standard deviations for five lineages. (B) Transformability of lineages over the course of the evolution experiment. Transformability was normalized to the initial fraction transformed within each lineage; means and standard deviations are shown for five lineages. (C) Six clones from each of five lineages, evolved and ancestral, were assayed for transformability. Rank-ordered clones are shown, with similar fill patterns of bars indicating clones from the same lineage.
ANOVA testing response of transformability to evolutiona
The diminished average transformability may have been due to complete fixation of a partial loss-of-function com mutation or to the presence of a more complete loss-of-function com mutation in a subpopulation. The majority of clones from the evolved populations have diminished transformability comparable to that measured on the population level (Fig. 3C). A more sensitive assay measured the distribution of transformability within a culture. A mixture of genomic DNAs from two donors, each with a distinct marker, was used to transform each of the evolved and ancestral competent lineages. The observed frequency of double transformants was greater than the product of the frequencies of single transformants in the ancestral lineages, suggesting that fewer than 100% of com+ cells are simultaneously competent. In cultures of the ancestral populations, 86% ± 8% (mean ± SD) of A. baylyi cells became competent under the conditions of the transformability assay. The fraction of transformable cells in the evolved competent lineages was 70% ± 30% (mean ± SD), showing no significant change in transformable fraction over the course of evolution. The variance in the level of competence within the ancestral and evolved populations, on the other hand, differed to a significant extent (F[1,8] = 21.4231; P = 0.0017). While this altered distribution of competence may have contributed to the evolution of some lineages and not others, it does not sufficiently account for the fraction that retains significant competency, which is at most 20% (Fig. 3A). Overall, then, it was observed that over the short term (e.g., ∼130 generations), some lineages may increase in transformability (Fig. 3B and Table 4). However, it appears that partial loss-of-function mutations in competence genes appear to inevitably become fixed in the population over the full course of these experiments.
Diminished transformability may have been selected because transformability was failing to provide any of a number of benefits. As demonstrated, DNA taken up by A. baylyi was not used as a nutrient (Fig. 1). Since these populations are essentially clonal, both the reduction of the mutational load and the repair of genetic damage may be considered straightforward benefits that can be derived from transformability. However, the loss of competence suggests that this was not the case, or at least that such putative benefits were not sufficiently advantageous to overcome any mechanisms that resulted in the loss of transformability.
Similarly, one potential benefit of competence—acquisition of novel, immediately advantageous alleles such as, for example, antibiotic resistance in nosocomial environments—is obviated by the nearly clonal nature of the lineages. It has been shown that the benefit from sex in Chlamydomonas reinhardtii increases in larger populations due to the increased availability of mutations (8). It was thought that increasing the mutation rate might make recombination more beneficial by increasing the total number of mutations available for recombination in the population. To this end, the mutS gene of A. baylyi was disrupted with the Kanr-sacB cassette, which was in turn replaced with a clean deletion of mutS. The ΔmutS strain demonstrated ∼100-fold increased mutation frequency, as measured by the fraction of the population that spontaneously acquired rifampin resistance in 20 generations (data not shown). This is a mutation frequency comparable to what has previously been measured for disrupted mutS genes in A. baylyi (44). The growth rate of the ancestral ΔmutS lineage did not differ significantly from that of the wild-type ancestor. Five ΔmutS lineages were adapted in parallel with the five lineages of the wild-type strain under conditions similar to those in the initial serial evolution experiment. After ∼730 generations of adaptation, the growth rate had improved to comparable extents in wild-type and ΔmutS lineages (Fig. 4A). Mutation frequencies were generally unchanged over the course of evolution, although the mutation frequency of one mutS+ lineage did appear to increase mildly (∼6-fold [data not shown]). This may be indicative of a role for transformability as a source of moderate genetic variation. In principle, this could result from the uptake of single-stranded DNA, causing the induction of the SOS response and the error-prone DNA polymerases IV and V; however, A. baylyi apparently lacks lexA (3), and the expression of both recA and umuD is unusual (24, 37). This indicates that even with a dramatically increased frequency of mutation, there remains no evolutionary benefit of competence and transformation in pure laboratory cultures of A. baylyi. Finally, a decrease in transformability was again observed across all strains (Fig. 4B) (wild type, P = 0.003 by two-tailed t test [n = 5]; ΔmutS, P = 0.03 by two-tailed t test [n = 5]).
A. baylyi evolves a diminished transformability regardless of mutation frequency. (A) Over the course of adaptation to LB, wild-type and ΔmutS lineages adapted to similar extents. (B) After ∼730 generations of parallel adaptation, wild-type and ΔmutS lineages achieved similar reductions in transformability. Shown are means and standard deviations for five lineages.
An alternative reason that competency might not be of benefit is the relatively benign environment (i.e., rich medium and appropriate temperature) of the selection conditions. If a population is less fit in a given environment, more mutations are required for adaptation, and the population is likely to harbor a greater fraction of beneficial mutations. This principle was recently demonstrated using the model organism Saccharomyces cerevisiae (22). SPO11/SPO13 mutants of yeast, which are deficient in recombination and meiosis, were adapted in parallel with wild-type yeast under conditions that included punctuated episodes of sex; all lineages were adapted to both benign and harsh laboratory conditions. Under benign conditions, no change in fitness was observed in sexual or asexual S. cerevisiae lineages. However, harsh conditions yielded both positive adaptation and an evolutionary advantage of sex.
A similar experiment was carried out with A. baylyi. The addition of 300 mM salt to LB medium resulted in a ∼35% decrease in growth rate. In addition, the temperature was increased to 40°C, resulting in an additional 15 to 25% decrease in growth rate. The temperature change affected the noncompetent lineage to a slightly greater extent, possibly due to a disruption of the surface of the bacteria that lack the competency-related pilins. Competent and noncompetent lineages of A. baylyi were adapted to these "harsh" conditions (LB plus 300 mM NaCl, 40°C) over the course of 30 serial 1:10,000 transfers, corresponding to ∼400 generations. The competent and noncompetent lineages adapted to harsh conditions in very similar manners (Fig. 5A). The growth rates in benign conditions of bacteria adapted to harsh conditions remained unchanged over the course of the experiment (Fig. 5B). The competent lineage appeared to have again evolved a diminished level of competence, in approximately half as many generations as in previous experiments (Fig. 5C) (P = 0.043 by paired t test comparing within lineages; n = 5). The lack of advantage to the competent lineages and the repeated loss of competence under all circumstances suggest that competence provides no evolutionary benefit under the specific harsh conditions examined here.
Adaptation to harsh conditions results in the evolution of diminished transformability. (A) Competent and noncompetent lineages were adapted to LB plus 300 mM NaCl at 40°C for ∼400 generations. Growth rates changed at a greater rate in the Δcom lineages than in the wild-type lineages. (B) Growth rates of competent and noncompetent lineages were measured in benign conditions over the course of adaptation to harsh conditions. Adaptation to harsh conditions elicited neither correlated benefits nor trade-offs with regard to growth rates when measured in benign conditions. (C) Similar to the case for experiments in benign conditions, adaptation to harsh conditions resulted in a diminished transformability. Shown are means and standard deviations for five lineages.
The process of sexual recombination in eukaryotes allows evolution to proceed at a rate that exceeds the mutation rate in individual lineages (8, 14). The results presented here suggest that such a rate increase is less (and possibly nonexistent) for bacterial transformation. This may be related to the difference in recombination rates between meiosis (0.25) and transformation (∼10⁻³/μg available DNA, as determined here [Fig. 2]). The amount of DNA free in a stationary-phase culture was determined using a transformation assay and was found to be 6 to 12 μg/ml (data not shown); the transformation rate can therefore be considered to be no more than 1.2 × 10⁻², and it would be dramatically lower for much of the growth cycle. In addition to the relative proportion of mutations in a population, factors that contribute to the probability of successful transformation combining beneficial mutations into one genome include the fraction of the genome that is incorporated in any one transformation event, the relative frequencies of clones carrying beneficial mutations, and the fraction of the population that is transformable. This may explain different outcomes with A. baylyi compared with Chlamydomonas (8, 9, 28) and Saccharomyces (22, 23, 47).
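For concreteness, the bound on the transformation rate quoted above follows from simple arithmetic; the sketch below reproduces it under the stated (simplifying) assumption that the rate scales linearly with the amount of available DNA.

```python
# Back-of-envelope check of the transformation-rate bound quoted above,
# assuming the rate scales linearly with available DNA (a simplification).
rate_per_ug = 1e-3           # transformation rate per microgram of available DNA
free_dna_ug_per_ml = 12.0    # upper end of the 6 to 12 ug/ml measured here
print(rate_per_ug * free_dna_ug_per_ml)   # -> 0.012, the 1.2e-2 bound cited above
```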
Computer simulations have been used to model the characteristics of transformable cells in culture. Such models have been used to ask whether one evolutionary benefit of transformability might be the active reversion of deleterious mutations via transformation and recombination with wild-type alleles that constitute the majority of the DNA pool (39). However, simulations of a mixed culture of transformable and nontransformable bacteria suggested that any potential benefit of transformability would be overwhelmed by the rate of acquisition of com alleles that result in a loss of transformability in previously com+ lineages (40). While DNA from Com− cells can transform Com+ cells, making them Com−, the opposite process is impossible by definition (Com− cells cannot be transformed). Therefore, mixed populations of Com+ and Com− cells (whether derived by spontaneous mutation or by deliberate introduction) should tend to become homogeneously Com−, all else being equal. This biased transfer effect is similar in many ways to the effects attributed to alleles that alter their own segregation dynamics in diploid species ("molecular drive") (15, 16). This process should theoretically operate in the absence of selection against Com+ and possibly even overwhelm selective pressure that favors Com+. We sought to test whether the data presented here fulfill this prediction.
One way to carry this out would be to estimate the number of generations required for the observed loss of competency mutations to sweep the population. With the observed fitness difference of 0.012 h⁻¹ and provided that several assumptions are made (see Materials and Methods), two equations can be used to estimate the number of generations required for these mutations to sweep the population (5, 11). By the use of equation 3, we can calculate the number of generations required for a sweep of the loss of competency by using two estimates of the initial fraction, R0, of the population that has lost competency (see Materials and Methods). If R0 is taken as the fraction of mutants in the initial population, then t = 1,335, nearly twice the number of generations observed for the loss of transformability. If the mutation that conferred the loss of competence was present once in the initial population, then t = 2,468, more than three times the number of generations that it took in the described evolution experiment for the loss of transformability to go to high frequency. Similarly, by the use of equation 4, we can calculate that the number of generations required for 95% of the population to carry the mutation is t = 1,861, more than twice the number of actual generations. The fact that the loss of transformability reached high frequency far faster than these predictions demonstrates that a nonselective mechanism is acting to favor com alleles over wild-type com+ alleles. This mechanism seems likely to be the unidirectional transfer of com alleles as predicted by modeling efforts.
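The study's equations 3 and 4 are given in its Materials and Methods, which are not reproduced in this excerpt; the sketch below instead uses the textbook deterministic form R(t) = R0(1 + s)^t for the ratio of mutant to wild-type cells, simply to illustrate the kind of sweep-time calculation described above with purely hypothetical inputs.

```python
import math

# Generic deterministic sweep-time estimate, assuming R(t) = R0*(1+s)^t for the
# mutant:wild-type ratio.  This is NOT the study's equation 3 or 4 (those are in
# its Materials and Methods); the inputs below are illustrative only.

def generations_to_ratio(R0, Rt, s):
    """Generations for the mutant:wild-type ratio to grow from R0 to Rt at a
    constant per-generation selection coefficient s."""
    return math.log(Rt / R0) / math.log(1.0 + s)

s = 0.012        # per-generation selection coefficient, as quoted in the text
R0 = 1e-7        # hypothetical initial mutant:wild-type ratio
Rt = 19.0        # ratio at which mutants make up 95% of the population
print(round(generations_to_ratio(R0, Rt, s)))   # -> ~1600 generations for these inputs
```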
The ancestral wild-type and Δcom strains were allowed to compete in serial transfer (incorporating high- and low-density regimens) for >80 generations (Fig. 6). The apparent competitive fitness of the Δcom strains exceeds that predicted based on equation 2 (see Materials and Methods), in which R, the ratio of noncompetent to competent bacteria, was determined based on the selection coefficient. Equation 4 was also used to predict the ratio of noncompetent to competent bacteria in culture, based on the mutation rate and the selection coefficient (Fig. 6). These data show that the ratio of competent and noncompetent variants changes at approximately three times the rate expected based on two methods to calculate the ratio using the high-density fitness differential that was observed. An important difference between this repeated serial transfer competition experiment and those in which the high-density fitness value was determined is the cycling between phases of growth (corresponding to the regimens in which high fitness and low fitness were measured). The competence of A. baylyi is known to be dependent on growth phase (35); if there is a significant transfer of Δcom alleles, this experiment will enhance the ability to observe that effect relative to the strictly high-density fitness experiment. These results suggest that any potential evolutionary benefits derived from transformation are exceeded by the risk of loss of transformability in a mixed culture of wild-type and com strains (40).
Noncompetent lineages of A. baylyi overtake competent lineages more rapidly than predicted in direct competition. The rate at which the noncompetent lineages overtake the population is higher than predicted by two equations: (i) using equation 2, which was used to determine selection coefficients in the initial competitions, assuming that s = 0.012 generation⁻¹ (see Materials and Methods) (29), and (ii) using equation 4 (see Materials and Methods) (13). Shown are the means and standard deviations for five competitions, as well as predicted values based on the two equations discussed.
It is also possible that the loss of transformability was partly due to a corresponding increase in fitness. It is unusual for populations to lose metabolic functions with such high reproducibility, unless such a loss results in a fitness benefit. For example, long-term experimental evolution of Escherichia coli found changes in niche breadth such that 75% of informative substrates did not show parallel decay in the first 10,000 generations of adaptation (12). However, mutations in d-ribose catabolism and spoT were both shown to be important in this adaptation, resulting in selection coefficients of 0.014 and 0.094, respectively (10, 13). These sorts of genes show parallel decay in serial transfer experiments. Similarly, mutations in rpoS have been shown to be critical to growth advantage in stationary phase (18, 46). The repeated loss of transformability by the competent lineages under three selection regimens may suggest that transformability is costly. We cannot strictly conclude that either unidirectional genetic transfer or adaptive evolution was the sole mechanism by which transformability so quickly diminished in these evolutionary experiments. Comparisons of independent growth rates suggest that selection alone is not responsible for loss of competence in these serial cultures. However, independent measurement of growth rate does not always correlate perfectly with competitive fitness, and hence this method still leaves significant room for doubt. On the other hand, in mixed populations of competent wild-type cells and mutants with diminished competency (which will certainly arise spontaneously), com alleles should inexorably spread through horizontal transfer unless strongly counterselected. Whatever the reason for the loss of transformability, it must be noted that these results may for the moment be interpreted only in the context of the laboratory, with limited environmental and genetic diversity. The evolutionary benefit conferred on an organism by competence remains to be discovered, and A. baylyi is an ideal organism to carry out such investigations.
We thank Rosie Redfield, Graham Bell, Cliff Zeyl, and Jim Bull for helpful comments and discussion, as well as Christian Hansen for discussions concerning statistical analyses. We thank Paul Schimmel for guidance and support.
This work was funded by NSF grant MCB-0128901 to V.D.C.-L. We also acknowledge support to Paul Schimmel from NIH grant GM23562 and the National Foundation for Cancer Research.
Received 14 June 2006.
Accepted 25 September 2006.
Published ahead of print on 6 October 2006.
Ausubel, F. M. 1997. Short protocols in molecular biology: a compendium of methods from Current Protocols in Molecular Biology, 3rd ed. Wiley, New York, N.Y.
Bacher, J. M., V. de Crecy-Lagard, and P. R. Schimmel. 2005. Inhibited cell growth and protein functional changes from an editing-defective tRNA synthetase. Proc. Natl. Acad. Sci. USA 102:1697-1701.
Barbe, V., D. Vallenet, N. Fonknechten, A. Kreimeyer, S. Oztas, L. Labarre, S. Cruveiller, C. Robert, S. Duprat, P. Wincker, L. N. Ornston, J. Weissenbach, P. Marliere, G. N. Cohen, and C. Medigue. 2004. Unique features revealed by the genome sequence of Acinetobacter sp. ADP1, a versatile and naturally transformation competent bacterium. Nucleic Acids Res. 32:5766-5779.
Bell, G. 1982. The masterpiece of nature: the evolution and genetics of sexuality. University of California Press, Berkeley.
Bell, G. 1997. Selection: the mechanism of evolution. Chapman & Hall, New York, N.Y.
Burt, A. 2000. Perspective: sex, recombination, and the efficacy of selection—was Weismann right? Evol. Int. J. Org. Evol. 54:337-351.
Busch, S., C. Rosenplanter, and B. Averhoff. 1999. Identification and characterization of ComE and ComF, two novel pilin-like competence factors involved in natural transformation of Acinetobacter sp. strain BD413. Appl. Environ. Microbiol. 65:4568-4574.
Colegrave, N. 2002. Sex releases the speed limit on evolution. Nature 420:664-666.
Colegrave, N., O. Kaltz, and G. Bell. 2002. The ecology and genetics of fitness in Chlamydomonas. VIII. The dynamics of adaptation to novel environments after a single episode of sex. Evol. Int. J. Org. Evol. 56:14-21.
Cooper, T. F., D. E. Rozen, and R. E. Lenski. 2003. Parallel changes in gene expression after 20,000 generations of evolution in Escherichia coli. Proc. Natl. Acad. Sci. USA 100:1072-1077.
Cooper, V. S., A. F. Bennett, and R. E. Lenski. 2001. Evolution of thermal dependence of growth rate of Escherichia coli populations during 20,000 generations in a constant environment. Evol. Int. J. Org. Evol. 55:889-896.
Cooper, V. S., and R. E. Lenski. 2000. The population genetics of ecological specialization in evolving Escherichia coli populations. Nature 407:736-739.
Cooper, V. S., D. Schneider, M. Blot, and R. E. Lenski. 2001. Mechanisms causing rapid and parallel losses of ribose catabolism in evolving populations of Escherichia coli B. J. Bacteriol. 183:2834-2841.
de Visser, J. A. G. M., C. W. Zeyl, P. J. Gerrish, J. L. Blanchard, and R. E. Lenski. 1999. Diminishing returns from mutation supply rate in asexual populations. Science 283:404-406.
Dover, G. 2002. Molecular drive. Trends Genet. 18:587-589.
Dover, G. 1982. Molecular drive: a cohesive mode of species evolution. Nature 299:111-117.
Dubnau, D. 1991. Genetic competence in Bacillus subtilis. Microbiol. Rev. 55:395-424.
Farrell, M. J., and S. E. Finkel. 2003. The growth advantage in stationary-phase phenotype conferred by rpoS mutations is dependent on the pH and nutrient environment. J. Bacteriol. 185:7044-7052.
Finkel, S. E., and R. Kolter. 2001. DNA as a nutrient: novel role for bacterial competence gene homologs. J. Bacteriol. 183:6288-6293.
Friedrich, A., T. Hartsch, and B. Averhoff. 2001. Natural transformation in mesophilic and thermophilic bacteria: identification and characterization of novel, closely related competence genes in Acinetobacter sp. strain BD413 and Thermus thermophilus HB27. Appl. Environ. Microbiol. 67:3140-3148.
Fussenegger, M., T. Rudel, R. Barten, R. Ryll, and T. F. Meyer. 1997. Transformation competence and type-4 pilus biogenesis in Neisseria gonorrhoeae—a review. Gene 192:125-134.
Goddard, M. R., H. C. Godfray, and A. Burt. 2005. Sex increases the efficacy of natural selection in experimental yeast populations. Nature 434:636-640.
Greig, D., R. H. Borts, and E. J. Louis. 1998. The effect of sex on adaptation to high temperature in heterozygous and homozygous yeast. Proc. Biol. Sci. 265:1017-1023.
Hare, J. M., S. N. Perkins, and L. A. Gregg-Jolly. 2006. A constitutively expressed, truncated umuDC operon regulates the recA-dependent DNA damage induction of a gene in Acinetobacter baylyi strain ADP1. Appl. Environ. Microbiol. 72:4036-4043.
Herzberg, C., A. Friedrich, and B. Averhoff. 2000. comB, a novel competence gene required for natural transformation of Acinetobacter sp. BD413: identification, characterization, and analysis of growth-phase-dependent regulation. Arch. Microbiol. 173:220-228.
Hofreuter, D., S. Odenbreit, J. Puls, D. Schwan, and R. Haas. 2000. Genetic competence in Helicobacter pylori: mechanisms and biological implications. Res. Microbiol. 151:487-491.
Juni, E., and A. Janik. 1969. Transformation of Acinetobacter calcoaceticus (Bacterium anitratum). J. Bacteriol. 98:281-288.
Kaltz, O., and G. Bell. 2002. The ecology and genetics of fitness in Chlamydomonas. XII. Repeated sexual episodes increase rates of adaptation to novel environments. Evol. Int. J. Org. Evol. 56:1743-1753.
Lenski, R. E. 1991. Quantifying fitness and gene stability in microorganisms. Biotechnology 15:173-192.
Lenski, R. E., M. R. Rose, S. C. Simpson, and S. C. Tadler. 1991. Long-term experimental evolution in Escherichia coli. I. Adaptation and divergence during 2,000 generations. Am. Nat. 138:1315-1341.
Link, C., S. Eickernjager, D. Porstendorfer, and B. Averhoff. 1998. Identification and characterization of a novel competence gene, comC, required for DNA binding and uptake in Acinetobacter sp. strain BD413. J. Bacteriol. 180:1592-1595.
Maynard Smith, J. 1978. The evolution of sex. Cambridge University Press, Cambridge, United Kingdom.
Metzgar, D., J. M. Bacher, V. Pezo, J. Reader, V. Doring, P. Schimmel, P. Marliere, and V. de Crecy-Lagard. 2004. Acinetobacter sp. ADP1: an ideal model organism for genetic analysis and genome engineering. Nucleic Acids Res. 32:5780-5790.
Palmen, R., P. Buijsman, and K. J. Hellingwerf. 1994. Physiological regulation of competence induction for natural transformation in Acinetobacter calcoaceticus. Arch. Microbiol. 162:344-351.
Palmen, R., B. Vosman, P. Buijsman, C. K. Breek, and K. J. Hellingwerf. 1993. Physiological characterization of natural transformation in Acinetobacter calcoaceticus. J. Gen. Microbiol. 139:295-305.
Porstendorfer, D., U. Drotschmann, and B. Averhoff. 1997. A novel competence gene, comP, is essential for natural transformation of Acinetobacter sp. strain BD413. Appl. Environ. Microbiol. 63:4150-4157.
Rauch, P. J., R. Palmen, A. A. Burds, L. A. Gregg-Jolly, J. R. van der Zee, and K. J. Hellingwerf. 1996. The expression of the Acinetobacter calcoaceticus recA gene increases in response to DNA damage independently of RecA and of development of competence for natural transformation. Microbiology 142:1025-1032.
Redfield, R. J. 2001. Do bacteria have sex? Nat. Rev. Genet. 2:634-639.
Redfield, R. J. 1988. Evolution of bacterial transformation: is sex with dead cells ever better than no sex at all? Genetics 119:213-221.
Redfield, R. J., M. R. Schrag, and A. M. Dean. 1997. The evolution of bacterial transformation: sex with poor relations. Genetics 146:27-38.
Richaud, C., D. Mengin-Lecreulx, S. Pochet, E. J. Johnson, G. N. Cohen, and P. Marliere. 1993. Directed evolution of biosynthetic pathways. Recruitment of cysteine thioethers for constructing the cell wall of Escherichia coli. J. Biol. Chem. 268:26827-26835.
Thomas, C. M., and K. M. Nielsen. 2005. Mechanisms of, and barriers to, horizontal gene transfer between bacteria. Nat. Rev. Microbiol. 3:711-721.
Vaneechoutte, M., D. M. Young, L. N. Ornston, T. De Baere, A. Nemec, T. Van Der Reijden, E. Carr, I. Tjernberg, and L. Dijkshoorn. 2006. Naturally transformable Acinetobacter sp. strain ADP1 belongs to the newly described species Acinetobacter baylyi. Appl. Environ. Microbiol. 72:932-936.
Young, D. M., and L. N. Ornston. 2001. Functions of the mismatch repair gene mutS from Acinetobacter sp. strain ADP1. J. Bacteriol. 183:6822-6831.
Young, D. M., D. Parke, and L. N. Ornston. 2005. Opportunities for genetic investigation afforded by Acinetobacter baylyi, a nutritionally versatile bacterial species that is highly competent for natural transformation. Annu. Rev. Microbiol. 59:519-551.
Zambrano, M. M., D. A. Siegele, M. Almiron, A. Tormo, and R. Kolter. 1993. Microbial competition: Escherichia coli mutants that take over stationary phase cultures. Science 259:1757-1760.
Zeyl, C., and G. Bell. 1997. The advantage of sex in evolving yeast populations. Nature 388:465-468.
Journal of Bacteriology Dec 2006, 188 (24) 8534-8542; DOI: 10.1128/JB.00846-06
High-Rayleigh-number convection in porous–fluid layers
Thomas Le Reun*
Department of Applied Mathematics and Theoretical Physics, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
Duncan R. Hewitt
Department of Mathematics, University College London, London, UK
†Email address for correspondence: [email protected]
We present a numerical study of convection in a horizontal layer comprising a fluid-saturated porous bed overlain by an unconfined fluid layer. Convection is driven by a vertical, destabilising temperature difference applied across the whole system, as in the canonical Rayleigh–Bénard problem. Numerical simulations are carried out using a single-domain formulation of the two-layer problem based on the Darcy–Brinkman equations. We explore the dynamics and heat flux through the system in the limit of large Rayleigh number, but small Darcy number, such that the flow exhibits vigorous convection in both the porous and the unconfined fluid regions, while the porous flow still remains strongly confined and governed by Darcy's law. We demonstrate that the heat flux and average thermal structure of the system can be predicted using previous results of convection in individual fluid or porous layers. We revisit a controversy about the role of subcritical 'penetrative convection' in the porous medium, and confirm that such induced flow does not contribute to the heat flux through the system. Lastly, we briefly study the temporal coupling between the two layers and find that the turbulent fluid convection above acts as a low-pass filter on the longer time-scale variability of convection in the porous layer.
JFM classification
Convection: Convection in porous media
JFM Papers
Journal of Fluid Mechanics, Volume 920, 10 August 2021, A35
DOI: https://doi.org/10.1017/jfm.2021.449
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2021. Published by Cambridge University Press
Heat transfer driven by flow exchange between a fluid-saturated porous bed and an overlying unconfined fluid arises in a variety of systems in engineering and geophysics. This is the case, for example, in various industrial cooling systems found in nuclear power generation, microelectronics or chemical engineering that require the circulation of fluid from an open channel into a fragmented medium (d'Hueppe et al. Reference d'Hueppe, Chandesris, Jamet and Goyeau2012; Chandesris et al. Reference Chandesris, D'Hueppe, Mathieu, Jamet and Goyeau2013; Su, Wade & Davidson Reference Su, Wade and Davidson2015). A similar situation occurs during the progressive solidification of multi-component fluids, which creates a mushy solid through which liquid flows to transport heat and solute (Worster Reference Worster1997). In geophysical contexts, this phenomenon is encountered below sea ice (Wells, Hitchen & Parkinson Reference Wells, Hitchen and Parkinson2019) and in the Earth's core (Huguet et al. Reference Huguet, Alboussière, Bergman, Deguen, Labrosse and Lesœur2016). This work is particularly motivated by the physics of hydrothermal circulation, where a water-saturated, porous bed that is heated from below exhibits thermal convection that, in turn, drives buoyant plumes and convective motion in the overlying ocean. As well as being a well-known feature of the Earth's ocean, there is evidence of on-going hydrothermal activity under the ice crust of icy satellites such as Jupiter's moon Europa (Goodman Reference Goodman2004) or Saturn's moon Enceladus (Hsu et al. Reference Hsu2015). Unlike on Earth, the entire heat budget of these bodies is believed to be controlled by hydrothermal convection and, in particular, by the manner in which heat is transported through the rocky core and into the overlying oceans beneath their icy crusts (Travis, Palguta & Schubert Reference Travis, Palguta and Schubert2012; Travis & Schubert Reference Travis and Schubert2015; Nimmo et al. Reference Nimmo, Barr, Běhounková and McKinnon2018). Most previous works in this area have focussed on either the flow in the porous medium alone or on that in unconfined fluid alone, with the coupling between them modelled by a parametrised boundary condition. This is particularly the case for hydrothermal activity, for which there are numerous studies focussed either on the structure of the flow in the porous layer (see for instance Fontaine & Wilcock Reference Fontaine and Wilcock2007; Coumou, Driesner & Heinrich Reference Coumou, Driesner and Heinrich2008; Choblet et al. Reference Choblet, Tobie, Sotin, Běhounková, Čadek, Postberg and Souček2017; Le Reun & Hewitt Reference Le Reun and Hewitt2020, among others), or on the buoyant plumes created in the ocean (Goodman Reference Goodman2004; Woods Reference Woods2010; Goodman & Lenferink Reference Goodman and Lenferink2012), or on the induced large-scale oceanic circulation (Soderlund et al. Reference Soderlund, Schmidt, Wicht and Blankenship2014; Soderlund Reference Soderlund2019; Amit et al. Reference Amit, Choblet, Tobie, Terra-Nova, Čadek and Bouffard2020). Travis et al. (Reference Travis, Palguta and Schubert2012) included both layers, but resorted to an enhanced diffusivity to parametrise flows in the sub-surface ocean to make calculations tractable. 
In all these cases, questions remain about how reasonable it is to use these parametrised boundary conditions rather than resolve both layers, and about how the dynamics of flow in each layer communicates and influences the flow in the other layer. Addressing these questions is the focus of this paper. Works that explicitly focus on the coupled transport across a porous and fluid layer are more numerous in engineering settings. However, they tend to either be focused on situations where inertial effects in the interstices of the porous layer play an important role (d'Hueppe et al. Reference d'Hueppe, Chandesris, Jamet and Goyeau2011; Chandesris et al. Reference Chandesris, D'Hueppe, Mathieu, Jamet and Goyeau2013), or on regimes where heat is mainly transported by diffusion through the porous layer (Poulikakos et al. Reference Poulikakos, Bejan, Selimos and Blake1986; Chen & Chen Reference Chen and Chen1988, Reference Chen and Chen1992; Bagchi & Kulacki Reference Bagchi and Kulacki2014; Su et al. Reference Su, Wade and Davidson2015) or where restricted to the onset of convective instabilities (Chen & Chen Reference Chen and Chen1988; Hirata et al. Reference Hirata, Goyeau, Gobin, Carr and Cotta2007, Reference Hirata, Goyeau, Gobin, Chandesris and Jamet2009b; Hirata, Goyeau & Gobin Reference Hirata, Goyeau and Gobin2009a). In general, these studies are difficult to apply to geophysical contexts and particularly to hydrothermal circulation: here, the large spatial and temperature scales and the typically relatively low permeabilities are such that the porous region can be unstable to strong convection while the flow through it remains inertia-free and well described by Darcy's law. In such a situation, there can be vastly different time scales of motion between the unconfined fluid, which exhibits rapid turbulent convection, and the porous medium, where the convective flow through the narrow pores is much slower. This discrepancy in time scales presents a challenge for numerical modelling, which is perhaps why this limit has not been explored until now.
In this paper, we explore thermal convection in a two-layer system comprising a porous bed overlain by an unconfined fluid. In particular, we focus on situations in which the driving density difference, as described by the dimensionless Rayleigh number, is large and heat is transported through both layers by convective flow, although for completeness we include cases in which there is no convection in the porous layer. The permeability of the porous medium, as described by the dimensionless Darcy number, is small enough that the flow through the medium is always inertia free and controlled by Darcy's law. As in some previous studies of this set-up (Poulikakos et al. Reference Poulikakos, Bejan, Selimos and Blake1986; Chen & Chen Reference Chen and Chen1988; Bagchi & Kulacki Reference Bagchi and Kulacki2014), we consider the simplest idealised system in which natural convection occurs, that is, a two-layer Rayleigh–Bénard cell. In this set-up, the base of the porous medium is heated and the upper surface of the unconfined fluid layer is cooled to provide a fixed destabilising density difference across the domain. Flow in such a system attains a statistically steady state, which allows for investigation of the fluxes, temperature profiles and dynamics of the flow in each layer, and of the coupling between the layers.
We carry out numerical simulations of this problem in two dimensions using a single-domain formulation of the two-layer problem based on the Darcy–Brinkman equations (Le Bars & Worster Reference Le Bars and Worster2006). Using efficient pseudo-spectral methods, we are able to reach regimes where thermal instabilities are fully developed in both the porous and fluid layers. We demonstrate how to use previous results on thermal convection in individual fluid or porous layers to infer predictions of the heat flux and the temperature at the interface between the layers in our system. In addition, we revisit a long-standing controversy on the role of 'penetrative convection', i.e. flow in the porous medium that is actively driven by fluid convection above, and confirm that it is negligible in the limit where the pore scale is small compared with the size of the system. Lastly, we briefly address the temporal coupling between both layers and explore how fluid convection mediates the variability of porous convection.
The paper is organised as follows. The set-up and governing equations are introduced in § 2, where we also outline the main approximations of our model and, importantly, the limits on its validity. After presenting the general behaviour of the two-layer system and how it changes when the porous layer becomes unstable to convection (§ 3), we show how previous works on convection can be used to predict the thermal structure of the flow and the heat it transports (§ 4). In § 5, we discuss the temporal variability of two-layer convection, before summarising our findings and their geophysical implications in § 6.
2. Governing equations and numerical methods
2.1. The Darcy–Brinkman model
Consider a two-dimensional system comprising a fluid-saturated porous medium of depth $h_p$ underlying an unconfined fluid region of depth $h_f$. We locate the centre of the cell at height $z=0$, such that the whole system lies in the range $-h_p \leqslant z \leqslant h_f$, as depicted in figure 1, and we introduce the length scale $h = (h_p+h_f)/2$. For the majority of this paper, we focus on the case of equal layer depths, where $h_p = h_f = h$.
Figure 1. Schematic cartoon of the set-up considered in this paper. In almost all cases considered here, we take the layer depths to be equal, so $h_p=h_f = h$.
The fluid has a dynamic viscosity $\eta$ and density $\rho$, and the porous medium is characterised by a uniform permeability $K_0$ and porosity $\phi _0< 1$. We extend the definition of the porosity – that is, the local volume fraction of fluid – to the whole domain by introducing the step function
(2.1) \begin{equation} \phi(z) = \left\{\begin{array}{@{}ll} \phi_0 & z<0, \\ 1 & z\geq 0. \end{array} \right. \end{equation}
The flow is described by the local fluid velocity $\boldsymbol{U}_{\ell} $ in the unconfined fluid layer and by the Darcy or volume-averaged flux $\boldsymbol{U}_d = \phi_0 \boldsymbol{U}_{\ell}$ in the porous medium. While the latter quantity is, by necessity, coarse-grained over a larger scale (multiple pore scales) than the former, for notational convenience we can nevertheless introduce a single quantity $\boldsymbol{U} = \phi \boldsymbol{U}_{\ell}$ that reduces to each of these limits in the relevant domain. We will work in terms of this mean flux $\boldsymbol{U} = (U,V)$ throughout the domain.
The flow is assumed to be incompressible everywhere, so
(2.2) \begin{equation} \boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{U} = 0. \end{equation}
In the fluid layer, the flow is governed by the Navier–Stokes equation,
(2.3) \begin{equation} \rho [\partial_t \boldsymbol{U} + \boldsymbol{U} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{U} ] ={-} \boldsymbol{\nabla} P + \mu \boldsymbol{\nabla}^2 \boldsymbol{U} + \rho \boldsymbol{g}, \end{equation}
where $P$ is the pressure, while in the porous layer, the flux instead obeys Darcy's law,
(2.4) \begin{equation} \boldsymbol{U} = \frac{K}{\mu}(- \boldsymbol{\nabla} P + \rho \boldsymbol{g}). \end{equation}
We simulate the two-layer system using a one-domain approach in which both porous and unconfined fluid regions are described by a single Darcy–Brinkman equation (Le Bars & Worster 2006),
(2.5) \begin{equation} \rho \left[\partial_t \boldsymbol{U} + \boldsymbol{U} \boldsymbol{\cdot} \boldsymbol{\nabla} \frac{\boldsymbol{U}}{\phi} \right] ={-} \phi \boldsymbol{\nabla} P + \mu \boldsymbol{\nabla}^2 \boldsymbol{U} + \phi \rho \boldsymbol{g} - \frac{\mu}{K} \phi \boldsymbol{U}, \end{equation}
where $1/K(z)$ is a step function that goes from $1/K_0$ for $z<0$ to zero for $z>0$. As shown by Le Bars & Worster (Reference Le Bars and Worster2006), the Darcy–Brinkman formulation of the two-layer problem can be retrieved by carrying out a coarse-grained average of the flow over a few typical pore scales $\sqrt {K_0}$. As a consequence, any spatial variation must have a typical length larger than $\sqrt {K}$ for the model to remain valid. The Navier–Stokes equation and Darcy's law are retrieved from the Darcy–Brinkman equation (2.5) in the unconfined fluid and porous layers, respectively. In the fluid layer, $\phi = 1$ and $1/K = 0$, which trivially gives the Navier–Stokes equation, whereas in the bulk of the porous medium, the damping term $-\mu \phi \boldsymbol {U} /K$ dominates over the inertial and viscous forces (provided $K_0$ is sufficiently small), leading to a balance between the damping, pressure and buoyancy terms that yields Darcy's law. Just below the interface, however, viscous effects become important in the porous layer as the flow matches to the unconfined region above. Acceleration remains negligible, and the equations reduce to
(2.6) \begin{equation} - \phi \boldsymbol{\nabla} P + \mu \boldsymbol{\nabla}^2 \boldsymbol{U} + \phi \rho \boldsymbol{g} - \frac{\mu}{K} \phi \boldsymbol{U} \sim 0. \end{equation}
Balancing the viscous resistance and Darcy drag indicates that local viscous forces play a role over a length $\ell _r = \sqrt {K_0}/\phi _0$ below the interface – i.e. a few times the pore scale. These forces regularise the velocity profile between the unconfined fluid and the porous medium through a boundary layer of typical length $\ell _r$.
To conclude, we model the two-layer system with a one-domain formulation via the Darcy–Brinkman equation. We note that this is not the only option: another classical formulation of the problem, for example, is the one introduced by Beavers & Joseph (1967), where the fluid and the porous layer are treated separately and their coupling is accounted for by a semi-empirical boundary condition linking vertical velocity gradients and the velocity difference between the fluid and porous layers. These different models both feature the regularisation boundary layer at the fluid–porous interface over a length $\sim \sqrt {K_0}$, which is corroborated by many experiments, in particular those of Beavers & Joseph (1967). There are, however, some known discrepancies between these two approaches that may not be restricted to the interface (e.g. Le Bars & Worster 2006; Hirata et al. 2007, 2009b). As pointed out by Nield & Bejan (2017), there is ongoing debate on which of these formulations is the most adequate to model flows in mixed porous–fluid layers, with definitive empirical evidence still lacking.
2.2. Heat transport
We use the Boussinesq approximation to account for the effect of temperature-dependent density in the momentum equations: variations in temperature affect the buoyancy force but do not affect the fluid volume via conservation of mass. We further assume that any changes in viscosity, diffusivity or permeability associated with temperature variation are negligible. Although some of these assumptions may be questionable in complex geophysical settings, they are made here to allow a focus on the basic physics of these two-layer convecting systems.
In particular, we restrict our study to linear variations of density with temperature according to
(2.7) \begin{equation} \rho = \rho_0 ( 1 - \alpha (T-T_0) ), \end{equation}
with $T_0$ being a reference temperature. The momentum equation under the Boussinesq approximation follows from substituting (2.7) into the buoyancy term of (2.5), while letting $\rho = \rho _0$ in the inertial terms. The temperature evolves according to an energy transport equation (Nield & Bejan 2013)
(2.8) \begin{equation} \bar{\phi}\partial_t T + \boldsymbol{U} \boldsymbol{\cdot} \boldsymbol{\nabla} T = \kappa \varLambda \nabla^2 T, \end{equation}
(2.9a–c) \begin{equation} \bar{\phi} \equiv \frac{(1-\phi) c_m \rho_m + \phi c \rho}{\rho c },\quad \kappa \equiv \frac{\lambda}{\rho c}\quad \text{and} \quad \varLambda \equiv \phi + ( 1-\phi)\frac{\lambda_m}{\lambda}, \end{equation}
where $c$ and $c_m$ are the heat capacity per unit of mass of the fluid and the porous matrix, respectively, $\rho _m$ is the density of the porous matrix, and $\lambda$ and $\lambda _m$ are the thermal conductivities of the fluid and the porous matrix, respectively. Equation (2.8) assumes local thermal equilibrium between the porous matrix and the fluid.
2.3. Boundary conditions
We consider a closed domain with imposed temperature on the upper and lower boundaries, as in a classical Rayleigh–Bénard cell (figure 1). Specifically, for the temperature we set
(2.10a,b) \begin{equation} T(z=h) = T_0,\quad T(z={-}h) = T_0 + \Delta T, \end{equation}
where $\Delta T >0$ is a constant. The upper and lower boundaries are rigid and impermeable, so
(2.11) \begin{equation} U(z={\pm} h) = V(z={\pm} h) = 0. \end{equation}
Note that Darcy's law would only permit one velocity boundary condition on the boundary of the porous region at $z=-h$, but the higher-order derivative in the viscous term in (2.5) allows for application of the no-slip condition in (2.11) as well. This extra condition will induce a boundary layer of thickness $\sim \sqrt {K_0}/\phi _0$ at the base of the domain, just like the boundary-layer region at the interface (see (2.6)). It is not clear whether such a basal boundary layer should arise in experimental situations. Irrespective of whether this boundary is a physically realisable feature, we find that it plays no dynamical role here provided it is thinner than any dynamical lengthscale of the flow (and, in particular, thinner than the thermal boundary layer that can form at the base of the domain, as discussed in § 2.6.)
The horizontal boundaries of the domain are assumed to be periodic with the width of the domain kept constant at $4h$.
2.4. Dimensionless equations and control parameters
To extract the dimensionless equations that govern the two-layer system, we use a 'free-fall' normalisation of (2.5) and (2.8), based on the idea that a balance between inertia and buoyancy governs the behaviour of the fluid layer. Such a balance yields the free-fall velocity in the unconfined layer,
(2.12) \begin{equation} {U^*}^2 = \alpha \Delta T g h \end{equation}
and the associated free-fall time scale is $T^* = h/U^*$. Scaling lengths with $h$, flux with $U^*$, time with $T^*$, temperature with $\Delta T$ and pressure with $U^*$, we arrive at dimensionless equations
(2.13a) \begin{gather} \partial_t \boldsymbol{u} + \boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{\nabla} \frac{ \boldsymbol{u}}{\phi} ={-} \phi \boldsymbol{\nabla} p + \sqrt{\frac{\mathit{Pr}}{\mathit{Ra}}} \boldsymbol{\nabla}^2 \boldsymbol{u} + \phi \theta \boldsymbol{e}_z - \frac{ \chi(z)}{\mathit{Da}} \sqrt{\frac{\mathit{Pr}}{\mathit{Ra}}} \phi \boldsymbol{u}, \end{gather}
(2.13b) \begin{gather}\bar{\phi} \partial_t \theta + \boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{\nabla} \theta = \frac{\varLambda}{\sqrt{\mathit{Ra} \mathit{Pr}}} \nabla^2 \theta, \end{gather}
(2.13c) \begin{gather}\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u} = 0, \end{gather}
where $\boldsymbol {u}$ is the dimensionless flux, $\theta = (T-T_0)/\Delta T$ is the dimensionless temperature and $\chi (z)$ is a step function that jumps from $1$ in $z<0$ to $0$ in $z>0$. In these equations, we have introduced three dimensionless numbers:
(2.14a–c) \begin{equation} \mathit{Da} \equiv \frac{K_0}{h^2},\quad \mathit{Pr} \equiv \frac{\nu}{\kappa} \quad \text{and}\quad \mathit{Ra} \equiv \frac{\alpha g\Delta T h^3 }{\nu \kappa}. \end{equation}
The Darcy number $\mathit {Da}$ is a dimensionless measure of the pore scale $\sqrt {K_0}$ relative to the domain size $h$, and is thus typically extremely small. The Rayleigh number quantifies the importance of the buoyancy forces relative to the viscous resistance in the unconfined fluid layer, and the focus of this work is on cases where $Ra \gg 1$. The Prandtl number compares momentum and heat diffusivities. The dimensionless layer depths $\hat {h}_p$ and $\hat {h}_f$ are also, in general, variables; as noted above, in the majority of computations shown here, we set these to be equal so that $\hat {h}_p = \hat {h}_f = 1$, but we include the general case in the theoretical discussion in § 4.
Note that with this choice of scalings, the dimensionless velocity scale in the fluid layer is $O(1)$, compared with $O(\sqrt {\mathit {Ra}} \,\mathit {Da} /\sqrt {\mathit {Pr}})$ in the porous layer. Given these scales, we can also introduce a porous Rayleigh number $\mathit {Ra}_p$ to describe the flow in the porous layer. The porous Rayleigh number is the ratio between the advective and diffusive time scales in the porous medium, which, from the advection–diffusion ratio in (2.13b), gives
(2.15) \begin{equation} \mathit{Ra}_p = \frac{\mathit{Ra} \,\mathit{Da}}{ \varLambda} = \frac{\alpha g\Delta T K_0 h }{\nu \varLambda \kappa}. \end{equation}
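As an illustration of how these groups are evaluated in practice, the short sketch below computes $\mathit{Da}$, $\mathit{Pr}$, $\mathit{Ra}$ and $\mathit{Ra}_p$ from dimensional inputs following (2.14) and (2.15); the property values are hypothetical, water-like numbers and are not taken from this paper.

```python
# Evaluate the dimensionless groups (2.14a-c) and (2.15) from dimensional inputs.
# The property values below are hypothetical, water-like illustrations.

def control_parameters(K0, h, nu, kappa, alpha, g, dT, Lambda=1.0):
    Da = K0 / h**2                              # Darcy number
    Pr = nu / kappa                             # Prandtl number
    Ra = alpha * g * dT * h**3 / (nu * kappa)   # fluid Rayleigh number
    Ra_p = Ra * Da / Lambda                     # porous Rayleigh number, (2.15)
    return Da, Pr, Ra, Ra_p

Da, Pr, Ra, Ra_p = control_parameters(
    K0=1e-9, h=100.0, nu=1e-6, kappa=1.4e-7, alpha=2e-4, g=9.81, dT=10.0)
print(f"Da = {Da:.1e}, Pr = {Pr:.1f}, Ra = {Ra:.1e}, Ra_p = {Ra_p:.1e}")
```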
2.5. Further simplifying assumptions
We simplify the complexity of (2.13) by noting that in the bulk of either the fluid or the porous regions, $\phi$ cancels out of the equations (see for example (2.3) and (2.4)). The porosity only affects (2.13) in the narrow boundary-layer region immediately below the interface and at the base of the domain, where it controls the regularisation length $\sqrt {\mathit {Da}}/\phi$ (as shown by (2.6)). In the following, we thus take $\phi = 1$ in (2.13a); the only effect of this is to change the regularisation length at the interface, a regularisation that must anyway be smaller than any dynamical lengths for the model to remain valid, as discussed in more detail in § 2.7.
In the heat transport equation (2.13b), we reduce the number of control parameters by taking $\bar {\phi } = \varLambda = 1$. For hydrothermal systems, water flows through a silicate rock matrix. The thermal diffusivity is typically a factor of two larger in the matrix than in the fluid, while $\rho _m c_m \sim \rho c$. The parameters $\bar {\phi }$ and $\varLambda$ are thus order-one constants that do not vary substantially from one system to another. This is perhaps less true in some industrial applications like the transport of heat through the metallic foam (Su et al. Reference Su, Wade and Davidson2015), where the thermal conductivity can be at least a hundred times larger in the matrix than in the fluid. This would lead to a large value of $\varLambda$ and thermal diffusion would be greatly enhanced in the porous medium, which would reduce its ability to convect. We do not consider such cases here, although we will find that cases where the porous medium is dominated by diffusive transport can be easily treated theoretically, and the theory could be straightforwardly adapted to account for varying $\varLambda$. Finally, to reduce the complexity of this study and maintain a focus on the key features of varying the driving buoyancy forces (i.e. $\mathit {Ra}$) and the properties of the porous matrix (i.e. $\mathit {Da}$), we set the Prandtl number to be $\mathit {Pr} = 1$ throughout this work.
2.6. Limits on the control parameters
Several constraints must be imposed on the control parameters $\mathit {Ra}$ and $\mathit {Da}$ to ensure that the Darcy–Brinkman model remains valid. We give these constraints in their most general form here, but recall from the previous section that all solutions in this work take $\mathit {Pr} = \varLambda = 1$. First, the inertial terms must vanish in the porous medium, which demands that
(2.16) \begin{equation} \frac{\sqrt{\mathit{Ra}} \,\mathit{Da}}{\sqrt{\mathit{Pr}}} \ll 1; \end{equation}
that is, the velocity scale in the porous medium must be much less than the $O(1)$ velocity in the unconfined fluid layer. Second, the continuum approximation that underlies Darcy's law requires that any dynamical length scale of the flow in the porous layer must be larger than the pore scale $\sqrt {\mathit {Da}}$; equivalently, the Darcy drag term must always be larger than local viscous forces in the bulk of the medium. We expect the smallest dynamical scales to arise from a balance between advection and diffusion in (2.13b): such a balance, given typical velocity $\sim \sqrt {\mathit {Ra}}\mathit {Da}/\sqrt {\mathit {Pr}}$, yields a length scale $\sim \mathit {Ra}_p^{-1}$. In fact, simulations carried out in a porous Rayleigh–Bénard cell (Hewitt, Neufeld & Lister 2012) indicate that the narrowest structures of the flow, which are thermal boundary layers, have a thickness of at least $50 \mathit {Ra}_p^{-1}$. For these structures to remain larger than the pore scale $\sqrt {\mathit {Da}}$, we thus require
(2.17) \begin{equation} \mathit{Ra} \,\mathit{Da}^{3/2} \lesssim 50 \varLambda. \end{equation}
Note that the effect of violating this constraint is to amplify the importance of viscous resistance $\nabla ^2 \boldsymbol {u}$ within the porous medium in (2.13a), which would no longer reduce to Darcy's law.
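A compact way to summarise these constraints is as a pair of boolean checks on $(\mathit{Ra}, \mathit{Da})$; the sketch below does this for the values used throughout the paper, $\mathit{Pr} = \varLambda = 1$, with the thresholds taken directly from (2.16) and (2.17).

```python
import math

# Validity checks (2.16)-(2.17) for a given (Ra, Da); Pr = Lambda = 1 as in the
# simulations.  Returns whether the porous flow is inertia-free and whether the
# smallest flow structures stay larger than the pore scale.

def darcy_brinkman_valid(Ra, Da, Pr=1.0, Lambda=1.0):
    inertia_free = math.sqrt(Ra) * Da / math.sqrt(Pr) < 1.0    # constraint (2.16)
    resolves_pores = Ra * Da**1.5 < 50.0 * Lambda              # constraint (2.17)
    return inertia_free, resolves_pores

print(darcy_brinkman_valid(Ra=1e8, Da=10**-5.5))   # -> (True, True)
```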
Figure 2 provides an overview of the space of control parameters $\mathit {Ra}$ and $\mathit {Da}$, where these various limits are identified and the parameter values in our numerical simulations are indicated. This plot also shows a line that approximately marks the threshold of convective instability in the porous medium, whose importance is discussed in § 3.2 and which is theoretically quantified in § 4.2.2.
Figure 2. Domain of existence and validity of the two-layer model with respect to the control parameters $\mathit {Ra}$ and $\mathit {Da}$, given $\mathit {Pr} = \varLambda = 1$. Each dot represents a simulation. The line $\sqrt {\mathit {Ra}}\mathit {Da}= 1$ marks the limit beyond which inertial terms affect the flow in the porous medium, while the line $\mathit {Ra} \mathit {Da}^{3/2} = 50$ gives an estimate of the point at which the smallest flow structures in the porous medium become comparable to the pore scale. The line $\mathit {Ra} \mathit {Da} = \mathit {Ra}_p^c = 27.1$ corresponds to the threshold of thermal convection in a porous Rayleigh–Bénard cell with an open-top boundary as discussed in § 3.1.
2.7. Numerical method
The one-domain Darcy–Brinkman equations (2.13) are solved using the pseudo-spectral code Dedalus (Burns et al. 2020; Hester, Vasil & Burns 2021). The flow is decomposed into $N$ Fourier modes in the horizontal direction, while a Chebyshev polynomial decomposition is used in the vertical direction. Because the set-up is composed of two layers whose interface must be accurately resolved, each layer is discretised with its own Chebyshev grid, with $[M_p,M_f]$ nodes for the porous and fluid layers, respectively. This ensures enhanced resolution close to the top and bottom boundary as well as at the interface where the $\sqrt {\mathit {Da}}$ regularisation length must be resolved. Time evolution of the fields is computed with implicit-explicit methods (Wang & Ruuth 2008): the nonlinear and Darcy terms in (2.13) are treated explicitly while viscosity and diffusion are treated implicitly. The numerical scheme for time evolution uses second-order backward differentiation for the implicit part and second-order extrapolation for the explicit part (Wang & Ruuth 2008). The stability of temporal differentiation is ensured via a standard CFL criterion evaluating the limiting time step in the whole two-layer domain, with an upper limit given by $\sqrt {\mathit {Ra}} \,\mathit {Da}$, which is never reached in practice. Rather, with the control parameters and resolution considered here, the time step is limited by the non-zero vertical velocity at the $z=0$ interface where the vertical discretisation is refined. The range of Rayleigh and Darcy numbers in our simulations is shown in figure 2. Note, with reference to this figure, that we carried out a systematic investigation of parameter space where the porous layer is unstable by varying $\mathit {Ra}$ and $\mathit {Da}$ for various fixed values of the porous velocity scale $\sqrt {\mathit {Ra}}\,\mathit {Da}$.
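For readers unfamiliar with this class of time stepper, the sketch below shows the general form of a second-order backward-differentiation/extrapolation (IMEX) step on a scalar model problem du/dt = Lu + N(u); it is only an illustration of the scheme's structure, not the solver used here.

```python
# Scalar illustration of second-order IMEX time stepping: backward
# differentiation for the implicit linear part L*u and extrapolation for the
# explicit nonlinear part N(u).  A sketch of the scheme's structure only.

def sbdf2_step(u_n, u_nm1, N, L, dt):
    """(3u^{n+1} - 4u^n + u^{n-1})/(2dt) = L u^{n+1} + 2N(u^n) - N(u^{n-1})."""
    rhs = 4.0 * u_n - u_nm1 + 2.0 * dt * (2.0 * N(u_n) - N(u_nm1))
    return rhs / (3.0 - 2.0 * dt * L)

L = -50.0                    # stiff linear coefficient, treated implicitly
N = lambda u: -u**2          # nonlinear term, treated explicitly
dt, u0 = 1e-3, 1.0
u_prev, u = u0, u0 + dt * (L * u0 + N(u0))    # bootstrap with one Euler step
for _ in range(1000):
    u_prev, u = u, sbdf2_step(u, u_prev, N, L, dt)
print(u)                     # decays towards zero, as expected for this problem
```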
We use a $\mathcal{C}^{\infty}$-smooth step function for $\chi(z)$ in (2.13). The smoothing of the step is performed over a length $0.75 \sqrt{\mathit{Da}}$, which is slightly smaller than the regularisation length, to ensure that the smoothing does not play any dynamical role. We note that a sharp step function could also be used directly without changing the statistical properties of the simulated flows.
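The text above specifies only that $\chi(z)$ is $\mathcal{C}^{\infty}$ and smoothed over a length $0.75\sqrt{\mathit{Da}}$, not its exact functional form; one possible choice, used purely for illustration in the sketch below, is a tanh profile with that smoothing length.

```python
import numpy as np

# One possible smoothed indicator chi(z): a tanh profile with smoothing length
# 0.75*sqrt(Da).  The exact form used in the paper is not specified in the text,
# so this particular choice is an assumption for illustration.

def chi_smooth(z, Da, width_factor=0.75):
    delta = width_factor * np.sqrt(Da)        # smoothing length
    return 0.5 * (1.0 - np.tanh(z / delta))   # -> 1 for z << 0, -> 0 for z >> 0

z = np.linspace(-0.01, 0.01, 5)
print(chi_smooth(z, Da=10**-5.5))
```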
The majority of simulations were carried out in a set-up where the heights of both the porous and the fluid layers were equal, $\hat {h}_p = \hat {h}_f = 1$, with resolutions $N \times [M_p,M_f] = 1024 \times [128, 256]$ below $\mathit {Ra} = 10^{8}$ and $1024 \times [256, 512]$ above. As discussed in the following sections, we find that the porous layer in general absorbs more than half of the temperature difference, and so the effective Rayleigh number in the fluid layer is typically somewhat smaller than $\mathit {Ra}$. We used two methods to initiate the simulations. In a few cases, the initial condition was simply taken as the diffusive equilibrium state throughout the domain, perturbed by a small noise. In most cases, however, we proceeded by continuation, using the final output of a previous simulation with similar control parameters as the initial condition for a new simulation.
Comparison between the two methods showed that they yielded the same statistically steady state, but the continuation approach reached it in a shorter time. In all cases, we ran simulations over a time comparable to the diffusive time scale $\sqrt {\mathit {Ra}}$, to ensure that the flow had reached a statistically steady state.
3. An overview of two-layer convection
In this section, we describe the results of a series of simulations carried out at a fixed Darcy number, $\mathit {Da} = 10^{-5.5}$, and equal layer depths $\hat {h}_p = \hat {h}_f = 1$, but with varying $\mathit {Ra}$ in the range $10^4 \leqslant \mathit {Ra} \leqslant 10^9$. We use these to illustrate the basic features of high- $\mathit {Ra}$ convective flow in the two-layer system.
3.1. Two different regimes depending on the stability of the porous medium
Figure 3 shows snapshots of the temperature field taken from different simulations that have reached a statistically steady state. The corresponding profiles of the horizontally and temporally averaged temperature, $\bar {\theta }(z)$, are shown in figure 4(a), while the mean interface temperature $\theta _i = \bar {\theta }(0)$ is shown in figure 4(b). The fluid layer is convecting in all simulations, as attested by the presence of plumes and by the mixing of the temperature field that tends to create well-mixed profiles of $\bar {\theta }$ in the bulk of the fluid. The porous layer, however, exhibits two different behaviours depending on the size of $\mathit {Ra}$. At low Rayleigh numbers ( $\mathit {Ra} \leq 10^6$ in figure 3), the porous layer is dominated by diffusive heat transport: there are no hot or cold plumes in the temperature field in $z < 0$ (see figure 3a,b), while the horizontally averaged temperature profiles $\bar {\theta }(z)$ appear to be linear (figure 4a). The corresponding interface temperature monotonically decreases with $\mathit {Ra}$ (figure 4b). As the Rayleigh number is increased beyond $\mathit {Ra} \sim 10^7$, the behaviour of the flow in the porous layer changes as it also becomes unstable to convection. This is attested by the visible presence of plumes in figure 3(c,d) and by the flattening of the horizontally averaged temperature profiles in figure 4(a). The signature of this transition is also very clear in the evolution of the interface temperature $\theta _i$, which reverses from decreasing to increasing with $\mathit {Ra}$ around $\mathit {Ra} \sim 10^7$ (figure 4b). The value of the Rayleigh number at which porous convection emerges can be roughly estimated from the stability of a single porous layer. In a standard Rayleigh–Bénard cell with an open-top boundary, convection occurs if the porous Rayleigh number $\mathit {Ra}_p = \mathit {Ra}\,\mathit {Da}$ exceeds a critical value $\mathit {Ra}_p^c \simeq 27.1$ (Nield & Bejan 2017). At $\mathit {Da} = 10^{-5.5}$, the critical Rayleigh number $\mathit {Ra}$ such that $\mathit {Ra}\,\mathit {Da} = \mathit {Ra}_p^c$ is $\mathit {Ra} \simeq 8.6 \times 10^{6}$, which is reported in figure 4(b) and agrees well with the reversal of trend in $\theta _i$. We will return to a more nuanced form of this argument in § 4.2.2.
Figure 3. Snapshots of the temperature field at different Rayleigh numbers in two stable (top panels) and two convective (bottom panels) cases for the porous medium. The Darcy number is kept at $\mathit {Da} = 10^{-5.5}$. The colour scale is cut at $0.8$ to enhance the contrast in the fluid layer.
Figure 4. Temporally averaged quantities for $\mathit {Da} = 10^{-5.5}$. (a) Horizontally averaged temperature $\bar {\theta }(z)$ for values of $\mathit {Ra}$ below (dashed) and above (solid) the threshold of porous convection; and (b) mean interface temperature $\theta _i$. The red dashed line is the diffusive prediction of (4.8) and the black lines are asymptotic predictions obtained by solving (4.7) using $\mathcal {C}_{p} = 0.85$ and the marked values $\mathcal {C}_f$, as detailed in § 4.2.
Lastly, as the Rayleigh number is increased beyond the threshold of porous convection, the porous plumes become thinner and more numerous, a behaviour that is similar to standard Rayleigh–Bénard convection in porous media (Hewitt et al. 2012). In addition, the porous plumes become increasingly narrow at their roots above the thermal boundary layer, which causes a local minimum in the temperature profiles (see figure 4a) that has also been observed in previous works on porous convection at large Rayleigh number (Hewitt et al. 2012). Finally, note that the temperature contrast across the interface decreases as the Rayleigh number is increased; this is because the contrast between velocities in the porous and unconfined layers decreases as the porous velocity scale $\sqrt{\mathit{Ra}}\,\mathit{Da}$ increases. At fixed $\mathit{Da}$, the model assumption that there is a separation of scales between these velocities must break down if $\mathit{Ra}$ is made sufficiently large (see the discussion in § 2.6).
3.2. Characteristics of heat transport
The transition between the porous-stable and the porous-unstable cases can be further identified by the analysis of global heat transport across the system. Heat transport is characterised by the horizontally averaged heat flux $J(z) \equiv \overline {w \theta } - \mathit {Ra}^{-1/2} \bar {\theta }'$, which is constant with height in a statistically steady state. As is standard in statistically steady convection problems, we measure the time-averaged enhancement of the heat flux, compared with what it would be in a purely diffusive system, $\mathit {Ra}^{-1/2} /2$, via the Nusselt number $\mathit {Nu} \equiv 2 \sqrt {\mathit {Ra}} \langle J\rangle$, where the angle brackets indicate a long-time average. The Nusselt number (figure 5a) is strongly influenced by the transition to instability in the porous layer. In the porous-stable case ( $\mathit {Ra} \lesssim 10^7$), $\mathit {Nu}$ appears to approach a horizontal asymptote $\mathit {Nu} = 2$, but once the porous layer is unstable, $\mathit {Nu}$ increases much more steeply beyond this value.
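As an illustration of these diagnostics, the following is a minimal sketch of how $J(z)$ and $\mathit{Nu}$ could be computed from gridded output fields; the array names, shapes and the simple averaging used here are assumptions made for the example only.

```python
import numpy as np

# Sketch of the heat-flux diagnostics of this section. Assumes w and theta
# are arrays of shape (nx, nz) on horizontal grid x and vertical grid z.
def heat_flux(w, theta, z, Ra):
    """Horizontally averaged flux J(z) = <w theta>_x - Ra^(-1/2) d<theta>_x/dz."""
    advective = np.mean(w * theta, axis=0)
    theta_bar = np.mean(theta, axis=0)
    diffusive = -Ra**-0.5 * np.gradient(theta_bar, z)
    return advective + diffusive

def nusselt(J, Ra):
    """Nu = 2 sqrt(Ra) <J>; J is constant with height in a statistically
    steady state, so a plain average over z (and, in practice, time) suffices."""
    return 2.0 * np.sqrt(Ra) * np.mean(J)
```

In a statistically steady state, the profile returned by `heat_flux` should be approximately independent of $z$, which provides a useful convergence check.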
Figure 5. (a) Nusselt number $\mathit {Nu} = 2 \sqrt {\mathit {Ra}} \langle J\rangle$ characterising heat transport across the two-layer system, together with predictions from § 4 for the porous-stable case (4.8) (red dashed) and porous-unstable (4.7) (black dotted and dot–dashed, with $\mathcal {C}_{p} = 0.85$ and $\mathcal {C}_{f}$ as marked). The horizontal line marks $\mathit {Nu} = 2$. (b) Root-mean-squared vertical velocity amplitude $w_{rms}$ in the porous and fluid layers. The two grey lines indicate a scaling of $\sqrt {\mathit {Ra}}\mathit {Da}$ (the characteristic speed in the porous layer). (c) Ratio of the diffusive ( $J_{d,p}$) and advective ( $J_{a,p}$) fluxes to the total depth-averaged flux $\langle J \rangle$ in the porous layer. In all three panels, $\mathit {Da} = 10^{-5.5}$ and the vertical line marks the threshold of porous convection estimated using (4.9).
The behaviour below the threshold of convection arises from the flux being predominantly diffusive in the porous layer. The total flux through the system is thus bounded above by a state in which all of the temperature contrast is taken up across the porous layer and the interface temperature $\theta _i$ tends to zero. In this limit, $\langle J \rangle \to 1/\sqrt {\mathit {Ra}}$ and $\mathit {Nu} \to 2$. The decreasing $\theta _i$ in figure 4(b) as $Ra$ increases towards the threshold reflects the approach to this limit. In fact, we find that despite the porous medium appearing to be stable to convection below the threshold, small amplitude flows still exist in this regime, as can be seen from the non-zero root-mean-squared vertical velocity in the porous layer for all $Ra$, shown in figure 5(b). Nevertheless, by computing the relative diffusive and advective contributions to the flux through the porous medium, we confirm that these flows have a negligible impact on heat transport below the threshold (figure 5c). We interpret the weak secondary porous flows in the porous-stable regime as a consequence of the horizontal variations in the interface temperature imposed by fluid convection, which are clearly visible in figure 3(a,b). As shown in figure 5(b,c), the strength of the porous flow increases dramatically as the porous layer becomes unstable, and it is only then that the advective contribution to the flux in the porous medium becomes significant. We return to discuss this induced flow below onset in the porous layer in § 4.3, and defer more detailed discussion and prediction of the behaviour of $\mathit {Nu}(\mathit {Ra},\mathit {Da})$ until the following section.
3.3. Time scales, variability and statistically steady state
The governing equations (2.13) reveal three different time scales that govern the variability of the two-layer system considered here. The first is the turnover time scale in the fluid layer, which is $O(1)$ in our free-fall normalisation. The second is given by diffusion, $\tau _{\mathit {diff}} = \sqrt {\mathit {Ra}}$, and the third is the turnover time scale in the porous layer $\tau _p \sim (\sqrt {\mathit {Ra}}\mathit {Da})^{-1}$, which scales with the inverse of the porous speed scale. Because $\tau _p$ and $\tau _{\mathit {diff}}$ measure advection and diffusion in the porous medium, these time scales are in a ratio $\tau _p = \tau _{\mathit {diff}}/\mathit {Ra}_p$. In addition, we recall that $\tau _p \gg 1$ is required for the porous layer to be in the confined limit and for the Darcy–Brinkman model to hold (see (2.16)). The turnover time scales should scale with the inverse of $w_{rms}$ in each layer, as can be observed in figure 5(b): in the fluid layer, $w_{rms} \sim O(1)$ with no systematic variation with $\mathit {Ra}$, while in the porous layer, $w_{rms} \propto \sqrt {\mathit {Ra}}$ at constant $\mathit {Da}$, in agreement with the scaling above. These two very different time scales are also clearly visible in the time series of the heat flux difference across the two-layer system, as shown in figure 6. Fast oscillations driven by the fluid convection variability are superimposed onto longer time variations owing to flow in the porous layer. Such a time series also illustrates how the two-layer set-up reaches a statistically steady state, the latter being characterised by the flux difference averaging to $0$ over long times. In the particular case of figure 6, the simulation is initiated from the diffusive equilibrium plus a small noise and we observe the equilibration to occur after ${\sim }0.2\tau _{\mathit {diff}}$. Although the equilibration time is greatly reduced by the use of continuation, we run all simulations over times that are similar to $\tau _{\mathit {diff}}$ to ensure they are converged.
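For example, at $\mathit{Ra} = 10^{8}$ and $\mathit{Da} = 10^{-5.5}$, these estimates give $\tau_{\mathit{diff}} = 10^{4}$ and $\tau_p = (\sqrt{\mathit{Ra}}\,\mathit{Da})^{-1} = 10^{1.5} \approx 32$, so that the free-fall, porous turnover and diffusive time scales are in the ratio $1 : 10^{1.5} : 10^{4}$, consistent with $\tau_p = \tau_{\mathit{diff}}/\mathit{Ra}_p$ for $\mathit{Ra}_p = \mathit{Ra}\,\mathit{Da} = 10^{2.5}$.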
Figure 6. (a) Time series of the flux difference between the top and bottom boundaries at $\mathit {Ra} = 10^{7.75}$, $\mathit {Da} = 10^{-5.5}$. The flux difference is normalised by the volume and temporal average of the flux $\langle J \rangle$ and time is normalised by $\tau _{\mathit {diff}} = \sqrt {\mathit {Ra}}$. The insets show magnified views that highlight the different levels of temporal variation, with horizontal segments indicating the porous turnover time scale $\tau _p$ and the free-fall time scale. Note that this simulation has been started from a diffusive state with a small noise.
4. Modelling heat transport and interface temperature
4.1. Predicting heat transport from individual layer behaviour
As observed in figure 4(a), when both layers are convecting, the temperature profiles in each layer are characterised by thin boundary layers at either the upper or lower boundaries of the domain, through which heat must diffuse. This structure is a generic feature of convection problems, and suggests that we may be able to generalise previous results and approaches used in standard Rayleigh–Bénard convection problems to predict the behaviour here.
4.1.1. An asymptotic approach based on boundary-layer marginal stability
Following the seminal approach of Malkus (1954) and Howard (1966), we posit that in the asymptotic limit of large Rayleigh and small Darcy numbers such that $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$, the thermal boundary layers at the upper and lower boundaries of the domain are held at a thickness that is marginally stable to convection. To apply this idea, let us consider a general region with a Rayleigh number ${R}$ (i.e. ${R} = \mathit {Ra}$ in the fluid layer and ${R} = \mathit {Ra}\,\mathit {Da}$ in the porous layer). If the boundary layer has mean thickness $\delta$, then we can also introduce a local boundary-layer Rayleigh number ${R} \delta ^\beta$, where $\beta =1$ in the porous layer or $\beta =3$ in the fluid layer to account for the different dependence on the height scale in the corresponding Rayleigh number (see (2.14c) and (2.15)). Let the temperature difference across the boundary layer be $\varTheta$ (figure 7), and, for completeness, suppose that the region has an arbitrary dimensionless height $\hat {h}$.
Figure 7. A schematic of the temporal and horizontal average of the temperature in the two-layer system, with a focus on the fluid and porous boundary layers.
Suppose further that we want to rescale lengths and temperatures to compare this general case more directly with the standard Rayleigh–Bénard cell of unit dimensionless height and unit temperature difference. In such a cell, the temperature contrast across the boundary layers is $1/2$, which suggests that we need to rescale temperatures by $2 \varTheta$ and lengths by $\hat {h}$, to give a new Rayleigh number $\hat {R} = 2\varTheta \hat {h}^{\beta} {R}$ and boundary-layer depth $\hat {\delta } = \delta /\hat {h}$. Having rescaled thus, the Malkus–Howard approach would suggest that
(4.1) \begin{equation} \hat{R} \hat{\delta}^\beta = {R}_c, \end{equation}
for some critical value ${R}_c$, or $\hat \delta = ({R}_c/\hat {R})^{1/\beta }$. The corresponding Nusselt number ${N}$ for this rescaled system, given that the scaled temperature drop across the layer is $1/2$, is
(4.2) \begin{equation} {N} = \frac{1}{2\hat{\delta}} = \frac{1}{2} \left(\frac{\hat{R}}{{R}_c}\right)^{1/\beta}. \end{equation}
Note that the actual, unscaled flux $\langle J\rangle$ across the boundary layer is $\langle J \rangle = \mathit {Ra}^{-1/2} \varTheta /\delta$, which can thus be related to ${N}$ from (4.1) and (4.2) by
(4.3) \begin{equation} \langle J \rangle = \frac{ 2 \varTheta}{\hat{h} \sqrt{\mathit{Ra}}} {N}. \end{equation}
Thus, specification of the critical value ${R}_c$ yields a prediction of the flux through the system in terms of $\varTheta$ and $\hat {h}$. We can extract values of ${R}_c$ from previous works that have determined experimentally or numerically the relation between ${N}$ and $\hat {R}$ in either a fluid or a porous Rayleigh–Bénard cell. For porous convection, Hewitt et al. (2012) found that $(2 {R}_c^{1/\beta })^{-1} \simeq 6.9 \times 10^{-3}$. For unconfined fluid convection, the host of historical experiments reported by Ahlers, Grossmann & Lohse (2009) and Plumley & Julien (2019), as well as more recent studies, for instance by Urban et al. (2014) or Cheng et al. (2015), suggest that $(2 {R}_c^{1/\beta })^{-1} \sim 6\text {--}8 \times 10^{-2}$, although no definitive observation of a well-developed $\mathit {Ra}^{1/3}$ law has been made so far and the 'true' asymptotic form of ${N}(\hat {R})$ remains a hotly contested question. Nevertheless, these values provide an estimate for the heat flux in both the porous and the fluid layers in the asymptotic limit $\mathit {Ra} \gg 1$ and $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$ that can be compared with our simulations, as discussed in the next section.
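For illustration, these prefactors correspond, through (4.2), to marginal boundary-layer Rayleigh numbers of ${R}_c = (2 \times 6.9 \times 10^{-3})^{-1} \simeq 72$ in the porous case and ${R}_c = (2 \times 7 \times 10^{-2})^{-3} \simeq 3.6 \times 10^{2}$ in the fluid case.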
4.1.2. Generalising the flux estimate using Rayleigh–Nusselt laws
Our simulations remain limited to moderate Rayleigh numbers, mainly because of the flows through the interface that need to be accurately resolved, and so the asymptotic arguments outlined in the previous section may not be accurate. However, it is straightforward to generalise the asymptotic approach of the previous section to a case where the Nusselt number is some general function $\mathcal {N}(\hat {R})$ of the rescaled Rayleigh number, rather than the asymptotic scaling. To achieve this, we simply replace (4.2) by the relationship
(4.4) \begin{equation} {N} = \mathcal{N}(\hat{R}). \end{equation}
(Equivalently, one could generalise the asymptotic results above to allow $R_c$ to vary with $\hat {R}$ in the manner necessary to recover (4.4).) Over the range of fluid Rayleigh numbers considered here, Cheng et al. (2015) determined an approximate fit to the fluid Nusselt number function $\mathcal {N}_f$ of
(4.5) \begin{equation} \mathcal{N}_f (\hat{R}_f) = 0.162\, \hat{R}_f^{0.284}. \end{equation}
In the porous layer, Hewitt et al. (2012) and Hewitt (2020) considered the equivalent porous Nusselt number $\mathcal {N}_p$ and found a correction to the asymptotic relationship of the form $\mathcal {N}_p (\hat {R}_p) = 6.9 \times 10^ {-3}\, \hat {R}_p + 2.75$, which recovers the asymptotic relationship in the limit $\hat {R}_p \rightarrow \infty$. In fact, we also carried out additional simulations of porous convection in a layer of height $\hat {h} = 1$ subjected to a maintained temperature difference of $1$, but in which the top boundary is open, allowing the fluid to flow in and out. An affine fit of the Nusselt number against the porous Rayleigh number in these simulations provided us with the law
(4.6) \begin{equation} \mathcal{N}_p (\hat{R}_p) = 6.99 \times 10^ {{-}3}\, \hat{R}_p + 1.56. \end{equation}
Note that in such a cell, the temperature difference across the bottom boundary layer is not $1/2$ as in a classical Rayleigh–Bénard cell, but rather approximately $\varTheta \sim 0.8$. Note also that the linear coefficient of the fit in (4.6) is effectively the same as that found in the classical cell, which indicates that for a sufficiently large porous Rayleigh number, the mechanisms controlling the lower boundary layer are the same regardless of the nature of the interface condition.
Figure 8 shows a comparison of the predictions of this theory and various flux laws with our numerical data. We show the Nusselt number ${N}$, calculated in our simulations using (4.3), as a function of the rescaled Rayleigh number $\hat {R}$ in both the fluid layer (index $f$) and the porous layer (index $p$), with the temperature drop across the boundary layers $\varTheta$ determined from the numerical simulations. We have indicated for reference the single-layer laws (4.5) and (4.6), which are in close agreement with our data. We also include two simulations with different layer depths $\hat {h}_p$ and $\hat {h}_f$ to demonstrate the generality of the theory. In figure 8(a), we observe that the points lie slightly below the law (4.5) found by Cheng et al. (2015), but we note that our values of ${N}_f$ lie within the range of the scatter of the data upon which that fit is based. Overall, the figure indicates that flux laws extracted from individual layers can be used to predict the flux in the two-layer system, after careful rescaling.
Figure 8. The Nusselt number $N$, extracted from the measured flux $\langle J\rangle$ and temperature contrast across the boundary layers via (4.3), as a function of the rescaled Rayleigh number $\hat {R} = {R}\,2 \varTheta \hat {h}^\beta$ in the fluid layer (panel a with subscript $f$) and in the porous layer (panel b with subscript $p$). The values of $\varTheta$, the temperature difference across the boundary layers, are extracted from numerical data. Numerical data are sorted by equal values of the damping factor $\sqrt {\mathit {Ra}}\mathit {Da}$. In each panel, fits from studies of the single-layer case (4.5) or (4.6) are included (dashed grey) together with asymptotic predictions for $\hat {R} \to \infty$ as discussed in § 4.1.1 (red dots). Panel (b) also includes data from simulations of a single porous Rayleigh–Bénard cell with an open-top boundary on which the fit (4.6) is based (red stars). All but two simulations had $\hat {h}_p = \hat {h}_f = 1$; the values of $\hat {h}_p$ for these two simulations are given in the legend.
4.2. The interface temperature
We can use the laws governing the heat flux to determine the interface temperature $\theta _i$, which is important for applications as it gives an order of magnitude of the average fluid temperature. The interface temperature must be set by the constraint of flux conservation between the two layers, which implies that, in a statistically steady state, the flux in (4.3) must be the same in both the porous and fluid layers. The caveat is that it is not clear how to relate the temperature difference $\varTheta$ across the boundary layers to the interface temperature $\theta _i$. We overcome this issue by positing that the temperature difference across each boundary layer is a fraction of the difference across the corresponding layer, and so introduce $O(1)$ coefficients $\mathcal {C}_{f,p}$ such that $\varTheta _{f} = \mathcal {C}_f \theta _i$ and $\varTheta _p = \mathcal {C}_p (1-\theta _i)$ (see figure 7). In single classical Rayleigh–Bénard cells, $\mathcal {C}_{f,p} = 0.5$ because both boundary layers are symmetric and diffusive. This is not true in the two-layer system, where the transport across the interface includes contributions from both diffusion and advection, and $\mathcal {C}_{f,p}$ may take any value ranging from $0.5$ to $1$. Given these coefficients, flux conservation between the layers yields
(4.7) \begin{equation} \frac{2 \mathcal{C}_f \theta_i}{ \hat{h}_f} \mathcal{N}_f (2 \mathcal{C}_f \theta_i \hat{h}_f^3 \mathit{Ra}) = \frac{2 \mathcal{C}_p (1-\theta_i)}{\hat{h}_p} \mathcal{N}_p (2 \mathcal{C}_p (1-\theta_i) \hat{h}_p \mathit{Ra} \mathit{Da}), \end{equation}
which determines the interface temperature $\theta _i$. In general, the fractions $\mathcal {C}_{f,p}$ depend on the control parameters $\mathit {Ra}$ and $\mathit {Da}$, as shown in figure 9.
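The flux-matching condition (4.7) is a scalar equation for $\theta_i$ and can be solved with standard root finding. The following is a minimal sketch using the fitted laws (4.5) and (4.6); the function names and default coefficient values are illustrative assumptions rather than the exact procedure used to produce the figures.

```python
from scipy.optimize import brentq

# Fitted single-layer Nusselt laws, (4.5) and (4.6).
def N_f(R_hat):
    return 0.162 * R_hat**0.284

def N_p(R_hat):
    return 6.99e-3 * R_hat + 1.56

def interface_temperature(Ra, Da, hf=1.0, hp=1.0, Cf=0.5, Cp=0.85):
    """Solve the flux-matching condition (4.7) for theta_i with Brent's method."""
    def imbalance(ti):
        fluid = 2 * Cf * ti / hf * N_f(2 * Cf * ti * hf**3 * Ra)
        porous = 2 * Cp * (1 - ti) / hp * N_p(2 * Cp * (1 - ti) * hp * Ra * Da)
        return fluid - porous
    return brentq(imbalance, 1e-9, 1 - 1e-9)

# Example: interface temperature at Ra = 1e8, Da = 10**-5.5.
theta_i = interface_temperature(1e8, 10**-5.5)
```

Because the left-hand side of (4.7) increases monotonically with $\theta_i$ while the right-hand side decreases, the root is unique and bracketing on $(0,1)$ is sufficient.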
Figure 9. (a,b) Values of $\mathcal {C}_p$ and $\mathcal {C}_f$ (see figure 7) extracted from simulations for different values of the porous velocity scale $\sqrt {\mathit {Ra}}\mathit {Da}$. (c) Schematic representation of two limiting temperature profiles in the limit of large Rayleigh numbers, based on panels (a) and (b). The porous Rayleigh number is large so that $\mathcal {C}_p = 0.85$, whereas $\mathcal {C}_f$ varies from $1$ in the weakly confined limit ( $\sqrt {\mathit {Ra}}\mathit {Da} \lesssim 1$) to $0.5$ in the strongly confined limit ( $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$).
4.2.1. The porous-convective regime
We first address the case where both layers are unstable to convection. Although $\mathcal {C}_p$ and $\mathcal {C}_f$ may take any value between $0.5$ and $1$, we can still make qualitative predictions of their values depending on the control parameters. Because these coefficients describe the temperature difference between the bulk and the boundary layers, they are also a proxy for the temperature drop at the interface. In the case where the porous velocity scale $\sqrt {\mathit {Ra}}\mathit {Da}$ is very small, i.e. the porous turnover time scale $\tau _p = (\sqrt {\mathit {Ra}}\mathit {Da})^{-1}$ is very long compared with the free-fall time scale ( $=$1 in dimensionless units), the porous flow is strongly confined and heat transfer is purely diffusive at the interface, so that $\mathcal {C}_p = \mathcal {C}_f = 0.5$. In the opposite, weakly confined limit where $\sqrt {\mathit {Ra}}\mathit {Da}$ is brought up to 1, the porous and fluid velocities become similar and the interface heat transfer becomes advective, which forces the interface temperature drop to vanish (see, for instance, the $\mathit {Ra} = 10^9$ temperature profile in figure 4(a), for which $\sqrt {\mathit {Ra}}\mathit {Da}=10^{-1}$). In such a limit, $\mathcal {C}_p = \mathcal {C}_f = 1$. The values of $\mathcal {C}_p$ and $\mathcal {C}_f$ extracted from numerical simulations are shown in figures 9(a) and 9(b), respectively. The qualitative picture described above appears to be consistent with the numerical observations for the fluid coefficient: as shown in figure 9(b), although $\mathcal {C}_f$ varies with $\mathit {Ra}$, it is also larger for larger values of $\sqrt {\mathit {Ra}}\mathit {Da}$. Given that there is no clear collapse of these data, in the following discussion of the interface temperature we consider two extreme limits in the asymptotic regime $\mathit {Ra} \to \infty$, as sketched in figure 9(c): $\mathcal {C}_f = 0.5$ in the strongly confined limit ( $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$) and $\mathcal {C}_f = 1$ in the weakly confined limit ( $\sqrt {\mathit {Ra}}\mathit {Da} \lesssim 1$).
In the porous layer, the coefficient $\mathcal{C}_p$ is found to be mainly a function of $\mathit{Ra}_p = \mathit{Ra}\,\mathit{Da}$ and appears to approach an asymptote $\mathcal{C}_p\simeq 0.85$ (see figure 9a). $\mathcal{C}_p$ follows a trend that is similar to the case of a single porous layer with an open-top boundary (the red stars in figure 9a), and only weakly depends on $\sqrt{\mathit{Ra}}\,\mathit{Da}$. This similarity with the single-layer case suggests that the turnover and mixing in the fluid is sufficiently rapid (compared with the motion in the porous medium) that, from the point of view of the porous layer, the upper layer behaves like a reservoir of fluid at an approximately constant temperature. Given this, some departure from the observed insensitivity to $\sqrt{\mathit{Ra}}\,\mathit{Da}$ is expected when $\sqrt{\mathit{Ra}}\,\mathit{Da} \rightarrow 1$; indeed, some suggestion of this might be visible at $\sqrt{\mathit{Ra}}\,\mathit{Da} = 10^{-1}$ (see figure 9a). However, despite following similar trends, the values of $\mathcal{C}_p$ appear to be larger in the two-layer system than they are in the single porous medium, where the asymptotic value is somewhat lower ( $\mathcal{C}_p \simeq 0.78$). This is presumably a consequence of the temperature being imposed as a constant at every point on the upper boundary in the case of a single porous layer. This is a stronger constraint than in the two-layer system, where hot porous plumes can locally heat the bottom of the fluid layer, hence reducing the interface temperature drop and resulting in larger values of $\mathcal{C}_p$.
Before showing the asymptotic predictions based on these limiting values of $\mathcal {C}_{p,f}$, we first use the actual extracted values to verify the accuracy of (4.7) in predicting the interface temperature (figure 10). We find the agreement between the two to be within $10\,\%$ relative error. This figure also shows the predicted interface temperature for $\mathit {Ra} \gg 1$ for the two limiting cases $\mathcal {C}_f = 0.5$ and $\mathcal {C}_f = 1.0$ with $\mathcal {C}_p = 0.85$, as discussed above. Apart from values close to the threshold of porous convection, all the numerical data lie between them. Moreover, the weakly confined simulations with larger $\sqrt {\mathit {Ra}}\mathit {Da}$ show interface temperatures approaching the limit $\mathcal {C}_f =1$ (figure 10a), while the more strongly confined simulations with $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$ draw closer to $\mathcal {C}_f = 0.5$ as $\mathit {Ra}$ is increased (figure 10d), which suggests that these limits will become increasingly accurate at increasingly large $\mathit {Ra}$.
Figure 10. Plot of the interface temperature $\theta _i$ as a function of the Rayleigh number $\mathit {Ra}$ for different values of $\sqrt {\mathit {Ra}}\mathit {Da}$. The filled symbols are the numerical data and the empty symbols represent the predicted $\theta _i$ using (4.7) with the values of $\mathcal {C}_{p,f}$ extracted from the simulations and shown in figure 9(a,b). The black lines show the law expected for the interface temperature in the two limiting cases of a strongly confined porous medium ( $\sqrt {\mathit {Ra}}\mathit {Da} \rightarrow 0$, $\mathcal {C}_p = 0.85$ and $\mathcal {C}_f = 0.5$) and the weakly confined case ( $\sqrt {\mathit {Ra}}\mathit {Da} \lesssim 1$, $\mathcal {C}_p = 0.85$ and $\mathcal {C}_f = 1.0$). The red line is the interface temperature in the case where the porous medium remains diffusive; the plus sign highlights the value at the threshold of porous convection based on the prediction (4.9).
4.2.2. The porous-diffusive regime
If the porous layer remains stable, dominated by diffusive heat transfer, then the expression for the flux in the fluid layer is still given by (4.3), but in the porous layer, it is simply $\langle J \rangle = \mathit {Ra}^{-1/2} (1-\theta _i)/\hat {h}_p$. A balance of these expressions should then replace (4.7) to determine $\theta _i$. Moreover, we know a priori that $\mathcal {C}_{f} = 0.5$ because the flux is entirely diffusive at the interface $z=0$. Flux conservation thus reduces to
(4.8) \begin{equation} \frac{\theta_i}{\hat{h}_f} \mathcal{N}_f(\theta_i \hat{h}_f^3\mathit{Ra} ) = \frac{1-\theta_i}{\hat{h}_p}, \end{equation}
which we solve numerically. The predicted interface temperature from this model is shown in figure 4(b) and gives reasonably good agreement with the numerical data. Given $\theta _i$, we can also extract the Nusselt number for the two-layer system, ${\mathit {Nu} = 2 \sqrt {\mathit {Ra}} \langle J\rangle = 2 (1-\theta _i)/ \hat {h}_p}$ (see § 3), which was also shown to give good agreement with the numerical data in figure 5(a) for the particular case $\hat {h}_p = 1$. Note that this prediction also agrees with an earlier result of Chen & Chen (1992) that the two-layer Nusselt number is bounded above by $2/\hat {h}_p$ when the porous layer remains stable.
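The same root-finding approach as for (4.7) applies here; a minimal sketch, with the fluid flux law (4.5) repeated so that the snippet is self-contained:

```python
from scipy.optimize import brentq

def N_f(R_hat):
    # Fluid Nusselt law (4.5), repeated here for self-containment.
    return 0.162 * R_hat**0.284

def interface_temperature_diffusive(Ra, hf=1.0, hp=1.0):
    """Solve the flux balance (4.8) for theta_i when the porous layer is
    purely diffusive (the factor 2*C_f equals 1 since C_f = 0.5)."""
    imbalance = lambda ti: ti / hf * N_f(ti * hf**3 * Ra) - (1 - ti) / hp
    return brentq(imbalance, 1e-9, 1 - 1e-9)
```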
In fact, prediction of $\theta _i$ allows for a more accurate prediction of the threshold of convection in the porous medium. Neglecting any temperature variations induced by fluid convection above, we expect that porous convection sets in when $\mathit {Ra}\,\mathit {Da}\, \hat {h}_p (1-\theta _i)$ reaches a critical value $\mathit {Ra}_p^c \simeq 27.1$ (Nield & Bejan 2017). Because $\theta _i \rightarrow 0$ at large Rayleigh numbers while the porous layer remains stable, this condition may be recast as
(4.9) \begin{equation} \mathit{Ra} = \frac{\mathit{Ra}_p^c}{\mathit{Da} \,\hat{h}_p} [ 1+ \theta_i + O(\theta_i^2)] \end{equation}
at the threshold of porous convection. To leading order, the threshold is given by $\mathit {Ra} = \mathit {Ra}_p^c /(\mathit {Da} \hat {h}_p)$, as already noted in § 3.1. For the value of $\mathit {Da}$ used in § 3 where $\hat {h}_p = 1$, including the first-order correction increases the predicted threshold value of $\mathit {Ra}$ by approximately $10\,\%$.
4.2.3. Asymptotic predictions
These relationships for $\theta _i$, and thus for the flux, simplify in the asymptotic regime of very large Rayleigh numbers and extremely low Darcy numbers that is relevant for geophysical applications ( $\mathit {Ra} \mathit {Da} \gg 1$ while $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$). In such a regime, the Rayleigh–Nusselt relations in convecting sub-layers follow the asymptotic scalings discussed at the end of § 4.1.1:
(4.10a,b) \begin{equation} \mathcal{N}_p = 6.9 \times 10^{{-}3} \hat{R}_p \equiv \alpha_p \hat{R}_p \quad \text{and}\quad \mathcal{N}_f \simeq 7 \times 10^{{-}2} \hat{R}_f^{1/3} \equiv \alpha_f \hat{R}_f^{1/3}. \end{equation}
In the case of a stable porous medium, the flux balance (4.8) reduces to
(4.11) \begin{equation} \alpha_f \mathit{Ra}^{1/3} \theta_i^{4/3} = \frac{1-\theta_i}{\hat{h}_p }, \end{equation}
with an asymptotic upper bound
(4.12) \begin{equation} \theta_i \sim \frac{7.3}{\hat{h}_p^{3/4} \,\mathit{Ra}^{1/4}}, \end{equation}
for $\mathit {Ra} \gg 1$. In the case where both layers are convecting, (4.7) instead reduces to
(4.13) \begin{equation} \theta_i^{4/3} = \frac{(2 \mathcal{C}_p)^{2}}{(2 \mathcal{C}_f)^{4/3}} \frac{\alpha_p}{\alpha_f} \, \mathit{Ra}^{2/3} \mathit{Da} \,(1-\theta_i)^2, \end{equation}
which shows that the interface temperature is a function of the grouping $\mathit {Ra}^{2/3} \mathit {Da}$ alone in this limit. The height of the layers $\hat {h}_p$ and $\hat {h}_f$ do not appear in (4.13) because the heat flux is controlled by boundary layers whose widths are asymptotically independent of the depth of the domain (Priestley Reference Priestley1954).
The predictions of (4.13) are shown in figure 11 for the two limiting values of $\mathcal {C}_f$, along with the simulation data for reference. There is rough agreement between data and prediction, although the prediction slightly underestimates the data, presumably owing to the finite values of $\mathit {Ra}$ and $\mathit {Ra}_p$ in the simulations. The figure demonstrates that $\theta _i$ decreases with decreasing $\mathit {Ra}^{2/3} \mathit {Da}$: at constant Rayleigh number, decreasing the Darcy number decreases the efficiency of heat transport in the porous medium and, as a consequence, the porous layer must absorb most of the temperature difference, forcing a decrease in $\theta _i$. In the asymptotic limit of large $\mathit {Ra}$ but very small $\mathit {Da}$, such that $\mathit {Ra}^{2/3} \mathit {Da}$ remains small, $\theta_i \ll 1$ and so, from (4.13) with $1-\theta_i \approx 1$, the interface temperature satisfies
(4.14) \begin{equation} \theta_i \sim \sqrt{\mathit{Ra} \mathit{Da}^{3/2}}, \end{equation}
for $\mathit {Ra}^{2/3} \mathit {Da} \ll 1$. Conversely, $\theta _i$ increases with increasing $\mathit {Ra}^{2/3} \mathit {Da}$ but evidently cannot increase without bound, which reflects the fact that the grouping $\mathit {Ra}^{2/3}\mathit {Da}$ cannot take arbitrarily large values. For $\mathit {Ra}^{2/3}\mathit {Da} \gtrsim O(10)$, the flow structures in the porous medium will become smaller than the pore scale, breaking the assumption of Darcy flow there (see (2.17)).
Figure 11. Plot of the interface temperature $\theta _i$ found in simulations (symbols) and predicted with the asymptotic model (4.13) as a function of $\mathit {Ra}^{2/3} \mathit {Da}$. The two lines show the two limiting cases of a strongly confined porous medium ( $\sqrt {\mathit {Ra}}\mathit {Da} \rightarrow 0$, $\mathcal {C}_p = 0.85$ and $\mathcal {C}_f = 0.5$) and the weakly confined case ( $\sqrt {\mathit {Ra}}\mathit {Da} \lesssim 1$, $\mathcal {C}_p = 0.85$ and $\mathcal {C}_f = 1.0$). Note that $\mathit {Ra}^{2/3} \mathit {Da}$ must remain smaller than ${\sim }10$ to ensure that the constraint (2.17) is satisfied.
4.3. Penetrative convection in the porous-diffusive regime
We end this discussion of fluxes with a brief consideration of the issue of so-called 'penetrative convection' – significant subcritical flow in the porous layer – which has been a contentious subject in some previous studies. For instance, Poulikakos et al. (1986) and, more recently, Bagchi & Kulacki (2014) reported numerical simulations of porous flows forced by fluid convection despite the porous Rayleigh number being subcritical, but Chen & Chen (1992) found that convection should be confined to the fluid layer only. In the simulations detailed in § 3, we found that while weak flows do exist in the porous-stable case, they do not enhance heat transport compared with diffusion, in contrast to the results of Bagchi & Kulacki (2014) (see, for example, their figure 3.3). We argue here that provided the Darcy number is not too large, this is a general result: it is not possible to induce subcritical flow in the porous layer that has a significant impact on the heat flux through the layer, without violating the limitations of the model outlined in § 2.6.
While the lower boundary condition on the porous layer is uniform, $\theta (z=-\hat {h}_p) = 1$, the temperature at the interface will, in general, display horizontal variations owing to fluid convection above. Because the temperature cannot be smaller than $0$, the amplitude of these horizontal variations is, at most, of the order of the interface temperature $\theta _i$. Any penetrative flow must be driven by these horizontal variations, and so will have an amplitude $w \sim \sqrt {\mathit {Ra}}\mathit {Da}\, \theta _i$. The heat transported on average by the penetrative flow scales with $w \theta \sim \sqrt {\mathit {Ra}}\mathit {Da}\, \theta _i^2$, and so its contribution to the Nusselt number $\mathit {Nu}$ is ${\sim }\mathit {Ra}\,\mathit {Da}\, \theta _i^2$. The contribution of the penetrative flow thus becomes significant, relative to the $O(1)$ diffusive flux through the layer, when $\mathit {Ra}\,\mathit {Da}\, \theta _i^2 \sim O(1)$. However, according to the upper bound in (4.12), $\mathit {Ra}\,\mathit {Da}\, \theta _i^2 \lesssim 50 \sqrt {\mathit {Ra}}\,\mathit {Da}$ for $\mathit {Ra} \gg 1$. The assumption that penetrative flows do not transport appreciable heat is therefore accurate as long as $50 \sqrt {\mathit {Ra}}\mathit {Da} \lesssim 1$, or $\mathit {Ra}_p \lesssim 4 \times 10^{-4}\, \mathit {Da}^{-1}$. Provided $\mathit {Da} < O(10^{-5})$, this value of the porous Rayleigh number is always larger than the critical value $\mathit {Ra}_p^c$ for the onset of convection in the porous layer, and there will be no enhanced penetrative convection in the porous layer at subcritical values of $\mathit {Ra}_p$. For example, at $\mathit {Da} = 10^{-5.5}$ (the value used in § 3), this criterion gives $\mathit {Ra}_p \lesssim 4 \times 10^{-4} \times 10^{5.5} \approx 130$, comfortably above the onset value $\mathit {Ra}_p^c \simeq 27.1$. Given that in most geophysical systems we expect $\mathit {Da} \ll 10^{-5}$, we conclude that, in general, diffusion controls the heat flow through the medium and penetrative convection plays a negligible role. In general, for $\mathit {Ra} \gg 1$, penetrative convection can only occur if the Darcy number is such that either the constraint $\sqrt {\mathit {Ra}}\mathit {Da} \ll 1$ (2.16) – which enforces that the flow in the porous medium remains confined with negligible inertial effects – or the constraint $\mathit {Ra}\,\mathit {Da}^{3/2} \lesssim 50$ (2.17) – which enforces that the flow length scales are larger than the pore scale – is violated.
5. Temporal coupling between the layers
In this two-layer set-up, heat is transported through two systems with very different response time scales while carrying the same average heat flux. The dynamics in the unconfined fluid layer must, therefore, exhibit variability on both a slow time scale imposed by the porous layer below and on a rapid time scale inherent to turbulent fluid convection. In turn, as heat is transported to the top of the two-layer cell, fluid convection must mediate and possibly filter the long variations of the porous activity in a manner that remains to be quantified.
5.1. Heat-flux variations with height
The contrast between imposed and inherent variability of fluid convection is first illustrated by time series of the horizontally averaged heat flux $J(t,z) = \overline {w\theta } - \mathit {Ra}^{-1/2} \bar {\theta }'$ at different heights, shown in figure 12. The flux in the porous layer experiences long-lived bursts of activity that can amount to up to a 50 % increase of the flux compared with its average value, with a duration that is controlled by the porous turnover time scale $\tau _p \sim (\sqrt {\mathit {Ra}}\,\mathit {Da})^{-1}$. In figure 12, the signature of these long-time variations can be traced up to the top of the fluid layer, where they are superposed on much faster variations in heat flux associated with the turbulent convective dynamics, which evolve on the $O(1)$ free-fall time scale. However, comparison between the time series at $z=0$ and $z=1$ in figure 12 reveals that the typical intensity of the bursts is notably weaker at the top of the fluid layer than at the interface. Therefore, fluid convection is not a perfect conveyor of the long-time, imposed variability, which it partially filters out.
Figure 12. Time series of the $x$-averaged flux $J (z,t) = \overline {w\theta }- \mathit {Ra}^{-1/2} \bar {\theta }'$ at different heights in a simulation carried out at $\mathit {Ra} = 10^8$ and $\mathit {Da} = 10^{-5.5}$. The heat flux is normalised by its average value over the whole domain and over time. Note that the scale of the $y$ axis is larger at mid-height in the fluid layer ( $z = 0.5$). Time is normalised by the free-fall time scale. The present simulation spans more than one diffusive time scale, since $\tau _{\mathit {diff}} = 10^4$.
5.2. Spectral content of the heat flux at different heights
We use spectral analysis of the heat flux time series to better quantify the inherent and imposed variability of fluid convection, and the latter's filtering effect on the imposed variability. Figure 13(a) shows the power spectra $\vert \hat {J}(\omega,z)\vert ^2$ of the signals displayed in figure 12. First, the spectral content that is inherent to fluid convection is easily distinguished from that imposed by porous convection. The variability of the flux in the porous layer ( $z=-0.5$ in figure 13a) is almost entirely contained in harmonics with frequencies smaller than ${\sim }10^{-2}$. The energy of these lower harmonics in the fluid layer ( $z\geq 0$) closely follows the spectral content in the porous layer, which confirms that it is primarily imposed by the porous flow. Higher harmonics are therefore controlled by fast fluid convection. This is particularly well illustrated by the spectra at the fluid boundaries ( $z=0$ and $z=1$) being effectively identical above $\omega = 5 \times 10^{-2}$: we retrieve the top-down symmetry of classical Rayleigh–Bénard convection with imposed uniform temperature at the boundaries for these higher harmonics. The filtering effect of the fluid layer is visible at lower frequencies, where the energy decays as $z$ increases. It is further quantified in figure 13(b), where we show the energy at particular frequencies as a function of depth through the fluid layer. We find that at low frequency, the energy decays exponentially with $z$, with a rate that seems to increase with $\omega$. The spatial decay is lost for higher harmonics, which are driven directly by fluid convection; instead, the energy is maximised in the bulk of the fluid layer and decays at the boundaries.
Figure 13. (a) Power spectrum $\vert \hat {J}(\omega,z) \vert ^2$ of the heat flux time series shown in figure 12 and normalised by $\langle J \rangle ^2$. (b) Spatial variations of the power spectrum $\vert \hat {J}(\omega,z) \vert ^2$ in (a) with depth through the fluid layer, shown for several frequencies, the lower two being typical of porous convection and the larger two typical of fluid convection. (c) Vertical decay rate $r$ of the energy of low frequency harmonics of the heat flux, for $\mathit {Ra} = 10^{8}$ with $\mathit {Da}$ varied by an order of magnitude from $10^{-6}$ to $10^{-5}$ as indicated in the legend. The dotted line gives the power law $\omega ^{0.75}$ for reference.
5.3. Spatial decay rate of low-frequency variability
We attempt to quantify the filtering effect of fluid convection on the long-time variations by systematically measuring the spatial decay rate of the low-frequency harmonics.
By linear fitting of $\ln (\vert \hat {J} (\omega,z)\vert ^2 )$ over $z \in [0,0.6]$, we extract the decay rate $r$ for all frequencies below $\omega = 10^{-2}$, above which temporal variations become partially imposed by fluid rather than porous convection. The result of this process is shown in figure 13(c). Despite some spread in the extracted values, they all appear to follow the same trend: below $\omega \simeq 5 \times 10^{-3}$, the energy decay rate roughly increases like $\omega ^{0.75}$, and above this value, the decay rate saturates, presumably because the harmonics are increasingly driven by fluid convection. That is, sufficiently slow variations imposed by the porous layer decay very slowly through the fluid layer, but more rapid variations decay faster. The exponential decay of the flux harmonics with $z$ is reminiscent of the problem of diffusion in a solid subjected to an oscillating temperature boundary condition. In that classical problem, the decay rate has a diffusive scaling, $r\sim \omega ^{1/2}$, for oscillation frequency $\omega$. Here, our results instead suggest that fluid convection acts as a sub-diffusive process on the low-frequency flux variations imposed by the porous medium. A quantitative explanation for such behaviour remains elusive. It does, however, at least seem reasonable that the decay rate should increase with $\omega$. As $\omega \to 0$, the decay rate must vanish, because the average heat flux through the system is conserved with height, but higher frequency variations in the porous layer will lead to localised bursts of plumes that are quickly mixed into the large-scale circulation of the fluid convection; the extra flux momentarily increases the temperature of the fluid but is not transmitted up to the top of the layer. Additional work, possibly with more idealised models of fluid convection subjected to flux variations, is required to fully understand the phenomenology that we have outlined here. Although our results on the filtering effect remain preliminary, we can still predict the typical porous turnover time scale $\tau _p$ needed to ensure that the decay of the variability in the fluid layer is negligible. The power spectrum of the flux at $z=0$ and $z=\hat {h}_f$ is in a ratio ${\sim }\exp (-r(\omega ) \hat {h}_f)$. The cut-off frequency of such a low-pass filter, reached when $\exp (-r \hat {h}_f) \sim 1/2$, is roughly located at $\omega = \omega _c = 10^{-3}$ in the particular case of $\hat {h}_f = 1$, according to figure 13(c). Therefore, using $r \sim \omega ^{3/4}$, we predict in general that when $1/\tau _p = \sqrt {\mathit {Ra}}\mathit {Da}$ is smaller than ${\sim }\hat {h}_f^{-4/3} \omega _c$, the temporal variability imposed by porous convection is sufficiently slow that it will be entirely transmitted across the fluid layer. For the layer depth used in most of our simulations ( $\hat {h}_f = 1$), this criterion corresponds to $\sqrt {\mathit {Ra}}\mathit {Da} \lesssim 10^{-3}$, which at $\mathit {Ra} = 10^{8}$, for example, requires $\mathit {Da} \lesssim 10^{-7}$.
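For concreteness, the following is a minimal sketch of the decay-rate extraction described above, assuming the horizontally averaged flux has been saved as an array $J(t,z)$ sampled at uniform times; the array names, shapes and the use of a simple least-squares fit are assumptions made for illustration.

```python
import numpy as np

# Sketch of the analysis of Section 5.3: power spectrum of the flux time
# series at each height, then an exponential-decay fit across the fluid
# layer for each low-frequency harmonic. J has shape (nt, nz); t is uniform.
def decay_rates(J, t, z, omega_max=1e-2, z_fit=(0.0, 0.6)):
    dt = t[1] - t[0]
    omega = np.fft.rfftfreq(len(t), d=dt)             # frequencies of the series
    power = np.abs(np.fft.rfft(J, axis=0))**2         # |J_hat(omega, z)|^2
    in_band = (omega > 0) & (omega < omega_max)
    in_fluid = (z >= z_fit[0]) & (z <= z_fit[1])
    rates = []
    for spectrum in power[in_band]:
        slope, _ = np.polyfit(z[in_fluid], np.log(spectrum[in_fluid]), 1)
        rates.append(-slope)                          # decay rate r(omega)
    return omega[in_band], np.array(rates)
```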
6. Conclusions
In this article, we have explored heat transport in a Rayleigh–Bénard cell composed of a fluid-saturated porous bed overlain by an unconfined fluid layer, using numerical simulations and theoretical modelling. The focus of the work has been on the geologically relevant limit of large Rayleigh number, $\mathit {Ra}$, and small Darcy number, $\mathit {Da}$, such that heat is transported through the system by vigorous convection but the flow within the porous medium remains inertia-free and well described by Darcy's law. To the best of our knowledge, this is the first study of porous–fluid two-layer systems in this limit.
Having identified suitable limits on the parameters, we demonstrated that the dynamics and heat flux through the two-layer system are strongly dependent on whether the flow in the porous layer is unstable to convection, which should occur if $\mathit {Ra} \gtrsim 27/(\hat {h}_p\mathit {Da})$. By suitably rescaling, we showed that flux laws from individual convecting fluid or porous layers could be used to predict both the flux through the two-layer system and the mean temperature at the interface between the two layers. In the asymptotic limit of large $\mathit {Ra}$, we find that while flow in the porous layer remains stable ( $\mathit {Ra}\,\mathit {Da}\,\hat {h}_p \lesssim 27$), the interface temperature $\theta _i$ satisfies $\theta _i \sim \mathit {Ra}^{-1/4}$, whereas if the porous layer becomes unstable to convection, $\theta _i \sim \sqrt {\mathit {Ra}\,\mathit {Da}^{3/2}}$, provided $\mathit {Da}$ remains sufficiently small that $\mathit {Ra}\,\mathit {Da}^{3/2} \ll 1$.
We briefly investigated the role played by 'penetrative convection' (Bagchi & Kulacki 2014), i.e. subcritical flows in the porous medium driven by fluid convection above. If the fluid above is convecting, then some weak flow will always be driven in the porous layer by horizontal temperature variations imposed at the interface; but we show that they are always too weak to contribute significantly to heat transport, unless some of the model assumptions about flow in the porous medium are violated. Interestingly, the laws governing or limiting heat transport and the interface temperature in porous–fluid convection have been derived from the behaviour of each layer considered separately. They do not rely on the details of the flow at the interface, in particular, on the pore-scale boundary layer at the transition between the porous and the pure fluid flows. Therefore, the laws that we have identified here should only weakly depend on the choice of the formulation of the two-layer convection problem (see the discussion in § 2.1).
Lastly, we also briefly explored the manner in which rapid fluid convection mediates the long-time variations of activity in the porous layer. The amplitude of low-frequency temporal variations of the flux imposed by porous convection decays exponentially through the unconfined fluid layer, in a way that is reminiscent of a diffusive process. As a result, fluid convection acts as a low-pass filter on the bursts of activity in the porous layer. We predict that for $\sqrt {\mathit {Ra}}\mathit {Da} < 10^{-3} \hat {h}_f^{-4/3}$, the decay of low-frequency variations in the fluid layer is negligible and the variability of porous convection is entirely transmitted to the top of the two-layer system.
Before ending, we return to consider the question of how important it is to resolve both porous and fluid layers when studying these coupled systems in astrophysical or geophysical settings, rather than using parametrised boundary conditions. Our results in the limit of strong convection suggest that, while the details of convection in each layer are important for controlling the interface temperature and heat flux through the system, there is very little coupling in the dynamical structure of the flow between each layer. As may be noticed in figure 3, for example, fluid convection remains organised in large-scale rolls even if it is forced by several hot plumes from the porous medium below. Therefore, a localised 'hot spot' at the interface associated with, say, a strong plume in the porous medium, does not, in general, lead to any associated hot spot at the surface of the fluid layer.
This observation helps to justify the approach of various studies which neglect the dynamics of flow in the porous medium altogether (e.g. Soderlund 2019 and Amit et al. 2020 in the context of convection in icy moons). It is also an interesting point in the context of Enceladus, which is well known for sustaining a strong heat-flux anomaly at its South Pole that is believed to arise from hydrothermal circulation driven by tidal heating in its rocky porous core (Spencer et al. 2006; Choblet et al. 2017; Spencer et al. 2018; Le Reun & Hewitt 2020). While models predict that tidal heating does drive a hot plume in the porous core at the poles (Choblet et al. 2017), our results suggest that this hot spot is unlikely to be transmitted through the overlying ocean to the surface without invoking other ingredients such as rotation (Soderlund et al. 2014; Soderlund 2019; Amit et al. 2020) or topography caused by melting at the base of the ice shell (Favier, Purseed & Duchemin 2019).
The authors are grateful to Eric Hester for his help in implementing the code Dedalus.
This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). TLR is supported by the Royal Society through a Newton International Fellowship (grant no. NIF\R1\192181).
The authors report no conflict of interest.
Ahlers, G., Grossmann, S. & Lohse, D. 2009 Heat transfer and large scale dynamics in turbulent Rayleigh–Bénard convection. Rev. Mod. Phys. 81 (2), 503–537.
Amit, H., Choblet, G., Tobie, G., Terra-Nova, F., Čadek, O. & Bouffard, M. 2020 Cooling patterns in rotating thin spherical shells: application to Titan's subsurface ocean. Icarus 338, 113509.
Bagchi, A. & Kulacki, F.A. 2014 Natural Convection in Superposed Fluid-Porous Layers. Springer.
Beavers, G.S. & Joseph, D.D. 1967 Boundary conditions at a naturally permeable wall. J. Fluid Mech. 30 (1), 197–207.
Burns, K.J., Vasil, G.M., Oishi, J.S., Lecoanet, D. & Brown, B.P. 2020 Dedalus: a flexible framework for numerical simulations with spectral methods. Phys. Rev. Res. 2 (2), 023068.
Chandesris, M., D'Hueppe, A., Mathieu, B., Jamet, D. & Goyeau, B. 2013 Direct numerical simulation of turbulent heat transfer in a fluid-porous domain. Phys. Fluids 25, 125110.
Chen, F. & Chen, C.F. 1988 Onset of finger convection in a horizontal porous layer underlying a fluid layer. J. Heat Transfer 110 (2), 403–409.
Chen, F. & Chen, C.F. 1992 Convection in superposed fluid and porous layers. J. Fluid Mech. 234, 97–119.
Cheng, J.S., Stellmach, S., Ribeiro, A., Grannan, A., King, E.M. & Aurnou, J.M. 2015 Laboratory-numerical models of rapidly rotating convection in planetary cores. Geophys. J. Intl 201 (1), 1–17.
Choblet, G., Tobie, G., Sotin, C., Běhounková, M., Čadek, O., Postberg, F. & Souček, O. 2017 Powering prolonged hydrothermal activity inside Enceladus. Nat. Astron. 1, 841–847.
Coumou, D., Driesner, T. & Heinrich, C.A. 2008 The structure and dynamics of mid-ocean ridge hydrothermal systems. Science 321 (5897), 1825–1828.
d'Hueppe, A., Chandesris, M., Jamet, D. & Goyeau, B. 2011 Boundary conditions at a fluid–porous interface for a convective heat transfer problem: analysis of the jump relations. Intl J. Heat Mass Transfer 54 (15), 3683–3693.
d'Hueppe, A., Chandesris, M., Jamet, D. & Goyeau, B. 2012 Coupling a two-temperature model and a one-temperature model at a fluid-porous interface. Intl J. Heat Mass Transfer 55 (9), 2510–2523.
Favier, B., Purseed, J. & Duchemin, L. 2019 Rayleigh–Bénard convection with a melting boundary. J. Fluid Mech. 858, 437–473.
Fontaine, F.J. & Wilcock, W.S.D. 2007 Two-dimensional numerical models of open-top hydrothermal convection at high Rayleigh and Nusselt numbers: implications for mid-ocean ridge hydrothermal circulation. Geochem. Geophys. Geosyst. 8 (7), Q07010.
Goodman, J.C. 2004 Hydrothermal plume dynamics on Europa: implications for chaos formation. J. Geophys. Res. 109 (E3), E03008.
Goodman, J.C. & Lenferink, E. 2012 Numerical simulations of marine hydrothermal plumes for Europa and other icy worlds. Icarus 221 (2), 970–983.
Hester, E.W., Vasil, G.M. & Burns, K.J. 2021 Improving accuracy of volume penalised fluid-solid interactions. J. Comput. Phys. 430, 110043.
Hewitt, D.R. 2020 Vigorous convection in porous media. Proc. R. Soc. A Math. Phys. Engng Sci. 476 (2239), 20200111.
Hewitt, D.R., Neufeld, J.A. & Lister, J.R. 2012 Ultimate regime of high Rayleigh number convection in a porous medium. Phys. Rev. Lett. 108 (22), 224503.
Hirata, S.C., Goyeau, B. & Gobin, D. 2009a Stability of thermosolutal natural convection in superposed fluid and porous layers. Transp. Porous Med. 78 (3), 525–536.
Hirata, S.C., Goyeau, B., Gobin, D., Carr, M. & Cotta, R.M. 2007 Linear stability of natural convection in superposed fluid and porous layers: influence of the interfacial modelling. Intl J. Heat Mass Transfer 50 (7–8), 1356–1367.
Hirata, S.C., Goyeau, B., Gobin, D., Chandesris, M. & Jamet, D. 2009b Stability of natural convection in superposed fluid and porous layers: equivalence of the one- and two-domain approaches. Intl J. Heat Mass Transfer 52 (1–2), 533–536.
Howard, L.N. 1966 Convection at high Rayleigh number. In Applied Mechanics (ed. H. Görtler), pp. 1109–1115. Springer.
Hsu, H.-W., et al. 2015 Ongoing hydrothermal activities within Enceladus. Nature 519 (7542), 207–210.
Huguet, L., Alboussière, T., Bergman, M.I., Deguen, R., Labrosse, S. & Lesœur, G. 2016 Structure of a mushy layer under hypergravity with implications for Earth's inner core. Geophys. J. Intl 204 (3), 1729–1755.
Le Bars, M. & Worster, M.G. 2006 Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification. J. Fluid Mech. 550, 149.
Le Reun, T. & Hewitt, D.R. 2020 Internally heated porous convection: an idealized model for Enceladus' hydrothermal activity. J. Geophys. Res. Planets 125 (7), e2020JE006451.
Malkus, W.V.R. 1954 The heat transport and spectrum of thermal turbulence. Proc. R. Soc. Lond. A Math. Phys. Sci. 225 (1161), 196–212.
Nield, D.A. & Bejan, A. 2013 Heat transfer through a porous medium. In Convection in Porous Media (ed. D.A. Nield & A. Bejan), pp. 31–46. Springer.
Nield, D.A. & Bejan, A. 2017 Convection in Porous Media. Springer.
Nimmo, F., Barr, A.C., Běhounková, M. & McKinnon, W.B. 2018 The thermal and orbital evolution of Enceladus: observational constraints and models. In Enceladus and the Icy Moons of Saturn, pp. 79–94. The University of Arizona Press.
Plumley, M. & Julien, K. 2019 Scaling laws in Rayleigh–Bénard convection. Earth Space Sci. 6 (9), 1580–1592.
Poulikakos, D., Bejan, A., Selimos, B. & Blake, K.R. 1986 High Rayleigh number convection in a fluid overlaying a porous bed. Intl J. Heat Fluid Flow 7 (2), 109–116.
Priestley, C.H.B. 1954 Convection from a large horizontal surface. Austral. J. Phys. 7 (1), 176–201.
Soderlund, K.M. 2019 Ocean dynamics of outer solar system satellites. Geophys. Res. Lett. 46 (15), 8700–8710.
Soderlund, K.M., Schmidt, B.E., Wicht, J. & Blankenship, D.D. 2014 Ocean-driven heating of Europa's icy shell at low latitudes. Nat. Geosci. 7 (1), 16–19.
Spencer, J.R., Nimmo, F., Ingersoll, A.P., Hurford, T.A., Kite, E.S., Rhoden, A.R., Schmidt, J. & Howett, C.J.A. 2018 Plume origins and plumbing: from ocean to surface. In Enceladus and the Icy Moons of Saturn (ed. P.M. Schenk, R.N. Clark, C.J.A. Howett, A.J. Verbiscer & J. Hunter Waite), pp. 163–174. University of Arizona Press.
Spencer, J.R., Pearl, J.C., Segura, M., Flasar, F.M., Mamoutkine, A., Romani, P., Buratti, B.J., Hendrix, A.R., Spilker, L.J. & Lopes, R.M.C. 2006 Cassini encounters Enceladus: background and the discovery of a south polar hot spot. Science 311 (5766), 1401–1405.
Su, Y., Wade, A. & Davidson, J.H. 2015 Macroscopic correlation for natural convection in water saturated metal foam relative to the placement within an enclosure heated from below. Intl J. Heat Mass Transfer 85, 890–896.
Travis, B.J., Palguta, J. & Schubert, G. 2012 A whole-moon thermal history model of Europa: impact of hydrothermal circulation and salt transport. Icarus 218 (2), 1006–1019.
Travis, B.J. & Schubert, G. 2015 Keeping Enceladus warm. Icarus 250, 32–42.
Urban, P., Hanzelka, P., Musilová, V., Králík, T., La Mantia, M., Srnka, A. & Skrbek, L. 2014 Heat transfer in cryogenic helium gas by turbulent Rayleigh–Bénard convection in a cylindrical cell of aspect ratio 1. New J. Phys. 16 (5), 053042.
Wang, D. & Ruuth, S.J. 2008 Variable step-size implicit-explicit linear multisept methods for time-dependent partial differential equations. J. Comput. Math. 26 (6), 838–855.Google Scholar
Wells, A.J., Hitchen, J.R. & Parkinson, J.R.G. 2019 Mushy-layer growth and convection, with application to sea ice. Phil. Trans. R. Soc. A Math. Phys. Engng Sci. 377 (2146), 20180165.CrossRefGoogle ScholarPubMed
Woods, A.W. 2010 Turbulent plumes in nature. Annu. Rev. Fluid Mech. 42 (1), 391–412.CrossRefGoogle Scholar
Worster, M.G. 1997 Convection in mushy layers. Annu. Rev. Fluid Mech. 29, 91–122.CrossRefGoogle Scholar
View in content
|
CommonCrawl
|
Search papers
Search references
Computer Optics
Author Index, 2018, Volume 42
V. G. Abashkin, see E. Achimova
Computer Optics, 2018, Volume 42:2, 267–272
E. Achimova, V. G. Abashkin, Daniel Claus, Giancarlo Pedrini, Igor Shevkunov, V. Ya. Katkovnik
Noise minimised high resolution digital holographic microscopy applied to surface topography
A. A. Agafonov, V. V. Myasnikov
Numerical route reservation method in the geoinformatic task of autonomous vehicle routing
A. A. Agafonov, A. S. Yumaganov, V. V. Myasnikov
Big data analysis in a geoinformatic problem of short-term traffic flow forecasting based on a k nearest neighbors method
Computer Optics, 2018, Volume 42:6, 1101–1111
A. A. Akimov, V. V. Ivakhnik, S. A. Guzairov
Four-wave mixing on thermal nonlinearity in a scheme with positive feedback
E. A. Akimova, see V. V. Podlipnov
Ya. E. Akimova, see A. V. Volyar
E. S. Andreev, see L. L. Doskolovich
K. V. Andreeva, see L. L. Doskolovich
A. E. Anfilofev, see I. A. Hodashinsky
A. Ansari, see F. Habibi
A. I. Antonov, see G. I. Greisukh
Computer Optics, 2018, Volume 42:1, 38–43
E. I. Astakhov, see D. A. Usanov
N. V. Astapenko, K. T. Koshekov, A. N. Kolesnikov
Methodology of automatic registration of 3D measurements of bulk materials in granaries
A. N. Babichev, see V. V. Podlipnov
V. Kh. Bagmanov, see I. L. Vinogradova
M. B. Bardamova, see I. A. Hodashinsky
A. E. Barinov, see A. A. Zakharov
S. A. Bartalev, see D. E. Plotnikov
Yu. Ts. Batomunkuev, A. A. Dianova
Calculation of the higher-order axial spherical aberrations of a high-aperture focusing holographic optical element with the corrected third-order spherical aberration. Part 1
Yu. Ts. Batomunkuev, A. A. Dianova, T. V. Maganakova
Calculation of higher-order axial spherical aberrations of a high-aperture focusing holographic optical element with the corrected third-order spherical aberration. Part 2
A. M. Belov, A. Yu. Denisova
Spectral and spatial super-resolution method for Earth remote sensing image fusion
A. A. Belov, see Yu. A. Kropotov
M. L. Belov, see P. A. Filimonov
E. A. Bezus, D. A. Bykov, L. L. Doskolovich
On the relation between the propagation constant of Bloch surface waves and the thickness of the upper layer of a photonic crystal
E. A. Bezus, see L. L. Doskolovich
E. A. Bezus, see E. A. Kadomina
S. A. Bibikov, N. L. Kazanskiy, V. A. Fursov
Vegetation type recognition in hyperspectral images using a conjugacy indicator
V. A. Blank, see V. V. Podlipnov
Yu. B. Blokhinov, V. A. Gorbachev, Yu. O. Rakutin, A. D. Nikitin
A real-time semantic segmentation algorithm for aerial imagery
Yu. A. Bolotova, see I. A. Kanaeva
M. Boori, R. A. Paringer, K. Choudhary, A. V. Kupriyanov
Comparison of hyperspectral and multi-spectral imagery to building a spectral library and land cover classification performance
M. V. Bretsko, see A. V. Volyar
S. A. Brianskiy, Yu. V. Vizilter
Morphological estimates of image complexity and information content
M. Butt, A. K. Reddy, S. N. Khonina
A compact design of a balanced $1\times 4$ optical power splitter based on silicon on insulator slot waveguides
D. A. Bykov, see E. A. Bezus
D. A. Bykov, see L. L. Doskolovich
D. A. Bykov, see A. A. Mingazov
V. P. Chasovskikh, see V. G. Labunets
V. M. Chernov
Calculation of Fourier-Galois transforms in reduced binary number systems
Ternary number systems in finite fields
Discrete orthogonal transforms with bases generated by self-similar sequences
"Exotic" binary number systems for rings of Gauss and Eisenstein integers
N. I. Chervyakov, P. A. Lyakhov, A. R. Orazaev
New methods of adaptive median filtering of impulse noise in images
S. E. Chesnokov, see A. V. Krevetsky
K. Choudhary, see M. Boori
S. N. Chukanov, see S. V. Lejhter
Computer Optics, 2018, Volume 42:1, 96–104
V. A. Danilov, see G. I. Greisukh
A. V. Demin, see O. A. Volkov
A. Yu. Denisova, see A. M. Belov
A. Yu. Denisova, see A. A. Varlamova
A. A. Dianova, see Yu. Ts. Batomunkuev
R. R. Diyazitdinov, see N. N. Vasin
E. A. Dmitriev, V. V. Myasnikov
Comparative study of description algorithms for complex-valued gradient fields of digital images using linear dimensionality reduction methods
S. Yu. Dobdin, see D. A. Usanov
K. A. Dorofeev, see A. N. Ruchay
L. L. Doskolovich, see E. A. Bezus
L. L. Doskolovich, E. A. Bezus, N. L. Kazanskii
Multifocal spectral diffractive lens
L. L. Doskolovich, A. A. Mingazov, D. A. Bykov, E. S. Andreev
Variational approach to eikonal function computation
L. L. Doskolovich, see A. A. Mingazov
L. L. Doskolovich, K. V. Andreeva, D. A. Bykov
Design of an axisymmetric optical element generating a prescribed illuminance distribution and wavefront
L. L. Doskolovich, see E. A. Kadomina
M. A. Dryuchenko, see A. A. Sirota
A. A. Dyachenko, L. A. Maksimova, V. P. Ryabukho
Manifestation of effects of the angular spectrum of the illuminating field in polychromatic interference microscopy of stratified objects
O. A. Efremova, Yu. N. Kunakov, S. V. Pavlov, A. Kh. Sultanov
An algorithm for mapping flooded areas through analysis of satellite imagery and terrestrial relief features
Yu. A. Egorov, see A. V. Volyar
V. A. Fedoseev, see V. A. Mitekin
N. G. Fedotov, A. A. Syemov, A. V. Moiseev
Theoretical foundations of hypertrace-transform: scanning techniques, mathematical apparatus and experimental verification
Yu. V. Fedotov, see P. A. Filimonov
V. R. Fidelman, see R. A. Shatalin
I. V. Filimonenko, see I. A. Hodashinsky
P. A. Filimonov, M. L. Belov, Yu. V. Fedotov, S. E. Ivanov, V. A. Gorodnichev
An algorithm for segmentation of aerosol inhomogeneities
M. A. Frolova, see S. N. Koreshev
V. A. Fursov
Constructing a quadratic-exponential FIR-filter with an extended frequency response midrange
V. A. Fursov, see S. A. Bibikov
A. V. Gaidel, A. V. Kapishnikov, Yu. S. Pyshkina, A. V. Kolsanov, A. G. Khramov
Method of nephroscintigraphic dynamic images analysis
E. V. Galkina, see A. V. Kalenskii
M. V. Gashnikov
Interpolation based on context modeling for hierarchical compression of multidimensional signals
M. V. Gashnikov, see A. I. Maksimov
K. I. Gerasimov, see N. S. Perminov
A. R. Gizatulin, see I. L. Vinogradova
D. Yu. Golovanov, see V. I. Parfenov
D. L. Golovashkin, see L. V. Yablokova
V. A. Gorbachev, see Yu. B. Blokhinov
A. V. Gorevoy, V. Ya. Kolyuchkin, A. S. Machikhin
Estimation of the geometrical measurement error at the stage of stereoscopic system design
V. A. Gorodnichev, see P. A. Filimonov
E. P. Grakhova, see I. L. Vinogradova
A. A. Grebenyuk, D. M. Klychkova, V. P. Ryabukho
Numerical focusing and the field of view in interference microscopy
A. A. Grebenyuk, V. P. Ryabukho
Numerically focused optical coherence microscopy with structured illumination aperture
G. I. Greisukh, V. A. Danilov, A. I. Antonov, S. A. Stepanov, B. A. Usievich
Spectral and angular dependence of the efficiency of a two-layer and single-relief sawtooth microstructure
G. I. Greisukh, S. A. Stepanov, A. I. Antonov
Comparative analysis of the Fresnel lens and the kinoform lens
S. A. Guzairov, see A. A. Akimov
F. Habibi, M. Moradi, A. Ansari
Study on the Mainardi beam through the fractional Fourier transforms system
I. A. Hodashinsky, E. Yu. Kostyuchenko, K. S. Sarin, A. E. Anfilofev, M. B. Bardamova, S. S. Samsonov, I. V. Filimonenko
Dynamic-signature-based user authentication using a fuzzy classifier
N. Yu. Ilyasova, see P. A. Khorin
N. Yu. Ilyasova, see A. S. Shirokanev
V. V. Ivakhnik, M. V. Savelyev
Transient four-wave mixing in a transparent two-component medium
V. V. Ivakhnik, see A. A. Akimov
S. E. Ivanov, see P. A. Filimonov
N. A. Ivliev, see V. V. Podlipnov
E. A. Kadomina, E. A. Bezus, L. L. Doskolovich
Bragg gratings with parasitic scattering suppression for surface plasmon polaritons
A. V. Kalenskii, A. A. Zvekov, E. V. Galkina, D. R. Nurmuhametov
Modeling spectral properties of transparent matrix composites containing core-shell nanoparticles
N. V. Kalinin, see V. A. Saleev
A. N. Kamaev, I. P. Urmanov, A. A. Sorokin, D. A. Karmanov, S. P. Korolev
Images analysis for automatic volcano visibility estimation
A. N. Kamaev, D. A. Karmanov
Visual navigation of an autonomous underwater vehicle based on the global search of image correspondences
I. A. Kanaeva, Yu. A. Bolotova
Color and luminance corrections for panoramic image stitching
A. V. Kapishnikov, see A. V. Gaidel
D. A. Karmanov, see A. N. Kamaev
S. V. Karpeev, V. V. Podlipnov, S. N. Khonina, V. D. Paranin, A. S. Reshetnikov
A four-sector polarization converter integrated in a calcite crystal
S. V. Karpeev, see A. K. Reddy
V. Ya. Katkovnik, see E. Achimova
N. L. Kazanskii, see L. L. Doskolovich
N. L. Kazanskii, see A. A. Rastorguev
N. L. Kazanskii, see A. A. Mingazov
N. L. Kazanskiy, S. I. Kharitonov, I. N. Kozlova, M. A. Moiseev
The connection between the phase problem in optics, focusing of radiation, and the Monge–Kantorovich problem
N. L. Kazanskii, see S. A. Bibikov
N. L. Kazanskii, see S. I. Kharitonov
S. I. Kharitonov, S. N. Khonina
Conversion of a conical wave with circular polarization into a vortex cylindrically polarized beam in a metal waveguide
S. I. Kharitonov, see A. A. Rastorguev
S. I. Kharitonov, see N. L. Kazanskii
S. I. Kharitonov, S. G. Volotovsky, S. N. Khonina
Calculation of the angular momentum of an electromagnetic field inside a waveguide with absolutely conducting walls: ab initio
S. I. Kharitonov, S. G. Volotovsky, S. N. Khonina, N. L. Kazanskii
Propagation of electromagnetic pulses and calculation of dynamic invariants in a waveguide with a convex shell
S. N. Khonina, see S. I. Kharitonov
S. N. Khonina, see M. Butt
S. N. Khonina, see S. V. Karpeev
S. N. Khonina, A. V. Ustinov, S. G. Volotovsky
Comparison of focusing of short pulses in the Debye approximation
S. N. Khonina, see A. K. Reddy
S. N. Khonina, see V. V. Podlipnov
P. A. Khorin, N. Yu. Ilyasova, R. A. Paringer
Informative feature selection based on the Zernike polynomial coefficients for various pathologies of the human eye cornea
A. Khorokhorov, see O. V. Rozhkov
A. G. Khramov, see A. V. Gaidel
R. S. Kirillov, see N. S. Perminov
D. V. Kirsh, see A. S. Shirokanev
D. V. Kirsh, see I. A. Rytsarev
D. M. Klychkova, see A. A. Grebenyuk
D. M. Klychkova, V. P. Ryabukho
Spatial spectrum of coherence signal for a defocused object images in digital holographic microscopy with partially spatially coherent illumination
E. A. Kochegurova, D. Wu
Realization of a recursive digital filter based on penalized splines
E. V. Kokh, see V. G. Labunets
P. A. Kolbudaev, see D. E. Plotnikov
A. N. Kolesnikov, see N. V. Astapenko
M. Kolesnikova, see D. Nesterenko
V. I. Kolpakov, see A. N. Ruchay
A. V. Kolsanov, see A. V. Gaidel
V. Ya. Kolyuchkin, see A. V. Gorevoy
M. S. Komlenok, see K. N. Tukmakov
T. V. Kononenko, see A. G. Nalimov
T. V. Kononenko, see K. N. Tukmakov
A. S. Konouchine, see V. I. Shakhuro
V. I. Konov, see A. G. Nalimov
V. I. Konov, see K. N. Tukmakov
K. V. Konstantinov, see O. A. Volkov
A. I. Konyukhov, see A. S. Plastun
A. V. Kopylov, see Ph. C. Thang
S. N. Koreshev, D. S. Smorodinov, O. V. Nikanorov, M. A. Frolova
Distribution of the complex amplitude and intensity in a 3D scattering pattern formed by the optical system for an on-axis point object
S. P. Korolev, see A. N. Kamaev
K. T. Koshekov, see N. V. Astapenko
I. Kostuchenko, see D. A. Usanov
E. Yu. Kostyuchenko, see I. A. Hodashinsky
M. V. Kotlyar, see S. S. Stafeev
V. V. Kotlyar, A. A. Kovalev, A. P. Porfirev
Orbital angular momentum of an astigmatic Hermite-Gaussian beam
V. V. Kotlyar, A. A. Kovalev
Angular momentum density of a circularly polarized paraxial optical vortex
Computer Optics, 2018, Volume 42:1, 5–12
V. V. Kotlyar, see A. A. Kovalev
Orbital angular momentum of an arbitrary axisymmetric light field after passing through an off-axis spiral phase plate
V. V. Kotlyar, A. G. Nalimov, S. S. Stafeev
The near-axis backflow of energy in a tightly focused optical vortex with circular polarization
V. V. Kotlyar, A. A. Kovalev, A. G. Nalimov
Backward flow of energy for an optical vortex with arbitrary integer topological charge
V. V. Kotlyar, A. G. Nalimov
A spirally rotating backward flow of light
Observation of an optical "angular tractor" effect in a Bessel beam
A variety of Fourier-invariant Gaussian beams
V. V. Kotlyar, see S. S. Stafeev
V. V. Kotlyar, see A. G. Nalimov
A. A. Kovalev, see V. V. Kotlyar
A. A. Kovalev, V. V. Kotlyar
Fresnel and Fraunhofer diffraction of a Gaussian beam with several polarization singularities
A. A. Kovalev
Orbital angular momentum of an elliptically symmetric laser beam after passing an elliptical spiral phase plate
A. V. Kozak, B. Ya. Steinberg, O. B. Shteinberg
Fast restoration of a blurred image obtained by a horizontally rotating camera
E. S. Kozlova
Modeling of the optical vortex generation using a silver spiral zone plate
I. N. Kozlova, see N. L. Kazanskii
A. V. Krevetsky, S. E. Chesnokov
Elements marking of partially masked group objects using local descriptions of an associated continuous image
Yu. A. Kropotov, A. Yu. Proskuryakov, A. A. Belov
Method for forecasting changes in time series parameters in digital information management systems
G. Kukharev, N. L. Shchegoleva
Methods of two-dimensional projection of digital images into eigen-subspaces: peculiarities of implementation and application
Yu. N. Kunakov, see O. A. Efremova
A. V. Kupriyanov, see A. S. Shirokanev
A. V. Kupriyanov, see I. A. Rytsarev
A. V. Kupriyanov, see M. Boori
K. S. Kurochka
Numerical modeling of the influence of a nanoparticle pair on the electromagnetic field in the near zone by the vector finite element method
V. G. Labunets, E. V. Kokh, E. Ostheimer (Rundblad)
Algebraic models and methods of computer image processing. Part 1. Multiplet models of multichannel images
V. G. Labunets, V. P. Chasovskikh, Yu. G. Smetanin, E. Ostheimer (Rundblad)
Many-parameter m-complementary Golay sequences and transforms
R. R. Latypov, see N. S. Perminov
M. A. Lebedev, A. Yu. Rubis, Yu. V. Vizilter, O. V. Vygolov
Detecting image differences based on reference EMD-filters
M. A. Lebedev, see A. Yu. Rubis
S. V. Lejhter, S. N. Chukanov
Matching of images based on their diffeomorphic mapping
A. Lukina, see A. Zadorin
P. A. Lyakhov, see N. I. Chervyakov
A. Lyubarskaya, see D. Nesterenko
A. S. Machikhin, see A. V. Gorevoy
T. V. Maganakova, see Yu. Ts. Batomunkuev
A. I. Maksimov, M. V. Gashnikov
Adaptive interpolation of multidimensional signals for differential compression
L. A. Maksimova, see A. A. Dyachenko
M. Martinez-Corral, see A. K. Reddy
I. K. Meshkov, see I. L. Vinogradova
E. Michaelsen
On the automation of gestalt perception in remotely sensed data
R. M. Mikherskii
Application of an artificial immune system for visual pattern recognition
A. A. Mingazov, see L. L. Doskolovich
A. A. Mingazov, D. A. Bykov, L. L. Doskolovich, N. L. Kazanskii
Variational interpretation of the eikonal calculation problem from the condition of generating a prescribed irradiance distribution
V. A. Mitekin, V. A. Fedoseev
New secure QIM-based information hiding algorithms
E. Yu. Mitrofanova, see A. A. Sirota
A. V. Moiseev, see N. M. Moiseeva
A. V. Moiseev, see N. G. Fedotov
M. A. Moiseev, see N. L. Kazanskii
S. A. Moiseev, see N. S. Perminov
N. M. Moiseeva, A. V. Moiseev
The matrix solution of the 4x4 problem by the Wentzel-Kramers-Brillouin method for a planar inhomogeneous anisotropic layer
M. Moradi, see F. Habibi
V. V. Myasnikov, see E. A. Dmitriev
V. V. Myasnikov, see A. A. Agafonov
V. V. Myasnikov
Description of images using a configuration equivalence relation
Computer Optics, 2018, Volume 42:6, 998–1007
A. G. Nalimov, see S. S. Stafeev
A. G. Nalimov, see V. V. Kotlyar
A. G. Nalimov, V. V. Kotlyar, T. V. Kononenko, V. I. Konov
An X-ray diamond focuser based on an array of three-component elements
D. Nesterenko, M. Kolesnikova, A. Lyubarskaya
Optical differentiation based on the Brewster effect
D. Nesterenko, see V. V. Podlipnov
R. R. Nigmatullin, see N. S. Perminov
O. V. Nikanorov, see S. N. Koreshev
A. D. Nikitin, see Yu. B. Blokhinov
O. V. Nikolaeva
Algorithm for eliminating gas absorption effects on hyperspectral remote sensing data
P. A. Nosov, see O. V. Rozhkov
D. R. Nurmuhametov, see A. V. Kalenskii
L. O'Faolain, see S. S. Stafeev
A. R. Orazaev, see N. I. Chervyakov
E. Ostheimer (Rundblad), see V. G. Labunets
P. E. Ovchinnikov, see R. A. Shatalin
I. S. Panyaev, D. G. Sannikov
Optical waveguide on the basis of a layered magnetoactive metamaterial
V. D. Paranin, see S. V. Karpeev
V. I. Parfenov, D. Yu. Golovanov
Noise stability of signal reception algorithms with multipulse pulse-position modulation
R. A. Paringer, see P. A. Khorin
R. A. Paringer, see M. Boori
V. S. Pavel'ev, see K. N. Tukmakov
E. Pavelyeva
Image processing and analysis based on the use of phase information
S. V. Pavlov, see O. A. Efremova
V. Yu. Pavlov, see O. V. Rozhkov
N. S. Perminov, M. A. Smirnov, R. R. Nigmatullin, A. A. Talipov, S. A. Moiseev
Comparison of the capabilities of histograms and a method of ranged amplitudes in noise analysis of single-photon detectors
N. S. Perminov, K. V. Petrovnin, K. I. Gerasimov, R. S. Kirillov, R. R. Latypov, O. N. Sherstyukov, S. A. Moiseev
Spectroscopy of cascade multiresonator quantum memory
K. V. Petrovnin, see N. S. Perminov
D. E. Piskunov, see O. V. Rozhkov
A. S. Plastun, A. I. Konyukhov
Spectral characteristics of a photonic bandgap fiber
D. E. Plotnikov, P. A. Kolbudaev, S. A. Bartalev
Identification of dynamically homogeneous areas with time series segmentation of remote sensing data
V. V. Podlipnov, see S. V. Karpeev
V. V. Podlipnov, N. A. Ivliev, S. N. Khonina, D. Nesterenko, V. S. Vasilev, E. A. Achimova
Investigation of photoinduced formation of microstructures on the surface of carbazole-containing azopolymer depending on the power density of incident beams
V. V. Podlipnov, V. N. Shchedrin, A. N. Babichev, S. M. Vasilyev, V. A. Blank
Experimental determination of soil moisture on hyperspectral images
A. P. Porfirev, see V. V. Kotlyar
A. Yu. Proskuryakov, see Yu. A. Kropotov
Yu. S. Pyshkina, see A. V. Gaidel
Yu. O. Rakutin, see Yu. B. Blokhinov
A. A. Rastorguev, S. I. Kharitonov, N. L. Kazanskii
Modeling of arrangement tolerances for the optical elements in a spaceborne Offner imaging hyperspectrometer
A. K. Reddy, see M. Butt
A. K. Reddy, M. Martinez-Corral, S. N. Khonina, S. V. Karpeev
Focusing of light beams by the phase apodization pupil
V. G. Rodin
A non-coherent holographic correlator based on a digital micromirror device
O. V. Rozhkov, D. E. Piskunov, P. A. Nosov, V. Yu. Pavlov, A. Khorokhorov, A. F. Shirankov
Bauman MSTU scientific school "Zoom lens design": features of theory and practice
A. Yu. Rubis, see M. A. Lebedev
A. Yu. Rubis, M. A. Lebedev, Yu. V. Vizilter, O. V. Vygolov, S. Yu. Zheltov
Comparative image filtering using monotonic morphological operators
A. N. Ruchay, K. A. Dorofeev, V. I. Kolpakov
Fusion of information from multiple Kinect sensors for 3D object reconstruction
V. P. Ryabukho, see A. A. Grebenyuk
V. P. Ryabukho, see D. M. Klychkova
V. P. Ryabukho, see A. A. Dyachenko
I. A. Rytsarev, D. V. Kirsh, A. V. Kupriyanov
Clustering of media content from social networks using bigdata technology
S. V. Sai
Metric of fine structures distortions of compressed images
V. A. Saleev, N. V. Kalinin
Ab initio modeling of Raman and infrared spectra of calcite
S. S. Samsonov, see I. A. Hodashinsky
D. G. Sannikov, see I. S. Panyaev
K. S. Sarin, see I. A. Hodashinsky
A. V. Savchenko
Trigonometric series in orthogonal expansions for density estimates of deep image features
M. V. Savelyev, see V. V. Ivakhnik
V. V. Sergeev, see A. A. Varlamova
V. I. Shakhuro, A. S. Konouchine
Image synthesis with neural networks for traffic sign classification
R. A. Shatalin, V. R. Fidelman, P. E. Ovchinnikov
Abnormal behavior detection based on dense trajectories
V. N. Shchedrin, see V. V. Podlipnov
N. L. Shchegoleva, see G. Kukharev
O. N. Sherstyukov, see N. S. Perminov
A. F. Shirankov, see O. V. Rozhkov
A. S. Shirokanev, D. V. Kirsh, N. Yu. Ilyasova, A. V. Kupriyanov
Investigation of algorithms for coagulate arrangement in fundus images
S. A. Shoydin, A. Trifanov
Form-factor of the holograms of composite images
O. B. Shteinberg, see A. V. Kozak
A. A. Sirota, M. A. Dryuchenko, E. Yu. Mitrofanova
Digital watermarking method based on heteroassociative image compression and its realization with artificial neural networks
A. V. Skripal, see D. A. Usanov
Yu. G. Smetanin, see V. G. Labunets
M. A. Smirnov, see N. S. Perminov
D. S. Smorodinov, see S. N. Koreshev
A. A. Sorokin, see A. N. Kamaev
S. S. Stafeev, A. G. Nalimov
Longitudinal component of the Poynting vector of a tightly focused optical vortex with circular polarization
S. S. Stafeev, L. O'Faolain, M. V. Kotlyar
Rotation of two-petal laser beams in the near field of a spiral microaxicon
S. S. Stafeev, see V. V. Kotlyar
S. S. Stafeev, A. G. Nalimov, V. V. Kotlyar
Energy backflow in a focal spot of the cylindrical vector beam
S. S. Stafeev, A. G. Nalimov, L. O'Faolain, M. V. Kotlyar
Effects of fabrication errors on the focusing performance of a sector metalens
B. Ya. Steinberg, see A. V. Kozak
S. A. Stepanov, see G. I. Greisukh
A. Kh. Sultanov, see O. A. Efremova
A. Kh. Sultanov, see I. L. Vinogradova
A. A. Syemov, see N. G. Fedotov
V. A. Taamazyan
Shape from mixed polarization
Y. Tan, H. Wang, Y. Wang
Calculation of effective mode field area of photonic crystal fiber with digital image processing algorithm
Ph. C. Thang, A. V. Kopylov
Tree-serial parametric dynamic programming with flexible prior model for image denoising
V. S. Titov, see A. A. Zakharov
A. Trifanov, see S. A. Shoydin
K. N. Tukmakov, M. S. Komlenok, V. S. Pavel'ev, T. V. Kononenko, V. I. Konov
A continuous-profile diffractive focuser for terahertz radiation fabricated by laser ablation of silicon
I. P. Urmanov, see A. N. Kamaev
D. A. Usanov, A. V. Skripal, E. I. Astakhov, I. Kostuchenko, S. Yu. Dobdin
Self-mixing interferometry for distance measurement using a semiconductor laser with current-modulated wavelength
B. A. Usievich, see G. I. Greisukh
A. V. Ustinov, see S. N. Khonina
A. A. Varlamova, A. Yu. Denisova, V. V. Sergeev
Earth remote sensing data processing for obtaining vegetation types maps
V. S. Vasilev, see V. V. Podlipnov
S. M. Vasilyev, see V. V. Podlipnov
N. N. Vasin, R. R. Diyazitdinov
Processing of triangulation scanner data for measurements of rail profiles
I. L. Vinogradova, I. K. Meshkov, E. P. Grakhova, A. Kh. Sultanov, V. Kh. Bagmanov, A. V. Voronkova, A. R. Gizatulin
Secured RoF segment in subterahertz range providing independent optical modulation of radiochannel frequency characteristics and phased antenna array beamsteering parameter
Yu. V. Vizilter, see M. A. Lebedev
Yu. V. Vizilter, see A. Yu. Rubis
Yu. V. Vizilter, see S. A. Brianskiy
O. A. Volkov, A. V. Demin, K. V. Konstantinov
An optical system of a sensor for measuring the meteorological optical range
S. G. Volotovsky, see S. N. Khonina
S. G. Volotovsky, see S. I. Kharitonov
A. V. Volyar, M. V. Bretsko, Ya. E. Akimova, Yu. A. Egorov
Beyond the light intensity or intensity moments and measurements of the vortex spectrum in complex light beams
A. V. Voronkova, see I. L. Vinogradova
O. V. Vygolov, see M. A. Lebedev
O. V. Vygolov, see A. Yu. Rubis
H. Wang, see Y. Tan
Y. Wang, see Y. Tan
D. Wu, see E. A. Kochegurova
L. V. Yablokova, D. L. Golovashkin
Block algorithms of a simultaneous difference solution of d'Alembert's and Maxwell's equations
A. S. Yumaganov, see A. A. Agafonov
A. Zadorin, A. Lukina
A resonance system of an optoelectronic oscillator based on a transmission-type planar optical disk microcavity
A. A. Zakharov, A. E. Barinov, A. L. Zhiznyakov, V. S. Titov
Object detection in images with a structural descriptor based on graphs
S. Yu. Zheltov, see A. Yu. Rubis
A. L. Zhiznyakov, see A. A. Zakharov
A. A. Zvekov, see A. V. Kalenskii
A. S. Reshetnikov, see S. V. Karpeev
Average Power Calculations of Periodic Functions Using Fourier Series
When a voltage of V volts is applied across a resistance of R Ω, a current I flows through it. The power dissipated in the resistance is given by,
$$\mathrm{P=I^2R=\frac{V^2}{R}\:\:\:\:\:\:....(1)}$$
But when the voltage and current signals are not constant, the power varies at every instant, and the equation for the instantaneous power is given by,
$$\mathrm{p=i^2(t)R=\frac{v^2(t)}{R}\:\:\:\:\:\:....(2)}$$
where i(t) and v(t) are the corresponding instantaneous values of the current and voltage, respectively.
Now, if the value of the resistance (R) is 1 Ω, then the instantaneous power can be represented as,
$$\mathrm{p=i^2(t)=v^2(t)\:\:\:\:\:\:....(3)}$$
Therefore, the instantaneous power of a signal x(t) can be given by
$$\mathrm{p=x^2(t)\:\:\:\:\:\:....(4)}$$
Hence, the average power of x(t) over one period T is,
$$\mathrm{Average\:power,\:\:P=\frac{1}{T}\int_{0}^{T}x^2(t)dt\:\:\:\:\:\:....(5)}$$
By using Parseval's theorem, we get
$$\mathrm{\frac{1}{T}\int_{0}^{T}|x(t)|^2dt=\sum_{n=-\infty}^{\infty} |C_n|^2=C_{0}^{2}+\sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty}|C_{n}|^{2}}$$
$$\mathrm{\Rightarrow \frac{1}{T}\int_{0}^{T}|x(t)|^2dt=C_{0}^{2}+\sum_{\substack{n=-\infty \\ n \neq 0}}^{\infty}C_{n}C_{n}^{*}}$$
Writing $C_{0}=a_{0}$ and $C_{n}=\frac{a_{n}-jb_{n}}{2}$ for $n\geq 1$ (with $C_{-n}=C_{n}^{*}$), so that $|C_{n}|^{2}=Re(C_{n})^{2}+Im(C_{n})^{2}=\frac{a_{n}^{2}+b_{n}^{2}}{4}$, this becomes
$$\mathrm{\Rightarrow \frac{1}{T}\int_{0}^{T}|x(t)|^2dt=a_{0}^{2}+\sum_{n=1}^{\infty}2\left [ Re(C_{n})^{2}+Im(C_{n})^{2} \right ]}$$
$$\mathrm{\Rightarrow \frac{1}{T}\int_{0}^{T}|x(t)|^2dt=a_{0}^{2}+\sum_{n=1}^{\infty}\left ( \frac{a_{n}^{2}}{2}+\frac{b_{n}^{2}}{2} \right )\:\:\:\:\:....(6)}$$
On comparing equations (5) & (6), we get,
$$\mathrm{P=a_{0}^{2}+\sum_{n=1}^{\infty}\left ( \frac{a_{n}^{2}}{2}+\frac{b_{n}^{2}}{2} \right )\:\:\:\:\:....(7)}$$
Therefore, the average power over a period of time can be given using the Fourier series as,
$$\mathrm{Avg\:Power = (DC\: term)^2+\sum(Mean\:square\: values \:of \:cosine \:terms)+\sum(Mean \:square\: values\: of \:sine\: terms)\:\:\:\:....(8)}$$
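To make equation (8) concrete, here is a small Python sketch (ours, not from the original article; the function name average_power is invented) that evaluates the average power directly from the trigonometric Fourier coefficients.

```python
# Average power of a periodic signal from its trigonometric Fourier coefficients,
# following equation (8): P = a0^2 + sum(an^2 / 2) + sum(bn^2 / 2).
def average_power(a0, an=(), bn=()):
    return a0**2 + sum(a**2 / 2 for a in an) + sum(b**2 / 2 for b in bn)

# Example: x(t) = 2 + 3*cos(w0*t) + 4*sin(2*w0*t)
# has a0 = 2, a1 = 3, b2 = 4, so P = 4 + 9/2 + 16/2 = 16.5 W.
print(average_power(2, an=[3], bn=[0, 4]))   # 16.5
```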
Numerical Example
Determine the average power of the following signal −
$$\mathrm{x(t)=\cos^2(4000\pi t)\sin (10000\pi t)}$$
Solution
The given signal is,
$$\mathrm{x(t)=\cos^2(4000\pi t)\sin (10000\pi t)}$$
$$\mathrm{\because \cos^2 \theta = \frac{1 +\cos2 \theta}{2}}$$
$$\mathrm{\therefore x(t)=(\frac{1+\cos8000\pi t}{2})\sin(10000\pi t)}$$
$$\mathrm{\Rightarrow x(t)=\frac{1}{2}\sin(10000\pi t)+\frac{1}{2}\sin(10000\pi t)\cos(8000\pi t)}$$
$$\mathrm{\because \sin X\cos Y = \frac{\sin(X+Y)+\sin(X-Y)}{2}}$$
$$\mathrm{\therefore x(t)=\frac{1}{2}\sin(10000\pi t)+\frac{1}{4}\sin(18000\pi t)+\frac{1}{4}\sin(2000\pi t)}$$
Hence, from equation (8), the average power of the signal is
$$\mathrm{P=\frac{(1/2)^2}{2}+\frac{(1/4)^2}{2}+\frac{(1/4)^2}{2}}$$
$$\mathrm{\Rightarrow P=\frac{1}{8}+\frac{1}{32}+\frac{1}{32}=\frac{3}{16}\: Watts}$$
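As a numerical cross-check (ours, not part of the original solution), the following sketch averages x²(t) over one common period of the three sinusoidal components and compares the result with the analytical value of 3/16 W.

```python
# Verify the worked example by direct time averaging, i.e. equation (5).
import numpy as np

# The decomposition above contains sinusoids at 5 kHz, 9 kHz and 1 kHz,
# so one common period is T = 1 ms.
T = 1e-3
t = np.linspace(0.0, T, 200_000, endpoint=False)
x = np.cos(4000 * np.pi * t) ** 2 * np.sin(10000 * np.pi * t)

P_numeric = np.mean(x ** 2)          # (1/T) * integral of x^2(t) over one period
print(P_numeric, 3 / 16)             # both approximately 0.1875 W
```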
Energy Requirements for Maintenance and Growth of Male Saanen Goat Kids
Medeiros, A.N. (Department of Animal Science, Universidade Federal da Paraiba (UFPB)) ;
Resende, K.T. (Department of Animal Science, Universidade Estadual Paulista (UNESP)) ;
Teixeira, I.A.M.A. (Department of Animal Science, Universidade Estadual Paulista (UNESP)) ;
Araujo, M.J. (Department of Animal Science, Universidade Federal do Piaui (UFPI)) ;
Yanez, E.A. (Department of Animal Production, Universidad Nacional Del Nordeste (UNNE)) ;
Ferreira, A.C.D. (Department of Animal Science, Universidade Federal de Sergipe (UFS))
https://doi.org/10.5713/ajas.2013.13766
The aim of this study was to determine the energy requirements for maintenance and growth of forty-one Saanen, intact male kids with initial body weight (BW) of $5.12{\pm}0.19$ kg. The baseline (BL) group consisted of eight kids averaging $5.46{\pm}0.18$ kg BW. An intermediate group consisted of six kids, fed for ad libitum intake, that were slaughtered when they reached an average BW of $12.9{\pm}0.29$ kg. The remaining kids (n = 27) were randomly allocated into nine slaughter groups (blocks) of three animals distributed among three amounts of dry matter intake (DMI; ad libitum and restricted to 70% or 40% of ad libitum intake). Animals in a group were slaughtered when the ad libitum-treatment kid in the group reached 20 kg BW. In a digestibility trial, 21 kids (the same animals as in the comparative slaughter) were housed in metabolic cages and used in a completely randomized design to evaluate the energetic value of the diet at different feed intake levels. The net energy for maintenance ($NE_m$) was $417\,kJ/kg^{0.75}$ of empty BW (EBW)/d, while the metabolizable energy for maintenance ($ME_m$) was $657\,kJ/kg^{0.75}$ of EBW/d. The efficiency of ME use for NE maintenance ($k_m$) was 0.64. Body fat content varied from 59.91 to 92.02 g/kg of EBW while body energy content varied from 6.37 to 7.76 MJ/kg of EBW, respectively, for 5 and 20 kg of EBW. The net energy for growth ($NE_g$) ranged from 7.4 to 9.0 MJ/kg of empty weight gain per day at 5 and 20 kg BW, respectively. This study indicated that the energy requirements in goats were lower than previously published requirements for growing dairy goats.
Body composition; Comparative slaughter; Dairy goats; Feed restriction; Heat production
Writing linear equations using the point-slope form and the standard form
There are other ways to write the linear equation of a straight line than the slope-intercept form previously described.
Suppose we have a line with slope 2 that passes through the point (3, 5). It's possible to write an equation relating x and y using the slope formula with
$$\left (x _{1}\, ,y_{1} \right )=\left ( 3,5 \right )\: and\: \left ( x_{2},\, y_{2} \right )=\left ( x,y \right )$$
$$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$$
$$2=\frac{y-5}{x-3}$$
$$2\, {\color{green} {\cdot \, \left ( x-3 \right )}}=\frac{\left ( y-5 \right )\, {\color{green} {\cdot \, \left ( x-3 \right )}}}{x-3}$$
$$2\left ( x-3 \right )=y-5$$
Since we used the coordinates of one known point and the slope to write this form of the equation, it is called the point-slope form
$$y-y_{1}=m\left ( x-x_{1} \right )$$
Another way of writing linear equations is to use the standard form
$$Ax+By=C$$
where A, B, and C are real numbers, and A and B are not both zero.
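As an illustration (our own sketch, not part of the original lesson; the helper name line_forms is invented), the snippet below takes a slope and a point and prints both the point-slope form and an equivalent standard form.

```python
# Sketch (not from the lesson): point-slope and standard forms of the line
# with slope m through the point (x1, y1).
from fractions import Fraction

def line_forms(m, x1, y1):
    m, x1, y1 = Fraction(m), Fraction(x1), Fraction(y1)
    point_slope = f"y - {y1} = {m}(x - {x1})"
    # Expanding y - y1 = m(x - x1) gives -m*x + y = y1 - m*x1, i.e. Ax + By = C.
    A, B, C = -m, Fraction(1), y1 - m * x1
    if A < 0:                      # conventionally written with a positive x-coefficient
        A, B, C = -A, -B, -C
    standard = f"{A}x + ({B})y = {C}"
    return point_slope, standard

print(line_forms(2, 3, 5))   # ('y - 5 = 2(x - 3)', '2x + (-1)y = 1')
```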
Since the slope of a vertical line is undefined, you can't write the equation of a vertical line using either the slope-intercept form or the point-slope form. But you can express it using the standard form.
Write the equation of the line
For any given point on the line, the x-coordinate is 3. This means that the equation of the line is
$$x=3$$
Write the linear equation in the point-slope form for the line that passes through (-1, 4) and has a slope of -1
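One possible worked solution (ours, not given on the original page): substituting m = -1 and the point (-1, 4) into the point-slope form gives

$$y-4=-1\left ( x-\left ( -1 \right ) \right )\; \; \Leftrightarrow \; \; y-4=-\left ( x+1 \right )$$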
Section 8.4 Transformations for skewed data
County population size among the counties in the US is very strongly right skewed. Can we apply a transformation to make the distribution more symmetric? How would such a transformation affect the scatterplot and residual plot when another variable is graphed against this variable? In this section, we will see the power of transformations for very skewed data.
See how a log transformation can bring symmetry to an extremely skewed variable.
Recognize that data can often be transformed to produce a linear relationship, and that this transformation often involves log of the \(y\)-values and sometimes log of the \(x\)-values.
Use residual plots to assess whether a linear model for transformed data is reasonable.
Subsection 8.4.2 Introduction to transformations
Consider the histogram of county populations shown in Figure 8.4.2.(a), which shows extreme skew. What isn't useful about this plot?
Nearly all of the data fall into the left-most bin, and the extreme skew obscures many of the potentially interesting details in the data.
Figure 8.4.2. (a) A histogram of the populations of all US counties. (b) A histogram of log\(_{10}\)-transformed county populations. For this plot, the x-value corresponds to the power of 10, e.g. "4" on the x-axis corresponds to \(10^4 =\) 10,000.
There are some standard transformations that may be useful for strongly right skewed data where much of the data is positive but clustered near zero. A transformation is a rescaling of the data using a function. For instance, a plot of the logarithm (base 10) of county populations results in the new histogram in Figure 8.4.2.(b). This data is symmetric, and any potential outliers appear much less extreme than in the original data set. By reining in the outliers and extreme skew, transformations like this often make it easier to build statistical models against the data.
Transformations can also be applied to one or both variables in a scatterplot. A scatterplot of the population change from 2010 to 2017 against the population in 2010 is shown in Figure 8.4.3.(a). In this first scatterplot, it's hard to decipher any interesting patterns because the population variable is so strongly skewed. However, if we apply a log\(_{10}\) transformation to the population variable, as shown in Figure 8.4.3.(b), a positive association between the variables is revealed. While fitting a line to predict population change (2010 to 2017) from population (in 2010) does not seem reasonable, fitting a line to predict population from log\(_{10}\)(population) does seem reasonable.
Figure 8.4.3. (a) Scatterplot of population change against the population before the change. (b) A scatterplot of the same data but where the population size has been log-transformed.
Transformations other than the logarithm can be useful, too. For instance, the square root (\(\sqrt{\text{ original observation } }\)) and inverse (\(\frac{1}{\text{ original observation } }\)) are commonly used by data scientists. Common goals in transforming data are to see the data structure differently, reduce skew, assist in modeling, or straighten a nonlinear relationship in a scatterplot.
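As a rough illustration (our own sketch with synthetic data, not the county populations shown in the figures), the snippet below applies the three transformations just mentioned to a strongly right-skewed sample and reports the sample skewness of each.

```python
# Synthetic right-skewed data, then the log, square-root and inverse transformations.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
x = rng.lognormal(mean=10.0, sigma=1.5, size=10_000)   # positive, strongly right skewed

for name, fn in [("original", lambda v: v),
                 ("log10", np.log10),
                 ("sqrt", np.sqrt),
                 ("inverse", lambda v: 1.0 / v)]:
    print(f"{name:>8}: sample skewness = {skew(fn(x)):6.2f}")
# The log-transformed sample is nearly symmetric (skewness close to 0).
```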
Subsection 8.4.3 Transformations to achieve linearity
Figure 8.4.4. Variable \(y\) is plotted against \(x\text{.}\) A nonlinear relationship is evident by the \(\cup\)-pattern shown in the residual plot. The curvature is also visible in the original plot.
Consider the scatterplot and residual plot in Figure 8.4.4. The regression output is also provided. Is the linear model \(\hat{y} = -52.3564 + 2.7842 x\) a good model for the data?
The regression equation is
y = -52.3564 + 2.7842 x
Predictor Coef SE Coef T P
Constant -52.3564 7.2757 -7.196 3e-08
x 2.7842 0.1768 15.752 < 2e-16
S = 13.76 R-Sq = 88.26% R-Sq(adj) = 87.91%
We can note the \(R^2\) value is fairly large. However, this alone does not mean that the model is good. Another model might be much better. When assessing the appropriateness of a linear model, we should look at the residual plot. The \(\cup\)-pattern in the residual plot tells us the original data is curved. If we inspect the two plots, we can see that for small and large values of \(x\) we systematically underestimate \(y\text{,}\) whereas for middle values of \(x\text{,}\) we systematically overestimate \(y\text{.}\) The curved trend can also be seen in the original scatterplot. Because of this, the linear model is not appropriate, and it would not be appropriate to perform a \(t\)-test for the slope because the conditions for inference are not met. However, we might be able to use a transformation to linearize the data.
Regression analysis is easier to perform on linear data. When data are nonlinear, we sometimes transform the data in a way that makes the resulting relationship linear. The most common transformation is log of the \(y\) values. Sometimes we also apply a transformation to the \(x\) values. We generally use the residuals as a way to evaluate whether the transformed data are more linear. If so, we can say that a better model has been found.
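A minimal sketch of this workflow (ours; the data are simulated to mimic the curved trend behind Regressions I and II, and all variable names are invented):

```python
# Fit y on x and log(y) on x, then compare residual behaviour and R^2.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(10, 60, 100)
y = np.exp(1.72 + 0.053 * x) * rng.lognormal(0.0, 0.1, x.size)   # curved trend plus noise

def linear_fit(xv, yv):
    slope, intercept = np.polyfit(xv, yv, 1)
    resid = yv - (intercept + slope * xv)
    r2 = 1.0 - resid.var() / yv.var()
    return resid, r2

resid_raw, r2_raw = linear_fit(x, y)          # Regression I style: y on x
resid_log, r2_log = linear_fit(x, np.log(y))  # Regression II style: log(y) on x

print(f"R^2 for y on x: {r2_raw:.3f};  R^2 for log(y) on x: {r2_log:.3f}")
# Plotting resid_raw against x shows a U-shaped pattern; resid_log shows no pattern,
# so the transformed model is the more reasonable one.
```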
Using the regression output for the transformed data, write the new linear regression equation.
log(y) = 1.722540 + 0.052985 x
Predictor Coef SE Coef T P
Constant 1.722540 0.056731 30.36 < 2e-16
x 0.052985 0.001378 38.45 < 2e-16
S = 0.1073 R-Sq = 97.82% R-Sq(adj) = 97.75%
The linear regression equation can be written as: \(\widehat{\text{ log } (y)} = 1.723 +0.053 x\)
Figure 8.4.7. A plot of \(log (y)\) against \(x\text{.}\) The residuals don't show any evident patterns, which suggests the transformed data is well-fit by a linear model.
Which of the following statements are true? There may be more than one.
There is an apparent linear relationship between \(x\) and \(y\text{.}\)
There is an apparent linear relationship between \(x\) and \(\widehat{\text{ log } (y)}\text{.}\)
The model provided by Regression I (\(\hat{y} = -52.3564 + 2.7842 x\)) yields a better fit.
The model provided by Regression II (\(\widehat{\text{ log } (y)} = 1.723 +0.053 x\)) yields a better fit.
Part (a) is false since there is a nonlinear (curved) trend in the data. Part (b) is true. Since the transformed data shows a stronger linear trend, it is a better fit, i.e. Part (c) is false, and Part (d) is true.
A transformation is a rescaling of the data using a function. When data are very skewed, a log transformation often results in more symmetric data.
Regression analysis is easier to perform on linear data. When data are nonlinear, we sometimes transform the data in a way that results in a linear relationship. The most common transformation is log of the \(y\)-values. Sometimes we also apply a transformation to the \(x\)-values.
To assess the model, we look at the residual plot of the transformed data. If the residual plot of the original data has a pattern, but the residual plot of the transformed data has no pattern, a linear model for the transformed data is reasonable, and the transformed model provides a better fit than the simple linear model.
1. Used trucks.
The scatterplot below shows the relationship between year and price (in thousands of $) of a random sample of 42 pickup trucks. Also shown is a residuals plot for the linear model for predicting price from year.
(a) Describe the relationship between these two variables and comment on whether a linear model is appropriate for modeling the relationship between year and price.
(b) The scatterplot below shows the relationship between logged (natural log) price and year of these trucks, as well as the residuals plot for modeling these data. Comment on which model (linear model from earlier or logged model presented here) is a better fit for these data.
(c) The output for the logged model is given below. Interpret the slope in the context of the data.
Estimate Std. Error t value Pr\((>|t|)\)
(Intercept) -271.981 25.042 -10.861 0.000
Year 0.137 0.013 10.937 0.000
(a) The relationship is positive, non-linear, and somewhat strong. Due to the non-linear form of the relationship and the clear non-constant variance in the residuals, a linear model is not appropriate for modeling the relationship between year and price.
(b) The logged model is a much better fit: the scatter plot shows a linear relationships and the residuals do not appear to have a pattern.
(c) For each year increase in the year of the truck (for each year the truck is newer) we would expect the price of the truck to increase on average by a factor of \(e^{0.137} \approx 1.15\text{,}\) i.e. by 15%.
2. Income and hours worked.
The scatterplot below shows the relationship between income and hours worked for a random sample of 787 Americans. Also shown is a residuals plot for the linear model for predicting income from hours worked. The data come from the 2012 American Community Survey.
United States Census Bureau. Summary File. 2012 American Community Survey. U.S. Census Bureau's American Community Survey Office, 2013. Web.
The scatterplot below shows the relationship between logged (natural log) income and hours worked, as well as the residuals plot for modeling these data. Comment on which model (linear model from earlier or logged model presented here) is a better fit for these data.
Estimate Std. Error t value Pr\((>|t|)\)
(Intercept) 1.017 0.113 9.000 0.000
hrs_work 0.058 0.003 21.086 0.000
Subsection 8.4.6 Chapter Highlights
This chapter focused on describing the linear association between two numerical variables and fitting a linear model.
The correlation coefficient, \(r\text{,}\) measures the strength and direction of the linear association between two variables. However, \(r\) alone cannot tell us whether data follow a linear trend or whether a linear model is appropriate.
The explained variance, \(R^2\text{,}\) measures the proportion of variation in the \(y\) values explained by a given model. Like \(r\text{,}\) \(R^2\) alone cannot tell us whether data follow a linear trend or whether a linear model is appropriate.
Every analysis should begin with graphing the data using a scatterplot in order to see the association and any deviations from the trend (outliers or influential values). A residual plot helps us better see patterns in the data.
When the data show a linear trend, we fit a least squares regression line of the form: \(\hat{y} = a+bx\text{,}\) where \(a\) is the \(y\)-intercept and \(b\) is the slope. It is important to be able to calculate \(a\) and \(b\) using the summary statistics and to interpret them in the context of the data.
A residual, \(y-\hat{y}\text{,}\) measures the error for an individual point. The standard deviation of the residuals, \(s\text{,}\) measures the typical size of the residuals.
\(\hat{y} = a+bx\) provides the best fit line for the observed data. To estimate or hypothesize about the slope of the population regression line, first confirm that the residual plot has no pattern and that a linear model is reasonable, then use a \(t\)-interval for the slope or a \(t\)-test for the slope with \(n-2\) degrees of freedom.
In this chapter we focused on simple linear models with one explanatory variable. More complex methods of prediction, such as multiple regression (more than one explanatory variable) and nonlinear regression can be studied in a future course.
A first-principle mechanism for particulate aggregation and self-assembly in stratified fluids
Roberto Camassa1,
Daniel M. Harris (ORCID: orcid.org/0000-0003-2615-9178)2,
Robert Hunt (ORCID: orcid.org/0000-0001-5696-1036)1,
Zeliha Kilic (ORCID: orcid.org/0000-0002-3287-9561)3 &
Richard M. McLaughlin (ORCID: orcid.org/0000-0001-7003-6779)1
Nature Communications volume 10, Article number: 5804 (2019)
An extremely broad and important class of phenomena in nature involves the settling and aggregation of matter under gravitation in fluid systems. Here, we observe and model mathematically an unexpected fundamental mechanism by which particles suspended within stratification may self-assemble and form large aggregates without adhesion. This phenomenon arises through a complex interplay involving solute diffusion, impermeable boundaries, and aggregate geometry, which produces toroidal flows. We show that these flows yield attractive horizontal forces between particles at the same heights. We observe that many particles demonstrate a collective motion revealing a system which appears to solve jigsaw-like puzzles on its way to organizing into a large-scale disc-like shape, with the effective force increasing as the collective disc radius grows. Control experiments isolate the individual dynamics, which are quantitatively predicted by simulations. Numerical force calculations with two spheres are used to build many-body simulations which capture observed features of self-assembly.
Particle sedimentation and aggregation in stratified fluids is observed throughout natural systems. Some examples include: sedimenting "marine snow" particles in lakes and oceans (central to carbon sequestration)1, dense microplastics in the oceans (which impact ocean ecology and the food chain2), and even "iron snow" on Mercury3 (conjectured as its magnetic field source). These fluid systems all have stable density stratification, which is known to trap particulates1,4,5,6,7 through upper lightweight fluid coating the sinking particles, thus providing transient buoyancy. Numerous mechanisms have been described by which particles may aggregate in such environments. The current understanding of aggregation of such trapped matter involves collisions (owing to Brownian motion, shear, and differential settling) and adhesion8,9.
In this work, we identify and isolate an unexplored mechanism, which induces particle attraction and self-assembly in stratified fluids in the absence of adhesion (short range binding effects) arising as a first-principle fluid dynamics phenomenon. Specifically, we present first an experiment, which documents the self-assembly with hundreds of particles. We next present a series of control experiments and calculations to explain the underlying mechanism responsible for the attraction. This involves using Lagrangian tracer particle dynamics to observe flows created by single bodies of different aspect ratios. In turn, we use particle imaging velocimetry (PIV) to observe the flow structure in a plane induced by a single spheroid. We compare these experimental observations quantitatively with simulations performed within the COMSOL software environment. With validated simulations, we are able to further explore the flow structure's dependence on physical parameters (aspect ratio, size, density gradient, etc). To explore many-body effects, we first simulate the flow induced by two fixed, same size spheres and evaluate numerically the induced force at an array of separation distances. This resulting force law is used to develop a modified Stokesian dynamics simulation (capable of simulating hundreds of spheres), which is in turn validated with the experiments involving two same size, moving spheres. Modified Stokesian dynamics results are then presented for hundreds of spheres exhibiting self-assembly features which strongly resemble those observed in our many-body experiment.
Experimental observations
The self-assembly phenomenon is clearly observed in our experiments (Fig. 1a–d, f, g) with a collection of neutrally buoyant, small spheres suspended at the same height between layers of sharply salt-stratified water in a rectangular plexiglass container. The spheres are photographed from above (see schematic in Fig. 1e) at regular time intervals. The spheres, initially isolated, feel a mutually attractive force, which forms local clusters. In turn, these clusters attract each other while orienting themselves to seemingly try to fill a puzzle-like pattern, ultimately resulting in a large disc-like shape. See Supplementary Movie 1 for a dynamic view of this self-assembling cluster.
Fig. 1: Experimental snapshots and schematic.
a–d Time series of the self-assembly of a collection of neutrally buoyant spheres suspended within a sharply salt-stratified fluid, viewed from above. Sphere radii are 0.025–0.05 cm and densities \(1.05\) g cc−1; the top fluid is fresh water (\(0.997\) g cc−1), the bottom an NaCl solution of density \(1.1\) g cc−1. e Schematic showing the experimental setup. f Initial cluster (different trial). g Final cluster.
To probe more directly the nature of the interaction between spheres, we next examine cases involving single large bodies interacting with single small bodies in linear stratification (as opposed to the sharp, two-layer stratification in Fig. 1); the small bodies can be assumed to be advected by the fluid flow induced by each individual large body (the mechanism for these flows is explained theoretically below). We note that linear concentration gradients are known to be exact solutions to the diffusion equation in free space, which thereby reduces any effect of the background concentration field changing during the duration of these control experiments. All bodies have the same approximate densities (\(1.05\) g cc−1) and float at similar heights within the salt-stratified layer (density gradient ~ \(0.007\) g cm−4). The left panel of Fig. 2 shows the side view of the two different large bodies considered. The sphere has radius \(0.5\) cm. The oblate spheroid has the same vertical radius, while its horizontal radius is \(1.0\) cm. We monitor the separation distance as a function of time between the large body and the small body. In the right panel of Fig. 2, we present these trackings for the case of a large sphere and for the case of a large oblate (horizontal disc-like) spheroid for multiple trials of each experiment. The small sphere in both cases has a radius of \(0.05\) cm. These observations indicate that an attractive force is created by the large body upon the small body when their geometric centers are at the same heights. Observe that the slopes of these curves indicate that the attraction is stronger for the wide spheroid than for the sphere. This supports the fact that larger discs induce larger attractive forces on smaller particles and gives insight into the collective dynamics of the many-particle system: as clusters grow into large-scale discs, their attractive force upon individual particles (or smaller clusters) becomes effectively stronger and stronger until all particles are packed into the single large disc-like shape observed in Fig. 1. In particular, in our experiments presented in Fig. 1, we measure that the attraction speed for a single particle approaching a large cluster can be as large as \(20\) times the attraction speed between two isolated spheres under similar conditions. See Supplementary Movie 2 for a video documenting the collapse for these two cases involving the large sphere/spheroid.
Fig. 2: Experiments and computations with large sphere/spheroid.
a, b Side view geometries, and numerically computed density isolines, \(Pe=41\), sphere radius, \(0.5\) cm, spheroid width, 2 cm, density increments ~\(0.001\) g cc−1, with density gradient \(\sigma =0.007\) g cm−4. c Experimental and computational evolution of separation distance as a function of time between a large sphere and a small passive sphere and between a large spheroid and a small passive sphere. All experimental trajectories are plotted with the same initial separation. Also shown are the companion trackings produced by our computational simulation for the two geometries (dashed lines). The diffusion-induced flows generated by either large body advect and attract the small passive particle until contact. Note that the velocities in the case of the oblate spheroid are larger than the case of the sphere.
The theoretical explanation for this unexplored effective force of attraction lies within diffusion-induced flows, first studied by O. M. Phillips10 and C. Wunsch11. These are flows that originate from a mismatch between the background horizontal isolines of density, which initially intersect the surface non-orthogonally, and the no-flux boundary condition for salt transport, which implies that this angle of intersection must be 90 degrees. As a result, on an initial transient timescale the isolines are bent to locally align with the surface normal. But this produces a lifting (or depression) of density which creates a buoyancy imbalance which, in turn, creates a fluid flow. Of course, the full, quantitative explanation of this behavior relies upon the integration of the partial differential equations (PDEs) underlying the fluid system. Although our studies involve fully three-dimensional flows and non-planar geometry, some intuition can be found for the special case of an infinite tilted flat plane inserted into a linearly stratified fluid: for this geometry there is an exact steady solution10,11 to the Navier-Stokes equations coupled to an advection-diffusion equation for salt concentration. This solution shows a boundary layer region in which there is a density anomaly and a shear flow up the top side of the sloped wall. We remark that the experimental work of Allshouse and Peacock, using a freely suspended wedge-shaped object, demonstrated that such flows are in fact sufficient to self-transport a single object12,13. For our studies, the sphere and spheroids are symmetric, and thus no self-induced motion is generated by a single body in isolation. And yet, self-induced flows are generated by these bodies, and those flows induce the collective motion of other nearby bodies, as documented in Fig. 1. To study these flows theoretically and numerically in detail, we will consider first the case of a single body held fixed in a background linear stratification, whose diffusion-induced flow can be used as an experimental benchmark; we then move on to the two-body case, which allows us to isolate the attractive force.
The characteristic velocity and length scales for the Phillips solution are \(U=\kappa {\left(\frac{g\sigma }{\kappa \mu }\right)}^{1/4}\), and \(L={\left(\frac{\kappa \mu }{g\sigma }\right)}^{1/4}\), where \(g\) is the gravitational acceleration, \(\mu\) and \(\kappa\) are the dynamic fluid viscosity and salt diffusivity, and \(\sigma\) is the slope of the background density field (all assumed to be constant in this work). The length scale \(L\) defines the thickness of the boundary layer above the tilted flat plane and the velocity scale \(U\) defines its strength. We use this velocity scale to nondimensionalize the equations of motion for the velocity field \({\boldsymbol{u}}\), pressure \(P\), density \(\rho\) (varying only through the evolution of the salt concentration), position \({\boldsymbol{x}}\), and time \(t\) (for a single sphere of radius \(a\), for brevity in exposition) via \(\tilde{{\boldsymbol{x}}}=\frac{{\boldsymbol{x}}}{a},\tilde{{\boldsymbol{u}}}=\frac{{\boldsymbol{u}}}{U},\tilde{\rho }=\frac{\rho }{\sigma a},\tilde{P}=\frac{Pa}{\mu U}\), and \(\tilde{t}=\frac{t\kappa }{{a}^{2}}\). The resulting non-dimensional PDE system (dropping tildes and primes) for the incompressible fluid velocity and concentration field is, respectively:
$$Re\,\rho \left[\frac{1}{Pe}\frac{\partial {\boldsymbol{u}}}{\partial t}+{\boldsymbol{u}}\cdot \nabla {\boldsymbol{u}}\right]=-\nabla P-Pe^{3}\rho \hat{z}+\Delta {\boldsymbol{u}} \tag{1}$$
$$\frac{\partial \rho }{\partial t}+Pe\,{\boldsymbol{u}}\cdot \nabla \rho =\Delta \rho \tag{2}$$
where the Reynolds number is \(Re=\frac{\sigma {a}^{2}U}{\mu }\) and the Peclet number is \(Pe=\frac{a}{L}\). For all the experiments we have run, \(Re\;<\; 0.001\), and, as such, the so-called Stokes approximation may be employed, which sets the left hand side of Eq. (1) to zero while retaining nonlinear effects through non-zero \(Pe\) in the advection-diffusion of the salt concentration. As such, the only remaining parameter in the system is the Peclet number (of course, when the object is non-spherically symmetric, then its aspect ratio provides an additional non-dimensional parameter). The boundary conditions are assumed to be no-slip for the velocity and no-flux for the tracer on any physical boundary (see Supplementary Note 6 and Supplementary Fig. 7 for exact solutions derived with these as well as other boundary conditions).
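For concreteness, the Phillips scales and the non-dimensional groups defined above can be evaluated numerically. The minimal sketch below (Python, CGS units) uses the parameter values quoted later in the text for the two-sphere simulations (\(\sigma =0.03\) g cm−4, \(\mu =0.0113\) Poise, \(\kappa =1.5\times 1{0}^{-5}\) cm2 s−1, \(a=0.045\) cm); the value \(g=981\) cm s−2 is an assumption of the sketch.

```python
# Minimal sketch (CGS units): Phillips scales and the non-dimensional groups defined above,
# evaluated for the parameter values quoted in the text for the two-sphere simulations.
g     = 981.0     # gravitational acceleration, cm s^-2 (assumed)
kappa = 1.5e-5    # salt diffusivity, cm^2 s^-1
mu    = 0.0113    # dynamic viscosity, Poise
sigma = 0.03      # background density gradient, g cm^-4
a     = 0.045     # sphere radius, cm

L  = (kappa * mu / (g * sigma)) ** 0.25           # boundary-layer length scale, cm
U  = kappa * (g * sigma / (kappa * mu)) ** 0.25   # Phillips velocity scale, cm s^-1
Pe = a / L
Re = sigma * a**2 * U / mu

print(f"L = {L:.2e} cm, U = {U:.2e} cm/s, Pe = {Pe:.2f}, Re = {Re:.1e}")
# Pe comes out near 5 and Re well below 0.001, consistent with the values quoted in the text.
```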
Unless otherwise stated, we seek steady solutions of these equations using three different approaches. The steady approximation is well supported through standard partial differential equation estimates, which suggest that the transients will decay after the first hour of our experiment with the large sphere/spheroids, and even faster (on the order of a minute) with the small spheres for the aggregation experiments (total experimental time typically lasting \(16\) hours). First, for very small Peclet numbers, this evolution of the salt concentration decouples from the fluid velocity and in fact is, surprisingly, equivalent to finding the irrotational velocity potential for the velocity field induced by a steady vertical flow past a fixed sphere or spheroid. In turn, the actual velocity field is constructed through convolution of the concentration field, \(\rho\), with the available Green's function associated with a point source of momentum in the presence of a no-slip sphere. See Supplementary Note 5 and Supplementary Figs. 5 and 6 for details regarding these solutions, the Green's function, and the imaging methods employed to generate physical solutions. In this low Peclet limit in free space, we have also derived mathematically exact solutions which satisfy the boundary conditions on a single sphere. These solutions are presented in Supplementary Note 6 and document the possibility of complete flow reversal as the boundary conditions are switched from no-flux (for salt diffusion), to continuity of temperature and heat flux for particles with different heat conductivities from the surrounding fluid.
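As an illustration of this low-Peclet limit, and consistent with the potential-flow analogy stated above, the steady concentration field outside a single no-flux sphere of radius \(a\) in a unit linear background gradient takes the classical form \(\rho =z\left(1+{a}^{3}/(2{r}^{3})\right)\) (up to an additive constant). This expression is quoted here from the standard solution for flow past a sphere rather than from the Supplementary Notes; the short symbolic check below verifies that it is harmonic and satisfies the no-flux condition on the sphere.

```python
import sympy as sp

# Symbolic check of the low-Peclet concentration field rho = z*(1 + a^3/(2 r^3)) outside a
# sphere of radius a (standard potential-flow analogue; assumed form, see lead-in above).
r, th, a = sp.symbols('r theta a', positive=True)
rho = sp.cos(th) * (r + a**3 / (2 * r**2))      # z*(1 + a^3/(2 r^3)) in spherical coordinates

laplacian = (sp.diff(r**2 * sp.diff(rho, r), r) / r**2
             + sp.diff(sp.sin(th) * sp.diff(rho, th), th) / (r**2 * sp.sin(th)))

print(sp.simplify(laplacian))                   # 0 : the field is harmonic
print(sp.simplify(sp.diff(rho, r).subs(r, a)))  # 0 : no-flux condition on the sphere surface
```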
Next, for general Peclet numbers, we utilize the finite element package COMSOL to calculate steady solutions of these equations of motion. See Supplementary Notes 1 and 2, Supplementary Table 1, and Supplementary Figs. 1 and 2 for extended details on this calculation.
Figure 3 depicts the simulations for this method, documenting the toroidal flow structures observed for a range of Peclet numbers. In the top left panel, we document, for the case of a sphere, the maximum flow speed in the equatorial plane as well as the maximum speed overall in the entire domain as a function of the Peclet number. In the equatorial plane, there is an optimal velocity which occurs at a Peclet number slightly bigger than unity. As a benchmark, observe that the slope (here in log-log coordinates) at low Peclet number is \(3\) (i.e. \({\bf{u}} \sim P{e}^{3}\)), which is the exact theoretical scaling in this limit. In the lower left panels, the companion flow structures are observed for four representative Peclet numbers. The background color scheme is assigned by the radial velocity, with negative values corresponding to horizontal velocities directed left. Observe that in all cases, in the equatorial plane, the velocities are directed to attract nearby fluid particles towards the sphere. In the right panels of Fig. 3 we show the analogous plots for spheroids of different aspect ratios. In the top, we show the max speed in the equatorial plane and global max speed as a function of aspect ratio, whereas in the bottom, we document the toroidal flow structures for four representative aspect ratios. In all cases, we observe a monotonic increase in the max equatorial speed as the horizontal disc radius increases.
Fig. 3: Flow strengths and structures for spheres and spheroids.
a Overall max velocity and equatorial maximum cylindrical radial speed, \({u}_{r}\) as a function of the Peclet number for spheres. b Flow structures induced by a sphere for four different Peclet numbers in a vertical plane slicing the sphere through north and south pole, color representing horizontal velocity scaled by its maximum value for each Peclet number. c Overall max velocity and equatorial maximum cylindrical radial speed as a function of the aspect ratio for spheroids. d Flow structures induced by four different spheroids, scaled by the Phillips velocity, \(U\) at \(Pe=1\).
Comparison of PIV with simulations
In Fig. 4, we present the direct comparison of the flow field from the experiment, measured using PIV, with the COMSOL simulated flow for the parameters of the experiment for the case of our oblate spheroid, see caption for experimental parameters. The strong quantitative agreement between the flow structures and magnitudes clearly supports that the attraction mechanism is indeed hydrodynamic. Please see Supplementary Movie 4 for a video showing the particle evolution in the experiment used to construct the PIV image. By assuming the smaller sphere moves as a passive tracer under the flow induced by the large sphere (or spheroid), we compute the evolution of the separation distance between the large and small bodies. By "passive" we are specifically referring to the assumption that the particle is advected with the local fluid velocity and that its presence does not significantly disrupt the structure of the flow field created by the large particle. This assumption is valid in our case as the small sphere is significantly smaller than the larger particle.
Fig. 4: Comparison of experiment with simulation.
The flow field in a vertical plane slicing the spheroid through the north and south pole with \(Pe=52\), \(\mu =0.016\) Poise, \(\kappa =1.5\,\times\, 1{0}^{-5}\) cm2 s−1, spheroid vertical radius \(0.5\) cm, width \(2\) cm, and density gradient \(\sigma =0.002\) g cm−4. a Flow speed from PIV. b Flow speed from COMSOL. c Horizontal flow from PIV. d Horizontal flow from COMSOL.
We next apply this method to the experimental observations reported in Fig. 2. For this case involving NaCl in water, \(\kappa \simeq 1.5\times 1{0}^{-5}\) cm2 s−1, the radius \(a=0.5\) cm, and \(Pe\simeq 41\). These predictions are shown superimposed over the experimental data in the right panel of Fig. 2, where we also show in the left panel the corresponding computed density isolines for the sphere/spheroid. The agreement is quantitative until the objects get sufficiently close together, at which point the passive tracer assumption begins to fail as particle–particle interactions develop.
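A schematic version of this passive-tracer tracking is sketched below. The tabulated radial inflow profile is a placeholder standing in for the simulated equatorial velocity field of the large body (it is not data from this work); the point is only to show how separation-distance curves like those in Fig. 2 are generated once such a profile is available.

```python
import numpy as np

# Schematic passive-tracer tracking: the small sphere is advected by the equatorial radial
# inflow u_r(r) of the large body.  The tabulated profile below is a placeholder; in practice
# it would be interpolated from the simulated (COMSOL) flow field.
r_tab  = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])          # separation, large-body radii
ur_tab = np.array([4.0, 2.5, 1.2, 0.6, 0.2, 0.05]) * 1e-4  # inward speed, cm s^-1 (placeholder)

def u_r(r):
    return np.interp(r, r_tab, ur_tab)

a_large = 0.5                 # large-body radius, cm
dt, t, r = 60.0, 0.0, 6.0     # time step (s), elapsed time (s), initial separation (radii)
while r > 1.0 and t < 48 * 3600.0:
    r -= u_r(r) * dt / a_large    # advect the small sphere toward the large body
    t += dt

print(f"separation reaches 1 radius after about {t / 3600.0:.1f} h"
      if r <= 1.0 else "no contact within 48 h")
```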
Multi-particle interactions and Stokesian dynamics
To explore these particle–particle interactions, we further perform time-dependent simulations in COMSOL with two spheres at an array of fixed separation distances (see Supplementary Note 3, Supplementary Table 2, and Supplementary Figs. 2 and 3 for details regarding these simulations) to document the rapid convergence to the steady state behavior alluded to above. This yields both the diffusion-induced flows exterior to two spheres and the time evolution of the force exerted on each sphere by these flows, computed through the explicit evaluation of the surface integral of the stress tensor. We note that recent studies have explored anomalous force calculations for particles moving vertically through stratification18,19, whereas here we are focusing on horizontal motions. Supplementary Fig. 3 documents how this force rapidly achieves its steady state, further justifying the steady state assumption. Figure 5 displays the flows induced by the two fixed spheres for four different separation distances after steady state has been reached (approximately one minute for these separation distances) for parameters \(Pe=5.1\), \(\sigma =0.03\) g cm−4, \(\mu =0.0113\) Poise, and radii \(a=0.045\) cm. The trend is that, for well-separated spheres, interactive effects are minimal, as the flow structures resemble the case for an individual sphere from Fig. 3. As the spheres are moved closer together, the sphere-sphere interaction modifies the flow structures, with strong flows between the spheres. These flows create an asymmetric distribution of stress on each sphere, which gives rise to the increased attractive force between the spheres as they approach each other, documented in Supplementary Fig. 2.
Fig. 5: Flow structures created by two spheres.
Four different fixed separation distances (a \(4\), b \(2\), c \(1\), and d \(0\) radii), with flows computed assuming \(Pe=5.35\), \(\sigma =0.03\) g cm−4, and \(a=0.045\) cm, scale bar on the right normalized by Phillips velocity \(U\).
With this pairwise force constructed, we may modify standard, available Stokesian dynamics solvers used in the sedimentation community20,21 to build many-body cluster dynamics simulations. See Supplementary Note 4 and Supplementary Fig. 4 for details on this implementation. As a benchmark, we experimentally monitor the separation distance between two \(0.045\) cm radius spheres in two different linear stratifications, one with \(\sigma =0.025\) g cm−4 and one with \(\sigma =0.03\) g cm−4, with \(\mu =0.0113\) Poise and \(\kappa =1.5\times 1{0}^{-5}\) cm2 s−1. Shown in Supplementary Fig. 4 is the comparison of the experiment with two modified Stokesian dynamics simulations that incorporate the numerically measured two-body diffusion-induced force, which documents quantitative agreement. Next, we apply the modified Stokesian dynamics simulation to a large number of particles. Fig. 6 depicts the evolution at four output times for \(200\) identical particles with radius \(a=0.045\) cm and force derived from a COMSOL simulation with parameters \(\sigma =0.03\) g cm−4, \(\mu =0.0113\) Poise, and \(\kappa =1.5\times 1{0}^{-5}\) cm2 s−1; this system exhibits self-assembly and cluster formation with many features matching the experiment. The simulation of self-assembly and cluster formation is documented in Supplementary Movie 5.
Fig. 6: Modified Stokesian dynamics simulation.
Snapshots of many-body self-assembly and cluster formation at different times (a 0 hrs, b 4 hrs, c 8 hrs, d 12 hrs), with \(a=0.045\) cm, \(\sigma =0.03\) g cm−4, \(\mu =0.0113\) Poise, \(\kappa =1.5\,\times\, 1{0}^{-5}\) cm2 s−1.
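The modified Stokesian dynamics code itself (refs. 20,21 and Supplementary Note 4) is not reproduced here. The following is a deliberately simplified, overdamped stand-in that captures only the basic idea: each sphere is advected by the sum of pairwise horizontal attraction velocities, with a placeholder attraction law in place of the COMSOL-derived two-body force and with excluded-volume contacts ignored.

```python
import numpy as np

# Highly simplified stand-in for the many-body simulation described above: overdamped spheres
# in a horizontal plane, each advected by summed pairwise attractions.  pair_speed(d) is a
# placeholder law, not the numerically computed two-body force.
rng = np.random.default_rng(0)
n, a = 200, 0.045                        # number of spheres and sphere radius (cm)
pos = rng.uniform(-2.0, 2.0, (n, 2))     # initial horizontal positions (cm)

def pair_speed(d):
    # placeholder attraction speed (cm/s) decaying with centre-to-centre distance d (cm)
    return 1e-4 * (2 * a / np.maximum(d, 2 * a)) ** 2

dt, steps = 30.0, 2000                   # 30 s steps, roughly 17 h of physical time
for _ in range(steps):
    disp = pos[None, :, :] - pos[:, None, :]     # disp[i, j] points from sphere i to sphere j
    dist = np.linalg.norm(disp, axis=-1)
    np.fill_diagonal(dist, np.inf)               # no self-interaction
    vel = (pair_speed(dist)[..., None] * disp / dist[..., None]).sum(axis=1)
    pos += vel * dt

extent = np.linalg.norm(pos - pos.mean(axis=0), axis=1).max()
print(f"final cluster extent ~ {extent:.2f} cm")
```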
Of course, further work is needed to account for the full, three-dimensional nature of the diffusion-induced flow and the force that it generates within the Stokesian dynamics approach, as well as accounting for subtle effects such as the distribution of sizes and shapes, which are to be expected under natural conditions. Furthermore, future work will aim to explore the aggregation process in more detail: specifically, how the size and orientation of an existing aggregate influences the attraction rate of nearby isolated particles.
As mentioned above, we have developed a number of mathematically exact solutions in the zero Peclet limit, which satisfy a menu of different boundary conditions. These solutions provide an alternative starting point to building complete many particle simulations. It is interesting to consider extending the asymptotics of small Peclet regimes for these exact solutions. If a spectral method with physically relevant boundary conditions could be implemented, work by List (and more recently Ardekani and Stocker) could be a starting point for such an extension16,17.
In this letter, we have demonstrated an unexplored interaction force, which exists between bodies suspended in a stratified fluid. The force origin lies in a body's self creation of a diffusion-induced, toroidal flow that attracts nearby matter suspended at the same depth. Our analysis has shown that these flows are quantitatively predicted by solving the Stokes equations coupled to an advection-diffusion equation. We remark that we have also experimentally observed that these flows may induce an effective repulsive force for particles suspended above (or below) a large body, as they are on the opposite (repulsive) side of the toroid (see Supplementary Movie 3). Interestingly, Allshouse and Peacock briefly reported in their supplementary material to have observed that two skinny wedges repel each other without discussion of the flow structures involved. It is likely that that repulsion was similarly a result of the three-dimensionality of the fluid flows generated. We also stress that the phenomena we identified occur over a wide range of different particle materials (glass, photopolymer resin, polystyrene), and we have even observed the same behavior in a non-electrolytic fluid (watery corn syrup). In contrast to self-assembly mechanisms that rely on adjustable parameters, our first-principle theory accurately predicts the source of an attractive force and it allows the extension of Stokesian dynamics to reproduce the experimental clustering observations. We remark that our mechanism is distinct from other small scale effects (electro/diffusiophoresis) in that the particle motion is orthogonal to the background gradient (with velocities scaling sublinearly in \(\sigma\) as opposed to linearly15). Further, for the case of symmetric bodies, motion is only induced through collective interactions. We also emphasize that this process may be relevant in oceans and lakes, particularly at scales smaller than the Kolmogorov scale of turbulence. Below this scale, which can vary from centimeters (at depth) to tens of microns (in surface water)8,14, turbulence is negligible, and these processes may participate in the formation and clustering of marine aggregates. Of course, more work is necessary to address the relevance of these effects in such field applications.
Experimental setup and materials
To create a linear stratification, deionized water is mixed with NaCl, degassed in a vacuum chamber at −24 inHg for 10 minutes, and thermalized in a water bath. A two-bucket method is used, where fresh water is pumped via Cole-Parmer Ismatec ISM405A pumps at a constant rate into a mixing bucket initially full of saline. Concurrently, this mixture is pumped through a floating diffuser to a depth of ~\(10\) cm inside a tank \(30.5\) cm tall, \(16.5\) cm wide, and \(15.5\) cm deep, submerged in a custom-built Fluke thermal bath to regulate temperature. The short depth walls are made of \(1/4\)-inch-thick copper to facilitate thermal coupling, and the longer width walls are made of \(7/16\) inch acrylic to allow for visualization. To measure the density profile, a Thermo Scientific Orion Star A215 conductivity meter equipped with an Orion 013005MD conductivity cell is carefully submerged into the tank along the wall using a linear stage that measures with \(0.01\) mm precision.
Density-matched objects (spheres and/or spheroids of ~\(1.05\) g cc−1 with radii varying from \(50\) microns to centimeter scale) are lowered into the fluid slowly using a thin wire support, with care taken to remove any air bubbles on the surface of the body. Once they have been placed at their buoyant height near the center of the tank, the stratified tank is left for a minimum of \(12\) hours to allow decay of transients, which arise from the pouring of the stratification, insertion of the spheres, etc., so that the ambient fluid is quiescent other than for the diffusion-induced flows, which persist for as long as stratification exists (weeks to months). This wait time is sufficient to allow viscous effects to cause any initial fluid motion to come to rest, allowing us to isolate and study the diffusion-induced flows, which are significantly longer lived. From above, images are acquired at intervals of \(15\), \(30\), or \(60\) seconds using a Nikon D3 camera equipped with an AFS Micro Nikkor 105 mm lens and a Nikon MC-36A intervalometer. The tank is covered on top with a \(1/4\) inch acrylic sheet to reduce evaporation and prevent convective motion.
To manufacture spheres and spheroids, we used a Formlabs Form 2 3D printer with Formlabs Clear V4 photopolymer resin printed at \(25\) micron layer thickness. The bodies were printed in halves split along the equatorial plane and joined together with epoxy. An interior cavity in the top half allows for the printed bodies to be density matched to polystyrene tracer spheres of density \(\simeq\! 1.05\) g cc−1. The polystyrene spheres used for the self-assembly experiments varied in radius from \(0.025\) to \(0.05\) cm, and the vertical density profile was two-layer (rather than linear as used in the control experiments). Moreover, we note that two sphere experiments were performed with glass spheres of radius \(0.25\) cm and similar attraction was observed. We also note that similar attractions were observed in experiments performed in a non-electrolytic corn syrup solution with density stratification achieved by varying water content.
PIV measurements
To perform particle imaging velocimetry (PIV), a linear stratification is prepared, as described above, with all experimental parameters reported in Fig. 4. We remark that linear stratifications provide excellent profiles for benchmarking in that they remain steady except in the vicinity of the top and bottom. The 3D printed spheroid is attached to the tank bottom using a monofilament. Glass microspheres are mixed with salt water matching the top density of the tank and degassed before being released into the tank through a diffuser. In order to reduce transients, the system is left to relax for 24 hours, after which PIV is performed with a laser sheet whose normal is oriented horizontally and which enters from the tank side, thus axially slicing the spheroid. Imaging is performed with the optical setup described above at \(30\) second intervals. Two-dimensional, two-component PIV analysis is performed in LaVision DaVis 7.2 software and time averaged due to sparse seeding. A vertical drift owing to settling of seeding particles is measured far from the spheroid and subtracted from the final velocity field. We note that the PIV seems to mildly underestimate the flow strengths as compared with manually tracking individual particles, which could be attributed to the difficulty in seeding the thin boundary layer regions where the flow is strongest.
All relevant data are available upon request from the authors.
Code availability
All relevant code is available upon request from the authors.
MacIntyre, S., Alldredge, A. L. & Gottschalk, C. C. Accumulation of marine snow at density discontinuities in the water column. Limnol. Oceanogr. 40, 449–468 (1995).
Jamieson, A. J., Brooks, L. S. R., Reid, W. D. K., Piertney, S. B., Narayanaswamy, B. E. & Linley, T. D. Microplastics and synthetic particles ingested by deep-sea amphipods in six of the deepest marine ecosystems on Earth. R. Soc. Open Sci. 6, 180667 (2019).
Dumberry, M. & Rivoldini, A. Mercury's inner core size and core-crystallization regime. Icarus 248, 254–268 (2015).
Abaid, N., Adalsteinsson, D., Agyapong, A. & McLaughlin, R. M. An internal splash: levitation of falling spheres in stratified fluids. Phys. Fluids 16, 1567–1580 (2004).
Camassa, R., Falcon, C., Lin, J., McLaughlin, R. M. & Parker, R. Prolonged residence times for particles settling through stratified miscible fluids in the Stokes regime. Phys. Fluids 21, 031702 (2009).
Camassa, R., Falcon, C., Lin, J., McLaughlin, R. M. & Mykins, N. A first-principle predictive theory for a sphere falling through sharply stratified fluid at low Reynolds number. J. Fluid Mech. 664, 436–465 (2010).
Camassa, R. et al. Retention and entrainment effects: experiments and theory for porous spheres settling in sharply stratified fluids. Phys. Fluids Lett. 25, 081701 (2013).
Burd, A. B. & Jackson, G. A. Particle aggregation. Ann. Rev. Mar. Sci. 1, 65–90 (2009).
Durham, W. M. & Stocker, R. Thin phytoplankton layers: characteristics, mechanisms, and consequences. Ann. Rev. Mar. Sci. 4, 177–207 (2012).
Phillips, O. M. On flows induced by diffusion in a stably stratified fluid. Deep Sea Res. Ocean. Abstr. 17, 435–443 (1970).
Wunsch, C. On oceanic boundary mixing. Deep-Sea Res. 17, 293–301 (1970).
Allshouse, M. R., Barad, M. F. & Peacock, T. Propulsion generated by diffusion-driven flow. Nat. Phys. 6, 516 (2010).
Peacock, T., Stocker, R. & Aristoff, J. An experimental investigation of the angular dependence of diffusion-driven flow. Phys. Fluids 16, 3503–3505 (2004).
Thorpe, S. A. The Turbulent Ocean. (Cambridge Univ. Press., Cambridge, UK, 2005).
Anderson, J. L. Colloid Transport by interfacial flows. Ann. Rev. Fluid Mech. 21, 61–99 (2009).
Ardekani, A. M. & Stocker, R. Stratlets: low Reynolds number point-force solutions in a stratified fluid. Phys. Rev. Lett. 105, 084502 (2010).
List, E. J. Laminar momentum jets in a stratified fluid. J. Fluid Mech. 45, 561–574 (1971).
Candelier, F., Mehaddi, R. & Vauquelin, O. The history force on a small particle in a linearly stratified fluid. J. Fluid Mech. 749, 184–200 (2014).
Mehaddi, R., Candelier, F. & Mehlig, B. Inertial drag on a sphere settling in a stratified fluid. J. Fluid Mech. 855, 1074–1087 (2018).
Brady, J. F. & Bossis, G. Stokesian dynamics. Ann. Rev. Fluid Mech. 20, 111–157 (1988).
Townsend, A. K. The mechanics of suspensions, Doctoral thesis, University College London, (2017).
We acknowledge funding received from the following NSF Grant Nos.: RTG DMS-0943851, CMG ARC-1025523, DMS-1009750, DMS-1517879, DMS-1910824, and DMS-1909521; and ONR Grant No: ONR N00014-18-1-2490. We thank David Adalsteinsson for use of his visualization software DataTank.
Department of Mathematics, University of North Carolina, Chapel Hill, Chapel Hill, NC, 27599, USA
Roberto Camassa, Robert Hunt & Richard M. McLaughlin
School of Engineering, Brown University, Providence, RI, 02912, USA
Daniel M. Harris
Department of Physics and Center for Biological Physics, Arizona State University, Tempe, AZ, 85287, US
Zeliha Kilic
R.C., D.M.H., R.H., Z.K., and R.M.M. contributed equally to this article.
Correspondence to Roberto Camassa or Daniel M. Harris or Richard M. McLaughlin.
The authors declare no competing financial or non-financial interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Camassa, R., Harris, D.M., Hunt, R. et al. A first-principle mechanism for particulate aggregation and self-assembly in stratified fluids. Nat Commun 10, 5804 (2019). https://doi.org/10.1038/s41467-019-13643-y
|
CommonCrawl
|
Discontinuity of multiplication and left translations in $\beta G$
by Yevhen Zelenyuk
The operation of a discrete group $G$ naturally extends to the Stone-Čech compactification $\beta G$ of $G$ so that for each $a\in G$, the left translation $\beta G\ni x\mapsto ax\in \beta G$ is continuous, and for each $q\in \beta G$, the right translation $\beta G\ni x\mapsto xq\in \beta G$ is continuous. We show that for every Abelian group $G$ with finitely many elements of order 2 such that $|G|$ is not Ulam-measurable and for every $p,q\in G^*=\beta G\setminus G$, the multiplication $\beta G\times \beta G\ni (x,y)\mapsto xy\in \beta G$ is discontinuous at $(p,q)$. We also show that it is consistent with ZFC, the system of usual axioms of set theory, that for every Abelian group $G$ and for every $p,q\in G^*$, the left translation $G^*\ni x\mapsto px\in G^*$ is discontinuous at $q$.
Yevhen Zelenyuk
Affiliation: School of Mathematics, University of the Witwatersrand, Private Bag 3, Wits 2050, South Africa
Email: [email protected]
Received by editor(s): February 19, 2013
Received by editor(s) in revised form: May 18, 2013
Published electronically: October 6, 2014
Additional Notes: The author was supported by NRF grant IFR2011033100072.
Communicated by: Mirna Džamonja
MSC (2010): Primary 03E35, 22A15; Secondary 22A05, 54D35
DOI: https://doi.org/10.1090/S0002-9939-2014-12267-2
|
CommonCrawl
|
Formation of metre-scale bladed roughness on Europa's surface by ablation of ice
Daniel E. J. Hobley1,
Jeffrey M. Moore2,
Alan D. Howard3 &
Orkan M. Umurhan2
Nature Geoscience volume 11, pages 901–904 (2018)
Matters Arising to this article was published on 02 December 2019
An Author Correction to this article was published on 13 November 2019
On Earth, the sublimation of massive ice deposits at equatorial latitudes under cold and dry conditions in the absence of any liquid melt leads to the formation of spiked and bladed textures eroded into the surface of the ice. These sublimation-sculpted blades are known as penitentes. For this process to take place on another planet, the ice must be sufficiently volatile to sublimate under surface conditions and diffusive processes that act to smooth the topography must operate more slowly. Here we calculate sublimation rates of water ice across the surface of Jupiter's moon Europa. We find that surface sublimation rates exceed those of erosion by space weathering processes in Europa's equatorial belt (latitudes below 23°), and that conditions would favour penitente growth. We estimate that penitentes on Europa could reach 15 m in depth with a spacing of 7.5 m near the equator, on average, if they were to have developed across the interval permitted by Europa's mean surface age. Although available images of Europa have insufficient resolution to detect surface roughness at the multi-metre scale, radar and thermal data are consistent with our interpretation. We suggest that penitentes could pose a hazard to a future lander on Europa.
The Jovian moon of Europa hosts an interior ocean of liquid water1,2,3, and has been proposed as a target for future planetary missions due to the possible habitability of this ocean. Past studies of its icy shell have envisioned a surface that is smooth at the lander scale, dominated by diffusive impact processes such as impact gardening and sputtering by charged particles in Jupiter's magnetic field4,5,6,7,8,9,10. However, on Earth, icy surfaces ablated by solar radiation develop characteristic roughness patterns at the centimetre to multi-metre scale11,12,13,14,15,16. Here we show that under modern Europan conditions, sublimation processes driven by solar radiation flux are expected to dominate over diffusive processes in a band around the satellite's equator. As the surface at these latitudes degrades, it will develop an east-west aligned, spiked and bladed texture, or roughness, at the metre scale – known on Earth as penitentes. This interpretation can explain anomalous radar returns seen around Europa's equator4,5,17. Penitentes may well explain reduced thermal inertias and positive circular polarization ratios in reflected light from Europa's equatorial region17,18. This formation mechanism is used to explain formation of bladed terrain on Pluto in methane ice19.
Blade formation by sublimation
Self-organized surface patterning is ubiquitous in terrestrial snow and ice during ablation by radiative heating, through both sublimation and melting. Europa's atmosphere is so tenuous (~0.1 μPa, 10^12 times less than Earth's surface; 10–20 km particle mean free paths20) that its external heat budget is effectively radiative, and hence such textures might also be expected there on ablating surfaces, but solely due to sublimation. On Earth, growth of these patterns is linked to amplification of initial random depressions in the surface by lensing of scattered solar and thermal infrared radiation11,16,21.
On Earth, the dominant radiative structures that form in snow and ice under cold, dry conditions are called penitentes. These are tall, east-west aligned, sharp-edged blades and spikes of sculpted snow or ice which point towards the elevation of the midday sun12,22 (Fig. 1). Typical heights are 1–5 m. Formation of large and well-developed penitentes requires bright, sustained sunlight, cold, dry, still air11, and a melt-free environment12. Thus, they are almost entirely restricted to high-altitude tropics and subtropics14. Laboratory experiments22 and numerical modelling16 confirm that sublimation in the absence of melting is particularly essential for penitente formation11,12. Small amounts of dirt in the ice do not inhibit penitente formation if radiation can penetrate the ice, and the vapour can escape11,16,22.
Fig. 1: Terrestrial penitentes from the southern end of the Chajnantor plain, Chile.
The view is broadly northwards; blades can be seen perpendicular to the viewing direction. The extreme relief of the structures is typical. The depressions between these examples have ablated down to the underlying rock surface. Credit: ESO (https://www.eso.org/public/images/img_1824/).
Radiative modelling confirms that penitentes form by scattering and lensing of light on and into snow and ice11,16. A key factor controlling penitente formation is that the pit of the structure must ablate faster than the sidewalls; if the sidewalls ablate faster, an alternate bowl-like stable form known as a suncup may develop13,15,16. Penitente growth requires a daily low solar incidence angle, such that light strikes the walls of the blades at a high angle, and illuminates the floors of the pits13,14. This maximizes the contrast in flux per unit area on the floor compared to the sidewalls, both in terms of direct and scattered radiation14, and explains why terrestrial examples are usually found near the equator, or also on steep equatorward-facing slopes at higher latitudes23. Physical analysis indicates that the scale and stability of penitentes are critically controlled by the thermal absorption of solar radiation into the ice and by the ability of the system to sustain gradients in the vapour pressure of the atmosphere that is in contact with the ice16. Theory suggests that the minimum size of penitentes may be governed by any of the following physical parameters: light extinction depth11,22, atmospheric vapour diffusion16, or heat conduction16. Their growth is most rapid for penitentes of sizes close to this minimum scale. This implies that ice grain size, porosity, roughness and impurity concentrations affect penitente size. Experiments, however, suggest that penitente size increases with depth of incision, and that a characteristic depth-to-width (aspect) ratio of about 2 is obtained, similar to 1.5–1.7 in terrestrial penitente fields13. The focusing of radiation in shallow hollows means that they will deepen, but shadowing and multiple reflections limit the depth of penitentes23,24, implying an optimum aspect ratio. Whether penitentes grow in size without limit during continued sublimation is uncertain, but eventually the mechanical strength of H2O ice will limit the size.
These observations suggest that sublimation on Europa could create penitente-like textures on its surface. Europa is tidally locked to Jupiter, with an inclination to Jupiter's equator of 0.47°. Jupiter's obliquity is only 3.13°, and thus for any given point on Europa's surface, the solar zenith angle never varies by >4°. This orbital configuration has likely been stable over the lifetime of the surface25. Based on Galileo Photopolarimeter-Radiometer (PPR) data, surface brightness temperatures have been calculated to vary between ~70 K and 132 K4,5. Its photometric properties, in particular its albedo, show that the surface of Europa is fairly pure water ice, with a minor component of silicate materials and salts2,5,7. Thus, the surface fulfills three essential requirements for penitente growth - it is dominantly exposed ice; it would sublime without melting; and there is very little variation in solar incidence angle.
Furthermore, for penitentes to develop, they must grow faster than any other geomorphic process can modify the surface. Europa is subjected to bombardment both by conventional impactors (meteoroids, comets) and by ions accelerated by Jupiter's magnetic field5,26,27. Both of these processes will act diffusively to smooth out local topographic highs. The most recent estimates5,8,9 suggest that ion sputtering is probably dominant over impact gardening on Europa today, with rates of ~2 × 10−2 m Ma–1. At first order, for penitentes to develop, the sublimation rate must minimally exceed these diffusive processes. We assess sublimation rates using global maps of peak brightness temperatures coupled to profiles of temperature variation throughout the day4,5 to input into temperature-dependent equations of state (see Methods). This allows us to predict the approximate rates of uniform sublimation at varying Europan latitudes (Fig. 2). Bulk surface sublimation rates exceed likely sputtering erosion rates equatorwards of latitudes 24° N/S ( ± 6°), dependent on the modelling assumptions. We hypothesize that penitentes can grow, and indeed have grown, in this region.
Fig. 2: Modelled variation in rates of surface sublimation, and equivalent total depth of ice removal, with Europan latitude.
Latitudinally dependent sublimation rates (top axis) and corresponding total sublimated ice over a 50 Ma timescale (bottom axis) are derived from distinct brightness temperature data sets from two Galileo orbits, G7 (blue circles, solid line) and I25 (green crosses, dotted line), each of which are centered on opposite hemispheres. Due to truncated observations, both maxima and minima are shown for orbit I25. Temperatures are estimated based on an emissivity value of 0.90 (see Methods). Green and blue shaded regions indicate conservative rate estimates for the two data sets. Red dashes show average rates of surface overturn by sputtering. Red arrows indicate the latitudinal range in which predicted sublimation rates, based on G7 and I25 orbit observations, equal the overturn rate driven by sputtering. In both hemispheres, sublimation outcompetes sputtering erosion in a broad equatorial band equatorwards of ~ ± 24° latitude, and it is this surface that could develop penitentes.
Studies of terrestrial development of penitentes provide support for order-of-magnitude estimates of the dimensions of these structures, at least with respect to their aspect ratios. On Earth, the rate of growth as well as the characteristic separation scale of the ice blades is modelled to be set by the balance between heat conductivity in the ice, mass diffusion, and bolometric albedo16. On Earth the mass diffusion term is, in turn, set by an atmospheric boundary layer thickness. This does not apply to Europa, however, due to its insignificant atmosphere (~10−8 Pa). For Europa, we first estimate the rate of ice sublimation at the equator, finding approximately 0.3 m Ma–1 (see Methods). Based on this analysis we infer that sublimation outpaces sputtering and impact gardening by an order of magnitude.
Based upon our analysis, up to 15 m of sublimation has occurred over 50 Ma, which is the average surface age of Europa5,8. We next assume that penitentes grow with a constant aspect ratio, taken to be ~2:1. Thus we conclude that maximum penitente depth could reach ~15 m with spacing of ~7.5 m near the equator (Fig. 2). We infer that the penitentes will become shallower, less well developed and increasingly asymmetric (and thus mechanically unstable) with distance from the equator23 (see Methods).
Our sublimation calculations are zonally averaged, and do not account for a number of local or poorly constrained effects. For example, fissured, ridged, and chaotic textures seen at >0.1 km scales indicate that resurfacing occurs in different places at different times1,26. Young areas will clearly lack major penitentes, and older areas should have better developed structures. Local surface inclination will also alter growth rates and stability. We have not accounted for spatial variation in sputtering rates, particularly with respect to their leading-trailing hemispheric asymmetry27,28. We also cannot quantify the role of particulate impurity within and on the ice, and so this is not treated here6,7. Magnitudes of local relief and surface non-volatile contamination at Europa are badly constrained, especially at the key metre scales, but are likely variable and might be locally substantial29. Contamination can produce both positive and negative feedbacks11,22, and locally suppress penitente growth entirely if a substantial non-volatile surface lag has formed11,15. Re-deposition of sublimated ice will occur at high latitudes, polar-facing slopes, and local cold-traps5,6.
Supporting observations
Given our estimates of penitente spacing (≤7.5 m), available imaging from the Galileo Orbiter's camera is too coarse to permit detection. Current roughness estimates are either at scales too coarse (>10^1 m, from imaging30) or too fine (<10^−2 m, from optical photometry10). Two independent and largely unexplained sets of ground-based radar and Galileo Orbiter thermal observations reveal, however, that the surface properties of Europa equatorwards of ~ ± 25° are systematically different to those polewards of those latitudes:
Instantaneous disk resolved radar returns from Europa reveal a striking equatorial minimum in the total power returned at 13 cm wavelengths17 (Fig. 3a).
Maps of Europa's night-time brightness temperatures from Galileo's PPR instrument reveal a very similar equatorial minimum4,5 (Fig. 3b). Previous authors have interpreted such brightness temperatures as indicating a relative minimum in surface thermal inertia at the equator4,31.
Fig. 3: Remote sensing evidence consistent with an equatorial band of penitentes on Europa.
a, Instantaneous total power radar albedo, M, returned from a 12.6 cm radar sounding of Europa using the Arecibo telescope. b, Instantaneous night-time brightness temperatures from the E17 orbital pass of Europa as inferred from Galileo PPR data (wavelength range 0.35–∼100 μm). Local time (top axis) is presented in Europa equivalent hours of the day. The instantaneous acquisition of the PPR data used here causes much of the surface viewed by that instrument to be seen at an oblique angle. The base map, from Galileo and Voyager images, is in a cylindrical projection and gridded at 30° of latitude and longitude. Panel a produced using data presented in Ostro et al.17, AGU. Panel b produced and modified using data presented in Rathbun et al.31, Elsevier; base map courtesy of P. Schenk/NASA.
The known geology and visible surface patterning of Europa do not systematically change at the equator4,5, and this has made the above observations enigmatic. However, a penitente-like, ordered surface roughness, or texture, provides a possible solution. Because light entering a penitente hollow will, on average, interact more than once with the walls before emerging, the development of ice blades in these latitudes would increase the flat-surface-equivalent absorption coefficient, even with no change to fine scale material properties. In other words, the form of such a surface makes it an effective absorber for wavelengths shorter than the scale of the structure. By Kirchhoff's law, this also means that such a surface will be a more effective emitter, compared to an equivalent flat surface.
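A back-of-the-envelope illustration of this Kirchhoff argument: if a ray interacts n times with ice of flat-surface albedo ω before escaping a hollow, the absorbed fraction is 1 − ω^n. The toy estimate below uses the albedo ω ≈ 0.67 adopted in the Methods as the per-bounce albedo; it is only a multiple-reflection illustration, not the full radiative model.

```python
# Toy multiple-reflection estimate (not the full radiative model): a ray that interacts n
# times with ice of flat-surface albedo w is absorbed with probability 1 - w**n.
w = 0.67                       # surface albedo adopted in the Methods
for n_bounces in (1, 2, 3):
    print(f"{n_bounces} wall interaction(s): absorbed fraction = {1 - w**n_bounces:.2f}")
# 0.33, 0.55, 0.70: multiple reflections inside penitente hollows make the surface a
# markedly better absorber (and, by Kirchhoff's law, emitter) than an equivalent flat surface.
```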
Further support comes from the leading/trailing hemispheric asymmetry in radar albedo of the equatorial regions. The trailing hemisphere (270° W) is much more heavily contaminated with particulates transported there by the magnetosphere. The trailing hemisphere, however, has a higher equatorial radar albedo than the leading hemisphere (Fig. 3a). This is counterintuitive if the contaminants aid absorption of radar in the subsurface, but fits if high particulate concentrations partially suppress penitente formation.
Europa radar observations reveal atypical circular polarization ratios. This atypical pattern may also result from the presence of penitente fields. Earth-based whole-disc radar observations at wavelengths λ = 12.6 cm reveal that unlike all known rocky bodies, the ratio of same-sense to opposite-sense circularly polarized radar, μc, exceeds 1.0 for Europa, that is, the typical ray strikes an even number of surfaces before being detected17,18. Traditional explanations for this 'startling'18 effect have relied on arbitrary, complex subsurface geometries—either randomly orientated ice dykes and fractures32, or buried, ideally-shaped impact craters33. However, a bladed surface texture at the surface could easily fulfill such a role, with the steeply inclined, opposing walls of the blades replacing the fractures or buried crater walls34. In incident radar at decimetre scales, the equator appears to be an anomalously effective absorber, hence the low radar albedo.
The apparent depression of the instantaneous nighttime brightness temperatures (Fig. 3b) derived from the Galileo Orbiter's PPR data observed in the equatorial band is harder to explain than the radar analysis. Published models of increased surface roughness struggle to reproduce this effect4. However, we speculate that the reported reduction in instantaneous nighttime brightness temperatures may be a consequence of viewing angle effects. Because of the radiative scattering occurring within the penitentes, the tips of their blades cool significantly faster than the pits between them; oblique viewing angles will obstruct views of the pit interiors and so proportionately cooler temperatures would be presented to the observer.
Moreover, if anomalous circular polarization ratios on Europa observed in radar data are driven primarily by ordered surface roughness, similar polarization ratios on other icy moons of Jupiter18 may indicate surfaces likewise roughened by penitente growth. We note that the Jovian system may occupy a restricted 'sweet spot' in the solar system for the development of such features formed in H2O ice. Penitente formation is used to explain the extremely large ridges in the bladed terrain of Pluto, which are carved in massive deposits of methane ice19.
Implications for future missions
In summary, we have performed an approximate calculation of sublimation rates on Europa, indicating that fields of penitentes may grow up to 15 m high in 50 Ma near the equator. We suggest that in equatorial regions sublimation erosion likely dominates over other erosional processes. Puzzling properties of the radar and thermal observations of Europa's equatorial belt can be explained by the presence of penitente fields in this region. The implications of penitente fields at potential landing sites should motivate further detailed quantitative analysis. Observations made by the upcoming Europa Clipper mission high-resolution imaging system and ground-penetrating radar of these regions can directly test our conclusions.
We estimate a daily averaged amount of sublimated H2O ice from Europa following the methodology of Lebofsky35. We identify ρsqavg to be the daily averaged mass loss rate of H2O ice (kg m−2 s−1). The formula expressing this sublimation rate is given by
$$\rho_s q^{avg} \approx \frac{\delta(T_{s0})\,P_{vap}(T_{s0})}{4\pi v_a(T_{s0})} = \frac{P_{vap}(T_{s0})}{2\pi\sqrt{L}};\qquad v_a \equiv \sqrt{\frac{kT_{s0}}{2\pi m_{H_2O}}},\qquad \delta(T_{s0}) \equiv \sqrt{\frac{2kT_{s0}}{\pi L m_{H_2O}}}$$
The derivation of the above expression for ρsqavg (see details below) takes into account the fact that most sublimation occurs in the few hours straddling high noon. The density of water ice is ρs = 920 kg m−3. Pvap is the temperature dependent vapour pressure of H2O. Ts0 is the noon time temperature on Europa at a given latitude λ. δ is a factor that is much less than one and accounts for the fact that most sublimation occurs around high noon. The characteristic velocity of particles in a Maxwell-Boltzmann gas is va. The Boltzmann constant is k and \(m_{H_2O}\) is the mass of a H2O molecule. L = 3 × 106 J kg−1 is the heat of sublimation for H2O. The noon time temperature at a given latitude λ is estimated from the relationship
$$T_{s0} = \left[\frac{(1-\omega)}{\sigma\epsilon}F_{inc}\right]^{1/4};\qquad F_{inc} = F_{eur}\cos\lambda,\quad F_{eur} \approx 50\ \mathrm{W\,m^{-2}}$$
in which Finc is the incident solar irradiance at latitude λ, Feur is the solar irradiance at Jupiter, ω ~ 0.67 is the surface albedo of Europa's ice and ϵ ≈ 0.9 is its emissivity4,5,36. The Stefan-Boltzmann constant is σ. An analytic form for H2O's vapour pressure, which accounts for new experimental findings37, is discussed in detail below. Adopting an equatorial noon value of Ts0 = 134 K, we find that equation (1) predicts a sublimation lowering rate of about 0.3 m Ma–1, which amounts to 15 m of ice sublimated in 50 Ma, the average age of Europa's surface.
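As a quick sanity check (not part of the original Methods), equation (2) can be evaluated directly. The short Python sketch below uses only the constants quoted in this section and reproduces the roughly 134 K equatorial noon temperature:

```python
import math

# Constants as quoted in the text around equation (2)
F_eur = 50.0      # solar irradiance at Jupiter, W m^-2
omega = 0.67      # albedo of Europa's surface ice
eps = 0.90        # emissivity of the surface ice
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def noon_temperature(lat_deg):
    """Noon-time surface temperature T_s0 at latitude lat_deg, from equation (2)."""
    F_inc = F_eur * math.cos(math.radians(lat_deg))
    return ((1.0 - omega) * F_inc / (sigma * eps)) ** 0.25

print(round(noon_temperature(0.0), 1))   # ~134 K at the equator
print(round(noon_temperature(60.0), 1))  # noticeably colder at 60 degrees latitude
```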
The remaining Methods section provides a detailed description of how we estimate the amount of ice sublimated away from Europa's surface. To lowest order we follow the methodology of Lebofsky35 supplemented by the work of Claudin et al.16. We define q to be the sublimation rate of surface ice (kg m−2 s−1) divided by the surface ice density (kg m−3). Therefore q has units of m s−1 and we write ∂th = q, where h is the level height of the ice. Three equations govern the evolution of h and the vapour content in Europa's ballistic atmosphere. The first of these represents the rate of change of h as driven by the balance of energy gained and lost,
$$\rho_s L\,\partial_t h = (1-\omega)F_{inc} - \varepsilon\sigma T_s^4,\qquad F_{inc} = F_{jup}\cos\lambda$$
where L = 3 × 106 J kg−1 is the heat of sublimation for H2O, Finc is the incoming solar radiation at a given latitude on Europa at noon, where Fjup ~ 50 W m−2 is the solar irradiance at Jupiter and λ is latitude. ω ~ 0.67 is the measured ice albedo for Europa's surface. ε is the emissivity of Europa's surface ice. The surface ice density is ρs ~ 920 kg m−3. Finally, σ and Ts are respectively the Stefan-Boltzmann constant and the ice surface temperature. Note that Ts varies over the course of the day as the sun crosses the sky. The first expression on the right hand side of equation (3) represents the gain of solar irradiance while the second represents radiative losses to space. Note that for Europa, equation (3) is very nearly in steady state, which means that to lowest order the expression is satisfied when \((1 - \omega )F_{inc} = \varepsilon \sigma T_s^4.\) Based on analysis of brightness temperature data acquired by Galileo4,5 as well as Voyager thermal emission spectra4,36, we adopt an emissivity ε = 0.90. With peak brightness temperatures at equatorial noon of about Tb ~ 131 K9, the above albedo and emissivity estimates yield a surface ice temperature at equatorial noon of Ts(t = noon) ≡ Ts0 = Tb/ε1/4 ≈ 134.5 K. We shall assume this value to be typical of the equator at noon throughout.
The next equation follows the detailed change of the surface as a result of direct exchange between the atmosphere and vapour pressure driven sublimation,
$$\rho_s q = \rho_s\partial_t h = v_a\left(\rho_a - \rho_{vap}(T_s)\right);\qquad v_a \equiv \left(\frac{kT_s}{2\pi m_{H_2O}}\right)^{1/2}$$
The quantity va is the typical value of the velocity in a Maxwell-Boltzmann distribution at temperature Ts and ρvap(Ts) is the saturation vapour density at Ts. \(m_{H_2O}\) is the mass of an H2O molecule. ρa is the surface mass density of water vapour. The equation represents the rate at which H2O molecules get absorbed by the surface (assuming 100% sticking probability) minus the rate the solid ice ablates due to its ice vapour pressure. Observations of Europa's noon time surface temperature4,5,9 indicate a partial vapour pressure of H2O near its surface of a few times 10−8 Pa (see further below). With the relationship between vapour pressure and density given by ρvap = Pvap/cs2, where \(c_s \equiv \sqrt {kT/m_{H_2O}}\) is the isothermal sound speed, we find that the corresponding value for ρvap is approximately 9.85 × 10−13 kg m–3.
To illustrate the potential for penitente formation, we assume that all emitted water vapour is effectively lost which means setting ρa to zero, because Europa's atmosphere can be approximated as a vacuum. Thus, an upper bound estimate to the amount of surface H2O lost is given by
$$\rho_s \partial_t h = -v_a\rho_{vap}(T_s) = -P_{vap}(T_s)\,v_a/c_s^2$$
Our task is to estimate a daily averaged value for vaρvap, which we hereafter refer to as ρsqavg, and then extrapolate from this daily average to 50 Ma.
Because Ts varies over the course of the day and since Pvap has an Arrhenius form, calculating a daily average for the total number of H2O molecules emitted requires some finesse. However, an analytical form is possible. We designate tday to be the length of one Europan day. We define the daily averaged vapour pressure to be
$$P_{vap}^{avg} \equiv \frac{1}{t_{day}}\int_{t_{day}} P_{vap}(T_s)\,dt$$
For the vapour pressure of H2O, we fit a curve based on the data points acquired for water's phase diagram as summarized in Fray and Schmitt37. We note that the theoretical extrapolation of Feistel et al.38 significantly underestimates H2O's vapour pressure compared to experimental findings for temperatures below T = 140 K39,40, see also Fig. 3 of Fray and Schmitt37. We adopt the following fitted form to be a more accurate representation of H2O's behavior for the temperature range below 140 K:
$$P_{vap}(T) \approx P_0\exp\left[\frac{Lm_{H_2O}}{k}\left(\frac{1}{T_{130}} - \frac{1}{T}\right)\right];\qquad P_0 = 2.30\times 10^{-8}\ \mathrm{Pa},\quad T_{130} \equiv 130\ \mathrm{K}$$
P0 is the measured value of H2O's vapour pressure at T = 130 K based on a fit to the aforementioned experimental measurements39,40. We note that the previously estimated H2O sublimation rates on Europa9, which are based on the theoretical extrapolation of Feistel et al.38, are underestimated by a factor of six or more.
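The fitted form in equation (7) is easy to evaluate numerically. A minimal Python sketch, taking the mass of an H2O molecule as 18.015 u and the constants quoted above, recovers P0 at 130 K and the value of roughly 1 × 10−7 Pa used later at 134 K:

```python
import math

k_B = 1.380649e-23            # Boltzmann constant, J K^-1
m_h2o = 18.015 * 1.66054e-27  # mass of one H2O molecule, kg (assumed 18.015 u)
L = 3.0e6                     # heat of sublimation of H2O, J kg^-1
P0 = 2.30e-8                  # vapour pressure at 130 K, Pa
T130 = 130.0

def p_vap(T):
    """Fitted H2O vapour pressure of equation (7), intended for temperatures below ~140 K."""
    return P0 * math.exp((L * m_h2o / k_B) * (1.0 / T130 - 1.0 / T))

print(p_vap(130.0))  # recovers P0 = 2.3e-8 Pa by construction
print(p_vap(134.0))  # ~1.0e-7 Pa, the value used for the equatorial noon estimate
```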
Given Pvap's strong exponential dependence on 1/T, over the course of one day the majority of surface sublimated H2O is emitted within a few hours around noon. Combining equation (3) with Europa's surface brightness temperature analysis4, the latter of which shows that Europa's surface temperature does not fall much below 74 K after the Sun sets, we adopt the following expression for Europa's surface temperature over the course of one Europan day:
$$T_s = \max\left[T_{s0}\left(\cos\frac{2\pi t}{t_{day}}\right)^{1/4},\,T_{min}\right];\qquad T_{s0} \equiv \left[\frac{(1-\omega)F_{inc}}{\sigma\epsilon}\right]^{1/4}$$
where we have introduced Ts0 to be the latitudinal dependent local noontime surface temperature. Because our concern is mostly centered on the few hours around noon, the surface temperature expression in equation (8) may be Taylor expanded as
$$T_s \approx T_{s0}\left[1 - \frac{1}{8}\left(\frac{2\pi t}{t_{day}}\right)^2\right]$$
Inserting equation (9) into the daily averaged integral expression equation (6) via equation (7), and making use of well-known techniques in the asymptotic evaluation of integrals41 we arrive at
$$P_{vap}^{avg} = \delta(T_{s0})\,P_{vap}(T_{s0});\qquad \delta(T_{s0}) \equiv \sqrt{\frac{2kT_{s0}}{\pi Lm_{H_2O}}} = \frac{2v_a}{\sqrt{L}}$$
and the corresponding daily averaged flux of sublimated gas is given by the expression
$$\rho_s q^{avg} \approx \frac{P_{vap}^{avg}(T_{s0})}{4\pi v_a(T_{s0})} = \frac{\delta(T_{s0})\,P_{vap}(T_{s0})}{4\pi v_a(T_{s0})} = \frac{P_{vap}(T_{s0})}{2\pi\sqrt{L}}$$
Equation (10) says that the daily averaged vapour pressure is equal to the vapour pressure at noon diminished by the multiplicative factor δ, while equation (11) gives the corresponding daily averaged sublimated mass-flux of H2O from the surface.
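As a cross-check on the closed form, the factor δ in equation (10) can be compared against a brute-force daily average of Pvap(Ts) built from equations (7) and (8). The two agree to within a few per cent; the small difference reflects the Taylor expansion in equation (9). A Python sketch of the comparison:

```python
import math

k_B = 1.380649e-23
m_h2o = 18.015 * 1.66054e-27
L = 3.0e6
P0, T130 = 2.30e-8, 130.0
T_s0, T_min = 134.0, 74.0

def p_vap(T):
    # Equation (7)
    return P0 * math.exp((L * m_h2o / k_B) * (1.0 / T130 - 1.0 / T))

def surface_temperature(phase):
    """Equation (8): phase = 2*pi*t/t_day, with t = 0 at local noon."""
    c = math.cos(phase)
    return max(T_s0 * c ** 0.25, T_min) if c > 0.0 else T_min

# Brute-force daily average of P_vap over one Europan day
n = 200_000
avg = sum(p_vap(surface_temperature(2.0 * math.pi * i / n - math.pi))
          for i in range(n)) / n

delta = math.sqrt(2.0 * k_B * T_s0 / (math.pi * L * m_h2o))  # equation (10)
print(avg / p_vap(T_s0))  # ~0.11 from the direct integral
print(delta)              # ~0.115 from the closed form
```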
For example, for a surface temperature at the equator in which Ts0 = 134 K, we find that va ≈ 98.5 m s−1 and that δ = 0.114. Based on equation (7), Pvap(Ts0) ≈ 1.02 × 10−7 Pa. Thus, the daily averaged mass flux of H2O at the equator is approximately ρsqavg = 9.37 × 10−12 kg m−2 s−1, which is equivalent to 3.13 × 1010 H2O molecules cm−2 s−1—a figure that is 6–9 times larger than previous estimates42,43. This loss rate translates to approximately 300 kg m−2 Ma−1, which is equivalent to about 15 m of ice over 50 Ma.
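The worked numbers in this paragraph can be reproduced end to end with a few lines of Python; small differences in the last digit relative to the quoted values come from rounding of the physical constants:

```python
import math

k_B = 1.380649e-23
m_h2o = 18.015 * 1.66054e-27
L = 3.0e6
rho_s = 920.0               # density of water ice, kg m^-3
T_s0 = 134.0                # equatorial noon temperature, K
P0, T130 = 2.30e-8, 130.0

p_vap = P0 * math.exp((L * m_h2o / k_B) * (1.0 / T130 - 1.0 / T_s0))  # equation (7)
v_a = math.sqrt(k_B * T_s0 / (2.0 * math.pi * m_h2o))
delta = math.sqrt(2.0 * k_B * T_s0 / (math.pi * L * m_h2o))
flux = p_vap / (2.0 * math.pi * math.sqrt(L))                         # equation (11)

seconds_per_Ma = 1.0e6 * 3.156e7
print(v_a, delta)                          # ~99 m/s and ~0.115
print(flux)                                # ~9.4e-12 kg m^-2 s^-1
print(flux / m_h2o / 1.0e4)                # ~3.1e10 molecules cm^-2 s^-1
print(flux * seconds_per_Ma / rho_s * 50)  # ~16 m of ice lost over 50 Ma
```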
The data that support the findings of this study are available on the NASA Planetary Data System (PDS) (https://pds.nasa.gov/).
Greeley, R. et al. in Jupiter: The Planet, Satellites and Magnetosphere (eds Bagenal, F. & Dowling, T. E.) 329–362 (Cambridge University Press, Cambridge, 2004).
Luchitta, B. K. & Soderblom, L. A. in Satellites of Jupiter (ed. Morrison, D.) 521–555 (University of Arizona Press, Tucson, 1982).
Schenk, P. M., Matsuyama, I. & Nimmo, F. True polar wander on Europa from global-scale small-circle depressions. Nature 453, 368–371 (2008).
Spencer, J. R., Tamppari, L. K., Martin, T. Z. & Travis, L. D. Temperatures on Europa from Galileo photopolarimeter-radiometer: nighttime thermal anomalies. Science 284, 1514–1516 (1999).
Moore, J. M. et al. in Europa (eds Pappalardo, R. T. et al.) 329–349 (University of Arizona Press, Tucson, 2009).
Moore, J. M. et al. Mass movement and landform degradation on the icy Galilean satellites: results of the Galileo Nominal Mission. Icarus 140, 294–312 (1999).
Buratti, B. J. & Golombek, M. P. Geologic implications of spectrophotometric measurements of Europa. Icarus 75, 437–449 (1988).
Bierhaus, E. B., Zahnle, K. & Chapman, C. R. in Europa (eds Pappalardo, R. T., McKinnon, W. B. & Khurana, K. K.) 161–180 (University of Arizona Press, Tucson, 2009).
Johnson, R. E. et al. in Europa (eds Pappalardo, R. T., McKinnon, W. B. & Khurana, K. K.) 507–527 (University of Arizona Press, Tucson, 2009).
Domingue, D. L., Hapke, B. W., Lockwood, G. W. & Thompson, D. T. Europa's phase curve: implications for surface structure. Icarus 90, 30–42 (1991).
Betterton, M. Theory of structure formation in snowfields motivated by penitentes, suncups, and dirt cones. Phys. Rev. E 63, 056129 (2001).
Lliboutry, L. The origin of penitents. J. Glaciol. 2, 331–338 (1954).
Corripio, J. G. Modelling the Energy Balance of High Altitude Glacierised Basins in the Central Andes (Univ. Edinburgh, 2002).
Cathles, L. M. Radiative Energy Transport on the Surface of an Ice Sheet (Univ. Chicago, 2011).
Rhodes, J. J., Armstrong, R. L. & Warren, S. G. Mode of formation of 'ablation hollows' controlled by dirt content of snow. J. Glaciol. 33, 135–139 (1987).
Claudin, P., Jarry, H., Vignoles, G., Plapp, M. & Andreotti, B. Physical processes causing the formation of penitentes. Phys. Rev. E 92, 033015 (2015).
Ostro, S. J. et al. Europa, Ganymede, and Callisto: new radar results from Arecibo and Goldstone. J. Geophys. Res. 97, 18227–18244 (1992).
Ostro, S. J. in Satellites of Jupiter (ed. Morrison, D.) 213–236 (Univ. Arizona Press, Tucson, 1982).
Moore, J. M. et al. Bladed terrain on Pluto: possible origins and evolution. Icarus 300, 129–144 (2018).
Saur, J., Strobel, D. F. & Neubauer, F. M. Interaction of the Jovian magnetosphere with Europa: constraints on the neutral atmosphere. J. Geophys. Res. 103, 19947–19962 (1998).
Moore, J. M. et al. Sublimation as a landform-shaping process on Pluto. Icarus 287, 320–333 (2017).
Bergeron, V., Berger, C. & Betterton, M. Controlled irradiative formation of penitentes. Phys. Rev. Lett. 96, 098502 (2006).
Cathles, L. M., Abbot, D. S. & MacAyeal, D. R. Intra-surface radiative transfer limits the geographic extent of snow penitents on horizontal snowfields. J. Glaciol. 60, 147–154 (2014).
Lhermitte, S., Abermann, J. & Kinnard, C. Albedo over rough snow and ice surfaces. Cryosphere 8, 1069–1086 (2014).
Ward, W. R. & Canup, R. M. The obliquity of Jupiter. Astrophys. J. 640, L91–L94 (2006).
Bierhaus, E. B. et al. Pwyll secondaries and other small craters on Europa. Icarus 153, 264–276 (2001).
Paranicas, C., Cooper, J. F., Garrett, H. B., Johnson, R. E. & Sturner, S. J. in Europa (eds Pappalardo, R. T., McKinnon, W. B. & Khurana, K. K.) 529–544 (Univ. Arizona Press, Tucson, 2009).
Morrison, D. & Morrison, N. in Planetary Satellites (ed. Burns, J.) 363–378 (Univ. Arizona Press, Tucson, 1977).
Grundy, W. M. et al. New horizons mapping of Europa and Ganymede. Science 318, 234–237 (2007).
Schenk, P. M. Slope characteristics of Europa: constraints for landers and radar sounding. Geophys. Res. Lett. 36, L15204 (2009).
Rathbun, J. A., Rodriguez, N. J. & Spencer, J. R. Galileo PPR observations of Europa: hotspot detection limits and surface thermal properties. Icarus 210, 763–769 (2010).
Goldstein, R. M. & Green, R. R. Ganymede: radar surface characteristics. Science 207, 179–180 (1980).
Ostro, S. J. & Pettengill, G. H. Icy craters on the galilean satellites? Icarus 34, 268–279 (1978).
Campbell, B. A. High circular polarization ratios in radar scattering from geologic targets. J. Geophys. Res. 117, E06008 (2012).
Lebofsky, L. A. Stability of frosts in the solar system. Icarus 25, 205–217 (1975).
Spencer, J. R. The Surfaces of Europa, Ganymede, and Callisto: An Investigation Using Voyager IRIS Thermal Infrared Spectra (Univ. Arizona, 1987).
Fray, N. & Schmitt, B. Sublimation of ices of astrophysical interest: a bibliographic review. Planet. Space. Sci. 57, 2053–2080 (2009).
Feistel, R. & Wagner, W. Sublimation pressure and sublimation enthalpy of H2O ice Ih between 0 and 273.16 K. Geochim. Cosmochim. Acta 71, 36–45 (2007).
White, B. E., Hessinger, J. & Pohl, R. O. Annealing and sublimation of noble gas and water ice films. J. Low Temp. Phys. 111, 233–246 (1998).
Mauersberger, K. & Krankowsky, D. Vapor pressure above ice at temperatures below 170 K. Geophys. Res. Lett. 30, 1121 (2003).
Bender, C. M. & Orszag, S. A. Advanced Mathematical Methods for Scientists and Engineers (McGraw-Hill, New York, 1978).
Shematovich, V. I., Johnson, R. E., Cooper, J. F. & Wong, M. C. Surface bound atmosphere of Europa. Icarus 173, 480–498 (2005).
Smyth, W. H. & Marconi, M. L. Europa's atmosphere, gas tori, and magnetospheric implications. Icarus 181, 510–526 (2006).
We thank D. Blankenship, K. Mitchell, F. Nimmo and G. Tucker, and especially J. Spencer for discussions that shaped the form of this paper. Funding was from the Europa Pre-Project Mission Concept Study via the Jet Propulsion Laboratory, California Institute of Technology. We are grateful to P. Engebretson for contribution to figure production. We thank C. Chavez for her help with manuscript preparation.
School of Earth & Ocean Sciences, Cardiff University, Cardiff, UK
Daniel E. J. Hobley
NASA Ames Research Center, Moffett Field, CA, USA
Jeffrey M. Moore & Orkan M. Umurhan
Dept. Environmental Sciences, University of Virginia, Charlottesville, VA, USA
Alan D. Howard
D.E.J.H. compiled data, performed and interpreted numerical analyses, and wrote the bulk of the paper. J.M.M. conceived and designed the study and organized the revision of the manuscript. A.D.H. was involved in the study, design, interpretation, and revision. Both J.M.M. and A.D.H. performed preliminary analyses. O.M.U. significantly revised the numerical analyses found in the Methods section. All authors discussed the results and commented on the manuscript.
Correspondence to Daniel E. J. Hobley.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Hobley, D.E.J., Moore, J.M., Howard, A.D. et al. Formation of metre-scale bladed roughness on Europa's surface by ablation of ice. Nature Geosci 11, 901–904 (2018). https://doi.org/10.1038/s41561-018-0235-0
An insight into the mathematical mind: Box-Muller transforms
Written by Colin+ in ranting.
While writing an obituary for George Box, I stumbled on something I thought was ingenious: a method for generating independent pairs of numbers drawn from the normal distribution.
I'll concede: that's not necessarily something that makes the average reader-in-the-street stop in their tracks and say "Wow!" In honesty, it would probably make the average reader-in-the-street rapidly become a reader-on-the-other-side-of-the-street. However, I thought an article on it might provide some insight into two mathematical minds: that of George Box, one of the greatest1 statisticians of the 20th century, and that of me, possibly the greatest mathematical hack of the 21st.
How the Box-Muller transform works
If you want to apply the Box-Muller transform, you need two numbers drawn from a uniform distribution - so they're equally likely to take on any value between 0 and 1. Let's call these numbers $U$ and $V$. Box and Muller claim that if you work out
$$X = \sqrt{-2 \ln (U)} \cos (2\pi V)$$
$$Y = \sqrt{-2 \ln (U)} \sin (2\pi V)$$
then $X$ and $Y$ are independent (information about one tells you nothing about the other) and normally distributed with a mean of 0 and a standard deviation of 1. I'm not going to prove that, because I don't know how, but I can explain what's happening.
There's a hint in my choices of letter: you might recognise that you could simplify these down to $X = R \cos(\theta)$ and $Y = R\sin(\theta)$, which are just the sides of a triangle. The $R = \sqrt{-2\ln(U)}$ is the distance from $(0,0)$ - because $U$ is between 0 and 1, $\ln(U)$ is anywhere from $-\infty$ to 02. Multiplying by -2 turns it into a nice positive number (so you can take its square root) and tends to reduce the distance from the origin. For normally-distributed variables, you want the distances to clump up in the middle; that's what the 2 is for.
The $\theta = 2\pi V$ is much simpler: it just says 'move in a random direction'.
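If you want to see it in action, here's a minimal Python sketch of the transform: just the two formulas above, plus a check that the output really does have mean 0 and variance 1. (The 1 - random() bit is only there to dodge $\ln(0)$.)

```python
import math
import random

def box_muller(n):
    """Return n standard normal samples via the Box-Muller transform."""
    samples = []
    for _ in range((n + 1) // 2):
        u = 1.0 - random.random()   # in (0, 1], so log(u) is safe
        v = random.random()
        r = math.sqrt(-2.0 * math.log(u))
        theta = 2.0 * math.pi * v
        samples.append(r * math.cos(theta))
        samples.append(r * math.sin(theta))
    return samples[:n]

xs = box_muller(100_000)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(mean, var)  # should be close to 0 and 1
```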
What Colin did next
My immediate thought was, 'I wonder if I can use that to work out the probability tables for $z$-scores you get in formula books!' What do you mean, that wasn't your immediate thought?3 Long story short: the answer is no; I just wanted to show you my thought process and that not everything in maths works out as neatly as you'd like.
My insight was that the probability of generating an $X$ value smaller than some constant $k$ would be the same as the probability of generating $U$ and $V$ values that gave smaller $X$s. So far so obvious! In that case, it's just a case of rearranging the formulas to get expressions for (say) $V$ in terms of $U$ and integrating to find the appropriate area.
So I tried that:
$$\sqrt{-2 \ln (U)} \cos(2\pi V) = k \\
\cos(2\pi V) = \sqrt{ \frac{k^2}{-2\ln(U)}} \\
V = \frac{1}{2\pi}\cos^{-1}\left( \sqrt{ \frac{k^2}{-2\ln(U)}} \right)$$
Yikes. I don't fancy trying to integrate that - the arccos is bad enough, but the $\ln(U)$ on the bottom? Forget about it.
Let's try the other way:
$$-2\ln(U) = k^2 \sec^2(2\pi V) \\
U = e^{-\frac{k^2}{2}\sec^2(2\pi V)}$$
Curses! I don't think that's going to work, either. $e^{\sec^2 x}$ isn't an integral I know how to do - so I'm stymied.
Back to the drawing board, I'm afraid - this time, I didn't get the cookie of a new maths discovery; the difference between a poor mathematician and a decent mathematician is that a poor mathematician says "I got it wrong, I'm rubbish;" the decent mathematician says either "ah well. Next puzzle!" or "ah well! Try again."
The great mathematicians, of course, see right to the end of the puzzle before they start.
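For what it's worth, the brute-force route does work: generate a big pile of Box-Muller samples and count how many land below $k$. A rough Python sketch, which lands near the familiar 0.975 for $k = 1.96$:

```python
import math
import random

def standard_normal_pair():
    u = 1.0 - random.random()   # avoid log(0)
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

def phi_estimate(k, n_pairs=500_000):
    """Monte Carlo estimate of P(Z < k) using Box-Muller samples."""
    below = 0
    for _ in range(n_pairs):
        x, y = standard_normal_pair()
        below += (x < k) + (y < k)
    return below / (2 * n_pairs)

print(phi_estimate(1.96))  # ~0.975, as the formula-book tables say
```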
If not the greatest [↩]
It's exponentially distributed, since you ask. [↩]
Weirdo! [↩]
From formulasearchengine
The Andromeda Galaxy is a spiral galaxy approximately 780 kiloparsecs (2.5 million light-years; about 2.4 × 10^19 km) from Earth[1] in the Andromeda constellation. Also known as Messier 31, M31, or NGC 224, it is often referred to as the Great Andromeda Nebula in older texts. The Andromeda Galaxy is the nearest spiral galaxy to our Milky Way galaxy, but not the nearest galaxy overall. It gets its name from the area of the sky in which it appears, the constellation of Andromeda, which was named after the mythological princess Andromeda. The Andromeda Galaxy is the largest galaxy of the Local Group, which also contains the Milky Way, the Triangulum Galaxy, and about 44 other smaller galaxies.
The Andromeda Galaxy is probably the most massive galaxy in the Local Group as well,[2] despite earlier findings that suggested that the Milky Way contains more dark matter and could be the most massive in the grouping.[3] The 2006 observations by the Spitzer Space Telescope revealed that M31 contains one trillion (1012) stars:[4] at least twice the number of stars in the Milky Way galaxy, which is estimated to be 200–400 billion.[5]
The Andromeda Galaxy is estimated to be about 1.5 trillion solar masses,[2] while the mass of the Milky Way is estimated to be about 850 billion solar masses. In comparison, a 2009 study estimated that the Milky Way and M31 are about equal in mass,[6] while a 2006 study put the mass of the Milky Way at ~80% of the mass of the Andromeda Galaxy. The two galaxies are expected to collide in 3.75 billion years, eventually merging to form a giant elliptical galaxy[7] or perhaps a large disk galaxy.[8]
With an apparent magnitude of 3.4, the Andromeda Galaxy is among the brightest of the Messier objects,[9] making it visible to the naked eye on moonless nights even when viewed from areas with moderate light pollution. Although it appears more than six times as wide as the full Moon when photographed through a larger telescope, only the brighter central region is visible to the naked eye or through binoculars or a small telescope.
Observation history
Great Andromeda Nebula by Isaac Roberts, 1899
The Persian astronomer Abd al-Rahman al-Sufi wrote a line about the chained constellation in his Book of Fixed Stars around 964, describing it as a "small cloud".[10][11] Star charts of that period have it labeled as the Little Cloud.[11] The first description of the object based on telescopic observation was given by German astronomer Simon Marius on December 15, 1612.[12] Charles Messier catalogued it as object M31 in 1764 and incorrectly credited Marius as the discoverer, unaware of Al Sufi's earlier work. In 1785, the astronomer William Herschel noted a faint reddish hue in the core region of M31. He believed it to be the nearest of all the "great nebulae" and based on the color and magnitude of the nebula, he incorrectly guessed that it was no more than 2,000 times the distance of Sirius.[13]
William Huggins in 1864 observed the spectrum of M31 and noted that it differed from a gaseous nebula.[14] The spectra of M31 displayed a continuum of frequencies, superimposed with dark absorption lines that help identify the chemical composition of an object. The Andromeda nebula was very similar to the spectra of individual stars, and from this it was deduced that M31 had a stellar nature. In 1885, a supernova (known as S Andromedae) was seen in M31, the first and so far only one observed in that galaxy. At the time M31 was considered to be a nearby object, so the cause was thought to be a much less luminous and unrelated event called a nova, and was named accordingly "Nova 1885".[15]
M31 above the Very Large Telescope.[16]
The first photographs of M31 were taken in 1887 by Isaac Roberts from his private observatory in Sussex, England. The long-duration exposure allowed the spiral structure of the galaxy to be seen for the first time.[17] However, at the time this object was still commonly believed to be a nebula within our galaxy, and Roberts mistakenly believed that M31 and similar spiral nebulae were actually solar systems being formed, with the satellites nascent planets.[citation needed] The radial velocity of this object with respect to our solar system was measured in 1912 by Vesto Slipher at the Lowell Observatory, using spectroscopy. The result was the largest velocity recorded at that time, at Template:Convert, moving in the direction of the Sun.[18]
Island universe
Location of M31 in the Andromeda constellation
In 1917, American astronomer Heber Curtis observed a nova within M31. Searching the photographic record, 11 more novae were discovered. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred elsewhere in the sky. As a result he was able to come up with a distance estimate of Template:Convert. He became a proponent of the so-called "island universes" hypothesis, which held that spiral nebulae were actually independent galaxies.[19]
In 1920, the Great Debate between Harlow Shapley and Curtis took place, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula was an external galaxy, Curtis also noted the appearance of dark lanes resembling the dust clouds in our own Galaxy, as well as the significant Doppler shift. In 1922 Ernst Öpik presented a method to estimate the distance of M31 using the measured velocities of its stars. His result put the Andromeda Nebula far outside our Galaxy at a distance of about Template:Convert.[20] Edwin Hubble settled the debate in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of M31. These were made using the 2.5-metre (100-in) Hooker telescope, and they enabled the distance of Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within our Galaxy, but an entirely separate galaxy located a significant distance from our own.[21]
Stars in the Andromeda Galaxy's disc[22]
M31 plays an important role in galactic studies, since it is the nearest spiral galaxy (although not the nearest galaxy). In 1943 Walter Baade was the first person to resolve stars in the central region of the Andromeda Galaxy. Based on his observations of this galaxy, he was able to discern two distinct populations of stars based on their metallicity, naming the young, high velocity stars in the disk Type I and the older, red stars in the bulge Type II. This nomenclature was subsequently adopted for stars within the Milky Way, and elsewhere. (The existence of two distinct populations had been noted earlier by Jan Oort.)[23] Baade also discovered that there were two types of Cepheid variables, which resulted in a doubling of the distance estimate to M31, as well as the remainder of the Universe.[24]
Radio emission from the Andromeda Galaxy was first detected by Hanbury Brown and Cyril Hazard at Jodrell Bank Observatory using the 218-ft Transit Telescope, and was announced in 1950.[25][26] (Earlier observations were made by radio astronomy pioneer Grote Reber in 1940, but were inconclusive, and were later shown to be an order of magnitude too high.) The first radio maps of the galaxy were made in the 1950s by John Baldwin and collaborators at the Cambridge Radio Astronomy Group.[27] The core of the Andromeda Galaxy is called 2C 56 in the 2C radio astronomy catalogue. In 2009, the first planet may have been discovered in the Andromeda Galaxy. This candidate was detected using a technique called microlensing, which is caused by the deflection of light by a massive object.[28]
The Andromeda Galaxy as seen by NASA's Wide-field Infrared Survey Explorer
The measured distance to the Andromeda Galaxy was doubled in 1953 when it was discovered that there is another, dimmer type of Cepheid. In the 1990s, measurements of both standard red giants as well as red clump stars from the Hipparcos satellite measurements were used to calibrate the Cepheid distances.[29][30]
Formation and history
According to a team of astronomers reporting in 2010, M31 was formed out of the collision of two smaller galaxies between 5 and 9 billion years ago.[31]
A paper published in 2012[32] has outlined M31's basic history since its birth. According to it, Andromeda was born roughly 10 billion years ago from the merger of many smaller protogalaxies, leading to a galaxy smaller than the one we see today.
The most important event in M31's past history was the merger mentioned above that took place 8 billion years ago. This violent collision formed most of its (metal-rich) galactic halo and extended disk and during that epoch Andromeda's star formation would have been very high, to the point of becoming a luminous infrared galaxy for roughly 100 million years.
M31 and the Triangulum Galaxy (M33) had a very close passage 2–4 billion years ago. This event produced high levels of star formation across the Andromeda Galaxy's disk – even some globular clusters – and disturbed M33's outer disk.
While there has been activity during the last 2 billion years, this has been much lower than during the past. During this epoch, star formation throughout M31's disk decreased to the point of nearly shutting down, then increased again relatively recently. There have been interactions with satellite galaxies like M32, M110, or others that have already been absorbed by M31. These interactions have formed structures like Andromeda's Giant Stellar Stream. A merger roughly 100 million years ago is believed to be responsible for a counter-rotating disk of gas found in the center of M31 as well as the presence there of a relatively young (100 million years old) stellar population.
Recent distance estimate
At least four distinct techniques have been used to measure distances to the Andromeda Galaxy.
In 2003, using the infrared surface brightness fluctuations (I-SBF) and adjusting for the new period-luminosity value of Freedman et al. 2001 and using a metallicity correction of −0.2 mag dex−1 in (O/H), an estimate of Template:Convert was derived.
The Andromeda Galaxy pictured in ultraviolet light by GALEX
Using the Cepheid variable method, an estimate of 2.51 ± 0.13 Mly (770 ± 40 kpc) was reported in 2004.[33][34]
In 2005 Ignasi Ribas (CSIC, Institute for Space Studies of Catalonia (IEEC)) and colleagues announced the discovery of an eclipsing binary star in the Andromeda Galaxy. The binary star, designated M31VJ00443799+4129236,Template:Efn has two luminous and hot blue stars of types O and B. By studying the eclipses of the stars, which occur every 3.54969 days, the astronomers were able to measure their sizes. Knowing the sizes and temperatures of the stars, they were able to measure the absolute magnitude of the stars. When the visual and absolute magnitudes are known, the distance to the star can be measured. The stars lie at a distance of Template:Convert and the whole Andromeda Galaxy at about Template:Convert.[1] This new value is in excellent agreement with the previous, independent Cepheid-based distance value.
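The final step, going from apparent and absolute magnitudes to a distance, is the standard distance modulus relation $d = 10^{(m - M + 5)/5}$ pc. A minimal Python sketch follows; the modulus of 24.4 mag is an illustrative value of roughly the right size for M31, not a figure taken from this article:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance from the distance modulus: m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# Illustrative numbers only: a distance modulus (m - M) of about 24.4 mag
d_pc = distance_parsecs(24.4, 0.0)
print(d_pc / 1.0e3)           # ~760 kpc
print(d_pc * 3.2616 / 1.0e6)  # ~2.5 million light-years
```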
M31 is close enough that the Tip of the Red Giant Branch (TRGB) method may also be used to estimate its distance. The estimated distance to M31 using this technique in 2005 yielded Template:Convert.[35]
Averaged together, all these distance measurements give a combined distance estimate of Template:Convert.Template:Efn Based upon the above distance, the diameter of M31 at the widest point is estimated to be Template:Convert. Applying trigonometry (arctangent), this corresponds to an apparent angular extent of about 3.18° in the sky.
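The arctangent step can be made concrete with a short Python sketch. The diameter and distance below are assumed, illustrative values only (chosen so that the result matches the quoted 3.18°); they are not taken from the article text:

```python
import math

def apparent_angle_degrees(diameter, distance):
    """Apparent angular size of an object of the given diameter at the given distance."""
    return math.degrees(2.0 * math.atan(diameter / (2.0 * distance)))

# Illustrative values only, both in light-years (not taken from the article text)
print(apparent_angle_degrees(141_000, 2_540_000))  # ~3.18 degrees
```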
Mass and luminosity estimates
Mass estimates for the Andromeda Galaxy's halo (including dark matter) give a value of approximately Template:Solar mass[2] (or 1.5 trillion solar masses) compared to Template:Solar mass for the Milky Way. This contradicts earlier measurements, that seem to indicate that Andromeda and the Milky Way are almost equal in mass. Even so, M31's spheroid actually has a higher stellar density than that of the Milky Way[36] and its galactic stellar disk is about twice the size of that of the Milky Way.[37]
M31 appears to have significantly more common stars than the Milky Way, and the estimated luminosity of M31, Template:Solar luminosity, is about 25% higher than that of our own galaxy.[38] However, the galaxy has a high inclination as seen from Earth, and its interstellar dust absorbs an unknown amount of light, so it is difficult to estimate its actual brightness; other authors have therefore given other values for the luminosity of the Andromeda Galaxy, including the proposal that it is the second-brightest galaxy within a radius of 10 megaparsecs of the Milky Way, after the Sombrero Galaxy.[39] The most recent estimate (made in 2010 with the help of the Spitzer Space Telescope) suggests an absolute magnitude in the blue of −20.89 (which, with a color index of +0.63, translates to an absolute visual magnitude of −21.52,Template:Efn compared to −20.9 for the Milky Way) and a total luminosity in that wavelength of Template:Solar luminosity.[40]
The rate of star formation in the Milky Way is much higher, with M31 producing only about one solar mass per year compared to 3–5 solar masses for the Milky Way. The rate of supernovae in the Milky Way is also double that of M31.[41] This suggests that M31 once experienced a great star formation phase, but is now in a relative state of quiescence, whereas the Milky Way is experiencing more active star formation.[38] Should this continue, the luminosity in the Milky Way may eventually overtake that of M31.
According to recent studies, like the Milky Way, the Andromeda Galaxy lies in what in the galaxy color–magnitude diagram is known as the green valley, a region populated by galaxies in transition from the blue cloud (galaxies actively forming new stars) to the red sequence (galaxies that lack star formation). Star formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium. In simulated galaxies with similar properties, star formation will typically have been extinguished within about five billion years from now, even accounting for the expected, short-term increase in the rate of star formation due to the collision between Andromeda and the Milky Way.[42]
The Andromeda Galaxy seen in infrared by the Spitzer Space Telescope, one of NASA's four Great Space Observatories
Image of the Andromeda Galaxy taken by Spitzer in infrared, 24 micrometres (Credit:NASA/JPL–Caltech/K. Gordon, University of Arizona)
A Galaxy Evolution Explorer image of the Andromeda Galaxy. The bands of blue-white making up the galaxy's striking rings are neighborhoods that harbor hot, young, massive stars. Dark blue-grey lanes of cooler dust show up starkly against these bright rings, tracing the regions where star formation is currently taking place in dense cloudy cocoons. When observed in visible light, Andromeda's rings look more like spiral arms. The ultraviolet view shows that these arms more closely resemble the ring-like structure previously observed in infrared wavelengths with NASA's Spitzer Space Telescope. Astronomers using Spitzer interpreted these rings as evidence that the galaxy was involved in a direct collision with its neighbor, M32, more than 200 million years ago.
Based on its appearance in visible light, the Andromeda Galaxy is classified as an SA(s)b galaxy in the de Vaucouleurs–Sandage extended classification system of spiral galaxies.[43] However, data from the 2MASS survey showed that the bulge of M31 has a box-like appearance, which implies that the galaxy is actually a barred spiral galaxy like the Milky Way, with the Andromeda Galaxy's bar viewed almost directly along its long axis.[44]
In 2005, astronomers used the Keck telescopes to show that the tenuous sprinkle of stars extending outward from the galaxy is actually part of the main disk itself.[37] This means that the spiral disk of stars in M31 is three times larger in diameter than previously estimated. This constitutes evidence that there is a vast, extended stellar disk that makes the galaxy more than Template:Convert in diameter. Previously, estimates of the Andromeda Galaxy's size ranged from Template:Convert across.
The galaxy is inclined an estimated 77° relative to the Earth (where an angle of 90° would be viewed directly from the side). Analysis of the cross-sectional shape of the galaxy appears to demonstrate a pronounced, S-shaped warp, rather than just a flat disk.[45] A possible cause of such a warp could be gravitational interaction with the satellite galaxies near M31. The galaxy M33 could be responsible for some warp in M31's arms, though more precise distances and radial velocities are required.
Spectroscopic studies have provided detailed measurements of the rotational velocity of M31 at various radii from the core. In the vicinity of the core, the rotational velocity climbs to a peak of Template:Convert at a radius of Template:Convert, then descends to a minimum at Template:Convert where the rotation velocity may be as low as Template:Convert. Thereafter the velocity steadily climbs again out to a radius of Template:Convert, where it reaches a peak of Template:Convert. The velocities slowly decline beyond that distance, dropping to around Template:Convert at Template:Convert. These velocity measurements imply a concentrated mass of about Template:Solar mass in the nucleus. The total mass of the galaxy increases linearly out to Template:Convert, then more slowly beyond that radius.[46]
The spiral arms of M31 are outlined by a series of H II regions, first studied in great detail by Walter Baade and described by him as resembling "beads on a string". His studies show two spiral arms that appear to be tightly wound, although they are more widely spaced than in our galaxy.[47] His descriptions of the spiral structure, as each arm crosses the major axis of M31, are as follows ([48] p. 1062; [49] p. 92):
Baade's spiral arms of M31 (N = crossing M31's major axis at north, S = crossing it at south)

Arm (N/S) | Distance from center (arcmin, N/S) | Distance from center (kpc, N/S) | Description
N1/S1 | 3.4/1.7 | 0.7/0.4 | Dust arms with no OB associations or HII regions.
N2/S2 | 8.0/10.0 | 1.7/2.1 | Dust arms with some OB associations.
N3/S3 | 25/30 | 5.3/6.3 | As per N2/S2, but with some HII regions too.
N4/S4 | 50/47 | 11/9.9 | Large numbers of OB associations, HII regions, and little dust.
N5/S5 | 70/66 | 15/14 | As per N4/S4 but much fainter.
N6/S6 | 91/95 | 19/20 | Loose OB associations. No dust visible.
N7/S7 | 110/116 | 23/24 | As per N6/S6 but fainter and inconspicuous.
Since the Andromeda Galaxy is seen close to edge-on, however, the studies of its spiral structure are difficult. While as stated above rectified images of the galaxy seem to show a fairly normal spiral galaxy with the arms wound up in a clockwise direction, exhibiting two continuous trailing arms that are separated from each other by a minimum of about Template:Convert and that can be followed outward from a distance of roughly Template:Convert from the core, other alternative spiral structures have been proposed such as a single spiral arm[50] or a flocculent[51] pattern of long, filamentary, and thick spiral arms.[43][52]
The most likely cause of the distortions of the spiral pattern is thought to be interaction with galaxy satellites M32 and M110.[53] This can be seen by the displacement of the neutral hydrogen clouds from the stars.[54]
In 1998, images from the European Space Agency's Infrared Space Observatory demonstrated that the overall form of the Andromeda Galaxy may be transitioning into a ring galaxy. The gas and dust within M31 is generally formed into several overlapping rings, with a particularly prominent ring formed at a radius of Template:Convert from the core.[55] This ring is hidden from visible light images of the galaxy because it is composed primarily of cold dust.
Later studies with the help of the Spitzer Space Telescope showed how Andromeda's spiral structure in the infrared appears to be composed of two spiral arms that emerge from a central bar and continue beyond the large ring mentioned above. Those arms, however, are not continuous and have a segmented structure.[53]
Close examination of the inner region of M31 with the same telescope also showed a smaller dust ring that is believed to have been caused by the interaction with M32 more than 200 million years ago. Simulations show that the smaller galaxy passed through the disk of the galaxy in Andromeda along the latter's polar axis. This collision stripped more than half the mass from the smaller M32 and created the ring structures in M31.[56] It is the co-existence of the long-known large ring-like feature in the gas of Messier 31, together with this newly discovered inner ring-like structure, offset from the barycenter, that suggested a nearly head-on collision with the satellite M32, a milder version of the Cartwheel encounter.[57]
Studies of the extended halo of M31 show that it is roughly comparable to that of the Milky Way, with stars in the halo being generally "metal-poor", and increasingly so with greater distance.[36] This evidence indicates that the two galaxies have followed similar evolutionary paths. They are likely to have accreted and assimilated about 100–200 low-mass galaxies during the past 12 billion years.[58] The stars in the extended halos of M31 and the Milky Way may extend nearly one-third the distance separating the two galaxies.
HST image of the Andromeda Galaxy core showing possible double structure. NASA/ESA photo
M31 is known to harbor a dense and compact star cluster at its very center. In a large telescope it creates a visual impression of a star embedded in the more diffuse surrounding bulge. The luminosity of the nucleus is in excess of the most luminous globular clusters.[citation needed]
Chandra X-ray telescope image of the center of M31. A number of X-ray sources, likely X-ray binary stars, within Andromeda's central region appear as yellowish dots. The blue source at the center is at the position of the supermassive black hole.
In 1991 Tod R. Lauer used WFPC, then on board the Hubble Space Telescope, to image M31's inner nucleus. The nucleus consists of two concentrations separated by Template:Convert. The brighter concentration, designated as P1, is offset from the center of the galaxy. The dimmer concentration, P2, falls at the true center of the galaxy and contains a black hole measured at (3–5) × 10^7 solar masses in 1993,[59] and at (1.1–2.3) × 10^8 solar masses in 2005.[60] The velocity dispersion of material around it is measured to be ≈ 160 km/s.[61]
Scott Tremaine has proposed that the observed double nucleus could be explained if P1 is the projection of a disk of stars in an eccentric orbit around the central black hole.[62] The eccentricity is such that stars linger at the orbital apocenter, creating a concentration of stars. P2 also contains a compact disk of hot, spectral class A stars. The A stars are not evident in redder filters, but in blue and ultraviolet light they dominate the nucleus, causing P2 to appear more prominent than P1.[63]
While at the initial time of its discovery it was hypothesized that the brighter portion of the double nucleus was the remnant of a small galaxy "cannibalized" by M31,[64] this is no longer considered a viable explanation, largely because such a nucleus would have an exceedingly short lifetime due to tidal disruption by the central black hole. While this could be partially resolved if P1 had its own black hole to stabilize it, the distribution of stars in P1 does not suggest that there is a black hole at its center.[62]
Discrete sources
Artist's concept of the Andromeda Galaxy core showing a view across a disk of young, blue stars encircling a supermassive black hole. NASA/ESA photo
Apparently, by late 1968, no X-rays had been detected from the Andromeda Galaxy.[65] A balloon flight on October 20, 1970, set an upper limit for detectable hard X-rays from M31.[66]
Multiple X-ray sources have since been detected in the Andromeda Galaxy, using observations from the ESA's XMM-Newton orbiting observatory. Robin Barnard et al. hypothesized that these are candidate black holes or neutron stars, which are heating incoming gas to millions of kelvins and emitting X-rays. The spectrum of the neutron stars is the same as the hypothesized black holes, but can be distinguished by their masses.[67]
There are approximately 460 globular clusters associated with the Andromeda Galaxy.[68] The most massive of these clusters, identified as Mayall II, nicknamed Globular One, has a greater luminosity than any other known globular cluster in the Local Group of galaxies.[69] It contains several million stars, and is about twice as luminous as Omega Centauri, the brightest known globular cluster in the Milky Way. Globular One (or G1) has several stellar populations and a structure too massive for an ordinary globular. As a result, some consider G1 to be the remnant core of a dwarf galaxy that was consumed by M31 in the distant past.[70] The globular with the greatest apparent brightness is G76, which is located in the eastern half of the south-west arm.[11] Another massive globular cluster, named 037-B327 and discovered in 2006, is heavily reddened by the Andromeda Galaxy's interstellar dust and was thought to be more massive than G1 and the largest cluster of the Local Group;[71] however, other studies have shown it is actually similar in properties to G1.[72]
Unlike the globular clusters of the Milky Way, which show a relatively low age dispersion, Andromeda's globular clusters have a much larger range of ages: from systems as old as the galaxy itself to much younger systems, with ages ranging from a few hundred million years to five billion years.[73]
In 2005, astronomers discovered a completely new type of star cluster in M31. The new-found clusters contain hundreds of thousands of stars, a similar number of stars that can be found in globular clusters. What distinguishes them from the globular clusters is that they are much larger — several hundred light-years across — and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters.[74]
In 2012, a microquasar, a radio burst emanating from a smaller black hole, was detected in the Andromeda Galaxy. The progenitor black hole was located near the galactic center and had a mass of about 10 solar masses. Discovered through data collected by the ESA's XMM-Newton probe, and subsequently observed by NASA's Swift and Chandra, the Very Large Array, and the Very Long Baseline Array, the microquasar was the first observed within the Andromeda Galaxy and the first observed outside of the Milky Way Galaxy.[75]
Satellites
Like the Milky Way, the Andromeda Galaxy has satellite galaxies, consisting of 14 known dwarf galaxies. The best known and most readily observed satellite galaxies are M32 and M110. Based on current evidence, it appears that M32 underwent a close encounter with M31 (Andromeda) in the past. M32 may once have been a larger galaxy that had its stellar disk removed by M31, and underwent a sharp increase of star formation in the core region, which lasted until the relatively recent past.[76]
M110 also appears to be interacting with M31, and astronomers have found in the halo of M31 a stream of metal-rich stars that appear to have been stripped from these satellite galaxies.[77] M110 does contain a dusty lane, which may indicate recent or ongoing star formation.[78]
In 2006 it was discovered that nine of these galaxies lie along a plane that intersects the core of the Andromeda Galaxy, rather than being randomly arranged as would be expected from independent interactions. This may indicate a common tidal origin for the satellites.[79]
Future collision with the Milky Way
The Andromeda Galaxy is approaching the Milky Way at about Template:Convert.[80] It has been measured approaching relative to our sun at around Template:Convert[43] as the sun orbits around the center of our galaxy at a speed of approximately Template:Convert. This makes Andromeda one of the few blueshifted galaxies that we observe. Andromeda's tangential or side-ways velocity with respect to the Milky Way is relatively much smaller than the approaching velocity and therefore it is expected to directly collide with the Milky Way in about 4 billion years. A likely outcome of the collision is that the galaxies will merge to form a giant elliptical galaxy[81] or perhaps even a large disk galaxy.[8] Such events are frequent among the galaxies in galaxy groups. The fate of the Earth and the Solar System in the event of a collision is currently unknown. Before the galaxies merge, there is a small chance that the Solar System could be ejected from the Milky Way or join M31.[82]
The Andromeda Galaxy on a German postage stamp of 1999
Andromeda Nebula in fiction
List of Messier objects
List of galaxies
New General Catalogue
NGC 206 – the brightest star cloud in the Andromeda Galaxy
StarDate: M31 Fact Sheet
Simbad data on M31
Messier 31, SEDS Messier pages
A Giant Globular Cluster in M31 1998 October 17.
M31: The Andromeda Galaxy 2004 July 18.
Andromeda Island Universe 2005 December 22.
Andromeda Island Universe 2010 January 9.
WISE Infrared Andromeda 2010 February 19
M31 and its central Nuclear Spiral
Amateur photography – M31
Globular Clusters in M31 at The Curdridge Observatory
First direct distance to Andromeda − Astronomy magazine article
Andromeda Galaxy at SolStation.com
Andromeda Galaxy at The Encyclopedia of Astrobiology, Astronomy, & Spaceflight
M31, the Andromeda Galaxy at NightSkyInfo.com
Hubble Finds Mysterious Disk of Blue Stars Around Black Hole Hubble observations (Sep 20 2005) put the mass of the Andromeda core black hole at 140 million solar masses
M31 (Apparent) Novae Page (IAU)
Multi-wavelength composite
Andromeda Project (crowd-source)
Andromeda Galaxy (M31) at Constellation Guide
APOD - 2013 August 1 (M31's angular size compared with full Moon)
What (precisely) is a block cipher?
If I follow the wikipedia or crypto.stackexchange definition, any simple XOR encryption where the key is as long as the plain text should qualify as a secure block cipher.
Now I thought what would happen if I just use this together with a block cipher mode of operation? After all, they should build a secure scheme with any secure block cipher. As you can probably guess by now, the outcome of this is nowhere near being secure. (Not even considering the small key space in this particular example, that could easily be fixed, but there are very obvious connections between the plain and the cipher text.)
So my question is: What is a block-cipher, so that it qualifies for use with any of the block cipher modes of operation?
block-cipher algorithm-design modes-of-operation
cooky451
A block cipher is (or tries to be) a pseudorandom permutation on a given space. Let $\mathcal{M}$ be the set of $n$-bit blocks for a given $n$. There are $2^n$ possible block values, and a permutation on $\mathcal{M}$ sends each block value to another value. There are $2^n!$ such permutations. A block cipher is a mapping from key values (in a given key space $\mathcal{K}$) to permutations on $\mathcal{M}$.
For instance, AES-256 is a block cipher which operates on 128-bit blocks and uses a 256-bit key. Each possible key value ($2^{256}$ possibilities) selects a permutation among the $2^{128}!$ of the space of 128-bit blocks (which has size $2^{128}$).
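As a concrete illustration of the idea that a key selects one permutation of the block space, the following Python sketch applies AES-256 to a single 128-bit block and inverts it; it assumes the PyCryptodome package and is only meant to show $\phi_k$ and $\phi_k^{-1}$ in action, not a recommended way to encrypt data:

```python
# Assumes the PyCryptodome package (pip install pycryptodome).
import os
from Crypto.Cipher import AES

key = os.urandom(32)    # a 256-bit key k selects one permutation phi_k ...
block = os.urandom(16)  # ... of the space M of 128-bit blocks

ciphertext = AES.new(key, AES.MODE_ECB).encrypt(block)      # phi_k(x)
recovered = AES.new(key, AES.MODE_ECB).decrypt(ciphertext)  # phi_k^{-1}(phi_k(x))

assert len(ciphertext) == 16  # the output is again a single 128-bit block
assert recovered == block     # the mapping is invertible: a permutation, not just a function
```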
For the block cipher $\phi$ to have any practical value, it shall be easily computed: given a key $k$ and an input block $x$, applying the permutation $\phi_k$ selected by $k$ on $x$ takes a small amount of computing power. Usually, the inverse permutation is also easily computed: given $k$ and $y$, find $x$ such that $\phi_k(x) = y$.
The main security assumption on block ciphers is indistinguishability: for whoever does not know $k$, the permutation $\phi_k$ should behave as if it had been selected at random and uniformly among the $2^n!$ possible permutations of $\mathcal{M}$. This can be illustrated with the following experiment: suppose that I give you a black box which computes $\phi_k$ on inputs $x$ of your choosing and yields the corresponding output. I do not tell you the value of $k$. At any time, when you have sent $q$ requests ($q$ block values $x$ to encrypt), your goal is to predict the output of the box on yet another input $x$ that is distinct from your previous $q$ inputs. Since $\phi_k$ is a permutation, you know that the output will be different from the $q$ previous outputs, so there are $2^n-q$ possibilities. The block cipher is "secure" if you cannot predict the output correctly with probability substantially better than $1/(2^n-q)$.
(This "black box" model is a bit simplified; in fact, I should give you two black boxes computing $\phi_k$ and $\phi_k^{-1}$ respectively, and each of your $q$ request is for one of the boxes, at your choice.)
The "black box" model explains why a simple XOR is a very poor block cipher. If you define $\phi$ such that keys are also sequences of $n$-bits, and $\phi_k(x) = k \oplus x$, then a single request to the encryption black box allows recomputing $k$ (with $k = x \oplus \phi_k(x)$) and thus predict output for all other requests with 100% accuracy.
Terminology is the following:
Known Plaintext Attack: attacker obtains $q$ pairs $(x,\phi_k(x))$ but does not get to choose $x$ or $\phi_k(x)$.
Chosen Plaintext Attack: attacker obtains $q$ pairs $(x,\phi_k(x))$ where he can choose the $x$ values at will.
Adaptive Chosen Plaintext Attack: a CPA attack where the attacker is allowed to think a lot between each two requests; meaning that when he selects the next request $x$ to send to the black box, then he can do so after due inspection of the previously obtained outputs.
Chosen Ciphertext Attack: attacker obtains $q$ pairs $(\phi_k^{-1}(y),y)$ where he can choose the $y$ values at will.
Adaptive Chosen Ciphertext Attack: a CCA attack where the attacker can think between any two successive requests.
The "gold standard" is called IND-CCA2 which means "indistinguishable from a random permutation against adaptive chosen ciphertext attacks".
(The description above is slightly simplified; true formal exposition would use Turing machines and a game between a challenger and a prover; but it should give you the intuition.)
There are limits to the indistinguishability. Indeed, if you allow $q$ to go up to $2^n-1$, then the notion ceases to be interesting; if the attacker can send requests for all possible block values except one, then he can predict the missing one with probability 1: it is the one output that he did not get yet. Also, indistinguishability cannot hold beyond exhaustive search on the key space: if the attacker can try all possible key values, then he can look for a match with the outputs he got. Once he gets the key, he can predict with probability 1.
It is customary to study algorithms where keys are sequences of $r$ bits (e.g. $r = 256$ for AES-256). The "exhaustive search" attack has average cost $2^{r-1}$ (the attacker needs, on average, to try half the keys before hitting the right one). So, the security of any block cipher tends to break down when the attacker is "allowed" too many requests and/or too much computing power.
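As a rough illustration of that exhaustive-search baseline, the following toy sketch (my own, with a deliberately tiny 16-bit key and block so the loop finishes instantly; the round function is arbitrary and has no cryptographic merit) brute-forces the key from two known plaintext/ciphertext pairs, taking about $2^{r-1} = 2^{15}$ trial encryptions on average.

```python
import random

MASK = 0xFFFF  # 16-bit toy block and key space (n = r = 16)

def toy_encrypt(key: int, x: int) -> int:
    """A tiny keyed permutation on 16-bit blocks (illustration only)."""
    for _ in range(4):
        x = (x + key) & MASK
        x = ((x << 5) | (x >> 11)) & MASK   # rotate left by 5
        x = (x * 0x9E37) & MASK             # multiply by an odd constant
        x ^= key
    return x

secret_key = random.randrange(2 ** 16)

# Two known plaintext/ciphertext pairs pin the key down (almost surely).
pairs = [(p, toy_encrypt(secret_key, p)) for p in (0x0123, 0xBEEF)]

trials = 0
for guess in range(2 ** 16):            # exhaustive search over the key space
    trials += 1
    if all(toy_encrypt(guess, p) == c for p, c in pairs):
        print(f"found key {guess:#06x} (secret was {secret_key:#06x}) "
              f"after {trials} trials; expected average is 2^15 = {2**15}")
        break
```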
In practice, for a block cipher with $n$-bit blocks and $r$-bit keys, we consider an attack to be "academically successful" if it can predict the black box output with the following conditions:
Attacker can send $q$ requests to the encryption or decryption box with $q \leq 2^{n-1}$ (attacker can obtain "half of the code book").
Attacker can spend enough CPU to compute $2^{r-1}$ evaluations of the function.
Attacker also has access to $2^{r-1}$ bits of very fast RAM.
Attacker's probability to predict the next output is at least $3/4$.
With $2^{r-1}$ evaluations of the function, attacker's success with the simple exhaustive search and $2^{n-1}$ requests will be $1/2+2^{-(n-1)}$ (50% chance of having found the key through the search, and, if not, the knowledge that the next output will be in the code book half that the attacker did not get with his $q$ initial requests). Hence the $3/4$ criterion: attacker must be able to do better than this generic attack.
For instance, if you consider 3DES: block size is $n = 64$, and key size is $r = 168$ (the standard says "192 bits" but the algorithm simply ignores 24 of these bits, so for the cryptographer the key size is 168 bits). There is a known "meet-in-the-middle" attack which requires $2^{62}$ bits of storage (for $2^{56}$ words of 64 bits), computational effort equivalent to about $(2/3)·2^{112}$ evaluations of 3DES, and only 2 known plaintexts. This attack cannot be implemented with existing technology (the storage requirement would be quite expensive, since we are talking about half a million terabytes of fast RAM, and the CPU requirement is simply out of reach because it would require far more energy than Mankind produces as a whole). Yet it counts as a break. In that sense, 3DES is "academically broken".
Modes of operation turn a block cipher into something which can encrypt and decrypt messages of almost arbitrary length, not just single block values. Each mode of operation has its own requirements, in particular regarding the Initialization Vector (IV). For instance, with CBC, in a "black box" model similar to what is described above, the encryption system must generate IV values randomly, uniformly, and in a way that is not predictable by the attacker (the BEAST attack on SSL/TLS builds on IV predictability, in the sense that the attacker can know the IV before choosing his plaintext for the next request).
As a generic rule, most modes of operation run into trouble when the amount of encrypted data exceeds about $2^{n/2}$ blocks. This is the threshold at which a pseudorandom permutation begins to behave differently from a pseudorandom function (a permutation is injective: it won't give you the same output with two distinct inputs; whereas a random function, on average, is not injective). This is why AES was defined with 128-bit blocks, as opposed to 3DES 64-bit blocks: to give us enough room in practical situations.
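The $2^{n/2}$ threshold is easy to observe experimentally with a toy block size. The sketch below (my own illustration, with $n = 16$) contrasts a random permutation, which never repeats an output on distinct inputs, with a random function over the same domain, which typically produces its first collision after a few hundred samples, i.e. on the order of $2^{n/2} = 256$.

```python
import random

n = 16
domain = 2 ** n

# A random permutation of the domain: the idealised block cipher.
perm = list(range(domain))
random.shuffle(perm)

def first_collision(outputs):
    seen = set()
    for i, y in enumerate(outputs, start=1):
        if y in seen:
            return i          # index of the first repeated output
        seen.add(y)
    return None               # no collision observed

q = 4 * int(domain ** 0.5)    # a few multiples of 2^(n/2) queries

inputs = random.sample(range(domain), q)          # q distinct inputs
perm_outputs = [perm[x] for x in inputs]          # injective: no collisions
func_outputs = [random.randrange(domain) for _ in range(q)]  # random function

print("random permutation:", first_collision(perm_outputs))  # None
print("random function   :", first_collision(func_outputs))  # usually a few hundred
```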
The problem isn't with the definition. Modes of operation presuppose that the underlying block cipher is secure* for some large number of encryption/decryption queries (usually up to $2^{\frac{b}{2}}$ queries, where $b$ is the block length of the cipher). Simple XOR encryption with the key (which does meet the basic definition of a block cipher) is only secure for 1 query. Hence, modes of encryption built on top of simple XOR encryption will be trivially breakable when attempting to encrypt more than $b$ bits.
*(meaning not efficiently distinguishable from a random permutation)
J.D.
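To tie this back to the question, here is a small editorial sketch (not part of either answer) of CBC mode instantiated with the XOR "cipher". Since each ciphertext block is $C_i = k \oplus P_i \oplus C_{i-1}$, the difference $C_i \oplus C_{i-1}$ equals $k \oplus P_i$, so a single known plaintext block hands the attacker the key, and even without known plaintext the differences leak relations between plaintext blocks.

```python
import os

BLOCK = 16

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt_with_xor_cipher(key: bytes, iv: bytes, blocks):
    """CBC mode built on the insecure block cipher E_k(x) = k XOR x."""
    prev, out = iv, []
    for p in blocks:
        c = xor_bytes(key, xor_bytes(p, prev))   # C_i = k ^ P_i ^ C_{i-1}
        out.append(c)
        prev = c
    return out

key, iv = os.urandom(BLOCK), os.urandom(BLOCK)
plaintext = [b"attack at dawn!!", b"retreat at dusk!", b"hold the line!!!"]
ciphertext = cbc_encrypt_with_xor_cipher(key, iv, plaintext)

# The attacker sees iv and ciphertext and knows only the first plaintext block.
recovered_key = xor_bytes(xor_bytes(ciphertext[0], iv), plaintext[0])
assert recovered_key == key

# With the key, every other block decrypts immediately.
chained = [iv] + ciphertext
for i, c in enumerate(ciphertext):
    assert xor_bytes(xor_bytes(c, recovered_key), chained[i]) == plaintext[i]
print("CBC over the XOR cipher falls to one known plaintext block")
```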
|
CommonCrawl
|
Colloquia 2012-2013
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.
Sep 9 Manfred Einsiedler (ETH-Zurich) Periodic orbits on homogeneous spaces Fish
Sep 16 Richard Rimanyi (UNC-Chapel Hill) Global singularity theory Maxim
Sep 23 Andrei Caldararu (UW-Madison) The Pfaffian-Grassmannian derived equivalence (local)
Sep 30 Scott Armstrong (UW-Madison) Optimal Lipschitz extensions, the infinity Laplacian, and tug-of-war games (local)
Oct 7 Hala Ghousseini (University of Wisconsin-Madison) Developing Mathematical Knowledge for Teaching in, from, and for Practice Lempp
Oct 14 Alex Kontorovich (Yale) On Zaremba's Conjecture Shamgar
Oct 19, Wed Bernd Sturmfels (UC Berkeley) Convex Algebraic Geometry distinguished lecturer Shamgar
Oct 20, Thu Bernd Sturmfels (UC Berkeley) Quartic Curves and Their Bitangents distinguished lecturer Shamgar
Oct 21 Bernd Sturmfels (UC Berkeley) Multiview Geometry distinguished lecturer Shamgar
Oct 28 Roman Holowinsky (OSU) Equidistribution Problems and L-functions Street
Nov 4 Sijue Wu (U Michigan) Wellposedness of the two and three dimensional full water wave problem Qin Li
Nov 7, Mo, 3pm, SMI 133 Sastry Pantula (NCSU and DMS/NSF) Opportunities in Mathematical and Statistical Sciences at DMS Joint Math/Stat Colloquium
Nov 11 Henri Berestycki (EHESS and University of Chicago) Reaction-diffusion equations and propagation phenomena Wasow lecture
Nov 16, Wed Henry Towsner (U of Conn-Storrs) An Analytic Approach to Uniformity Norms Steffen
Nov 18 Benjamin Recht (UW-Madison, CS Department) The Convex Geometry of Inverse Problems Jordan
Nov 22, Tue, 2:30PM, B205 Zhiwei Yun (MIT) Motives and the inverse Galois problem Tonghai
Nov 28, Mon, 4PM Burglind Joricke (Institut Fourier, Grenoble) Analytic knots, satellites and the 4-ball genus Gong
Nov 29, Tue, 2:30PM, B102 Isaac Goldbring (UCLA) "Nonstandard methods in Lie theory" Lempp
Nov 30, Wed, 4PM Bing Wang (Simons Institute) Uniformization of algebraic varieties Sean
Dec 2 Robert Dudley (University of California, Berkeley) From Gliding Ants to Andean Hummingbirds: The Evolution of Animal Flight Performance Jean-Luc
Dec 5, Mon, 2:25PM, Room 901 Dima Arinkin (UNC-Chapel Hill) Autoduality of Jacobians for singular curves Andrei
Dec 7, Wed, 4PM Toan Nguyen (Brown University) On the stability of Prandtl boundary layers and the inviscid limit of the Navier-Stokes equations Misha Feldman
Dec 9 Xinwen Zhu (Harvard University) Adelic uniformization of moduli of G-bundles Tonghai
Dec 12, Mon, 4PM Jonathan Hauenstein (Texas A&M) Numerical solving of polynomial equations and applications Thiffeault
Jan 23 Saverio Spagnolie (Brown) Hydrodynamics of Self-Propulsion Near a Boundary: Construction of a Numerical and Asymptotic Toolbox Jean-Luc
Jan 27 Ari Stern (UCSD) Jean-Luc
Feb 3 Akos Magyar (UBC) TBA Street
Feb 10 Melanie Wood (UW Madison) TBA local
Feb 17 Milena Hering (University of Connecticut) TBA Andrei
Feb 24 Malabika Pramanik (University of British Columbia) TBA Benguria
March 2 Guang Gong (University of Waterloo) TBA Shamgar
March 16 Charles Doran (University of Alberta) TBA Matt Ballard
March 23 Martin Lorenz (Temple University) TBA Don Passman
March 30 Wilhelm Schlag (University of Chicago) TBA Street
April 6 Spring recess
April 13 Ricardo Cortez (Tulane) TBA Mitchell
April 18 Benedict H. Gross (Harvard) TBA distinguished lecturer
April 20 Robert Guralnick (University of Southern California) TBA Shamgar
May 4 Mark Andrea de Cataldo (Stony Brook) TBA Maxim
May 11 Tentatively Scheduled Shamgar
Fri, Sept 9: Manfred Einsiedler (ETH-Zurich)
Periodic orbits on homogeneous spaces
We call an orbit xH of a subgroup H<G on a quotient space Gamma \ G periodic if it has finite H-invariant volume. These orbits have intimate connections to a variety of number theoretic problems, e.g. both integer quadratic forms and number fields give rise to periodic orbits, and these periodic orbits then relate to local-global problems for the quadratic forms or to special values of L-functions. We will discuss whether a sequence of periodic orbits equidistributes in Gamma \ G assuming the orbits become more complicated (which can be measured by a discriminant). If H is a diagonal subgroup (also called torus or Cartan subgroup), this is not always the case but can be true with a bit more averaging. As a theorem of Mozes and Shah shows, the case where H is generated by unipotents is well understood and is closely related to the work of M. Ratner. We then ask about the rate of approximation, where the situation is much more complex. The talk is based on several papers which are joint work with E. Lindenstrauss, Ph. Michel, and A. Venkatesh resp. with G. Margulis and A. Venkatesh.
Fri, Sept 16: Richard Rimanyi (UNC)
Global singularity theory
The topology of the spaces A and B may force every map from A to B to have certain singularities. For example, a map from the Klein bottle to 3-space must have double points. A map from the projective plane to the plane must have an odd number of cusp points.
To a singularity one may associate a polynomial (its Thom polynomial) which measures how topology forces this particular singularity. In the lecture we will explore the theory of Thom polynomials and their applications in enumerative geometry. Along the way, we will meet a wide spectrum of mathematical concepts from geometric theorems of the ancient Greeks to the cohomology ring of moduli spaces.
Fri, Sept 23: Andrei Caldararu (UW-Madison)
The Pfaffian-Grassmannian derived equivalence
String theory relates certain seemingly different manifolds through a relationship called mirror symmetry. Discovered about 25 years ago, this story is still very mysterious from a mathematical point of view. Despite the name, mirror symmetry is not entirely symmetric -- several distinct spaces can be mirrors to a given one. When this happens it is expected that certain invariants of these "double mirrors" match up. For a long time the only known examples of double mirrors arose through a simple construction called a flop, which led to the conjecture that this would be a general phenomenon. In joint work with Lev Borisov we constructed the first counterexample to this, which I shall present. Explicitly, I shall construct two Calabi-Yau threefolds which are not related by flops, but are derived equivalent, and therefore are expected to arise through a double mirror construction. The talk will be accessible to a wide audience, in particular to graduate students. There will even be several pictures!
Fri, Sept 30: Scott Armstrong (UW-Madison)
Optimal Lipschitz extensions, the infinity Laplacian, and tug-of-war games
Given a nice bounded domain, and a Lipschitz function defined on its boundary, consider the problem of finding an extension of this function to the closure of the domain which has minimal Lipschitz constant. This is the archetypal problem of the calculus of variations "in the sup-norm". There can be many such minimal Lipschitz extensions, but there is a unique minimizer once we properly "localize" this Lipschitz minimizing property. This minimizer is characterized by the infinity Laplace equation: the Euler-Lagrange equation for our optimization problem. This PDE is a very highly degenerate nonlinear elliptic equation which does not possess smooth solutions in general. In this talk I will discuss what we know about the infinity Laplace equation, what the important open questions are, and some interesting recent developments. We will even play a probabilistic game called "tug-of-war".
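(For reference, the infinity Laplace equation mentioned above is commonly written, in standard notation not taken from the talk, as [math]\Delta_\infty u = \sum_{i,j=1}^n u_{x_i} u_{x_j} u_{x_i x_j} = 0[/math].)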
Fri, Oct 7: Hala Ghousseini (University of Wisconsin-Madison)
Developing Mathematical Knowledge for Teaching in, from, and for Practice
Recent research in mathematics education has established that successful teaching requires a specialized kind of professional knowledge known as Mathematical Knowledge for Teaching (MKT). The mathematics education community, however, is beginning to appreciate that to be effective, teachers not only need to know MKT but also be able to use it in interaction with students (Hill & Ball, 2010). Very few examples exist at the level of actual practice of how novice teachers develop such knowledge for use. I will report on my current work on the Learning in, from, and for Practice project to develop, implement, and study what mathematics teacher educators can do to support novice teachers in acquiring and using Mathematical Knowledge for Teaching.
Fri, Oct 14: Alex Kontorovich (Yale)
On Zaremba's Conjecture
It is folklore that modular multiplication is "random". This concept is useful for many applications, such as generating pseudorandom sequences, or in quasi-Monte Carlo methods for multi-dimensional numerical integration. Zaremba's theorem quantifies the quality of this "randomness" in terms of certain Diophantine properties involving continued fractions. His 40-year old conjecture predicts the ubiquity of moduli for which this Diophantine property is uniform. It is connected to Markoff and Lagrange spectra, as well as to families of "low-lying" divergent geodesics on the modular surface. We prove that a density one set satisfies Zaremba's conjecture, using recent advances such as the circle method and estimates for bilinear forms in the Affine Sieve, as well as a "congruence" analog of the renewal method in the thermodynamical formalism. This is joint work with Jean Bourgain.
Wed, Oct 19: Bernd Sturmfels (Berkeley)
Convex Algebraic Geometry
This lecture concerns convex bodies with an interesting algebraic structure. A primary focus lies on the geometry of semidefinite optimization. Starting with elementary questions about ellipses in the plane, we move on to discuss the geometry of spectrahedra, orbitopes, and convex hulls of real varieties.
Thu, Oct 20: Bernd Sturmfels (Berkeley)
Quartic Curves and Their Bitangents
We present a computational study of plane curves of degree four, with primary focus on writing their defining polynomials as sums of squares and as symmetric determinants. Number theorists will enjoy the appearance of the Weyl group [math]E_7[/math] as the Galois group of the 28 bitangents. Based on joint work with Daniel Plaumann and Cynthia Vinzant, this lecture spans a bridge from 19th century algebra to 21st century optimization.
Fri, Oct 21: Bernd Sturmfels (Berkeley)
Multiview Geometry
The study of two-dimensional images of three-dimensional scenes is foundational for computer vision. We present work with Chris Aholt and Rekha Thomas on the polynomials characterizing images taken by [math]n[/math] cameras. Our varieties are threefolds that vary in a family of dimension [math]11n-15[/math] when the cameras are moving. We use toric geometry and Hilbert schemes to characterize degenerations of camera positions.
Fri, Oct 28: Roman Holowinsky (OSU)
Equidistribution Problems and L-functions
There are several equidistribution problems of arithmetic nature which have had shared interest between the fields of Ergodic Theory and Number Theory. The relation of such problems to homogeneous flows and the reduction to analysis of special values of automorphic L-functions has resulted in increased collaboration between these two fields of mathematics. We will discuss two such equidistribution problems: the equidistribution of Heegner points for negative quadratic discriminants and the equidistribution of mass of Hecke eigenforms. Equidistribution follows upon establishing subconvexity bounds for the associated L-functions and are fine examples as to why one might be interested in such objects.
Fri, Nov 4: Sijue Wu (U Michigan)
Wellposedness of the two and three dimensional full water wave problem
We consider the question of global in time existence and uniqueness of solutions of the infinite depth full water wave problem. We show that the nature of the nonlinearity of the water wave equation is essentially of cubic and higher orders. For any initial data that is small in its kinetic energy and height, we show that the 2-D full water wave equation is uniquely solvable almost globally in time. And for any initial interface that is small in its steepness and velocity, we show that the 3-D full water wave equation is uniquely solvable globally in time.
Mo, Nov 7: Sastry Pantula (DMS/NSF, NCSU)
Opportunities in Mathematical and Statistical Sciences at DMS
In this talk, I will give you an overview of the funding and other opportunities at DMS for mathematicians and statisticians. I will also talk about our new program in computational and data-enabled science and engineering in mathematical and statistical sciences (CDS&E-MSS).
Fri, Nov 11: Henri Berestycki (EHESS and University of Chicago)
Reaction-diffusion equations and propagation phenomena
Starting with the description of reaction-diffusion mechanisms in physics, biology and ecology, I will explain the motivation for this class of non-linear partial differential equations and mention some of the interesting history of these systems. Then, I will review classical results in the homogeneous setting and discuss their relevance. The second part of the lecture will be concerned with recent developments in non-homogeneous settings, in particular for Fisher-KPP type equations. Such problems are encountered in models from ecology. The mathematical theory will be seen to shed light on questions arising in this context.
Wed, Nov 16: Henry Towsner (U of Conn-Storrs)
An Analytic Approach to Uniformity Norms
The Gowers uniformity norms have proven to be a powerful tool in extremal combinatorics, and a number of "structure theorems" have been given showing that the uniformity norms provide a dichotomy between "structured" objects and "random" objects. While analogous norms (the Gowers-Host-Kra norms) exist in dynamical systems, they do not quite correspond to the uniformity norms in the finite setting. We describe an analytic approach to the uniformity norms in which the "correspondence principle" between the finite setting and the infinite analytic setting remains valid.
Fri, Nov 18: Ben Recht (UW-Madison)
The Convex Geometry of Inverse Problems
Deducing the state or structure of a system from partial, noisy measurements is a fundamental task throughout the sciences and engineering. The resulting inverse problems are often ill-posed because there are fewer measurements available than the ambient dimension of the model to be estimated. In practice, however, many interesting signals or models contain few degrees of freedom relative to their ambient dimension: a small number of genes may constitute the signature of a disease, very few parameters may specify the correlation structure of a time series, or a sparse collection of geometric constraints may determine a molecular configuration. Discovering, leveraging, or recognizing such low-dimensional structure plays an important role in making inverse problems well-posed.
In this talk, I will propose a unified approach to transform notions of simplicity and latent low-dimensionality into convex penalty functions. This approach builds on the success of generalizing compressed sensing to matrix completion, and greatly extends the catalog of objects and structures that can be recovered from partial information. I will focus on a suite of data analysis algorithms designed to decompose general signals into sums of atoms from a simple---but not necessarily discrete---set. These algorithms are derived in a convex optimization framework that encompasses previous methods based on l1-norm minimization and nuclear norm minimization for recovering sparse vectors and low-rank matrices. I will provide sharp estimates of the number of generic measurements required for exact and robust recovery of a variety of structured models. I will then detail several example applications and describe how to scale the corresponding inference algorithms to massive data sets.
Tue, Nov 22: Zhiwei Yun (MIT)
"Motives and the inverse Galois problem"
We will use geometric Langlands theory to solve two problems simultaneously. One is Serre's question about whether there exist motives over Q with motivic Galois groups of type E_8 or G_2; the other is whether there are Galois extensions of Q with Galois groups E_8(p) or G_2(p) (the finite simple groups of Lie type). The answers to both questions are YES. No familiarity with either motives or geometric Langlands or E_8 will be assumed.
Mon, Nov 28: Burglind Joricke (Institut Fourier, Grenoble)
"Analytic knots, satellites and the 4-ball genus"
After introducing classical geometric knot invariants and satellites I will concentrate on knots or links in the unit sphere in $\mathbb C^2$ which bound a complex curve (respectively, a smooth complex curve) in the unit ball. Such a knot or link will be called analytic (respectively, smoothly analytic). For analytic satellite links of smoothly analytic knots there is a sharp lower bound for the 4-ball genus. It is given in terms of the 4-ball genus of the companion and the winding number. No such estimate is true in the general case. There is a natural relation to the theory of holomorphic mappings from open Riemann surfaces into the space of monic polynomials without multiple zeros. I will briefly touch related problems.
Tue, Nov 29: Isaac Goldbring (UCLA)
"Nonstandard methods in Lie theory"
Nonstandard analysis is a way of rigorously using "ideal" elements, such as infinitely small and infinitely large elements, in mathematics. In this talk, I will survey the use of nonstandard methods in Lie theory. I will focus on two applications in particular: the positive solution to Hilbert's fifth problem (which establishes that locally euclidean groups are Lie groups) and nonstandard hulls of infinite-dimensional Lie groups and algebras. I will also briefly discuss the recent work of Breuillard, Green, and Tao (extending work of Hrushovski) concerning the classification of approximate groups, which utilizes nonstandard methods and the local version of Hilbert's fifth problem in an integral way. I will assume no prior knowledge of nonstandard analysis or Lie theory.
Wed, Nov 30: Bing Wang (Simons Center for Geometry and Physics)
Uniformization of algebraic varieties
For algebraic varieties of general type with mild singularities, we show the Bogomolov-Yau inequality holds. If equality is attained, then this variety is a global quotient of complex hyperbolic space away from a subvariety.
Mon, Dec 5: Dima Arinkin (UNC-Chapel Hill)
"Autoduality of Jacobians for singular curves"
Let C be a (smooth projective algebraic) curve. It is well known that the Jacobian J of C is a principally polarized abelian variety. In other words, J is self-dual in the sense that J is identified with the space of topologically trivial line bundles on itself.
Suppose now that C is singular. The Jacobian of C parametrizes topologically trivial line bundles on C; it is an algebraic group which is no longer compact. By considering torsion-free sheaves instead of line bundles, one obtains a natural singular compactification J' of J.
In this talk, I consider (projective) curves C with planar singularities. The main result is that J' is self-dual: J' is identified with a space of torsion-free sheaves on itself. This autoduality naturally fits into the framework of the geometric Langlands conjecture; I hope to sketch this relation in my talk.
Wed, Dec 7: Toan Nguyen (Brown University)
"On the stability of Prandtl boundary layers and the inviscid limit of the Navier-Stokes equations"
In fluid dynamics, one of the most classical issues is to understand the dynamics of viscous fluid flows past solid bodies (e.g., aircrafts, ships, etc...), especially in the regime of very high Reynolds numbers (or small viscosity). Boundary layers are typically formed in a thin layer near the boundary. In this talk, I shall present various ill-posedness results on the classical Prandtl boundary-layer equation, and discuss the relevance of boundary-layer expansions and the vanishing viscosity limit problem of the Navier-Stokes equations. I will also discuss viscosity effects in destabilizing stable inviscid flows.
Fri, Dec 9: Xinwen Zhu (Harvard University)
"Adelic uniformization of moduli of G-bundles"
It is well-known from Weil that the isomorphism classes of rank n vector bundles on an algebraic curve can be written as the set of certain double cosets of GL(n,A), where A is the adeles of the curve. I will introduce such presentation in the world of algebraic geometry and discuss two of its applications: the first is the Tamagawa number formula in the function field case (proved by Gaitsgory-Lurie), which is a formula for the volume of the automorphic space; and the second is the Verlinde formula in positive characteristic, which is a formula for the dimensions of global sections of certain line bundles on the moduli spaces.
Mon, Dec 12: Jonathan Hauenstein (Texas A&M)
"Numerical solving of polynomial equations and applications"
Systems of polynomial equations arise in many areas of mathematics, science, economics, and engineering with their solutions, for example, describing equilibria of chemical reactions and economics models, and the design of specialized robots. These applications have motivated the development of numerical methods used for solving polynomial systems, collectively called Numerical Algebraic Geometry. This talk will explore fundamental numerical algebraic geometric algorithms for solving systems of polynomial equations and the application of these algorithms to problems arising in engineering and mathematical biology.
|
CommonCrawl
|
Publications on Deontic Logic and Normative Reasoning
Grigoris Antoniou, David Billington, Guido Governatori, and Michael J. Maher.
On the modeling and analysis of regulations. In Proceedings of the Australian Conference on Information Systems, pages 20-29, 1999.
Abstract:Regulations are a wide-spread and important part of government and business. They codify how products must be made and processes should be performed. Such regulations can be difficult to understand and apply. In an environment of growing complexity of, and change in, regulation, automated support for reasoning with regulations is becoming increasingly necessary. In this paper we report on ongoing work which aims at providing automated support for the drafting and use of regulations using logic modelling techniques. We highlight the support that can be provided by logic modelling, describe the technical foundation of our project, and report on the status of the project and the next steps.
Grigoris Antoniou, Nikos Dimaresis, and Guido Governatori.
A system for modal and deontic defeasible reasoning. In Mehmet A. Orgun and John Thornton, editors, 20th Australian Joint Conference on Artificial Intelligence, AI 2007, LNAI 4830, pages 609-613. Springer, 2007. Copyright © 2007 Springer.
Abstract: Defeasible reasoning is a well-established nonmonotonic reasoning approach that has recently been combined with semantic web technologies. This paper describes modal and deontic extensions of defeasible logic, motivated by potential applications for modelling multi-agent systems and policies. It describes a logic metaprogram that captures the underlying intuitions, and outlines an implemented system.
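For readers unfamiliar with the style of reasoning involved, the following is a drastically simplified propositional sketch (an editorial toy, not the logic metaprogram or the modal and deontic extensions described in the paper): defeasible rules fire when their bodies are established and no applicable rule for the opposite conclusion beats them under the superiority relation.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    label: str
    body: frozenset   # literals that must already be established
    head: str         # literal concluded defeasibly, e.g. "p" or "~p"

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasible_conclusions(facts, rules, superior):
    """superior: set of (winner_label, loser_label) pairs."""
    concluded = set(facts)
    changed = True
    while changed:
        changed = False
        for r in rules:
            if not r.body <= concluded or r.head in concluded:
                continue
            attackers = [s for s in rules
                         if s.head == neg(r.head) and s.body <= concluded]
            if all((r.label, s.label) in superior for s in attackers):
                concluded.add(r.head)
                changed = True
    return concluded

# Toy example: a general prohibition overridden by a more specific rule.
facts = {"vehicle", "emergency"}
rules = [Rule("r1", frozenset({"vehicle"}), "~may_enter_park"),
         Rule("r2", frozenset({"vehicle", "emergency"}), "may_enter_park")]
superior = {("r2", "r1")}
print(defeasible_conclusions(facts, rules, superior))
# concludes may_enter_park: r2 is applicable and superior to r1
```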
Alberto Artosi, Paola Cattabriga, and Guido Governatori.
An automated approach to normative reasoning. In Joost Breuker, editor, Artificial Normative Reasoning, pages 132-145, Amsterdam, 1994. ECAI'94.
KED: A deontic theorem prover. In Carlo Biagioli, Giovanni Sartor, and Daniela Tiscornia, editors, Workshop on Legal Application of Logic Programming, pages 60-76, Firenze, 1994. ICLP'94, IDG.
Alberto Artosi and Guido Governatori.
A tableaux methodology for deontic conditional logics. In ΔEON'98, 4th International Workshop on Deontic Logic in Computer Science, pages 65-81, Bologna, 1998. CIRFID.
Abstract:In this paper we present a theorem proving methodology for a restricted but significant fragment of the conditional language made up of (boolean combinations of) conditional statements with unnested antecedents. The method is based on the possible world semantics for conditional logics. The label formalism introduced in our earlier work to account for the semantics of normal modal logics is easily adapted to the semantics of conditional logics by simply indexing labels with formulas. The inference rules are provided by the propositional system KE+ - a tableau-like analytic proof system devised to be used both as a refutation and a direct method of proof - enlarged with suitable elimination rules for the conditional connective. The theorem proving methodology we are going to present can be viewed as a first step towards developing an appropriate algorithmic framework for several conditional logics for (defeasible) conditional obligation.
Alberto Artosi, Guido Governatori, and Giovanni Sartor.
Towards a computational treatment of deontic defeasibility. In Mark Brown and José Carmo, editors, Deontic Logic Agency and Normative Systems, Workshop on Computing, pages 27-46, Berlin, 1996. Springer-Verlag, Copyright © 1996 Springer-Verlag.
Abstract:In this paper we describe an algorithmic framework for a multi-modal logic arising from the combination of the system of modal (epistemic) logic devised by Meyer and van der Hoek for dealing with nonmonotonic reasoning with a deontic logic of the Jones and Pörn-type. The idea behind this (somewhat eclectic) formal set-up is to have a modal framework expressive enough to model certain kinds of deontic defeasibility, in particular by taking into account preferences on norms. The appropriate inference mechanism is provided by a tableau-like modal theorem proving system which supports a proof method closely related to the semantics of modal operators. We argue that this system is particularly well-suited for mechanizing nonmonotonic forms of inference in a monotonic multi-modal setting.
Guido Boella, Guido Governatori, Antonino Rotolo, and Leendert van der Torre.
A formal study on legal compliance and interpretation. In Thomas Meyer and Eugenia Ternovska, editors, 13th International Workshop on Non-Monotonic Reasoning (NMR 2010). CEUR Workshop Proceedings, 2010.
Abstract: This paper proposes a logical framework to capture the norm change power and the limitations of the judicial system in revising the set of constitutive rules defining the concepts on which the applicability of norms is based. In particular, we reconstruct the legal arguments leading to an extensive or restrictive interpretation of norms.
Lex minus dixit quam voluit, lex magis dixit quam voluit: A formal study on legal compliance and interpretation. In P. Casanovas, U. Pagallo, G. Ajani, and G. Sartor, editors, AI approaches to the complexity of legal systems, LNAI, Berlin, 2010. Springer, Copyright © 2010 Springer.
Abstract: This paper argues in favour of the necessity of dynamically restricting and expanding the applicability of norms regulating computer systems like multiagent systems, in situations where the compliance to the norm does not achieve the purpose of the norm. We propose a logical framework which distinguishes between constitutive and regulative norms and captures the norm change power and at the same time the limitations of the judicial system in dynamically revising the set of constitutive rules defining the concepts on which the applicability of norms is based. In particular, the framework is used to reconstruct some interpretive arguments described in legal theory such as those corresponding to the Roman maxims lex minus dixit quam voluit and lex magis dixit quam voluit. The logical framework is based on an extension of defeasible logic.
A logical understanding of legal interpretation. In Proceedings of KR 2010. AAAI, 2010.
Abstract: If compliance with a norm does not achieve its purpose, then its applicability must dynamically be restricted or expanded. Legal interpretation is a mechanism from law allowing norms to be adapted to unforeseen situations. We model this mechanism for norms regulating computer systems by representing the purpose of norms by social goals and by revising the constitutive rules defining the applicability of norms. We illustrate the interpretation mechanism by examples.
Dov M. Gabbay and Guido Governatori.
Dealing with label dependent deontic modalities. In Paul McNamara and Henry Prakken, editors, Norms, Logics and Information Systems. New Studies in Deontic Logic, pages 311-330. IOS Press, Amsterdam, 1998.
Abstract:In this paper, following Scott's advice, we argue that normative reasoning can be represented in a multi-setting framework; in particular in a multi-modal one, where modalities are indexed. Indexed modalities can model several aspects involved in normative reasoning. Systems are combined using Gabbay's fibring methodology which provides complete semantics that can be used to model a labelled tableau-like proofs system.
Jonathan Gelati, Guido Governatori, Antonino Rotolo, and Giovanni Sartor.
Declarative power, representation, and mandate: A formal analysis. In Trevor Bench-Capon, Aspassia Daskalopulu, and Radboud Winkels, editors, Legal Knowledge and Information Systems, number 89 in Frontiers in Artificial Intelligence and Applications, pages 41-52. IOS Press, Amsterdam, 2002.
Abstract:In this paper we provide a formal framework for developing the idea of normative co-ordination. We argue that this idea is based on the assumption that agents can achieve flexible co-ordination by conferring normative positions to other agents. These positions include duties, permissions, and powers. In particular, we introduce the idea of declarative power, which consists in the capacity of the power-holder of creating normative positions, involving other agents, simply by ``proclaiming'' such positions. In addition, we account also for the concepts of representation, consisting in the representative's capacity of acting in the name of his principal, and of mandate, which corresponds the mandatee's duty to act as the mandator has requested. Finally, we show how the above framework can be applied to the contract-net protocol.
Normative autonomy and normative co-ordination: Declarative power, representation, and mandate. Artificial Intelligence and Law, 12 (1-2), 53-81. 2004. Copyright © 2004 Springer. The original publication is available at www.springerlink.com
Abstract:In this paper we provide a formal analysis of the idea of normative co-ordination. We argue that this idea is based on the assumption that agents can achieve flexible co-ordination by conferring normative positions to other agents. These positions include duties, permissions, and powers. In particular, we explain the idea of declarative power, which consists in the capacity of the power-holder of creating normative positions, involving other agents, simply by ``proclaiming'' such positions. In addition, we account also for the concepts of representation, namely the representative's capacity of acting in the name of his principal, and of mandate, which is the mandatee's duty to act as the mandator has requested. Finally, we show how the framework can be applied to represent the contract-net protocol. Some brief remarks on future research and applications conclude this contribution.
Thomas F. Gordon, Guido Governatori, and Antonino Rotolo.
Rules and norms: Requirements for rule interchange languages in the legal domain. In Guido Governatori, John Hall, and Adrian Paschke, editors, Rule Representation, Interchange and Reasoning on the Web, number 5858 in LNCS, pages 282-296, Berlin, 5-7 November 2009. Springer, Copyright © 2009 Springer.
Abstract: In this survey paper we summarize the requirements for rule interchange languages for applications in the legal domain and use these requirements to evaluate RuleML, SBVR, SWRL and RIF. We also present the Legal Knowledge Interchange Format (LKIF), a new rule interchange format developed specifically for applications in the legal domain.
Guido Governatori.
Un modello formale per il ragionamento giuridico [A formal model for legal reasoning]. PhD thesis, CIRFID, University of Bologna, Bologna, 1997.
Ideality and subideality from a computational point of view. In Alberto Artosi, Manuel Atienza, and Hajime Yoshino, editors, From Practical Reason to Legal Computer Science. Legal Computer Science, volume Part II, pages 315-329. Clueb, Bologna, 1998.
Abstract:In this paper we suggest ways in which logic and law may usefully relate; and we present an analytic proof system dealing with the Jones-Pörn deontic logic of Ideality and Subideality, which offers some suggestions about how to embed legal systems in a label formalism.
Defeasible description logic. In Grigoris Antoniou and Harold Boley, editors, Rules and Rule Markup Languages for the Semantic Web: Third International Workshop, RuleML 2004, number 3323 in LNCS, pages 98-112, Berlin, 8 November 2004. Springer-Verlag, Copyright © 2004 Springer. The original publication is available at www.springerlink.com
Abstract:We propose to extend description logic with defeasible rules, and to use the inferential mechanism of defeasible logic to reason with description logic constructors.
Representing business contracts in RuleML. International Journal of Cooperative Information Systems, 14 no. 2-3, June-September 2005.
Abstract:This paper presents an approach for the specification and implementation of translating contracts from a human-oriented form into an executable representation for monitoring. This will be done in the setting of RuleML. The task of monitoring contract execution and performance requires a logical account of deontic and defeasible aspects of legal language; currently such aspects are not covered by RuleML; accordingly we show how to extend it to cover such notions. From its logical form, the contract will be thus transformed into a machine readable rule notation and eventually implemented as executable semantics via any mark-up languages depending on the client's preference, for contract monitoring purposes.
Law, logic and business processes. In Third International Workshop on Requirements Engineering and Law. IEEE, 2010, Copyright © 2010 IEEE.
Abstract: Since its inception one of the aims of legal informatics has been to provide tools to support and improve the day to day activities of legal and normative practice and a better understanding of legal reasoning. The internet revolution, where more and more daily activities are routinely performed with the support of ICT tools, offers new opportunities to legal informatics. We argue that the current technology is beginning to be mature enough to embrace the challenge of making intelligent ICT support widespread in the legal and normative domain. In this paper we examine a logical model to encode norms and we use the formalisation of relevant law and regulations for regulatory compliance for business processes.
A logic framework of normative-based contract management. In Satoshi Tojo, editor, Fourth International Workshop on Juris-informatics (JURISIN 2010), November 18-19 2010.
Abstract: In this paper an extended Defeasible Logic framework is presented to do the representation and reasoning work for the normative-based contract management. A simple case based on FIDIC is followed as the usage example. This paper is based on the idea that normative concepts and normative rules should play the decisive roles in the normative-based contract management. Those normative concepts and rules are based on the normative literals and operators like action, obligation, permission and violation. The normative reduction is based on the normative concepts, normative connections and normative rules, especially on the superiority relation over the defeasible rules.
On the relationship between Carneades and defeasible logic. In Tom van Engers, editor, Proceedings of the 13th International Conference on Artificial Intelligence and Law (ICAIL 2011). ACM Press, 2011. Copyright © 2011 ACM Press.
Abstract: We study the formal relationships between the inferential aspects of Carneades (a general argumentation framework) and Defeasible Logic. The outcome of the investigation is that the current proof standards proposed in the Carneades framework correspond to some variants of Defeasible Logic.
Guido Governatori, Marlon Dumas, Arthur H.M. ter Hofstede, and Phillipa Oaks.
A formal approach to protocols and strategies for (legal) negotiation. In Henry Prakken, editor, Proceedings of the 8th International Conference on Artificial Intelligence and Law, pages 168-177. IAAIL, ACM Press, 2001, Copyright © 2001 ACM.
Abstract:We propose a formal and executable framework for expressing protocols and strategies for automated (legal) negotiation. In this framework a party involved in a negotiation is represented through a software agent composed of four modules: (i) a communication module which manages the interaction with the other agents; (ii) a control module; (iii) a reasoning module specified as a defeasible theory; and (iv) a knowledge base which bridges the control and the reasoning modules, while keeping track of past decisions and interactions. The choice of defeasible logic is justified against a set of desirable criteria for negotiation automation languages. Moreover, the suitability of the framework is illustrated through two case studies.
Guido Governatori, Jonathan Gelati, Antonino Rotolo, and Giovanni Sartor.
Actions, institutions, powers. preliminary notes. In Gabriela Lindemann, Daniel Moldt, Mario Paolucci, and Bin Yu, editors, International Workshop on Regulated Agent-Based Social Systems: Theories and Applications (RASTA'02), volume 318 of Mitteilung, pages 131-147, 2002. Fachbereich Informatik, Universität Hamburg.
Abstract:In this paper we analyse some logical notions relevant for representing the dynamics of institutionalised organisations. In particular, some well-known action concepts introduced in the Kanger-Lindahl-Pörn logical theory of agency are discussed and integrated. Secondly, moving from the work of Jones and Sergot, a logical characterisation is provided of the ideas of institutional links, ``counts-as'' connections, and institutional facts. This approach is then enriched by a new modal operator $\mathit{proc}$, intended to account for the autonomous and decentralised creation of new institutional facts and normative positions within institutions.
Guido Governatori, Jörg Hoffmann, Shazia Sadiq, and Ingo Weber.
Detecting regulatory compliance for business process models through semantic annotations. In 4th International Workshop on Business Process Design, Milan, 1 September 2008.
Abstract: A given business process may face a large number of regulatory obligations the process may or may not comply with. Providing tools and techniques through which an evaluation of the compliance degree of a given process can be undertaken is seen as a key objective in emerging business process platforms. We address this problem through a diagnostic framework that provides the ability to assess the compliance gaps present in a given process. Checking whether a process is compliant with the rules involves enumerating all reachable states and is hence, in general, a hard search problem. The approach taken here allows us to provide useful diagnostic information in polynomial time. The approach is based on two underlying techniques. A conceptually faithful representation for regulatory obligations is firstly provided by a formal rule language based on a non-monotonic deontic logic of violations. Secondly, processes are formalized through semantic annotations that allow a logical state space to be created. The intersection of the two allows us to devise an efficient method to detect compliance gaps; the method guarantees to detect all obligations that will necessarily arise during execution, but that will not necessarily be fulfilled.
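The following toy sketch (an editorial illustration only; it is not the paper's annotation language, its deontic logic of violations, or its polynomial-time algorithm) conveys the basic idea of checking triggered obligations against an annotated execution trace and reporting the gaps.

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    name: str
    trigger: str   # literal that activates the obligation
    duty: str      # literal that fulfils it

def compliance_gaps(trace, obligations):
    """trace: list of sets of literals annotated at successive process states."""
    gaps = []
    for ob in obligations:
        for i, state in enumerate(trace):
            if ob.trigger in state:
                if not any(ob.duty in later for later in trace[i:]):
                    gaps.append((ob.name, i))
                break   # only the first activation is checked in this toy version
    return gaps

trace = [{"order_received"}, {"payment_taken"}, {"goods_shipped"}]
obligations = [Obligation("ship_after_payment", "payment_taken", "goods_shipped"),
               Obligation("send_invoice", "payment_taken", "invoice_sent")]
print(compliance_gaps(trace, obligations))
# [('send_invoice', 1)] -- the invoice obligation is triggered but never fulfilled
```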
Guido Governatori, Joris Hulstijn, Règis Riveret, and Antonino Rotolo.
On the representation of deadlines in a rental agreement. In Arno R. Lodder and Laurens Mommers, editors, Legal Knowledge and Information Systems, pages 167-168. IOS Press, Amsterdam, 2007.
Abstract: The paper provides a conceptual analysis of deadlines, represented in Temporal Modal Defeasible Logic. The typology is based on the following parameters: kind of deontic operator, maintenance or achievement, presence of explicit sanctions, and persistence after the deadline. The adequacy of the typology is validated against a case study of a rental agreement.
Guido Governatori, Joris Hulstijn, Régis Riveret, and Antonino Rotolo.
Characterising deadlines in temporal modal defeasible logic. In Mehmet A. Orgun and John Thornton, editors, 20th Australian Joint Conference on Artificial Intelligence, AI 2007, LNAI 4830, pages 486-496. Springer, 2007. Copyright © 2007 Springer.
Abstract: We provide a conceptual analysis of several kinds of deadlines, represented in Temporal Modal Defeasible Logic. The paper presents a typology of deadlines, based on the following parameters: deontic operator, maintenance or achievement, presence or absence of sanctions, and persistence after the deadline. The deadline types are illustrated by a set of examples.
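As a toy illustration of the achievement/maintenance distinction in this typology (my own sketch; the papers work in Temporal Modal Defeasible Logic, not in code), the two kinds of deadline obligation can be checked against a timed trace as follows: an achievement obligation needs its duty to hold at some instant up to the deadline, a maintenance obligation at every instant.

```python
def achieved_before(trace, duty, deadline):
    """Achievement: the duty holds at some instant t <= deadline."""
    return any(duty in trace.get(t, set()) for t in range(deadline + 1))

def maintained_until(trace, duty, deadline):
    """Maintenance: the duty holds at every instant t <= deadline."""
    return all(duty in trace.get(t, set()) for t in range(deadline + 1))

trace = {0: {"premises_insured"},
         1: {"premises_insured", "rent_paid"},
         2: {"premises_insured"},
         3: set()}

print(achieved_before(trace, "rent_paid", 2))          # True: paid at instant 1
print(maintained_until(trace, "premises_insured", 2))  # True: held at 0, 1 and 2
print(maintained_until(trace, "premises_insured", 3))  # False: cover lapses at 3
```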
Guido Governatori and Renato Iannella.
Modelling and reasoning languages for social networks policies. In IEEE International Enterprise Distributed Object Computing Conference (EDOC 2009), pages 193-200. IEEE, 2009, Copyright © 2009 IEEE.
Abstract: Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.
A modelling and reasoning framework for social networks policies. Enterprise Information Systems, 2010. Copyright © 2010 Taylor & Francis.
Guido Governatori, Alessio Lomuscio, and Marek Sergot.
A tableaux system for deontic interpreted systems. In Tamás D. Gedeon and Lance Chun Che Fung, editors, AI 2003: Advances in Artificial Intelligence, volume 2903 of LNAI, pages 339-351, Springer-Verlag, Berlin, 2003. Copyright © 2003 Springer-Verlag.
Abstract:We develop a labelled tableaux system for the modal logic $KD45^{i-j}_n$ extended with epistemic notions. This logic characterises a particular type of interpreted systems used to represent and reason about states of correct and incorrect functioning behaviour of the agents in a system, and of the system as a whole. The resulting tableaux system provides a simple decision procedure for the logic. We discuss these issues and we illustrate them with the help of simple examples.
Guido Governatori and Zoran Milosevic.
Dealing with contract violations: formalism and domain specific language. Proceedings of EDOC 2005. IEEE Press, 2005, pp. 46-57. Copyright © 2005 IEEE.
Abstract:This paper presents a formal system for reasoning about violations of obligations in contracts. The system is based on the formalism for the representation of contrary-to-duty obligations. These are the obligations that take place when other obligations are violated as typically applied to penalties in contracts. The paper shows how this formalism can be mapped onto the key policy concepts of a contract specification language. This language, called Business Contract Language (BCL) was previously developed to express contract conditions of relevance for run time contract monitoring. The aim of this mapping is to establish a formal underpinning for this key subset of BCL.
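For intuition, here is a minimal sketch of the reparation-chain reading of contrary-to-duty obligations (an editorial toy encoding, not the formalism of the paper or BCL itself): in a chain O(a) ⊗ O(b) ⊗ O(c), a is obligatory; if a is violated, b becomes obligatory as reparation, and so on, with a breach only when every element of the chain is violated.

```python
def evaluate_ctd_chain(chain, facts):
    """chain: literals, each obligatory as reparation of the previous one.
    facts: set of literals that actually hold. Toy illustration only."""
    violated = []
    for duty in chain:
        if duty in facts:   # fulfilled here; earlier violations are repaired
            status = "complied (with reparations)" if violated else "complied"
            return status, violated
        violated.append(duty)
    return "breached", violated

# O(pay_on_time) (x) O(pay_penalty) (x) O(notify_court)
chain = ["pay_on_time", "pay_penalty", "notify_court"]

print(evaluate_ctd_chain(chain, {"pay_on_time"}))   # ('complied', [])
print(evaluate_ctd_chain(chain, {"pay_penalty"}))   # ('complied (with reparations)', ['pay_on_time'])
print(evaluate_ctd_chain(chain, {"late_excuse"}))   # ('breached', all three duties violated)
```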
Guido Governatori, and Zoran Milosevic
An Approach for Validating BCL Contract Specifications. In Claudio Bartolini, Guido Governatori, and Zoran Milosevic (eds). Proceedings of the 2nd EDOC Workshop on Contract Architectures and Languages (CoALa 2005). Enschede, NL, 20 September 2005. IEEE Press.
Abstract:We continue the study, started in [5], on the formal relationships between a domain specific contract language (BCL) and the logic of violation (FCL) proposed in [6,7]. We discuss the use of logical methods for the representation and analysis of business contracts. The proposed analysis is based on the notions of normal and canonical forms of contracts expressed in FCL. Finally we present a mapping from FCL to BCL that can be used to provide an executable model of a formal representation of a contract.
A Formal Analysis of a Business Contract Language. International Journal of Cooperative Information Systems 15, in print. Copyright © 2006 World Scientific Press.
Abstract: This paper presents a formal system for reasoning about violations of obligations in contracts. The system is based on the formalism for the representation of contrary-to-duty obligations. These are the obligations that take place when other obligations are violated as typically applied to penalties in contracts. The paper shows how this formalism can be mapped onto the key policy concepts of a contract specification language, called Business Contract Language (BCL), previously developed to express contract conditions for run time contract monitoring. The aim of this mapping is to establish a formal underpinning for this key subset of BCL.
Guido Governatori, Zoran Milosevic, and Shazia Sadiq
Compliance checking between business processes and business contracts. 10th International Enterprise Distributed Object Computing Conference (EDOC 2006). IEEE Press, 2006, pp. 221-232. Copyright © 2006 IEEE.
Abstract: It is a typical scenario that many organisations have their business processes specified independently of their business contracts. This is because of the lack of guidelines and tools that facilitate derivation of processes from contracts, but also because of the traditional mindset of treating contracts separately from business processes. This paper provides a solution to one specific problem that arises from this situation, namely the lack of mechanisms to check whether business processes are compliant with business contracts. The central part of the paper is a logic-based formalism for describing both the semantics of contracts and the semantics of compliance checking procedures.
Guido Governatori, Vineet Padmanabhan, Antonino Rotolo, and Abdul Sattar.
A defeasible logic for modelling policy-based intentions and motivational attitudes. Logic Journal of the IGPL, 17(3), 2009. Copyright © 2009 Oxford University Press.
Abstract: In this paper we show how defeasible logic could formally account for the non-monotonic properties involved in motivational attitudes like intention and obligation. Usually, normal modal operators are used to represent such attitudes wherein classical logical consequence and the rule of necessitation comes into play i.e., $\vdash A / \vdash \Box A$, that is from $\vdash A$ derive $\vdash\Box A$. This means that such formalisms are affected by the Logical Omniscience problem. We show that policy-based intentions exhibit non-monotonic behaviour which could be captured through a non-monotonic system like defeasible logic. To this end we outline a defeasible logic of intention that specifies how modalities can be introduced and manipulated in a non-monotonic setting without giving rise to the problem of logical omniscience. In a similar way we show how to add deontic modalities defeasibly and how to integrate them with other motivational attitudes like beliefs and goals. Finally we show that the basic aspect of the BOID architecture is captured by this extended framework.
Guido Governatori, Monica Palmirani, Régis Riveret, Antonino Rotolo and Giovanni Sartor.
Normative Modifications in Defeasible Logic. In Marie-Francine Moens, editor, Jurix'05: The Eighteenth Annual Conference, in print. IOS Press, Amsterdam 2005.
Abstract: This paper proposes a framework based on Defeasible Logic (DL) to reason about normative modifications. We show how to express them in DL and how the logic deals with conflicts between temporalised normative modifications. Some comments will be given with regard to the phenomenon of retroactivity.
Guido Governatori, and Duy Pham Hoang
DR-CONTRACT: An Architecture for e-Contracts in Defeasible Logic. In Claudio Bartolini, Guido Governatori, and Zoran Milosevic (eds). Proceedings of the 2nd EDOC Workshop on Contract Architectures and Languages (CoALa 2005). Enschede, NL, 20 September 2005. IEEE Press.
Abstract:In this paper we present an architecture to represent and reason on e-Contracts based on the DR-device architecture supplemented with a deontic defeasible logic of violation. We motivate the choice for the logic and we show how to extend RuleML to capture the notions relevant to describe e-contracts for a monitoring perspective in Defeasible Logic.
Guido Governatori and Duy Hoang Pham
A Semantic Web Based Architecture for e-Contracts in Defeasible Logic. In A. Adi, S. Stoutenberg and S. Tabet, editors, Rules and Rule Markup Languages for the Semantic Web. RuleML 2005, pages 145-159. LNCS 3791, Springer, Berlin, 2005. The original publication is available at www.springerlink.com.
Abstract: We introduce the DR-CONTRACT architecture to represent and reason on e-Contracts. The architecture extends the DR-device architecture by a deontic defeasible logic of violation. We motivate the choice for the logic and we show how to extend RuleML to capture the notions relevant to describe e-contracts for a monitoring perspective in Defeasible Logic.
Guido Governatori and Duy Hoang Pham.
DR-CONTRACT: An Architecture for e-Contracts in Defeasible Logic. International Journal of Business Process Integration and Management, 5(4), 2009.
Abstract: We introduce the DR-CONTRACT architecture to represent and reason on e-Contracts. The architecture extends the DR-device architecture by a deontic defeasible logic of violation. We motivate the choice for the logic and we show how to extend RuleML to capture the notions relevant to describe e-contracts for a monitoring perspective in Defeasible Logic.
Guido Governatori and Antonino Rotolo.
Logic of Violations: A Gentzen System for Reasoning with Contrary-To-Duty Obligations. Australasian Journal of Logic 4: 193-215, 2006.
Abstract: In this paper we present a Gentzen system for reasoning with contrary-to-duty obligations. The intuition behind the system is that a contrary-to-duty is a special kind of normative exception. The logical machinery to formalise this idea is taken from substructural logics and it is based on the definition of a new non-classical connective capturing the notion of reparational obligation. Then the system is tested against well-known contrary-to-duty paradoxes.
A Gentzen system for reasoning with contrary-to-duty obligations: a preliminary study. In Andrew J.I. Jones and John Horty, editors, Δeon'02, pages 97-116, London, May 2002. Imperial College.
Abstract: In this paper we present a Gentzen system for reasoning with contrary-to-duty obligations. The intuition behind the system is that a contrary-to-duty is a special kind of normative exception. The logical machinery to formalize this idea is taken from substructural logics and it is based on the definition of a new non-classical connective capturing the notion of reparational obligation. Then the system is tested against well-known contrary-to-duty paradoxes.
A computational framework for non-monotonic agency, institutionalised power and multi-agent systems. In Daniéle Bourcier, editor, Legal Knowledge and Information Systems, volume 106 of Frontiers in Artificial Intelligence and Applications, pages 151-152, IOS Press, Amsterdam, 2003.
Defeasible logic: Agency and obligation. In Alessio Lomuscio and Donald Nute, editors, Deontic Logic in Computer Science, number 3065 in LNAI, pages 114-128, Springer-Verlag, Berlin, 2004. Copyright © 2004 Springer.
Abstract: We propose a computationally oriented non-monotonic multi-modal logic arising from the combination of agency, intention and obligation. We argue about the defeasible nature of these notions and then we show how to represent and reason with them in the setting of defeasible logic.
Modelling contracts using RuleML. In Thomas Gordon, editor, Legal Knowledge and Information Systems, volume 120 of Frontiers in Artificial Intelligence and Applications, pages 141-150, Amsterdam, 2004. IOS Press.
Abstract: This paper presents an approach for the specification and implementation of e-contracts for Web monitoring. This is done in the setting of RuleML. We argue that monitoring contract execution also requires a logical account of deontic concepts and of violations. Accordingly, RuleML is extended to cover these aspects.
A computational framework for institutional agency. Artificial Intelligence and Law, 16(1): 25-52, 2008. Copyright © 2008 Springer.
Abstract: This paper provides a computational framework, based on Defeasible Logic, to capture some aspects of institutional agency. Our background is Kanger-Lindahl-Pörn account of organised interaction, which describes this interaction within a multi-modal logical setting. This work focuses in particular on the notions of counts-as link and on those of attempt and of personal and direct action to realise states of affairs. We show how standard Defeasible Logic can be extended to represent these concepts: the resulting system preserves some basic properties commonly attributed to them. In addition, the framework enjoys nice computational properties, as it turns out that the extension of any theory can be computed in time linear to the size of the theory itself.
An algorithm for business process compliance. In Enrico Francesconi, Giovani Sartor, and Daniela Tiscornia, editors, Legal Knowledge and Information Systems (Jurix 2008), Frontiers in Artificial Intelligence and Applications 189, pages 186-191. IOS Press, 2008.
Abstract: This paper provides a novel mechanism to check whether business processes are compliant with business rules regulating them. The key point is that compliance is a relationship between two sets of specifications: the specifications for executing a business process and the specifications regulating it.
BIO logical agents: Norms, beliefs, intentions in defeasible logic. Journal of Autonomous Agents and Multi Agent Systems, 2008. Copyright © 2008 Springer.
Abstract: In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue, in particular, that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
Changing legal systems: Abrogation and annulment. Part I: Revision of defeasible theories. In Ron van der Meyden and Leon van der Torre, editors, 9th International Conference on Deontic Logic in Computer Science (DEON2008), Lecture Notes in Computer Science. Springer, 2008. Copyright © 2008 Springer.
Abstract: In this paper we investigate how to model legal abrogation and annulment in Defeasible Logic. We examine some options that embed in this setting, and similar rule-based systems, ideas from belief and base revision. In both cases, our conclusion is negative, which suggests to adopt a different logical model.
Changing legal systems: Abrogation and annulment. Part II: Temporalised defeasible logic. In Guido Boella, Harko Verhagen, and Muindhar Singh, editors, Proceedings of Normative Multi Agent Systems (NorMAS 2008), Luxembourg, 15-16 July 2008.
Abstract: In this paper we propose a temporal extension of Defeasible Logic to model legal modifications, such as abrogation and annulment. Hence, this framework overcomes the difficulty, discussed in Part I, of capturing these modification types using belief and base revision.
Changing legal systems: legal abrogations and annulments in defeasible logic. Logic Journal of IGPL, 18 no. 1 pp. 157-194, 2009. Copyright © 2010 Oxford University Press.
Abstract: In this paper we investigate how to represent and reason about legal abrogations and annulments in Defeasible Logic. We examine some options that embed in this setting, and in similar rule-based systems, ideas from belief and base revision. In both cases, our conclusion is negative, which suggests to adopt a different logical model. This model expresses temporal aspects of legal rules, and distinguishes between two main timelines, one internal to a given temporal version of the legal system, and another relative to how the legal system evolves over time. Accordingly, we propose a temporal extension of Defeasible Logic suitable to express this model and to capture abrogation and annulment. We show that the proposed framework overcomes the difficulties discussed in regard to belief and base revision, and is sufficiently flexible to represent many of the subtleties characterizing legal abrogations and annulments.
How do agents comply with norms?. In Guido Boella, Pablo Noriega, Gabriella Pigozzi, and Harko Verhagen, editors, Normative Multi-Agent Systems, number 09121 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2009. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Germany.
Abstract: The import of the notion of institution in the design of MASs requires to develop formal and efficient methods for modeling the interaction between agents' behaviour and normative systems. This paper discusses how to check whether agents' behaviour is compliant with the rules regulating them. The key point of our approach is that compliance is a relationship between two sets of specifications: the specifications for executing a process and the specifications regulating it. We propose a logic-based formalism for describing both the semantics of normative specifications and the semantics of compliance checking procedures.
How do agents comply with norms?. In IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technologies, 2009. WI-IAT '09. , volume 3, pages 488-491. IEEE, 2009, Copyright © 2009 IEEE.
Abstract: The import of the notion of institution in the design of MASs requires to develop formal and efficient methods for modeling the interaction between agents' behaviour and normative systems. This paper discusses how to check whether agents' behaviour complies with the rules regulating them. The key point of our approach is that compliance is a relationship between two sets of specifications: the specifications for executing a process and the specifications regulating it. We propose a formalism for describing both the semantics of normative specifications and the semantics of compliance checking procedures.
A conceptually rich model of business process compliance. In Sebastian Link and Aditya Ghose, editors, 7th Asia-Pacific Conference on Conceptual Modelling (APCCM 2010), CRPIT. ACS, 2010.
Abstract: In this paper we extend the preliminary work developed elsewhere and investigate how to characterise many aspects of the compliance problem in business process modeling. We first define a formal and conceptually rich language able to represent, and reason about, chains of reparational obligations of various types. Second, we devise a mechanism for normalising a system of legal norms. Third, we specify a suitable language for business process modeling able to automate and optimise business procedures and to embed normative constraints. Fourth, we develop an algorithm for compliance checking and discuss some computational issues regarding the possibility of checking compliance runtime or of enforcing it at design time.
On the complexity of temporal defeasible logic. In Thomas Meyer and Eugenia Ternovska, editors, 13th International Workshop on Non-Monotonic Reasoning (NMR 2010), CEUR Workshop Proceedings, 2010.
Abstract: In this paper we investigate the complexity of temporal defeasible logic, and we propose an efficient algorithm to compute the extension of a temporalised defeasible theory. We motivate the logic showing how it can be used to model deadlines.
Norm compliance in business process modeling. In Mike Dean, John Hall, Antonino Rotolo, and Said Tabet, editors, RuleML 2010: 4th International Web Rule Symposium, number 6403 in LNCS, pages 194-209, Berlin, 2010. Springer. Copyright © 2010 Springer.
Abstract: We investigate the concept of norm compliance in business process modeling. In particular we propose an extension of Formal Contract Logic (FCL), a combination of defeasible logic and a logic of violation, with a richer deontic language capable of capturing many different facets of normative requirements. The resulting logic, called Process Compliance Logic (PCL), is able to capture both semantic compliance and structural compliance. This paper focuses on structural compliance, that is, we show how PCL can capture obligations concerning the structure of a business process.
Guido Governatori and Antonino Rotolo.
Justice delayed is justice denied: Logics for a temporal account of reparations and legal compliance. In João Leite, Paolo Torroni, Thomas Ågotnes, Guido Boella, and Leon van der Torre, editors, CLIMA XII, 12th International Workshop on Computational Logic and Multi-Agent Systems, LNCS. Springer, 2011. Copyright © 2011 Springer.
Abstract: In this paper we extend the logic of violation proposed by Governatori and Rotolo with time; more precisely, we temporalise that logic. The resulting system allows us to capture many subtleties of the concept of legal compliance. In particular, the formal characterisation of compliance can handle different types of legal obligation and different temporal constraints over them. The logic is also able to represent, and reason about, chains of reparative obligations, since in many cases the fulfillment of these types of obligation still amounts to legally acceptable situations.
Guido Governatori, Antonino Rotolo and Vineet Padmanabhan.
The Cost of Social Agents. In 5th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS06), ACM Press, 2006. Copyright © 2006 ACM.
Abstract: In this paper we follow the BOID (Belief, Obligation, Intention, Desire) architecture to describe agents and agent types in Defeasible Logic. We argue that the introduction of obligations can provide a new reading of the concepts of intention and intentionality. Then we examine the notion of social agent (i.e., an agent where obligations prevail over intentions) and discuss some computational and philosophical issues related to it. We show that the notion of social agent either requires more complex computations or has some philosophical drawbacks.
Guido Governatori, Antonino Rotolo, Régis Riveret, Monica Palmirani and Giovanni Sartor.
Variations of Temporal Defeasible Logic for Modelling Norm Modifications. In Radboud Winkels, editor, Proceedings of 11th International Conference on Artificial Intelligence and Law, 155-159. ACM Press, New York, 2007. Copyright © 2007 ACM
Abstract: This paper proposes some variants of Temporal Defeasible Logic (TDL) to reason about normative modifications. These variants make it possible to differentiate cases in which, for example, modifications at some time change legal rules but their conclusions persist afterwards from cases where also their conclusions are blocked.
Guido Governatori, Antonino Rotolo, and Rossella Rubino.
Implementing temporal defeasible logic for modeling legal reasoning. In 3rd Juris-Informatics Workshop (Jurisin 2009), LNAI, Berlin, 2010. Springer, Copyright © 2010 Springer.
Abstract: In this paper we briefly present an efficient implementation of temporal defeasible logic, and we argue that it can be used to efficiently capture the legal concepts of persistence, retroactivity and periodicity. In particular, we illustrate how the system works with a real life example of a regulation.
Guido Governatori, Antonino Rotolo, and Giovanni Sartor.
Temporalised normative positions in defeasible logic. In Anne Gardner, editor, Proceedings of the 10th International Conference on Artificial Intelligence and Law, pages 25-34. ACM Press, 6-10 June 2005. Copyright © 2005 ACM.
Abstract: We propose a computationally oriented non-monotonic multi-modal logic arising from the combination of temporalised agency and temporalised normative positions. We argue about the defeasible nature of these notions and then we show how to represent and reason with them in the setting of Defeasible Logic.
Guido Governatori and Shazia Sadiq.
The journey to business process compliance. In Jorge Cardoso and Wil van der Aalst, editors, Handbook of Research on BPM, IGI Global, 2009.
Abstract: It is a typical scenario that many organisations have their business processes specified independently of their business obligations (which include contractual obligations to business partners, as well as obligations a business has to fulfil against regulations and industry standards). This is because of the lack of guidelines and tools that facilitate derivation of processes from contracts but also because of the traditional mindset of treating contracts separately from business processes. This chapter will provide a solution to one specific problem that arises from this situation, namely the lack of mechanisms to check whether business processes are compliant with business contracts. The chapter begins by defining the space for business process compliance and the eco-system for ensuring that processes are compliant. The key point is that compliance is a relationship between two sets of specifications: the specifications for executing a business process and the specifications regulating a business. The central part of the chapter focuses on a logic based formalism for describing both the semantics of normative specifications and the semantics of compliance checking procedures.
Guido Governatori and Giovanni Sartor.
Burdens of proof in monological argumentation. In Radboud Winkels, editor, Legal Knowledge and Information Systems JURIX 2010: The Twenty-Third Annual Conference, Frontiers in Artificial Intelligence and Applications, Amsterdam, 2010. IOS Press.
Abstract: We shall argue that burdens of proof are relevant also to monological reasoning, i.e., for deriving the conclusions of a knowledge base allowing for conflicting arguments. Reasoning with burdens of proof can provide a useful extension of current argument-based non-monotonic logics, or at least a different perspective on them. Firstly we shall provide an objective characterisation of burdens of proof, assuming that burdens concern rule antecedents (literals in the body of rules), rather than agents. Secondly, we shall analyse the conditions for a burden to be satisfied, by considering credulous or skeptical derivability of the concerned antecedent or of its complement. Finally, we shall develop a method for developing inferences out of a knowledge base merging rules and proof burdens in the framework of defeasible logic.
Guido Governatori and Andrew Stranieri.
Towards the application of association rules for defeasible rules discovery. In Bart Verheij, Arno Lodder, Ronald P. Loui, and Antoniette J. Muntjerwerff, editors, Legal Knowledge and Information Systems, pages 63-75, Amsterdam, 2001. JURIX, IOS Press.
Abstract: In this paper we investigate the feasibility of Knowledge Discovery from Databases (KDD) in order to facilitate the discovery of defeasible rules that represent the ratio decidendi underpinning legal decision making. Moreover we will argue in favour of Defeasible Logic as the appropriate formal system in which the extracted principles should be encoded.
Guido Governatori and Duy Hoang Pham.
A defeasible logic for modelling policy-based intentions and motivational attitudes. International Journal of Business Process Integration and Management, 5(4), 2009.
Guido Governatori, Subhasis Thakur, and Duy Hoang Pham.
A compliance model of trust. In Enrico Francesconi, Giovani Sartor, and Daniela Tiscornia, editors, Legal Knowledge and Information Systems (Jurix 2008), Frontiers in Artificial Intelligence and Applications 189, pages 118-127. IOS Press, 2008.
Abstract: We present a past-interaction trust model based on compliance with expected behaviours.
Benjamin Johnston and Guido Governatori.
Induction of defeasible logic theories in the legal domain. In Giovanni Sartor, editor, Proceedings of the 9th International Conference on Artificial Intelligence and Law, pages 204-213. IAAIL, ACM Press, 2003. Copyright © 2003 ACM.
Abstract: The market for intelligent legal information systems remains relatively untapped, and while this might be interpreted as an indication that it is simply impossible to produce a system that satisfies the needs of the legal community, an analysis of previous attempts at producing such systems reveals a common set of deficiencies that in part explain why there have been no overwhelming successes to date. Defeasible logic, a logic with proven successes at representing legal knowledge, seems to overcome many of these deficiencies and is a promising approach to representing legal knowledge. Unfortunately, an immediate application of technology to the challenges in this domain is an expensive and computationally intractable problem. So, in light of the benefits, we seek to find a practical algorithm that uses heuristics to discover an approximate solution. As an outcome of this work, we have developed an algorithm that integrates defeasible logic into a decision support system by automatically deriving its knowledge from databases of precedents. Experiments with the new algorithm are very promising, delivering results comparable to and exceeding other approaches.
Vineet Padmanabhan, Guido Governatori, Shazia Sadiq, Robert Colomb and Antonino Rotolo.
Process Modelling: The Deontic Way. In Markus Stumptner, Sven Hartmann and Yasushi Kiyoki, editors, Database Technology 2006, number 53 in Conference Research and Practice of Information Technology. Australian Computer Science Association, ACS, 16-19 January 2006. Copyright © 2006 ACS.
Abstract: Current enterprise systems rely heavily on the modelling and enactment of business processes. One of the key criteria for a business process is to represent not just the behaviours of the participants but also how the contractual relationships among them evolve over the course of an interaction. In this paper we provide a framework in which one can define policies/business rules using deontic assignments to represent the contractual relationships. To achieve this end we use a combination of deontic/normative concepts like proclamation, directed obligation and direct action to account for a deontic theory of commitment which in turn can be used to model business processes in their organisational settings. In this way we view a business process as a social interaction process for the purpose of doing business. Further, we show how to extend the i* framework, a well known organisational modelling technique, so as to accommodate our notion of deontic dependency.
Monica Palmirani, Guido Governatori, and Giuseppe Contissa.
Temporal dimensions in rules modelling. In Radboud Winkels, editor, Legal Knowledge and Information Systems JURIX 2010: The Twenty-Third Annual Conference, Frontiers in Artificial Intelligence and Applications, Amsterdam, 2010. IOS Press.
Abstract: Typically legal reasoning involves multiple temporal dimensions. The contribution of this work is to extend LKIF-rules (LKIF is a proposed mark-up language designed for legal documents and legal knowledge in ESTRELLA Project [3]) with temporal dimensions. We propose an XML-schema to model the various aspects of the temporal dimensions in legal domain, and we discuss the design choices. We illustrate the use of the temporal dimensions in rules with the help of real life examples.
Monica Palmirani, Guido Governatori, and Giuseppe Contissa.
Modelling temporal legal rules. In Tom van Engers, editor, Proceedings of the 13th International Conference on Artificial Intelligence and Law (ICAIL 2011). ACM Press, 2011. Copyright © 2011 ACM Press.
Abstract: Legal reasoning involves multiple temporal dimensions but the existing state of the art of legal representation languages does not allow us to easily combine expressiveness, performance and legal reasoning requirements. Moreover we also aim at the combination of legal temporal reasoning with the defeasible logic approach, maintaining a computable complexity. The contribution of this work is to extend LKIF-rules with temporal dimensions and defeasible tools, extending our previous work.
Régis Riveret, Antonino Rotolo and Guido Governatori.
Interaction between Normative Systems and Cognitive agents in Temporal Modal Defeasible Logic. In Guido Boella, Leon van der Torre and Harko Verhagen, editors, Normative Multi-agent Systems. Dagstuhl Seminar Proceedings 7122. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany, Dagstuhl, Germany, 2007.
Abstract: While some recent frameworks on cognitive agents addressed the combination of mental attitudes with deontic concepts, they commonly ignore the representation of time. We propose in this paper a variant of Temporal Modal Defeasible Logic to deal in particular with temporal intervals.
Bram Roth, Régis Riveret, Antonino Rotolo and Guido Governatori.
Strategic Argumentation: A Game Theoretical Investigation. In Radboud Winkels, editor, Proceedings of 11th International Conference on Artificial Intelligence and Law, pp. 81-90. ACM Press, New York, 2007. Copyright © 2007 ACM
Abstract: Argumentation is modelled as a game where the payoffs are measured in terms of the probability that the claimed conclusion is, or is not, defeasibly provable, given a history of arguments that have actually been exchanged, and given the probability of the factual premises. The probability of a conclusion is calculated using a standard variant of Defeasible Logic, in combination with standard probability calculus. It is a new element of the present approach that the exchange of arguments is analysed with game theoretical tools, yielding a prescriptive and to some extent even predictive account of the actual course of play. A brief comparison with existing argument-based dialogue approaches confirms that such a prescriptive account of the actual argumentation has been almost lacking in the approaches proposed so far.
Miao Wang and Guido Governatori.
A Logic Framework of Normative-based Contract Management. Formal Methods in Electronic Commerce 2007. Stanford University, Palo Alto, CA. June 4, 2007.
Abstract: We explore the feasibility of the computationally oriented institutional agency framework proposed by Governatori and Rotolo by testing it against an industrial-strength scenario. In particular we show how to encode in defeasible logic the dispute resolution policy described in Article 67 of FIDIC.
How many X-rays does a light bulb emit?
I read somewhere that most things1 emit all kinds of radiation, just very few of some kinds. So that made me wonder whether there is a formula to calculate how many X-rays a 100 W incandescent light bulb would emit, for example in photons per second. For example, we already know that it emits infrared and visible light.
I find it hard to describe what I have tried. I searched on the internet for a formula, but couldn't find it. Yet I thought this was an interesting question, so I posted it here.
1 Black holes don't emit any radiation except for Hawking radiation, if I get it right.
wythagoras
I was about to rant about wrong grammar for using "many" instead of "much" to describe intensity of radiation until I realized that photons are technically countable.
– slebetman
@annav, is there really any upper energy limit to what we can call "X-rays"? I always thought that "X-ray" referred to high-energy photons that emanate from electron interactions, and "Gamma Ray" referred to high-energy photons that emanate from atomic nuclei. I have worked with medical machines generating photons with energies as high as 25 MeV---way higher than most gammas---and the manuals always said "X-ray".
– Solomon Slow
About 25 bursts of gamma rays per year due to natural alpha decay of Tungsten producing secondary gamma rays.
– Count Iblis
@ErikE Note that in some dialects (specifically in the UK) hundred sounds like 'undred, and the decision to use 'a' or 'an' depends on whether the initial sound is a vowel or consonant, so this is a rare case where the dialect affects orthography. I have seen a similar case with "a historical" / "an historical", depending on whether you voice the initial h.
– Mario Carneiro
@MarioCarneiro I see! I was thinking only "one hundred". I would never omit the "one".
– ErikE
The formula you want is called Planck's Law. Copying Wikipedia:
The spectral radiance of a body, $B_{\nu}$, describes the amount of energy it gives off as radiation of different frequencies. It is measured in terms of the power emitted per unit area of the body, per unit solid angle that the radiation is measured over, per unit frequency.
$$ B_\nu(\nu, T) = \frac{ 2 h \nu^3}{c^2} \frac{1}{e^\frac{h\nu}{k_\mathrm{B}T} - 1} $$
Now to work out the total power emitted per unit area per solid angle by our lightbulb in the X-ray part of the EM spectrum we can integrate this to infinity:
$$P_{\mathrm{X-ray}} = \int_{\nu_{min}}^{\infty} \mathrm{B}_{\nu}d\nu, $$
where $\nu_{min}$ is where we (somewhat arbitrarily) choose the lowest frequency photon that we would call an X-ray photon. Let's say that a photon with a 10 nm wavelength is our limit. Let's also say that a 100 W bulb has a surface temperature of 3,700 K, roughly the melting temperature of tungsten. This is a very generous upper bound - it seems like a typical number might be 2,500 K.
We can simplify this to:
$$ P_{\mathrm{X-ray}} = 2\frac{k^4T^4}{h^3c^2} \sum_{n=1}^{\infty} \int_{x_{min}}^{\infty}x^3e^{-nx}dx, $$
where $x = \frac{h\nu}{kT}$. wythagoras points out we can express this in terms of the incomplete gamma function, to get
$$ 2\frac{k^4T^4}{h^3c^2}\sum_{n=1}^{\infty}\frac{1}{n^4} \Gamma(4, n\, x_{min}) $$
Plugging in some numbers reveals that the n = 1 term dominates the other terms, so we can drop higher n terms, resulting in
$$ P \approx 10^{-154} \ \mathrm{Wm^{-2}}. $$
This is tiny. Over the course of the lifetime of the universe you can expect on average no X-Ray photons to be emitted by the filament.
More exact treatments might get you more exact numbers (we've ignored the surface area of the filament and the solid angle factor for instance), but the order of magnitude is very telling - there are no X-ray photons emitted by a standard light bulb.
Chris Cundy
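For readers who want to reproduce this estimate numerically, the short Python sketch below evaluates just the dominant n = 1 term derived in the answer above, using the 3,700 K temperature and the 10 nm cutoff stated there. The variable names, the use of SciPy's physical constants, and the rough conversion to a photon rate are additions not present in the original answer, and the same surface-area and solid-angle factors are ignored. With these inputs the result lands around $10^{-156}$ to $10^{-155}\ \mathrm{W\,m^{-2}\,sr^{-1}}$, a value of the same utterly negligible magnitude as the $10^{-154}$ quoted above (the exact exponent depends on where the cutoff and temperature are set).

import numpy as np
from scipy.constants import h, c, k

T = 3700.0                 # filament temperature assumed above [K]
lam_cut = 10e-9            # assumed X-ray cutoff wavelength: 10 nm [m]

x_min = h * c / (lam_cut * k * T)          # dimensionless h*nu_min/(k*T), roughly 389 here

# n = 1 term of the series above, with Gamma(4, x) = exp(-x)*(x**3 + 3*x**2 + 6*x + 6)
prefactor = 2 * k**4 * T**4 / (h**3 * c**2)
gamma_4 = np.exp(-x_min) * (x_min**3 + 3 * x_min**2 + 6 * x_min + 6)
P_xray = prefactor * gamma_4               # power per unit area per steradian [W m^-2 sr^-1]

photon_energy = h * c / lam_cut            # energy of a photon at the 10 nm cutoff [J]
print(P_xray)                              # of order 1e-156 to 1e-155
print(P_xray / photon_energy)              # rough photon rate per m^2 per sr, still effectively zero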
It's a great answer, thank you. But the number is much, much lower than I had expected. By the way, I know a way to solve the integral and the series, if you would like to know how I can write an answer.
– wythagoras
By all means! I'd be really interested to see what you think. griffin175's answer physics.stackexchange.com/a/200883/81404 seems to roughly agree that there are basically no photons.
– Chris Cundy
I'm having trouble with the closed form due to a stupid mistake. Doing the substitution $u=nx$, we get $$ \sum_{n=1}^{\infty} \int_{\nu_{min} \cdot n}^{\infty} \frac{1}{n}\left(\frac{u}{n}\right)^3e^{-u}\mathrm{d}u$$ $$ \sum_{n=1}^{\infty} \frac{1}{n^4} \Gamma(4,\nu_{min}\cdot n)$$ where $\Gamma$ is the upper incomplete gamma function. But even if $\Gamma(4,\nu_{min})$ is extremely small, rather something in the order of $e^{-\nu_{min}}\nu_{min}^3$, and $\nu_{min} = 3 \times 10^{16}$ if I didn't misunderstand you.
Worth saying that Tungsten melts at 3695K. Assuming that's where you got the upper temperature bound from.
– OrangeDog
Why did you say "we can simply this" and then make another equation that's like twice as big? Man, math is crazy.
– corsiKa
The wavelengths of light emitted can be calculated using Planck's law and the temperature of the object. For your average 100 W incandescent light bulb, the filament is 2823 kelvin according to Google.
The spectral radiance, $B$, is equal to $$\frac{1.2\cdot10^{52}}{\mathrm{wavelength}^{5}\cdot \left(e^{\frac{1.99\cdot10^{43}}{\mathrm{wavelength}\cdot4\cdot10^{26}}}-1\right)}$$
Math to solve for spectral radiance is hard, so this online calculator will do all the work. X-rays are between 0.01 nm and 10 nm. The total radiance at 10 nm is $2.7\cdot10^{-187}$ photons/s/m²/sr/µm. That's so unbelievably small that it would take a very long time for that bulb to emit an X-ray photon. The calculator won't give the spectral radiance of the smaller wavelength X-rays, so we'll just use the biggest X-rays.
In order to figure out how many photons per second are emitted you would need to know the surface area of the filament. It's a tiny metal string, so that would be hard to find out, but if you really want to, break open a bulb and measure its length and diameter with a caliper. Estimate surface area using the surface area of a cylinder formula A=πdh. Forget the ends, they're too small to bother with.
If you don't want to go through the trouble of breaking a bulb, make a wild guesstimate: 0.6 m length and $5\cdot10^{-4}$ m diameter, being generous, gives an area of about 0.001 m². So from $2.7\cdot10^{-187}$ photons/s/m²/sr/µm, with the given surface area, we get $2.7\cdot10^{-190}$ photons/s/sr/µm. That's 8.5 photons every $10^{186}$ years. Maybe if you watch 100,000,000,000,000 light bulbs you might catch an X-ray within your lifetime.
griffin175
From temperature you have its emissions curve per $m^3$, and from specs (1600 lumens for a 100 W bulb) you have the amount of visible light it emits. From that, you should be able to calculate surface area, no?
– Yakk
$\begingroup$ 100,000,000,000,000 is nowhere NEAR enough bulbs. You need more like 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 bulbs. Give or take a couple of orders of magnitude. $\endgroup$
– Kyle Oman
@KyleOman You are right. Except that if you take all these bulbs as one big sphere, the pressure will be much higher, so the temperature will rise.
Well, that's more than the estimated number of atoms in the observable Universe, so you'll also have an appreciable effect on cosmology (even before turning on the radiation field), and the whole thing will likely collapse gravitationally and heat that way (shining brightly in the X-ray, I would bet), and probably form the mother of all supermassive black holes. But I was mostly being (and still am being) somewhat facetious. Though a hundred trillion lightbulbs is a somewhat plausible number, whereas $10^{186}$ is on a totally different scale. Important to make that distinction imho.
Fun fact: Most modern filaments are double-coiled tungsten alloy strings -- basically wound in a very tight coil, which is then wound in a looser coil... LOTS of surface area to be had! See upload.wikimedia.org/wikipedia/commons/0/08/Filament.jpg
– Doktor J
I will give a closed form for the integral in Chris Cundy's answer.
Doing the substitution $u=nx$, we get $$ \sum_{n=1}^{\infty} \int_{x_{min} \cdot n}^{\infty} \frac{1}{n}\left(\frac{u}{n}\right)^3e^{-u}\mathrm{d}u$$
$$ \sum_{n=1}^{\infty} \frac{1}{n^4} \Gamma(4,x_{min}\cdot n)$$
where $\Gamma$ is the upper incomplete gamma function. We write $a=x_{min}$ as it will be used a lot, so a short name is more useful. Using the reduction formula for the gamma function when the first argument is an integer, we get: $$ \sum_{n=1}^{\infty} \left(\frac{1}{n^4}e^{-an}\left(6+6an+3a^2n^2+a^3n^3\right)\right) $$
$$ 6\sum_{n=1}^{\infty} \frac{1}{n^4}e^{-an} + 6a\sum_{n=1}^{\infty} \frac{1}{n^3}e^{-an}+ 3a^2\sum_{n=1}^{\infty} \frac{1}{n^2}e^{-an}+a^3\sum_{n=1}^{\infty} \frac{1}{n}e^{-an}$$
Now note that $$\frac{\mathrm{d}}{\mathrm{d}a} \sum_{n=1}^{\infty} \frac{1}{n}e^{-an} = \sum_{n=1}^{\infty} \frac{\mathrm{d}}{\mathrm{d}a} \left[\frac{1}{n}e^{-an}\right]=\sum_{n=1}^{\infty}-e^{-an}=-\sum_{n=1}^{\infty}(e^{-a})^n=1-\frac{1}{1-e^{-a}}$$
$$\sum_{n=1}^{\infty} \frac{1}{n}e^{-an} = \int 1-\frac{1}{1-e^{-a}} \mathrm{d}a = -\ln|1-e^{-a}|$$
We'll get the other terms in a similar way. The final result is:
$$\sum_{n=1}^{\infty} \int_{a}^{\infty} x^3e^{-nx}dx = -6\mathrm{Li}_4(e^a)+6a\mathrm{Li}_3(e^a)+6a^2 \mathrm{Li}_2(1-e^{-a})-9a^2\mathrm{Li}_2(e^a)\\+2a^3\ln|1-e^{-a}|-9a^3\ln|1-e^{a}|+5\frac{3}4 a^4$$
I used a Computer Algebra System to find this form. $\mathrm{Li}_n$ is the polylogarithm function.
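As a quick numerical cross-check of the series-and-incomplete-gamma step above (though not of the final polylogarithm expression, which was produced with a CAS), the sum of incomplete gamma functions can be compared against direct numerical integration. The Python sketch below is an addition, not part of the original answer; it uses an arbitrary test value $a = 2$ only so the two numbers are easy to eyeball, and SciPy's gammaincc is the regularized upper incomplete gamma, hence the factor $\Gamma(4) = 6$.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammaincc

a = 2.0   # arbitrary test value of x_min, chosen only to make the check easy to read

# Direct numerical value of  int_a^inf  x^3 / (e^x - 1)  dx
direct, _ = quad(lambda x: x**3 / np.expm1(x), a, np.inf)

# Series of upper incomplete gamma functions derived above:
# sum_n Gamma(4, a*n) / n^4, with Gamma(4, z) = gamma(4) * gammaincc(4, z)
n = np.arange(1, 200)
series = np.sum(gamma(4) * gammaincc(4, a * n) / n**4)

print(direct, series)   # the two values should agree to many significant figures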
2.3: Chi-Square Test of Goodness-of-Fit
[ "article:topic", "authorname:mcdonaldj", "transcluded:yes", "showtoc:no", "source-stats-1720" ]
Username: jhalpern
Temple U
2: Tests for Nominal Variables
Contributed by John H. McDonald
Associate Professor (Biological Sciences) at University of Delaware
When to use it
How the test works
Post-hoc test
Extrinsic Hypothesis examples
Intrinsic Hypothesis examples
Graphing the results
Similar tests
Chi-square vs. G–test
How to do the test
Power analysis
Skills to Develop
Study the use of the chi-square test of goodness-of-fit when you have one nominal variable
To see if the number of observations in each category fits a theoretical expectation, and the sample size is large
Use the chi-square test of goodness-of-fit when you have one nominal variable with two or more values (such as red, pink and white flowers). You compare the observed counts of observations in each category with the expected counts, which you calculate using some kind of theoretical expectation (such as a \(1:1\) sex ratio or a \(1:2:1\) ratio in a genetic cross).
If the expected number of observations in any category is too small, the chi-square test may give inaccurate results, and you should use an exact test instead. See the web page on small sample sizes for discussion of what "small" means.
The chi-square test of goodness-of-fit is an alternative to the G–test of goodness-of-fit; each of these tests has some advantages and some disadvantages, and the results of the two tests are usually very similar. You should read the section on "Chi-square vs. G–test" near the bottom of this page, pick either chi-square or G–test, then stick with that choice for the rest of your life. Much of the information and examples on this page are the same as on the G–test page, so once you've decided which test is better for you, you only need to read one.
The statistical null hypothesis is that the number of observations in each category is equal to that predicted by a biological theory, and the alternative hypothesis is that the observed numbers are different from the expected. The null hypothesis is usually an extrinsic hypothesis, where you knew the expected proportions before doing the experiment. Examples include a \(1:1\) sex ratio or a \(1:2:1\) ratio in a genetic cross. Another example would be looking at an area of shore that had 59% of the area covered in sand, \(28\%\) mud and \(13\%\) rocks; if you were investigating where seagulls like to stand, your null hypothesis would be that \(59\%\) of the seagulls were standing on sand, \(28\%\) on mud and \(13\%\) on rocks.
In some situations, you have an intrinsic hypothesis. This is a null hypothesis where you calculate the expected proportions after you do the experiment, using some of the information from the data. The best-known example of an intrinsic hypothesis is the Hardy-Weinberg proportions of population genetics: if the frequency of one allele in a population is \(p\) and the other allele is \(q\), the null hypothesis is that expected frequencies of the three genotypes are \(p^2\), \(2pq\), and \(q^2\). This is an intrinsic hypothesis, because you estimate \(p\) and \(q\) from the data after you collect the data, you can't predict \(p\) and \(q\) before the experiment.
Unlike the exact test of goodness-of-fit, the chi-square test does not directly calculate the probability of obtaining the observed results or something more extreme. Instead, like almost all statistical tests, the chi-square test has an intermediate step; it uses the data to calculate a test statistic that measures how far the observed data are from the null expectation. You then use a mathematical relationship, in this case the chi-square distribution, to estimate the probability of obtaining that value of the test statistic.
You calculate the test statistic by taking an observed number (\(O\)), subtracting the expected number (\(E\)), then squaring this difference. The larger the deviation from the null hypothesis, the larger the difference is between observed and expected. Squaring the differences makes them all positive. You then divide each difference by the expected number, and you add up these standardized differences. The test statistic is approximately equal to the log-likelihood ratio used in the G–test. It is conventionally called a "chi-square" statistic, although this is somewhat confusing because it's just one of many test statistics that follows the theoretical chi-square distribution. The equation is:
\[\text{chi}^{2}=\sum \frac{(O-E)^2}{E}\]
As with most test statistics, the larger the difference between observed and expected, the larger the test statistic becomes. To give an example, let's say your null hypothesis is a \(3:1\) ratio of smooth wings to wrinkled wings in offspring from a bunch of Drosophila crosses. You observe \(770\) flies with smooth wings and \(230\) flies with wrinkled wings; the expected values are \(750\) smooth-winged and \(250\) wrinkled-winged flies. Entering these numbers into the equation, the chi-square value is \(2.13\). If you had observed \(760\) smooth-winged flies and \(240\) wrinkled-wing flies, which is closer to the null hypothesis, your chi-square value would have been smaller, at \(0.53\); if you'd observed \(800\) smooth-winged and \(200\) wrinkled-wing flies, which is further from the null hypothesis, your chi-square value would have been \(13.33\).
The distribution of the test statistic under the null hypothesis is approximately the same as the theoretical chi-square distribution. This means that once you know the chi-square value and the number of degrees of freedom, you can calculate the probability of getting that value of chi-square using the chi-square distribution. The number of degrees of freedom is the number of categories minus one, so for our example there is one degree of freedom. Using the CHIDIST function in a spreadsheet, you enter =CHIDIST(2.13, 1) and calculate that the probability of getting a chi-square value of \(2.13\) with one degree of freedom is \(P=0.144\).
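The same calculation is easy to reproduce in code. The following Python sketch (an addition, not part of the original text) uses SciPy's chisquare function, which computes exactly this Pearson chi-square statistic and its P value for the Drosophila example.

from scipy.stats import chisquare

# Drosophila example from the text: 3:1 smooth:wrinkled null hypothesis, 1000 flies
observed = [770, 230]
expected = [750, 250]          # 0.75 * 1000 and 0.25 * 1000

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(stat, p)                 # chi-square about 2.13, P about 0.144, with 1 degree of freedom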
The shape of the chi-square distribution depends on the number of degrees of freedom. For an extrinsic null hypothesis (the much more common situation, where you know the proportions predicted by the null hypothesis before collecting the data), the number of degrees of freedom is simply the number of values of the variable, minus one. Thus if you are testing a null hypothesis of a \(1:1\) sex ratio, there are two possible values (male and female), and therefore one degree of freedom. This is because once you know how many of the total are females (a number which is "free" to vary from \(0\) to the sample size), the number of males is determined. If there are three values of the variable (such as red, pink, and white), there are two degrees of freedom, and so on.
An intrinsic null hypothesis is one where you estimate one or more parameters from the data in order to get the numbers for your null hypothesis. As described above, one example is Hardy-Weinberg proportions. For an intrinsic null hypothesis, the number of degrees of freedom is calculated by taking the number of values of the variable, subtracting \(1\) for each parameter estimated from the data, then subtracting \(1\) more. Thus for Hardy-Weinberg proportions with two alleles and three genotypes, there are three values of the variable (the three genotypes); you subtract one for the parameter estimated from the data (the allele frequency, \(p\)); and then you subtract one more, yielding one degree of freedom. There are other statistical issues involved in testing fit to Hardy-Weinberg expectations, so if you need to do this, see Engels (2009) and the older references he cites.
If there are more than two categories and you want to find out which ones are significantly different from their null expectation, you can use the same method of testing each category vs. the sum of all other categories, with the Bonferroni correction, as I describe for the exact test. You use chi-square tests for each category, of course.
The chi-square test of goodness-of-fit assumes independence, as described for the exact test.
European crossbills (Loxia curvirostra) have the tip of the upper bill either right or left of the lower bill, which helps them extract seeds from pine cones. Some have hypothesized that frequency-dependent selection would keep the number of right and left-billed birds at a \(1:1\) ratio. Groth (1992) observed \(1752\) right-billed and \(1895\) left-billed crossbills.
Fig. 2.3.1 Male red crossbills, Loxia curvirostra, showing the two bill types.
Calculate the expected frequency of right-billed birds by multiplying the total sample size (\(3647\)) by the expected proportion (\(0.5\)) to yield \(1823.5\). Do the same for left-billed birds. The number of degrees of freedom for an extrinsic hypothesis is the number of classes minus one. In this case, there are two classes (right and left), so there is one degree of freedom.
The result is chi-square=\(5.61\), \(1d.f.\), \(P=0.018\), indicating that you can reject the null hypothesis; there are significantly more left-billed crossbills than right-billed.
Shivrain et al. (2006) crossed clearfield rice, which are resistant to the herbicide imazethapyr, with red rice, which are susceptible to imazethapyr. They then crossed the hybrid offspring and examined the \(F_2\) generation, where they found \(772\) resistant plants, \(1611\) moderately resistant plants, and \(737\) susceptible plants. If resistance is controlled by a single gene with two co-dominant alleles, you would expect a \(1:2:1\) ratio. Comparing the observed numbers with the \(1:2:1\) ratio, the chi-square value is \(4.12\). There are two degrees of freedom (the three categories, minus one), so the \(P\) value is \(0.127\); there is no significant difference from a \(1:2:1\) ratio.
Mannan and Meslow (1984) studied bird foraging behavior in a forest in Oregon. In a managed forest, \(54\%\) of the canopy volume was Douglas fir, \(28\%\) was ponderosa pine, \(5\%\) was grand fir, and \(1\%\) was western larch. They made \(156\) observations of foraging by red-breasted nuthatches; \(70\) observations (\(45\%\) of the total) in Douglas fir, \(79\) (\(51\%\)) in ponderosa pine, \(3\) (\(2\%\)) in grand fir, and \(4\) (\(3\%\)) in western larch. The biological null hypothesis is that the birds forage randomly, without regard to what species of tree they're in; the statistical null hypothesis is that the proportions of foraging events are equal to the proportions of canopy volume. The difference in proportions is significant (chi-square=\(13.59\), \(3d.f.\), \(P=0.0035\)).
Fig. 2.3.2 Female red-breasted nuthatch, Sitta canadensis.
The expected numbers in this example are pretty small, so it would be better to analyze it with an exact test. I'm leaving it here because it's a good example of an extrinsic hypothesis that comes from measuring something (canopy volume, in this case), not a mathematical theory; I've had a hard time finding good examples of this.
McDonald (1989) examined variation at the \(\mathit{Mpi}\) locus in the amphipod crustacean Platorchestia platensis collected from a single location on Long Island, New York. There were two alleles, \(\mathit{Mpi}^{90}\) and \(\mathit{Mpi}^{100}\) and the genotype frequencies in samples from multiple dates pooled together were \(1203\) \(\mathit{Mpi}^{90/90}\), \(2919\) \(\mathit{Mpi}^{90/100}\), and \(1678\) \(\mathit{Mpi}^{100/100}\). The estimate of the \(\mathit{Mpi}^{90}\) allele proportion from the data is \(5325/11600=0.459\). Using the Hardy-Weinberg formula and this estimated allele proportion, the expected genotype proportions are \(0.211\) \(\mathit{Mpi}^{90/90}\), \(0.497\) \(\mathit{Mpi}^{90/100}\), and \(0.293\) \(\mathit{Mpi}^{100/100}\). There are three categories (the three genotypes) and one parameter estimated from the data (the \(\mathit{Mpi}^{90}\) allele proportion), so there is one degree of freedom. The result is chi-square=\(1.08\), \(1d.f.\), \(P=0.299\), which is not significant. You cannot reject the null hypothesis that the data fit the expected Hardy-Weinberg proportions.
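If you prefer code to a spreadsheet for the intrinsic-hypothesis case, the calculation above can be reproduced with the Python sketch below (an addition, not part of the original text). The ddof=1 argument removes the extra degree of freedom for the allele proportion estimated from the data, leaving 3 - 1 - 1 = 1 d.f.; small differences from the values quoted above come from rounding of the allele proportion.

import numpy as np
from scipy.stats import chisquare

# Mpi genotype counts from McDonald (1989): 90/90, 90/100, 100/100
obs = np.array([1203, 2919, 1678])
n = obs.sum()

p90 = (2 * obs[0] + obs[1]) / (2 * n)                  # allele proportion estimated from the data
expected = n * np.array([p90**2, 2 * p90 * (1 - p90), (1 - p90)**2])

stat, pval = chisquare(obs, f_exp=expected, ddof=1)    # ddof=1 accounts for the estimated parameter
print(stat, pval)                                      # close to the chi-square = 1.08, P = 0.30 quoted above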
If there are just two values of the nominal variable, you shouldn't display the result in a graph, as that would be a bar graph with just one bar. Instead, just report the proportion; for example, Groth (1992) found \(52.0\%\) left-billed crossbills.
With more than two values of the nominal variable, you should usually present the results of a goodness-of-fit test in a table of observed and expected proportions. If the expected values are obvious (such as \(50\%\)) or easily calculated from the data (such as Hardy–Weinberg proportions), you can omit the expected numbers from your table. For a presentation you'll probably want a graph showing both the observed and expected proportions, to give a visual impression of how far apart they are. You should use a bar graph for the observed proportions; the expected can be shown with a horizontal dashed line, or with bars of a different pattern.
If you want to add error bars to the graph, you should use confidence intervals for a proportion. Note that the confidence intervals will not be symmetrical, and this will be particularly obvious if the proportion is near \(0\) or \(1\).
Fig. 2.3.3 Habitat use in the red-breasted nuthatch.. Gray bars are observed percentages of foraging events in each tree species, with 95% confidence intervals; black bars are the expected percentages.
Some people use a "stacked bar graph" to show proportions, especially if there are more than two categories. However, it can make it difficult to compare the sizes of the observed and expected values for the middle categories, since both their tops and bottoms are at different levels, so I don't recommend it.
You use the chi-square test of independence for two nominal variables, not one.
There are several tests that use chi-square statistics. The one described here is formally known as Pearson's chi-square. It is by far the most common chi-square test, so it is usually just called the chi-square test.
You have a choice of three goodness-of-fit tests: the exact test of goodness-of-fit, the G–test of goodness-of-fit, or the chi-square test of goodness-of-fit. For small values of the expected numbers, the chi-square and G–tests are inaccurate, because the distributions of the test statistics do not fit the chi-square distribution very well.
The usual rule of thumb is that you should use the exact test when the smallest expected value is less than \(5\), and the chi-square and G–tests are accurate enough for larger expected values. This rule of thumb dates from the olden days when people had to do statistical calculations by hand, and the calculations for the exact test were very tedious and to be avoided if at all possible. Nowadays, computers make it just as easy to do the exact test as the computationally simpler chi-square or G–test, unless the sample size is so large that even computers can't handle it. I recommend that you use the exact test when the total sample size is less than \(1000\). With sample sizes between \(50\) and \(1000\) and expected values greater than \(5\), it generally doesn't make a big difference which test you use, so you shouldn't criticize someone for using the chi-square or G–test for experiments where I recommend the exact test. See the web page on small sample sizes for further discussion.
The chi-square test gives approximately the same results as the G–test. Unlike the chi-square test, the G-values are additive; you can conduct an elaborate experiment in which the G-values of different parts of the experiment add up to an overall G-value for the whole experiment. Chi-square values come close to this, but the chi-square values of subparts of an experiment don't add up exactly to the chi-square value for the whole experiment. G–tests are a subclass of likelihood ratio tests, a general category of tests that have many uses for testing the fit of data to mathematical models; the more elaborate versions of likelihood ratio tests don't have equivalent tests using the Pearson chi-square statistic. The ability to do more elaborate statistical analyses is one reason some people prefer the G–test, even for simpler designs. On the other hand, the chi-square test is more familiar to more people, and it's always a good idea to use statistics that your readers are familiar with when possible. You may want to look at the literature in your field and use whichever is more commonly used.
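To see in practice how close the two statistics usually are, you can compute both on the same data. The snippet below is an addition to the text; in SciPy the G–test is available as the "log-likelihood" variant of power_divergence.

from scipy.stats import chisquare, power_divergence

observed = [770, 230]          # the Drosophila example again
expected = [750, 250]

chi2_stat, chi2_p = chisquare(observed, f_exp=expected)
g_stat, g_p = power_divergence(observed, f_exp=expected, lambda_="log-likelihood")

print(chi2_stat, chi2_p)       # Pearson chi-square, about 2.13
print(g_stat, g_p)             # G statistic, about 2.17; the two agree closely when expected counts are large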
Of course, you should not analyze your data with both the G–test and the chi-square test, then pick whichever gives you the most interesting result; that would be cheating. Any time you try more than one statistical technique and just use the one that gives the lowest P value, you're increasing your chance of a false positive.
I have set up a spreadsheet for the chi-square test of goodness-of-fit chigof.xls . It is largely self-explanatory. It will calculate the degrees of freedom for you if you're using an extrinsic null hypothesis; if you are using an intrinsic hypothesis, you must enter the degrees of freedom into the spreadsheet.
There are web pages that will perform the chi-square test here and here. None of these web pages lets you set the degrees of freedom to the appropriate value for testing an intrinsic null hypothesis.
Salvatore Mangiafico's R Companion has a sample R program for the chi-square test of goodness-of-fit.
Here is a SAS program that uses PROC FREQ for a chi-square test. It uses the Mendel pea data from above. The "WEIGHT count" tells SAS that the "count" variable is the number of times each value of "texture" was observed. The ZEROS option tells it to include observations with counts of zero, for example if you had \(20\) smooth peas and \(0\) wrinkled peas; it doesn't hurt to always include the ZEROS option. CHISQ tells SAS to do a chi-square test, and TESTP=(75 25); tells it the expected percentages. The expected percentages must add up to \(100\). You must give the expected percentages in alphabetical order: because "smooth" comes before "wrinkled," you give the expected frequencies for \(75\%\) smooth, \(25\%\) wrinkled.
DATA peas;
INPUT texture $ count;
DATALINES;
smooth 423
wrinkled 133
;
PROC FREQ DATA=peas;
WEIGHT count / ZEROS;
TABLES texture / CHISQ TESTP=(75 25);
RUN;
Here's a SAS program that uses PROC FREQ for a chi-square test on raw data, where you've listed each individual observation instead of counting them up yourself. I've used three dots to indicate that I haven't shown the complete data set.
INPUT texture $;
The output includes the following:
Chi-Square Test
for Specified Proportions
Chi-Square 0.3453
Pr > ChiSq 0.5568
You would report this as "chi-square=0.3453, 1 d.f., P=0.5568."
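The same analysis can be reproduced outside SAS; the Python sketch below (not part of the original page) uses SciPy on the same pea counts and should match the output above.

from scipy.stats import chisquare

# Pea data from the SAS example: 423 smooth, 133 wrinkled, 3:1 null hypothesis
observed = [423, 133]
n = sum(observed)

stat, p = chisquare(observed, f_exp=[0.75 * n, 0.25 * n])
print(stat, p)      # chi-square about 0.3453, P about 0.5568, as in the SAS output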
To do a power analysis using the G*Power program, choose "Goodness-of-fit tests: Contingency tables" from the Statistical Test menu, then choose "Chi-squared tests" from the Test Family menu. To calculate effect size, click on the Determine button and enter the null hypothesis proportions in the first column and the proportions you hope to see in the second column. Then click on the Calculate and Transfer to Main Window button. Set your alpha and power, and be sure to set the degrees of freedom (Df); for an extrinsic null hypothesis, that will be the number of rows minus one.
As an example, let's say you want to do a genetic cross of snapdragons with an expected \(1:2:1\) ratio, and you want to be able to detect a pattern with \(5\%\) more heterozygotes that expected. Enter \(0.25\), \(0.50\), and \(0.25\) in the first column, enter \(0.225\), \(0.55\), and \(0.225\) in the second column, click on Calculate and Transfer to Main Window, enter \(0.05\) for alpha, \(0.80\) for power, and \(2\) for degrees of freedom. If you've done this correctly, your result should be a total sample size of \(964\).
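If you would rather do this power analysis programmatically than in G*Power, the calculation follows directly from the noncentral chi-square distribution. The Python sketch below is an addition to the text; it computes Cohen's effect size w from the two sets of proportions and searches for the smallest sample size whose power reaches the target, and it should arrive at the same answer as the snapdragon example above.

import numpy as np
from scipy.stats import chi2, ncx2

# Snapdragon example from the text: null 1:2:1 versus 5% more heterozygotes than expected
p0 = np.array([0.25, 0.50, 0.25])
p1 = np.array([0.225, 0.55, 0.225])
alpha, target_power = 0.05, 0.80
df = len(p0) - 1                          # extrinsic null hypothesis: 3 categories - 1 = 2 d.f.

w2 = np.sum((p1 - p0)**2 / p0)            # squared effect size (Cohen's w squared)
crit = chi2.ppf(1 - alpha, df)            # critical value of the central chi-square

n = 2
while ncx2.sf(crit, df, n * w2) < target_power:   # power at total sample size n
    n += 1
print(n)                                  # 964, the same total sample size as G*Power gives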
John H. McDonald (University of Delaware)
Experimental evaluation of the performance of 2×2 MIMO-OFDM for vehicle-to-infrastructure communications
Okechukwu J. Onubogu, Karla Ziri-Castro, Dhammika Jayalath and Hajime Suzuki
In this paper, a novel 2×2 multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) testbed based on an Analog Devices AD9361 highly integrated radio frequency (RF) agile transceiver was specifically implemented for the purpose of estimating and analyzing MIMO-OFDM channel capacity in vehicle-to-infrastructure (V2I) environments using the 920 MHz industrial, scientific, and medical (ISM) band. We implemented two-dimensional discrete cosine transform-based filtering to reduce the channel estimation errors and show its effectiveness on our measurement results. We have also analyzed the effects of channel estimation error on the MIMO channel capacity by simulation. Three different scenarios of subcarrier spacing were investigated which correspond to IEEE 802.11p, Long-Term Evolution (LTE), and Digital Video Broadcasting Terrestrial (DVB-T)(2k) standards. An extensive MIMO-OFDM V2I channel measurement campaign was performed in a suburban environment. Analysis of the measured MIMO channel capacity results as a function of the transmitter-to-receiver (TX-RX) separation distance up to 250 m shows that the variance of the MIMO channel capacity is larger for the near-range line-of-sight (LOS) scenarios than for the long-range non-LOS cases, using a fixed receiver signal-to-noise ratio (SNR) criterion. We observed that the largest capacity values were achieved at LOS propagation despite the common assumption of a degenerated MIMO channel in LOS. We consider that this is due to the large angular spacing between MIMO subchannels which occurs when the receiver vehicle rooftop antennas pass by the fixed transmitter antennas at close range, causing MIMO subchannels to be orthogonal. In addition, analysis on the effects of different subcarrier spacings on MIMO-OFDM channel capacity showed negligible differences in mean channel capacity for the subcarrier spacing range investigated. Measured channels described in this paper are available on request.
Multiple-input multiple-output (MIMO) systems have attracted considerable attention due to the increasing requirements of high capacity, spectral efficiency, and reliability in wireless communications. For example, MIMO systems have been adopted in the Long-Term Evolution (LTE) system, and it is expected that the upcoming developments in IEEE 802.11p and Digital Video Broadcasting Terrestrial (DVB-T) wireless standards will include the use of MIMO. It has been shown [1] that MIMO, when deployed in a rich scattering environment, is capable of achieving high spectral efficiency, capacity, and reliability by exploiting the increased spatial degrees of freedom. MIMO is often combined with the orthogonal frequency division multiplexing (OFDM) in modern wireless standards in order to achieve higher data rates and performance improvements in a multipath fading environment without increasing the required bandwidth or transmission power.
Efficient vehicular communication is a key in the development of intelligent transport systems (ITS) and requires the exchange of messages between two vehicles (vehicle-to-vehicle or V2V communications) or between a vehicle and a roadside unit (vehicle-to-infrastructure or V2I communications). Basically, there are two kinds of vehicular applications: those dedicated to providing safety services and others for non-safety applications [2]. For safety purposes, the use of licensed band at 5.9 GHz has been considered to avoid the problem of interference typically faced in the use of industrial, scientific, and medical (ISM) radio bands. Non-safety applications use the ISM band for the purpose of infotainment (e.g., high data rate Internet access for video streaming) where the availability of the service is expected to be opportunistic. The 920 MHz ISM band in Australia occupies 918–926 MHz. Among the ISM bands, the 920 MHz band gives an optimal trade-off of robustness against slow fading, achieving a longer range in cluttered environments and having a sufficient bandwidth for the high-data-rate Internet access. This paper focuses on the non-safety V2I applications at the 920 MHz ISM band which promises to provide infotainment applications, mobile internet services, and social network applications which are widely used in people's daily activities in vehicles. The successful deployment of commercial MIMO systems will require a solid understanding of the channel characteristics in which it will operate. In order to assess the performance of new wireless communication systems using MIMO antennas, it is desirable to evaluate them in realistic measurement scenarios. Consequently, numerous MIMO channel measurement campaigns have been carried out in vehicular environments [3–7]. However, only a few research publications have considered MIMO V2V channels [8–12], and even fewer theoretically based research works have investigated MIMO V2I channels [13, 14]. A number of single-input single-output (SISO) antenna V2V and V2I channel measurement campaigns have been conducted [15, 16]. However, to the best of our knowledge, we are not aware of any MIMO-OFDM measurement results for V2I communications published in the scientific literature to date. In this paper, we focus on presenting the results of an experimental investigation of 2×2 MIMO-OFDM channel measurements performed in a real V2I driving scenario under both line-of-sight (LOS) and non-LOS (NLOS) conditions at the 920 MHz ISM band in a suburban environment. A channel sounding system based on a software-defined radio (SDR) platform was implemented and used to perform an extensive measurement campaign in a suburban environment. In comparison to the use of the conventional heavy and expensive radio frequency (RF) test equipment such as signal generators, vector network analyzers, and spectrum analyzers, SDR provides a flexible, inexpensive, and cost-effective measurement setup implemented in software that enables researchers to use and control the radio signal through software tools such as MATLAB.
The rapid development of MIMO systems has been based on the assumption that independent and identically distributed (i.i.d) or correlated Rayleigh fading with NLOS components is available and a high number of multipath components are created by the surrounding environment [17–19]. This, however, is not valid in all cases, and it is violated due to the existence of a LOS component that is stronger than other components. Hence, the channel can be more effectively modeled using the Ricean distribution. Conventionally, the presence of a LOS component is thought to limit the benefits of MIMO systems because of the rank deficiency of the channel matrix [20, 21]; however, a number of investigations [13, 14, 22–26] have shown that using antennas positioned or spaced in such a way that the LOS MIMO subchannels are orthogonal results in a full-rank MIMO channel matrix and therefore high-capacity channels. The common idea behind these approaches is to place the antenna elements sufficiently far apart so that the spatial LOS MIMO subchannels become orthogonal with a phase difference of π/2. The optimal spacings can be worked out via simple geometrical tools, while the channel matrix becomes full rank and delivers equal eigenvalues. This is known as an optimized LOS MIMO system [24]. We can determine the required inter-element spacings to achieve the maximum 2×2 MIMO capacity. The formula is a function of the inter-element distance, the transmitter-to-receiver (TX-RX) separation distance, the orientation of the arrays, and the carrier frequency.
This paper validates the theoretical maximum LOS MIMO capacity criteria in [14, 25, 27, 28] by presenting a measurement-based analysis of mean MIMO capacity (mean over the channel bandwidth and the time of 50 ms) as a function of TX-RX separation distance. The technique is based on the achievement of spatial multiplexing in LOS scenarios by creating an artificial multipath not caused by physical objects but rather by deliberate antenna placement or separation of the antenna elements in such a way that a deterministic and constant orthogonal multipath is created at a specific TX-RX separation distance called \(D_{\mathrm{opt}}\). This paper also analyzes the MIMO channel capacity for three different subcarrier spacings: large subcarrier spacing (LSS), medium subcarrier spacing (MSS), and small subcarrier spacing (SSS). These subcarrier spacings approximately correspond to IEEE 802.11p Wireless Access in Vehicular Environments, LTE, and the 2k version of the DVB-T standard. It is important to note that this paper analyzes the capacity of the MIMO channels with three different values of subcarrier spacing and not the capacity of the whole system. In this analysis, the MIMO channels are estimated by the least square (LS) channel estimation method with known channel training symbols [29]. The channel estimation error due to lower signal-to-noise ratio (SNR) at a longer TX-RX separation distance is substantially reduced by applying two-dimensional discrete cosine transform (2D DCT)-based filtering to take advantage of the time and frequency coherence of the channel [30–32].
The remainder of the paper is organized as follows. Section 1.2 describes the channel model, MIMO channel capacity, the derivation for maximum LOS MIMO channel capacity criteria, and the LS channel estimation. Section 1.3 presents the MIMO-OFDM V2I measurement equipment, measurement environment, and parameters. In Section 1.4, we present the analysis of measurement results. Finally, Section 2 summarizes the paper and adds concluding remarks.
Channel model and capacity
1.2.1 LOS MIMO channel model
To investigate the capacity of a MIMO channel in the presence of a LOS component, a suitable channel model for the MIMO channel is needed. According to [21, 28] and [25], a suitable way to model the channel matrix is as a sum of two components, a LOS component and a NLOS component. The ratio between the power of the two components gives the Ricean K factor. The MIMO channel matrix is modeled as
$$ \mathbf{H} = \sqrt{\frac{K}{K+1}}\,\mathbf{H}_{\mathrm{LOS}} + \sqrt{\frac{1}{K+1}}\,\mathbf{H}_{\mathrm{NLOS}} $$
where \(\mathbf{H}_{\mathrm{LOS}}\) denotes the matrix containing the free space responses between all elements, \(\mathbf{H}_{\mathrm{NLOS}}\) accounts for the scattered signals, and K is the Ricean K-factor which is equal to the ratio of the free space and scattered signals [33, 34]. As given in [25] and [27], the free space component, \(\mathbf{H}_{\mathrm{LOS}}\), of the complex response between a transmitting element m and a receiving element n (assuming that both elements are isotropic) is given as \(e^{-j \beta d_{n,m}}/d_{n,m}\), where β is the wave number corresponding to the carrier wavelength λ, given as \(\beta =\frac{2\pi}{\lambda}\), and \(d_{n,m}\) is the distance between the nth receiving element and the mth transmitting element. With the assumption that the difference in the pathloss is negligible and that there is no mutual coupling between the elements, the normalized free space response matrix of an \(n_{r}\times n_{t}\) MIMO system can be expressed as
$$ \mathbf{H}_{\mathrm{LOS}}= \left[\begin{array}{cccc} e^{-j \beta d_{1,1}} & e^{-j \beta d_{1,2}} & \dots & e^{-j \beta d_{1,n_{t}}}\\ e^{-j \beta d_{2,1}} & e^{-j \beta d_{2,2}} & \dots & e^{-j \beta d_{2,n_{t}}}\\ \vdots & \vdots & \ddots & \vdots\\ e^{-j \beta d_{n_{r},1}} & e^{-j \beta d_{n_{r},2}} & \dots & e^{-j \beta d_{n_{r},n_{t}}}\\ \end{array}\right] $$
where \(\mathbf{H}_{\mathrm{LOS}}\) is totally deterministic and depends only on the positioning or separation distance between the elements of the RX and TX antennas. In contrast, the response due to \(\mathbf{H}_{\mathrm{NLOS}}\) is a random complex matrix and is often modeled as a stochastic process, i.e., \(\mathbf{H}_{\mathrm{NLOS}} \in \mathbb{C}^{n_{r}\times n_{t}}\) with i.i.d. elements.
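As a concrete illustration of this model, the short Python sketch below (an illustration only, not the authors' code; the element positions and the K-factor value are assumptions chosen for the example) builds the deterministic \(\mathbf{H}_{\mathrm{LOS}}\) from the element geometry and mixes it with an i.i.d. complex Gaussian \(\mathbf{H}_{\mathrm{NLOS}}\) according to the Ricean K-factor.
import numpy as np

def h_los(tx_pos, rx_pos, wavelength):
    # Normalized free-space response: entry (n, m) = exp(-j * beta * d_{n,m})
    beta = 2 * np.pi / wavelength
    d = np.linalg.norm(rx_pos[:, None, :] - tx_pos[None, :, :], axis=2)  # n_r x n_t distances
    return np.exp(-1j * beta * d)

def ricean_channel(H_los, K):
    # Mix the LOS part with an i.i.d. Rayleigh (NLOS) part according to the Ricean K-factor
    n_r, n_t = H_los.shape
    H_nlos = (np.random.randn(n_r, n_t) + 1j * np.random.randn(n_r, n_t)) / np.sqrt(2)
    return np.sqrt(K / (K + 1)) * H_los + np.sqrt(1 / (K + 1)) * H_nlos

# Example (assumed values): 2x2 arrays with s1 = 2 m (TX), s2 = 1 m (RX), 920 MHz, 20 m apart, K = 10
lam = 3e8 / 920e6
tx = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])     # TX element positions (m)
rx = np.array([[0.0, 20.0, 0.0], [1.0, 20.0, 0.0]])   # RX element positions (m)
H = ricean_channel(h_los(tx, rx, lam), K=10.0)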
1.2.2 MIMO channel capacity
In any communication system, the fundamental measure of performance is the capacity of the channel, which is the maximum rate of communication for which arbitrarily small error probability can be achieved [35]. In this section, the capacity definition of MIMO systems is presented and the minimum and maximum capacity criteria are derived in terms of the channel response matrix. In this case, the receiver is assumed to have perfect channel state information (CSI) but no such prior knowledge is available at the transmitter. Hence, the Shannon capacity formula for the MIMO channel is given as [1, 36]
$$ C_{k} = \log_{2}\left(\det \left(\mathbf{I}_{n_{r}} + \frac{\rho}{n_{t}}\mathbf{H}_{k}\mathbf{H}_{k}^{H}\right)\right) $$
where C k is the MIMO channel capacity in bits per second per Hertz (bits/s/Hz) at the kth OFDM subcarrier, \(\mathbf {I}_{n_{r}}\) is the identity matrix and superscript H denotes the complex conjugate transpose (Hermitian), ρ corresponds to the average received SNR per receiver over MIMO subchannels, and n t and n r are the number of transmitting and receiving antenna elements, respectively. We note that the above capacity formula relies on a uniform power allocation scheme at all transmitting elements. Assuming that n t ≤n r , we can write
$$ C_{k}=\sum\limits_{m=1}^{n_{t}} \log_{2}\left(1 + \frac{\rho}{n_{t}}\gamma_{m}(\mathbf{H}_{k})\right) $$
where γ m ( H k ) is the mth eigenvalue of \(\mathbf {H}_{k}\mathbf {H}_{k}^{H}\). The maximum capacity is achieved when the channel is orthogonal [1], for which \(\mathbf {H}_{k}\mathbf {H}_{k}^{H}\) is a diagonal matrix with \(||\mathbf {H}_{k,m}||^{2}_{F}\) as its (m,m)-th element, where \(||\mathbf {H}_{k,m}||^{2}_{F}\) is the squared Frobenius norm of the mth row of H k . In this case, \(\gamma _{m}\mathbf {(H}_{k}) = ||\mathbf {H}_{k,m}||^{2}_{F}\). The MIMO channel capacity is calculated at each OFDM subcarrier using the above equation, while the MIMO-OFDM channel capacity is calculated as an average of the MIMO channel capacity over all OFDM subcarriers at a fixed signal SNR = 20 dB. For our measurement analysis, we chose signal SNR = 20 dB to have an even estimation of the MIMO channel capacity along the path and to emphasize effects of the MIMO channel structure on the capacity. The fixed value of signal SNR = 20 dB was chosen as an example, which is typically used for MIMO channel capacity analysis for the system using 16-quadrature amplitude modulation (QAM) or 64-QAM (e.g., [27, 37])
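To make this calculation concrete, the following Python sketch (an illustration, not the MATLAB post-processing code used in the paper; the array shapes and function names are assumptions) evaluates the per-subcarrier capacity and its average over subcarriers at a fixed signal SNR of 20 dB.
import numpy as np

def mimo_capacity(H, snr_db=20.0):
    # Capacity in bits/s/Hz of one n_r x n_t channel matrix, assuming equal power
    # allocation across transmit elements and perfect CSI at the receiver
    n_r, n_t = H.shape
    rho = 10 ** (snr_db / 10)
    M = np.eye(n_r) + (rho / n_t) * (H @ H.conj().T)
    det_val = np.real(np.linalg.det(M))   # M is Hermitian positive definite, so det is real
    return float(np.log2(det_val))

def mimo_ofdm_capacity(H_k, snr_db=20.0):
    # Average the per-subcarrier capacity over all OFDM subcarriers;
    # H_k has shape (num_subcarriers, n_r, n_t)
    return float(np.mean([mimo_capacity(H, snr_db) for H in H_k]))

# Sanity check: an orthogonal 2x2 channel gives n_r * log2(1 + rho) = 13.32 bps/Hz at 20 dB
H_orth = np.array([[1.0, 1.0], [1.0, -1.0]])   # rows are orthogonal, unit-magnitude entries
print(mimo_capacity(H_orth))                   # ~13.32 bps/Hz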
We note that the MIMO capacity value monotonically increases or decreases with the higher or lower signal SNR and it does not affect the conclusions of comparative analysis of higher or lower MIMO channel capacity as a function of subcarrier spacing or environment as have been performed in this paper. The MIMO channel capacity referred to in this paper corresponds to an ideal capacity, where it is assumed that a perfect CSI is available at the receiver. The actual capacity is typically reduced from this capacity due to the inaccuracy of the CSI at the receiver, especially in a mobile environment. In addition, the effects of inter-carrier interference (ICI) and inter-symbol interference (ISI) are ignored in this paper. We consider that the effects of ICI are small given the subcarrier spacing used and the maximum Doppler shift assumed. The ISI is expected to be removed by the use of cyclic prefix.
For a SISO fading channel with perfect knowledge of the channel at the receiver, the capacity of a SISO link is given as C= log2(1+ρ). Therefore, at ρ = 20 dB, the capacity of an equivalent SISO link is approximately 6.66 bps/Hz.
In this paper, there are two types of SNR being discussed. The first is denoted ρ in (2) and is referred to as the signal SNR. The second SNR is referred to as the channel estimation (CE) SNR, which indicates the quality of the channel estimation.
1.2.3 Maximum LOS MIMO capacity criteria
In conventional MIMO systems, an inter-element antenna spacing equal to λ/2 is considered to be adequate to avoid strong correlation between received signals [38]. For most indoor applications, this inter-element spacing is still applicable; however, for V2I systems, such spacing cannot be used to achieve maximum capacity due to the larger distance between RX and TX, which tends to reduce the angular spacing and hence to increase correlation between subchannels. Figure 1 shows an example of the geometry of the positions of TX and RX antenna elements and associated parameters in V2I systems. In Fig. 1, θ is the angular spacing between the two MIMO subchannels. We used a 2×2 MIMO system as the basis of our derivation of the maximum capacity criterion. Figure 2 shows a simplified 2D version of the model.
Positions of TX and RX antenna elements with the RX vehicle (OBU antennas) mounted at the rooftop of a car moving along a street in x and y directions while the TX roadside unit (RSU antennas) was mounted at a fixed position
A 2×2 MIMO near-range LOS system with parallel arrays
The conditions for achieving the minimum and maximum capacities in a time-invariant or slowly time-varying MIMO channel are given below as in [27]. The minimum MIMO capacity is obtained when \(\mathbf{HH}^{H} = n_{t}\mathbf{1}_{n_{r}}\), where \(\mathbf{1}_{n_{r}}\) is an \(n_{r}\times n_{r}\) all-ones matrix. This corresponds to a system with a rank-1 \(\mathbf{H}_{\mathrm{LOS}}\) response, with an associated capacity equivalent to that of a SISO channel, given as \(C_{\min}= \log_{2}(1+n_{r}\rho)\). The capacity in Eq. (2) is maximized for \(\mathbf{HH}^{H} = n_{t}\mathbf{I}_{n_{r}}\), i.e., when H is orthogonal. This response corresponds to a system with perfectly orthogonal MIMO subchannels, and the capacity of the MIMO channel is then equivalent to that of \(n_{r}\) independent SISO channels, given as \(C_{\max}=n_{r} \log_{2}(1+\rho)\); at ρ = 20 dB, \(C_{\max}\) corresponds to a maximum capacity value of 13.32 bps/Hz for a 2×2 MIMO system. From the normalized free space channel response matrix of the \(n_{r}\times n_{t}\) MIMO system given above, the correlation matrix is given as follows:
$$ \mathbf{HH}^{H}= \left[\begin{array}{ccc} n_{t} & \dots & \sum_{m=1}^{n_{t}}e^{-j\beta(d_{1,m}-d_{n_{r},m})}\\ \vdots & \ddots & \vdots\\ \sum_{m=1}^{n_{t}}e^{-j\beta(d_{n_{r},m}-d_{1,m})} & \dots & n_{r}\\ \end{array}\right] $$
In the case of the 2×2 MIMO system, the above equation can be simplified to
$$ \mathbf{HH}^{H}= \left[\begin{array}{cc} 2 & e^{j\beta(d_{21}-d_{11})}+ e^{j\beta(d_{22}-d_{12})}\\ e^{j\beta(d_{11}-d_{21})}+ e^{j\beta(d_{12}-d_{22})} & 2\\ \end{array}\right] $$
It is clearly seen that the matrices are deterministic and depend on the distance between the TX and RX elements [39]. It can be seen that \(\mathbf{HH}^{H}\) becomes a square matrix in which all the elements of the principal diagonal are twos and all other elements are zeros when \(e^{j\beta (d_{11}-d_{21})}+e^{j\beta (d_{12}-d_{22})} = 0\). Based on the mathematical conditions given in the previous paragraph, it is clear that the maximum capacity of a 2×2 MIMO system is achieved when
$$ \mathbf{HH}^{H} = n_{t}\mathbf{I}_{n_{r}} = 2\mathbf{I}_{2}, $$
that is, when all eigenvalues of \(\mathbf{HH}^{H}\) become equal and we end up with perfectly orthogonal MIMO spatial subchannels. Sarris et al. in [25, 27] have shown that this maximum capacity condition is satisfied when
$$ |d_{11}-d_{12}+d_{22}-d_{21}| = (2r+1)\frac{\lambda}{2}, \quad r \in \mathbb{Z}^{+} $$
where λ is the wavelength at the carrier frequency, \(\mathbb{Z}^{+}\) represents the set of non-negative integers, and (2r+1) is an odd integer. In physical terms, the authors in [25, 27] concluded from Eq. (5) that the approximate maximum capacity criterion corresponds to systems where the sum of the path differences \((d_{11}-d_{12})\) and \((d_{22}-d_{21})\) is an odd integer multiple of a half wavelength [27]. In Fig. 3, the RX is moving along the street while the TX is mounted at a fixed position. There are two antennas on each side, TX and RX.
TX and RX measurement setup
Equation (5) could be simplified further by making a number of assumptions. First, assume the two antenna arrays are parallel and have inter-element spacings, s 1 and s 2 as illustrated in Fig. 2. Then, d 11=d 22 and d 12=d 21, and hence, the maximum capacity is achieved when
$$ |d_{11}-d_{21}| = (2r+1)\frac{\lambda}{4}, \quad r \in \mathbb{Z}^{+} $$
Figure 1 illustrates a case for a 2×2 MIMO antenna array with a spacing of s 1 at TX and s 2 at RX. The equation for the TX-RX separation distance D and the angle of rotation θ between the first element of each array is defined in [25, 27] as
$$ D=\sqrt{dx^{2}+dy^{2}+dz^{2}} $$
$$ \cos\theta =\frac{\sqrt{dx^{2}+dy^{2}}}{D} $$
For V2I systems, the simplified maximum LOS-MIMO capacity criterion has been given in [13, 14, 25] as
$$ s_{1}s_{2} \approx \left(r+\frac{1}{2}\right)\frac{D\lambda}{\cos^{2}\theta}, \quad r \in \mathbb{Z}^{+} $$
This equation shows that if \(s_{1}\), \(s_{2}\), D, θ, and λ satisfy (9) with a certain \(r \in \mathbb{Z}^{+}\), the MIMO capacity will be at its maximum. It is interesting to note that (9) is a function of the inter-element distances (\(s_{1}\), \(s_{2}\)), the TX-RX separation distance (D), the orientation of the arrays (θ), and the carrier frequency (\(f_{c}\)). Thus, the simplified 2×2 MIMO capacity criterion is expressed in terms of \(s_{1}\), \(s_{2}\), D, θ, and \(f_{c}\). Equation (5) gives a criterion that should be satisfied to maximize the channel capacity; this criterion is simplified and summarized in (9). This implies that if (9) is met through the positioning and separation of TX and RX, the maximum channel capacity of a 2×2 MIMO system is achieved. For a time-invariant channel, λ, D, and θ are constant, so the right-hand side of (9) is constant for a given \(r \in \mathbb{Z}^{+}\); we can then satisfy (9) by setting appropriate \(s_{1}\) and \(s_{2}\), and the channel capacity is at its maximum when this is fulfilled. In mobile scenarios, D and θ vary with the movement of the RX vehicle. To simplify further, the minimum optimal spacing, or the first solution of (9), is obtained when θ=0 and r=0. Therefore, the antenna separation distance required for optimal LOS MIMO operation is expressed as
$$ s_{1}s_{2}=\frac{D\lambda}{2} = \frac{Dc}{2f} $$
Conceptually, a larger TX-RX separation distance requires larger antenna element spacings, and lower frequencies require larger antenna spacings. For fixed f and D, the antenna arrays can easily be designed so that subchannel orthogonality is achieved, which results in the attainment of maximum capacity in LOS scenarios. For V2I communications where the RX vehicle is moving, D and θ change with time. Hence, the receiver antenna array is not fixed at a specific position, but its location varies with the motion of the vehicle. Optimal LOS MIMO V2I operation therefore depends on achieving the optimal angle \(\theta_{\mathrm{opt}}\) and the optimal TX-RX separation distance \(D_{\mathrm{opt}}\).
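To illustrate how the simplified criterion \(s_{1}s_{2}=D\lambda/2\) behaves, the following Python sketch (an illustration under an idealized planar, broadside geometry, not the authors' analysis code; the spacings \(s_{1}=2\) m and \(s_{2}=1\) m are taken from the measurement setup, everything else is an assumption) builds the pure-LOS 2×2 channel from the exact path lengths and evaluates its capacity at a 20 dB signal SNR.
import numpy as np

lam = 3e8 / 920e6          # carrier wavelength at 920 MHz (~0.33 m)
s1, s2 = 2.0, 1.0          # TX and RX element spacings used in this measurement campaign
rho = 10 ** (20 / 10)      # fixed signal SNR of 20 dB

def los_capacity(D):
    # Capacity (bps/Hz) of the pure-LOS 2x2 channel for parallel broadside arrays at range D
    tx = np.array([[-s1 / 2, 0.0], [s1 / 2, 0.0]])          # TX element coordinates (m)
    rx = np.array([[-s2 / 2, D], [s2 / 2, D]])              # RX element coordinates (m)
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    H = np.exp(-2j * np.pi * d / lam)                       # normalized free-space responses
    M = np.eye(2) + (rho / 2) * (H @ H.conj().T)
    return float(np.log2(np.real(np.linalg.det(M))))

D_opt = 2 * s1 * s2 / lam                                   # ~12.3 m from the simplified criterion
print(los_capacity(D_opt))    # ~13.3 bps/Hz: the LOS channel is essentially orthogonal here
print(los_capacity(250.0))    # ~8 bps/Hz: approaching the rank-1 limit log2(1 + 2*rho) = 7.65
In this sketch the capacity peaks near distances that satisfy the spacing criterion and decays toward the rank-1 SISO-equivalent value at long range, which is the qualitative behavior the criterion predicts.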
1.2.4 Least square channel estimation
Least square (LS) channel estimation is used by many pilot-based channel estimation methods for MIMO-OFDM as an initial estimate. It is the simplest approach to pilot-symbol-aided (PSA) channel estimation in OFDM systems, and no a priori information is assumed to be known about the statistics of the channel taps. However, it shows poor accuracy, as it is performed on a frame-by-frame basis with no filtering of the estimate. The SISO-OFDM system model at the nth OFDM symbol and kth OFDM subcarrier is given as [29]
$$ Y_{n,k}= X_{n,k}H_{n,k}+W_{n,k} $$
where \(Y_{n,k}\) is the received signal, \(X_{n,k}\) is the transmitted signal, and \(W_{n,k}\) is the additive white Gaussian noise (AWGN). The LS channel estimate \(\hat{H}_{\mathrm{LS},n,k}\) is given as
$$ \hat{H}_{\mathrm{LS},n,k}=\frac{Y_{n,k}}{X_{n,k}}=H_{n,k}+\frac{W_{n,k}}{X_{n,k}} $$
It is important to note that this simple LS estimate \(\hat{H}_{\mathrm{LS}}\) does not exploit the correlation of the channel across subcarriers and across OFDM symbols. We utilized the LS estimation approach to obtain the initial MIMO channel estimates at the pilot subcarriers, which were then further improved using a 2D DCT filtering technique in both time and frequency [30–32]. The effectiveness of this technique is shown in Section 1.4.
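The following Python sketch (an illustration only; the paper does not specify its filter design, so the number of retained DCT coefficients here is an arbitrary assumption) shows the basic idea of the LS estimate followed by 2D DCT filtering across OFDM symbols and subcarriers.
import numpy as np
from scipy.fft import dctn, idctn

def ls_estimate(Y, X):
    # Per-symbol, per-subcarrier least-squares estimate H_LS = Y / X for known pilots X
    return Y / X

def dct2_denoise(H_ls, keep_time=8, keep_freq=32):
    # 2D DCT filtering of an LS channel estimate of shape (num_symbols, num_subcarriers).
    # Only the lowest-order keep_time x keep_freq coefficients are retained, exploiting the
    # time/frequency coherence of the channel; the cut-off values are assumptions.
    def smooth(A):
        C = dctn(A, norm="ortho")
        C[keep_time:, :] = 0.0
        C[:, keep_freq:] = 0.0
        return idctn(C, norm="ortho")
    # Filter the real and imaginary parts separately so only real-valued DCTs are needed
    return smooth(H_ls.real) + 1j * smooth(H_ls.imag)

# Example usage: H_ls = ls_estimate(received_pilots, known_pilots); H_hat = dct2_denoise(H_ls)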
MIMO-OFDM V2I measurements
1.3.1 Measurement equipment
The 2×2 MIMO-OFDM V2I channel measurements were performed using an in-house built software-defined radio platform using AD9361 dual MIMO RF agile transceivers from Analog Devices, as TX and RX. The AD9361 is a wideband 2×2 MIMO transceiver. It combines an RF front-end with a flexible mixed-signal baseband section with a 12-bit analog-to-digital converter (ADC) and a digital-to-analog converter (DAC) and integrated frequency synthesizers. It provides a configurable digital interface to a processor. It operates from 70 MHz to 6.0 GHz, with tunable bandwidth from 200 kHz to 56 MHz. The equipment transmits channel sounding OFDM signals with three different subcarrier spacings, as shown in Table 1.
Table 1 OFDM packet parameters
A custom-built user interface software running on Windows operating system was used. This software captures ADC samples and records them on a PC's hard disk via USB connection. The transmitter sends OFDM signals to sound the channel, which are eventually recorded at the receiver unit. By post-processing, we then obtained the complex channel transfer function or frequency response as explained in Section 1.2.4. The transmitted OFDM signal waveforms (DAC samples) were generated off-line in the PC, using MATLAB. The in-house built platform is equipped with a field-programmable gate array (FPGA) that streams the off-line generated waveforms to the AD9361's DAC. The baseband-to-RF module amplifies, filters, and up-converts the signals to RF (920 MHz), and the antenna module processes the signals from the RF stage and sends them to the receiver through the MIMO channel. At the receiver side, the signals reach the antennas and pass to the low-noise amplifiers and down converters. The baseband waveform is sampled by the ADC, and the sampled baseband waveform data is transferred to the PC via USB and recorded on the hard disk. Each transmitted OFDM-based wireless packet has an approximate duration of 50 ms. The transmitted OFDM packets were sampled at 32.768 M samples per second. The measurement channel bandwidths are 8.2, 9.0, and 6.7 MHz for LSS, MSS, and SSS, respectively. The transmitted data are modulated by quadrature phase shift keying (QPSK). The OFDM-based wireless packets entirely consisted of the pilot symbols for the purpose of the 2×2 MIMO-OFDM channel measurement.
A 2×2 MIMO antenna system was implemented using two commercially available omnidirectional vertically polarized L-COM HGV-906 antennas and two off-the-shelf omnidirectional antennas as both TX and RX antenna array elements, respectively, for all measurements described in this paper. A 35-dBm high-power amplifier ZHL-1000-3W from Mini-Circuits was added at the transmitter to provide an output power of 23 dBm. With the use of a 6-dBi antenna, the maximum effective isotropic radiated power (EIRP) is 29 dBm. The spacing of the antenna elements is set to ≈ 6 λ at TX and ≈ 3 λ at RX where λ=32.5 cm, i.e., the spacing between the TX antennas TX1-TX2=2 m and between the RX antennas is RX1-RX2=1 m. The two transmitter antennas (TX) were mounted at a fixed position each at the same height of H TX=3.6 m and the two receiver antennas (RX) mounted at the rooftop of a vehicle at a height of H RX=1.8 m, as shown in Fig. 3. The measurement platform uses built-in Global Positioning System (GPS) receivers to ensure accurate synchronization of the TX and RX. In addition, the receiver system was equipped with an external EVK-6T-0 U-blox 6 GPS to provide measurement time stamp as well as the location and speed data for the RX vehicle.
We used a portable Rohde & Schwarz (R &S) FSH8 spectrum analyzer to measure the interference from other devices operating at the 920 MHz band. We observed that the transmitted signal from a nearby cellular base station was causing desensitization of our receiver. As a solution, we used channel filters to remove the effects of the interfering signals during the measurements. No other significant interference in the 920 MHz ISM band was observed during the measurements. All the measurement equipment was separately calibrated with Agilent signal generators and signal analyzers. The TX output power was measured with a power meter. The measurement parameters are summarized in Table 2.
Table 2 Measurement parameters
1.3.2 Measurement environments
The MIMO-OFDM V2I measurements were conducted on a suburban street located at the CSIRO ICT Centre Laboratory in NSW, Australia. The environment can be characterized as suburban with a two-lane road, some office and residential buildings, cars, parking lots, concrete pillars, metallic sign posts and fences, a base station mast, road signs, and high tree density on both sides of the street, which provides a rich multipath scattering environment that favors MIMO channels. The measurements took place during working hours at the end of the winter season between 09:00 and 16:00 (GMT+10). The RX vehicle was following a rectangular route as illustrated in Fig. 4. Each measurement scenario was recorded over a duration of 10 min for five different rounds. The capturing of the ADC samples was not performed continuously, and as such, the acquired data was limited to 5 GB. The maximum separation distance and relative speed between the TX and RX antennas, derived from the GPS location and speed data, were 250 m and 55 km/h, respectively. The separation distance varied between 3 and 250 m. Occasional blockage of the LOS occurred due to high tree density and buildings. Both LOS and NLOS conditions existed during the measurements. Significant scatterers in the surrounding environment are trees, buildings, residential houses and offices, traffic signs, metallic fences, and moving and parked cars that can significantly contribute to a multipath propagation environment. In the measurement scenarios, we transmitted OFDM packets with the parameters shown in Table 1. The minimum guard interval of 1.6 μs was considered to be large enough to remove ISI in this short-range outdoor environment.
Google Earth map view of the measured GPS location data showing the location of the TX and the RX route around the measurement environment. The blue line shows the route taken by the RX, and the yellow dot shows the fixed location of the TX. The portions of the indicated driving route lying outside the roads are considered to be due to errors in estimating locations
Measurement results and analysis
The post-processing of measurement data was performed using MATLAB. The measured MIMO channels are estimated from the received signals. In particular, the channel coefficients were estimated by LS channel estimation by sending known channel training symbols alternately in time using the two transmitters (e.g., the first transmitter sent channel training symbols at odd OFDM symbol time index and the second transmitter sent channel training symbols at even OFDM symbol time index). The transmitted wireless packets have a duration of 50 ms and consist of all known channel training symbols.
The measurement system was equipped with GPS receivers at both the TX and the RX ends for accurate time reference (less than 100 ns ambiguity), which facilitated frame detection and time offset estimation. The GPS receivers also provided accurate frequency reference which removed the necessity of frequency offset estimation and correction. Residual frequency offset which was less than 100 Hz was included in the estimated channel. No additional ICI cancelation technique was applied, apart from the fact that the TX and the RX frequency references were synchronized to the GPS reference, which we expect to reduce the ICI problem significantly. We note that the Doppler shift expected in the measurement at 920 MHz with up to 55 km/h is approximately 47 Hz. Given the minimum subcarrier spacing of approximately 4 kHz (corresponding to SSS), we consider that the effects are negligible.
We used a high sampling rate of 32.768 Msps for the measurement to reduce the effect of noise. The use of oversampling to improve the SNR in OFDM transmission has been proposed in the literature [40, 41]. Random receiver noise is reduced by oversampling based on the assumption that the signal is coherent and that the noise is random. We extended our analysis to remove the effects of channel estimation errors by applying 2D DCT filtering in both time and frequency [30–32]. The 2D DCT has significantly reduced the effect of noise on our measurement results. In addition, we have included a simulation analysis to show that the effects of channel estimation error on MIMO channel capacity are small for high SNR up to 20 dB.
1.4.1 Effects of channel estimation error on MIMO channel capacity
We investigated the effects of channel estimation error on the MIMO capacity for i.i.d. Rayleigh fading channels by simulation. Figure 5 presents the deviation of the estimated capacity with channel estimation error for three different CE SNR (10, 20, and 30 dB) in the case of 2×2 MIMO. Figure 5 clearly shows that a significant deviation in the measured MIMO channel capacity may result where the CE SNR is 10 dB, while the 30 dB CE SNR gives insignificant deviation.
CDF of capacity deviation at signal SNR = 20 dB for different channel estimations
We applied 2D DCT-based filtering to reduce the effects of the channel estimation error and noise on our measurement results. Figures 6 and 7 present the measured 2×2 MIMO-OFDM doubly selective channel frequency response for the non-noise-reduced and noise-reduced MSS, respectively. The presented results correspond to NLOS propagation conditions. These are samples of the MIMO subchannels considered at a time when the RX vehicle is in motion at receiver position, d=200 m. Variations in frequency and in time are evident from the results. Figures 6 and 7 show that the effects of channel estimation error on the measured MIMO channel have been reasonably reduced by employing 2D DCT filtering in both time and frequency. Figure 8 shows an example time variation of the MIMO subchannel at one OFDM subcarrier in a LOS environment with the MSS case. The blue dots show the original LS-estimated channel while the red curve shows the 2D DCT filtered version. Figure 9 shows an example for a NLOS case. It can be seen that the LS-estimated MIMO channel follows the 2D DCT noise-reduced MIMO channel.
Example of measured 2×2 MIMO-OFDM non-noise-reduced doubly selective channel for the MSS case
Example of measured 2×2 MIMO-OFDM 2D DCT noise-reduced doubly selective channel in NLOS for the MSS case
Time variation of MIMO channel at one frequency for 2D DCT noise-reduced estimated channel and the LS-estimated channel at LOS D=34.3 m
Time variation of MIMO channel at one frequency for 2D DCT noise-reduced estimated channel and the LS-estimated channel at NLOS D=200 m
We approximately estimated the measured CE SNR as the difference between the noise-reduced MIMO channel and the LS-estimated MIMO channel, assuming that the noise-reduced MIMO channel is ideal. The results are averaged over time and frequency. The same method was used for the three different subcarrier spacing scenarios. Figure 10 presents the measured CE SNR as a function of the receiver position for the three different subcarrier spacings considered. The figure shows that the CE SNR indeed varies as a function of the receiver position and the separation distance. It shows the mean CE SNR for all locations having at least 15 dB. The measured CE SNR information as shown in Fig. 10 has practical value in understanding what distance can be achieved with a practical transmitting power value. However, we consider that it is important to separate the effects of CE SNR and the effects of the physical environment in analyzing the MIMO channel capacity. For this reason, we have chosen to use what we call fixed signal SNR criteria (transmitting power can be unlimitedly adjusted to provide fixed signal SNR at all receiver locations within the coverage) in analyzing the MIMO channel capacity in this paper. An example value of signal SNR = 20 dB was chosen for our MIMO capacity analysis, as this is typical for a wireless communication using 16-QAM or 64-QAM depending on the forward correction coding rate. In our actual measurement, automatic gain control (AGC) was used to adjust the gain at the receiver but not to the extent to assure the CE SNR of 20 dB at all measurement location, causing the channel estimation error corresponding to the CE SNR.
Example of mean measured channel estimation SNR as a function of TX-RX separation distance (m)
1.4.2 Measured MIMO-OFDM channel capacity variation as a function of the measurement time
From the collected data, the MIMO channel capacity was calculated based on Eq. (2) for the three different subcarrier spacing values considered. It is well known that MIMO channels depend on the values of the channel response matrix, the number of TX and RX antenna arrays, and the signal SNR at the receiver. The capacity evaluated at a fixed signal SNR is a measure of scattering richness of a MIMO channel and allows the analysis of capacity degradation due to the influence of spatial correlation when changing from LOS to NLOS conditions. The TX and RX antenna arrays are both individually separated at distances of approximately 6 λ and 3 λ, respectively, to reduce the influence of correlation. Figure 11 shows significant variations in mean capacity for different measurement times, ranging from a maximum of 13.6 bps/Hz to a minimum of 8.4 bps/Hz for all subcarrier spacing scenarios considered.
2×2 measured MIMO-V2I channel capacity variation as a function of the measurement time
1.4.3 Cumulative distribution function of MIMO capacity
Table 3 presents the cumulative distribution function (CDF) of the MIMO channel capacity for the three different subcarrier spacings considered: LSS, MSS, and SSS. Figure 12 presents the measured CDF of MIMO-OFDM channel capacity. Our analysis reveals that there is no significant variation in the MIMO capacity for the different OFDM standard subcarrier spacings considered for LSS, MSS, and SSS scenarios. This implies that any subcarrier spacing could be used in MIMO-OFDM systems to achieve optimal performance in terms of capacity. The dynamic range is defined in [3] as the difference between the maximum and the minimum values of the MIMO channel capacity. In this paper, we consider the 90 % capacity dynamic range C DR(90 %) as the difference between the 95 % (C 95 % ) and 5 % (C 5 % ) mean capacity values evaluated from the CDF of mean MIMO capacity. The measured C DR(90 %) are 3.3, 3.6, and 4.4 bps/Hz, respectively, for LSS, MSS, and SSS scenarios. Table 3 summarizes the dynamic range of mean MIMO-OFDM capacity for the three different subcarrier spacing scenarios considered. The CDF plots show there is a 10 % probability of the mean 2×2 MIMO channel capacity being below 9.7 bps/Hz and a 50 % probability of the mean capacity being below (or above) 11.4 bps/Hz for all the three scenarios considered. Furthermore, there is a 90 % probability of the mean capacity being below 12.2 bps/Hz for LSS cases, 12.5 bps/Hz for MSS cases, and 13.1 bps/Hz for SSS cases. In conclusion, lower capacity values in the range of 8 to 11 bps/Hz are achieved equally by all the subcarrier standards studied. However, higher capacity values on the order of 13 to 14 bps/Hz are achieved by the shorter subcarrier spacing standard (SSS) with higher probability. Currently, we do not have any theoretical reason for the variation of mean MIMO capacity values for different OFDM subcarrier spacings, and we encourage further investigation by researchers.
CDF of MIMO-OFDM channel capacity for three different subcarrier spacings: LSS, MSS, and SSS channel at signal SNR = 20 dB
Table 3 Comparison of the CDF of MIMO capacity for all the three subcarrier spacing scenarios, C (in bps/Hz)
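For reference, the summary statistics used in Table 3 can be obtained from the per-packet mean capacity samples as in the short Python sketch below (an illustration; capacities stands for the measured samples, which are not reproduced here).
import numpy as np

def capacity_summary(capacities):
    # 5%, 50%, and 95% points of the capacity CDF and the 90% dynamic range C_DR(90%) = C_95% - C_5%
    c5, c50, c95 = np.percentile(capacities, [5, 50, 95])
    return {"C_5%": c5, "C_50%": c50, "C_95%": c95, "C_DR(90%)": c95 - c5}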
1.4.4 MIMO capacity as a function of time and frequency
Figures 13, 14, 15, 16, 17 and 18 show the MIMO channel capacity as a function of time and frequency for LSS, MSS, and SSS cases, each with LOS and NLOS scenarios. For the LSS case, it can be seen that channel capacity C varies from 7.8 to 15.0 bps/Hz for the LOS scenario in Fig. 13. Larger variations within 50 ms can be observed (from 6.0 to 15.2 bps/Hz) when the vehicle is moving at higher speed and is under NLOS conditions, as shown in Fig. 14. For the MSS case, the capacity varies from 9.5 to 13.5 bps/Hz for LOS in Fig. 15 and from 6.0 to 15 bps/Hz for NLOS scenario in Fig. 16. And for the SSS case, the capacity varies from 10.5 to 14.0 bps/Hz and from 5.5 to 15.0 bps/Hz for LOS and NLOS scenarios, respectively. Generally, greater channel capacity dynamic range could be observed for NLOS conditions that involve larger TX-RX separation distances compared to the near-range LOS scenarios.
Example of the measured 2×2 LOS MIMO-OFDM V2I channel capacity at D=29.8 m for the LSS scenario
Example of a measured 2×2 NLOS MIMO-OFDM V2I channel capacity at D=213.8 m for the LSS scenario
Example of a measured 2×2 LOS MIMO-OFDM V2I channel capacity at D=19.98 m for the MSS scenario
Example of a measured 2×2 NLOS MIMO-OFDM V2I channel capacity at D=200 m when the RX vehicle is moving for the MSS scenario
Example of a measured LOS 2×2 MIMO-OFDM V2I channel capacity at D=40 m for the SSS scenario
Example of a measured NLOS 2×2 MIMO-OFDM V2I channel capacity at D=206 m when the RX vehicle is moving for the SSS scenario
Figures 19, 20, and 21 present the measured maximum MIMO channel capacity for the LSS, MSS, and SSS scenarios, respectively. Interestingly, the maximum capacity C max for each scenario was achieved when there was a LOS path between RX and TX. Generally, it can be seen that for some subcarriers, the capacity is high, up to C max=15.0 bps/Hz (resulting in better performance in terms of data rate link), while for other subcarriers, it is low, at C min=4.0 bps/Hz (resulting in lower data rate link). These results show that the MIMO capacity significantly varies with different subcarrier indexes and OFDM symbols. In this measurement, the number of scatterers increases when the TX-RX separation distance increases, leading to the de-correlation of the MIMO subchannels and therefore an increase in capacity. When the TX-RX separation distance is small and a strong LOS component exists, the MIMO channel capacity is still large due to the larger angular spacing between the different MIMO subchannels which occur at small TX-RX separation distances. This large angular spacing between the different MIMO propagation paths causes orthogonality, which in turn increases the LOS MIMO channel capacity.
Example of a measured LOS 2×2 MIMO-OFDM V2I maximum channel capacity, C max=13.37 bps/Hz, D=76.7 m for the LSS scenario
Example of a measured LOS 2×2 MIMO-OFDM V2I maximum channel capacity, C max≈13.0 bps/Hz, D=13.1 m for the MSS scenario
Example of a measured LOS 2×2 MIMO-OFDM V2I maximum channel capacity, C max=13.57 bps/Hz, D=59 m for the SSS scenario
1.4.5 MIMO channel capacity against separation distance
Figure 22 presents the (mean) MIMO-OFDM channel capacity averaged over the 50 ms packet time as a function of the separation distance between TX and RX. For the LSS case, the maximum MIMO channel capacity is 13.37 bps/Hz, followed by 13.17 bps/Hz at separation distances of 76.7 and 15 m and at relative speeds of 31.7 and 8.8 km/h, respectively. For the MSS case, the maximum mean MIMO capacity is ≈13.0 bps/Hz, followed by 13.0 bps/Hz at separation distances of 13.1 and 29.6 m and at relative speeds of 29.3 and 15 km/h, respectively. For the SSS case, the maximum mean MIMO capacity is 13.57 bps/Hz, followed by 12.8 bps/Hz at separation distances of 62 and 18.4 m and at relative speeds of 21.6 and 28.3 km/h, respectively. The measured maximum mean MIMO-OFDM capacity being higher than the theoretical maximum 2×2 MIMO capacity of 13.32 bps/Hz is considered to be due to the averaging over the OFDM subcarriers.
2×2 MIMO capacity for 2D DCT filtered MIMO channel as a function of TX-RX separation distance
From the observation of the environment, we approximately categorized the scenarios as follows. TX-RX separation distances of less than 100 m were categorized as LOS, while TX-RX separation distances of more than 100 m were categorized as NLOS. These assumptions match the geometry of the measurement environment. It is evident that there is an increase in the local minima of the mean capacity as the separation distance between the TX and RX links increases, for all three subcarrier spacings. This is possibly due to the expected lower spatial correlation between the different MIMO subchannels caused by the presence of NLOS and a larger number of scatterers (trees, vegetation, metallic fences, buildings, and sometimes moving vehicles) as the TX-RX separation distance increases. The lower capacity values in LOS may be due to the degenerated channels typical in the LOS case, while the higher capacity values in LOS compared to NLOS can be explained by the greater angular spacing of the MIMO subchannels. Figures 23 and 24 present the histogram and CDF representations of the mean capacity results for the LOS (blue curve) and the NLOS (red curve) MSS scenarios, respectively. The histogram and CDF figures clearly show that the highest mean capacities are achieved for the LOS scenario and also that the variance of the mean capacity is much higher for LOS than for the NLOS scenario. Similar results were obtained for the other two subcarrier spacings.
Histogram of 2×2 MIMO channel capacity for LOS and NLOS MSS scenarios
CDF of 2×2 MIMO channel capacity for LOS and NLOS MSS scenarios
In general, Fig. 22 clearly shows a correlation between the MIMO channel capacity variation and the TX-RX separation distance, i.e., the MIMO channel capacity has a larger variation in near-range LOS conditions and a smaller variation in far-range NLOS conditions. The possible cause of the larger capacity variations for near-range LOS propagation is the greater angular spacing between the MIMO subchannels. There is greater angular spacing between the MIMO subchannels for LOS links and less angular spacing between the MIMO subchannels for far-range NLOS links, as illustrated in Fig. 2. Figure 2 shows that as the degree of angular spacing between the MIMO subchannels becomes smaller, the correlation between the received signals becomes larger, as the differences in the length of the MIMO subchannel paths decrease. This greater angular spacing causes the different MIMO propagation paths \(d_{11}\) and \(d_{12}\), as well as \(d_{21}\) and \(d_{22}\), to be orthogonal, therefore increasing the expected channel capacity. The figure explains our rationale for relating the degree of angular spacing between the MIMO subchannels to the correlation between the received signals. Most LOS conditions in our measurements met the criterion for maximum capacity as stated in Eq. (6), together with the conditions \(d_{11}=d_{22}\) and \(d_{12}=d_{21}\) that occur when both the TX and RX arrays are parallel. These conditions are met in most of the measured LOS scenarios discussed in this paper. The results presented in this section are consistent with what is expected, as they show that the mean MIMO capacity varies according to the degree of angular spacing between the MIMO subchannels and the correlation between the received signals.
It is observed that the maximum capacities for all subcarrier spacing scenarios were achieved by the LOS case despite the assumed highly correlated channel. This is a surprising result, because signal SNR is not a function of location in our analysis since we considered a fixed signal SNR = 20 dB in order to separate the effects of signal SNR variations. In this measurement, TX-RX antenna arrays are separated far apart on each side of the link (TX =6λ and RX =3λ) which is considered to contribute to the de-correlation of the MIMO subchannels in near-range LOS scenarios and to result in a full-rank MIMO channel and lead to the achievement of maximum MIMO capacity in LOS scenarios. The results are consistent with the phenomenon known as LOS MIMO in the literature [14, 24, 25, 42].
Based on the maximum LOS MIMO capacity criteria presented in the previous sections, we investigated a 2×2 MIMO V2I system operating in LOS and NLOS conditions. The TX-RX separation distance D varies between 3 and 250 m as the RX vehicle drives away from TX at a carrier frequency f=920 MHz and fixed antenna array spacing s 1=2 m at TX and s 2=1 m at RX; θ and D opt varies as the RX vehicle drives around the vicinity. At a TX height of 3.6 m and a RX height of 1.8 m, the calculated optimal LOS TX-RX separation distance D opt that will satisfy (9) for r=0 and θ=40° is D opt=7.8 m.
In our MIMO-OFDM MSS case, the measured C max≈ 13.0 bps/Hz which is the same as the theoretical maximum LOS-MIMO capacity C max=13.0 bps/Hz that was achieved at a LOS path corresponding to the theoretical D opt=13.1 m, which satisfies (9) for fixed f=920 MHz, s 1=2 m, s 2=1 m, θ=0, and r=0. While for the case of LSS and SSS, a higher value than the theoretical maximum capacity of C max=13.37 bps/Hz and C max=13.57 bps/Hz, respectively, was achieved at LOS distances of D=76.7 m and D=62 m, respectively. This shows that our antenna spacing satisfied the criterion for achieving the maximum MIMO capacity in LOS situations by proper positioning or spacing of the antenna elements in such a way to achieve orthogonality between spatially multiplexed signals of MIMO systems, resulting in a high-rank channel response matrix. In conclusion, all the maximum capacity values were achieved at near-range LOS and the measured capacity shows very close agreement with the theoretical values of optimal LOS MIMO capacity with very small variations which we attribute to minor inaccuracies in the positioning and orientation of the antenna elements. This validates the LOS maximum capacity criterion for achieving orthogonality between spatially multiplexed signals of MIMO systems in LOS channels by employing specifically designed antenna arrays.
A novel 2×2 MIMO testbed has been designed and implemented for three different subcarrier spacings. The MIMO testbed is based on a software-defined radio platform where flexible signal processing can be implemented. The testbed has been used to carry out measurements at a frequency of 920 MHz. We have investigated the channel capacity of a MIMO-OFDM V2I channel under both LOS and NLOS propagation scenarios. We examined a method to achieve orthogonality between spatially multiplexed signals in MIMO V2I communication systems operating in a LOS channel. Our analytical results show that the maximum capacity can be achieved under LOS scenario with proper antenna element spacing despite the effects of higher correlation and reduced rank of the channel response matrix which can be counterbalanced by deliberate separation of antenna elements that preserves orthogonality; this results in a full-rank MIMO channel matrix, and thus, high MIMO capacity is achieved in LOS cases. As anticipated, when the vehicle is driving closer to the TX roadside infrastructure unit, the greatest capacity values are observed in LOS conditions. We observed that even with some deviation from optimal design, the LOS MIMO case outperforms the theoretical i.i.d. Rayleigh performance (11.5 bps/Hz) in terms of Shannon capacity. We investigated the MIMO channel capacity for different positions of the receiver vehicle. Interestingly, since the maximum capacity of LOS-MIMO systems does not depend on the existence of scatterers, there is no particular limit on the linear increase of the system capacity. The presented results demonstrate that the MIMO channel benefits from the movement of the receiver from NLOS to LOS conditions, as the detrimental effect of increased correlation between the received signal is outweighed by the advantageous effect of high angular spacing of the sub-paths. These results show the strong dependence of the MIMO capacity on antenna array spacing and the correlation of the channel response matrix. This measurement demonstrates the significance of using optimally interspaced MIMO antenna array elements at each of the radio links to increase the capacity of MIMO systems over SISO systems.
G Foschini, M Gans, On limits of wireless communications in a fading environment when using multiple antennas. Wirel. Pers. Commun. 10(6), 311–335 (1998).
H Cheng, H Shan, W Zhuang, Infotainment and road safety service support in vehicular networking. Elsevier J. Mech. Syst. Signal Process. 2010, 1–19 (2010).
K Ziri-Castro, W Scanlon, N Evans, Prediction of variation in MIMO channel capacity for the populated indoor environment using a radar cross-section-based pedestrian model. IEEE Trans. Wirel. Commun. 4(3), 1186–1194 (2005).
H Suzuki, T Van, I Collings, Characteristics of MIMO-OFDM channels in indoor environments. EURASIP J. Wirel. Commun. Netw. 2007(19728), 1–9 (2007).
A Goldsmith, S Jafar, N Jindal, S Vishwanath, Capacity limits of MIMO channels. IEEE J. Sel. Areas Commun. 21(5), 684–702 (2003).
A Molisch, M Steinbauer, M Toeltsch, E Bonek, R Thoma, Capacity of MIMO systems based on measured wireless channels. IEEE J. Sel. Areas Commun. 20(3), 561–569 (2002).
G Xu, Y Yang, H Ling, An experimental investigation of wideband MIMO channel characteristics based on outdoor non-LOS measurements at 1.8 GHz. IEEE Trans. Antennas Propag. 54(11), 3274–3284 (2006).
J Karedal, F Tufvesson, N Czink, A Paier, C Dumard, T Zemen, C Mecklenbrauker, A Molisch, A geometry-based stochastic MIMO model for vehicle-to-vehicle communications. IEEE Trans. Wirel. Commun. 8(7), 3646–3657 (2009).
J Ling, D Chizhik, D Samardzija, R Valenzuela, Peer-to-peer MIMO radio channel measurements in a rural area. IEEE Trans. Wirel. Commun. 6(9), 3229–3237 (2007).
J Karedal, F Tufvesson, N Czink, A Paier, C Dumard, T Zemen, C Mecklenbrauker, A Molisch, in Communications, 2009. ICC '09. IEEE International Conference on. Measurement-based modeling of vehicle-to-vehicle MIMO channels (Dresden, 14–18 June 2009), pp. 1–6.
O Renaudin, V Kolmonen, P Vainikainen, C Oestges, Non-stationary narrowband MIMO inter-vehicle channel characterization in the 5 GHz band. IEEE Trans. Veh. Technol. 59(4), 2007–2015 (2010).
A Alonso, A Paier, T Zemen, N Czink, F Tufvesson, in Communications Workshops (ICC), 2010 IEEE International Conference on. Capacity evaluation of measured vehicle-to-vehicle radio channels at 5.2 GHz (Cape Town, 23–27 May 2010), pp. 1–5.
H Mousavi, B Khalighinejad, B Khalaj, in Wireless Communications and Mobile Computing Conference (IWCMC), 2013 9th International. Capacity maximization in MIMO vehicular communication using a novel antenna selection algorithm (Sardinia, 1–5 July 2013), pp. 1246–1251.
M Matthaiou, D Laurenson, C-X Wang, in Wireless Communications and Networking Conference, 2008. WCNC 2008 IEEE. Capacity study of vehicle-to-roadside MIMO channels with a line-of-sight component (Las Vegas, NV, 3 April 2008 31 March), pp. 775–779.
A Paier, J Karedal, N Czink, H Hofstetter, C Dumard, T Zemen, F Tufvesson, C Mecklenbrauker, A Molisch, in Personal, Indoor and Mobile Radio Communications, 2007. PIMRC 2007. IEEE 18th International Symposium on. First results from car-to-car and car-to-infrastructure radio channel measurements at 5.2 GHz (Athens, 3–7 September 2007), pp. 1–5.
L Cheng, B Henty, D Stancil, F Bai, P Mudalige, Mobile vehicle-to-vehicle narrow-band channel measurement and characterization of the 5.9 GHz dedicated short range communication (DSRC) frequency band. IEEE J. Sel. Areas Commun. 25(8), 1501–1516 (2007).
G Grant, Rayleigh fading multi-antenna channels. J. Appl. Signal Process. 2002(3), 316–329 (2002).
H Shin, JH Lee, Capacity of multiple-antenna fading channels: spatial fading correlation, double scattering, and keyhole. IEEE Trans. Inf. Theory. 49(10), 2636–2647 (2003).
M Chiani, M Win, A Zanella, On the capacity of spatially correlated MIMO Rayleigh-fading channels. IEEE Trans. Inf. Theory. 49(10), 2363–2371 (2003).
S Jin, X Gao, X You, On the ergodic capacity of rank-1 Ricean-fading MIMO channels. IEEE Trans. Inf. Theory. 53(2), 502–517 (2007).
M Kang, M-S Alouini, Capacity of MIMO Rician channels. IEEE Trans. Wirel. Commun. 5(1), 112–122 (2006).
P Driessen, G Foschini, On the capacity formula for multiple input-multiple output wireless channels: a geometric interpretation. IEEE Trans. Commun. 47(2), 173–176 (1999).
D Gesbert, DGORE H Bolcskei, A Paulraj, Outdoor MIMO wireless channels: models and performance prediction. IEEE Trans. Commun. 50(12), 1926–1934 (2002).
M Matthaiou, P de Kerret, G Karagiannidis, J Nossek, Mutual information statistics and beamforming performance analysis of optimized LOS MIMO systems. IEEE Trans. Commun. 58(11), 3316–3329 (2010).
I Sarris, A Nix, Design and performance assessment of maximum capacity MIMO architectures in line-of-sight. IEEE Proc. Commun. 153(4), 482–488 (2006).
F Bohagen, P Orten, G Oien, in Wireless Communications and Networking Conference, 2005 IEEE, 1. Construction and capacity analysis of high-rank line-of-sight MIMO channels (New Orleans, Louisiana, USA, 13–17 March 2005), pp. 432–437.
I Sarris, A Nix, Design and performance assessment of high-capacity MIMO architectures in the presence of a line-of-sight component. IEEE Trans. Veh. Technol. 56(4), 2194–2202 (2007).
F Bohagen, P Orten, G Oien, Design of optimal high-rank line-of-sight MIMO channels. IEEE Trans. Wirel. Commun. 6(4), 1420–1425 (2007).
M Ozdemir, H Arslan, Channel estimation for wireless OFDM systems. IEEE Commun. Surv. Tutorials. 9(2), 18–48 (2007).
P Tan, N Beaulieu, A comparison of DCT-based FDM and DFT-based OFDM in frequency offset and fading channels. IEEE Trans. Commun. 54(11), 2113–2125 (2006).
H N Al-Dhahir, S Minn, Satish, Optimum DCT-based multicarrier transceivers for frequency-selective channels. IEEE Trans. Commun. 54(5), 911–921 (2006).
L Han, B Wu, in Wireless Communications and Signal Processing (WCSP), 2010 International Conference on. DCT-based channel estimation for wireless OFDM systems with the hexagonal pilot pattern (Suzhou, China, 21–23 October 2010), pp. 1–5.
F Farrokhi, A Lozano, G Foschini, in Personal, Indoor and Mobile Radio Communications, 2000. PIMRC 2000. The 11th IEEE International Symposium on, 1. Spectral efficiency of wireless systems with multiple transmit and receive antennas (London, 373).
F Farrokhi, A Lozano, G Foschini, R Valenzuela, Spectral efficiency of FDMA/TDMA wireless systems with transmit and receive antenna arrays. IEEE Trans. Wirel. Commun. 1(4), 591–599 (2002).
C Shannon, A mathematical theory of communication. Bell Syst. Technol. 27, 379–423 (1948).
J Winters, On the capacity of radio communication systems with diversity in a Rayleigh fading environment. IEEE J. Sel. Areas Commun. 5(5), 871–878 (1987).
A Toding, M Khandaker, Y Rong, Joint source and relay design for MIMO multi-relays systems using projected gradient approach. EURASIP J. Wirel. Commun. Netw. 2014(155), 1–22 (2014).
J Kermoal, L Schumacher, K Pedersen, P Mogensen, F Frederiksen, A stochastic MIMO radio channel model with experimental validation. IEEE J. Selected Areas Commun. 20(6), 1211–1226 (2002).
P Kyritsi, D Chizhik, Capacity of multiple antenna systems in free space and above perfect ground. IEEE Commun. Lett. 6(8), 325–327 (2002).
J Wu, Y Zheng, Oversampled orthogonal frequency division multiplexing in doubly selective fading channels. IEEE Trans. Commun. 59(3), 815–822 (2011).
B-S Y-S Lee, OFDM Seo, receivers using oversampling with rational sampling ratios. IEEE Trans. Consum. Electron. 55(4), 1765–1770 (2009).
D McNamara, M Beach, P Fletcher, P Karlsson, Capacity variation of indoor multiple-input multiple-output channels. Electron. Lett. 36(24), 2037–2038 (2000).
Lyapunov-type inequalities for a Sturm-Liouville problem of the one-dimensional p-Laplacian
Shingo Takeuchi, Kohtaro Watanabe
Differential Integral Equations 34(7/8): 383-399 (July/August 2021). DOI: 10.57262/die034-0708-383
This paper considers the eigenvalue problem for the Sturm-Liouville problem involving the $p$-Laplacian \begin{align*} \begin{cases} \left ( \vert u'\vert^{p-2}u' \right ) '+ \left ( \lambda+r(x) \right ) \vert u\vert ^{p-2}u=0,\,\, x\in (0,\pi_{p}),\\ u(0)=u(\pi_{p})=0, \end{cases} \end{align*} where $1 < p < \infty$, $\lambda < p-1$, $\pi_{p}$ is the generalized $\pi$ given by $\pi_{p}=2\pi/ \left ( p\sin(\pi/p) \right ) $, and $r\in C[0,\pi_{p}]$. Sharp Lyapunov-type inequalities, which are necessary conditions for the existence of nontrivial solutions of the above problem, are presented. The results are obtained through the analysis of a variational problem related to a sharp Sobolev embedding and of generalized trigonometric and hyperbolic functions.
Primary: 34L10, 46E35
Research | Open | Published: 06 November 2017
Effects of long-term deforestation and remnant forests on rainfall and temperature in the Central Rift Valley of Ethiopia
Alemayehu Muluneh1,2,
Emiel van Loon5,
Woldeamlak Bewket4,
Saskia Keesstra1,
Leo Stroosnijder1 &
Ashenafi Burka3
Forest Ecosystems, volume 4, Article number: 23 (2017)
Some evidence suggests that forests attract rain and that deforestation contributes to changes in rainfall and temperature. The evidence, however, is scant, particularly on smaller spatial scales. The specific objectives of the study were: (i) to evaluate long-term trends in rainfall (1970–2009) and temperature (1981–2009) and their relationships with change in forest cover, and (ii) to assess the influence of remnant forests and topographical factors on the spatial variability of annual rainfall.
This study investigated forest-rainfall relationships in the Central Rift Valley of Ethiopia. The study used 16 long-term (1970–2009) and 15 short-term (2012–2013) rainfall datasets and six long-term (1981–2009) temperature datasets. Estimates of the forest and woodland cover decline over the past 40 years (1970–2009) and the measured distances between the remnant forests and the rainfall stations were also used. The long-term trends in rainfall (1970–2009) and temperature (1981–2009) were determined using Mann-Kendall (MK) and Regional Kendall (RK) tests, and their relationships with long-term deforestation were evaluated using simple linear regression. The influence of remnant forests and topographical variables on the spatial variability of rainfall was determined by stepwise multiple regression. A continuous forest and woodland cover decline was estimated using exponential interpolation.
The forest and woodland cover declined from 44% in 1973 to less than 15% in 2009 in the Central Rift Valley. Annual rainfall on the valley floor increased by 37.9 mm/decade, while annual rainfall on the escarpments/highlands decreased by 29.8 mm/decade. The remnant forests had a significant effect (P < 0.05, R2 = 0.40) on the spatial variability of the number of rainy days observed over two years (2012–2013), but had little effect on the variability of rainfall distribution. For total annual rainfall, slope was the best predictor and explained 29% of the rainfall variability in the Central Rift Valley. For the annual number of rainy days, slope and elevation together explained most of the variability (60%).
This study did not find a significant correlation between the long-term rainfall trend and the forest and woodland cover decline. The rift valley floor warmed significantly due to long-term deforestation in the Central Rift Valley. Topographic factors play a more significant role than forest cover in explaining the spatial variability of annual rainfall on both the long-term and the short-term time scales in the Central Rift Valley. The short-term rainfall data, however, indicated that the remnant forest had a significant effect on the spatial variability of the number of rainy days.
The loss of vegetation in humid and dry tropical regions is believed to increase the incidence of droughts and floods (Nicholson, 1998) and to also contribute to climate change (e.g. de Sherbinin et al. 2002). For example, a study in Amazon basin suggested that land cover change has the potential to increase the impact of droughts (Bagley et al. 2014). Similarly, Lawrence and Vandecar (2015) indicated that tropical deforestation results in warmer, drier conditions at the local scale. The impacts of changes in land use may contribute more than the greenhouse effect to regional climate change, occurrence of droughts, and desertification (e.g. Savenije 1996). Forest protection and re-vegetation can mitigate drought and flood risks. The protection of tropical forests in Madagascar and Indonesia, for example, has benefited drought and flood mitigation (Kramer et al. 1997; Pattanayak and Kramer 2001). Makarieva et al. (2009) suggested the potential for forest-mediated solutions to the problems of global desertification and water security. The need for improving our understanding of the role of vegetative cover in climate is thus becoming more urgent due to the increasing magnitude of change that humans are imposing on vegetation (Sanderson et al. 2012). Several studies have already advocated for a more comprehensive assessment of the net climate effect of land cover change policies on climate, beyond the global warming potential (e.g. Castillo and Gurney 2013, Davies-Barnard et al. 2014, Bright et al. 2015).
In recent years, there has been an increasing amount of literature on the consequences of deforestation at regional and global scales (Hanif et al. 2016). A study of the long-term effect of deforestation on the Amazonian climate showed a 60% reduction in rainfall (Nobre et al. 2009). Similarly, deforestation in 2010 over the Amazon basin showed up to a 1.8% reduction in rainfall (Lawrence and Chase 2010). Additionally, modelling studies over the Asian region suggested that large-scale deforestation can lead to reduced precipitation (Dallmeyer and Claussen 2011). For example, Cao et al. (2015) found that land use/land cover change in Northern China altered the regional climate over the past decade (between 2001 and 2010). Another study in the Congo Basin, Africa, revealed that decreased evapotranspiration due to deforestation can reduce rainfall by up to 50% over the entire basin (Nogherotto et al. 2013). A global transect study by Makarieva et al. (2009) also found that precipitation increased inland for several thousand kilometres in forested regions such as the Amazon and Congo River basins, whereas precipitation declined exponentially with distance from the ocean in non-forested regions. Such studies on regional/global/continental scales do not clearly show how forests affect rainfall on smaller scales, from a few to about one hundred kilometres in diameter, because the net local/regional impacts of forest cover and deforestation depend on the type and scale of land cover change and on local conditions (Pitman and Lorenz 2016, Lucia et al. 2017). Meso- and local-scale observational studies have also produced contradictory results: some deforestation experiments suggest reduced precipitation (e.g. Lejeune et al. 2015, Badger and Dirmeyer 2016) while others suggest increases (e.g. Dirmeyer and Shukla 1994, Bonan 2008).
The role of deforestation in temperature change also involves two competing effects: warming due to the reduction in evapotranspiration and cooling due to the increase in surface albedo. The albedo-induced decrease of temperature following deforestation can be locally offset by the warming effect of a decreased latent heat flux, with a resulting net warming of the surface along with a decrease of precipitation (Spracklen and Garcia-Carreras 2015, Lawrence and Vandecar 2015). Tropical deforestation studies using climate models almost always simulate warming and drying (Badger and Dirmeyer 2016). Most recently, Lucia et al. (2017), synthesizing the results of published modelling and observational studies on changes in surface air temperature and precipitation due to the biophysical effects of land cover change, reported that models indicate that large-scale (extreme) land cover changes have a strong regional effect on temperature and precipitation, and that observational studies also find significant local/regional temperature effects of land cover change.
Small-scale spatial variability of rainfall could also be caused by various topographical parameters such as elevation, slope, and slope aspect (Agnew and Palutikof 2000; Marquínez et al. 2003). Rainfall often increases with elevation due to the orographic effect. Slope and slope aspect influence near-surface temperatures and water availability due to varying exposure to solar radiation and wind (Barry 1992; Bolstad et al. 1998). Such studies focus only on topographic factors without due consideration of vegetation cover and water bodies. Water bodies also commonly affect rainfall distribution by influencing local meteorological conditions (e.g. Ba and Nicholson 1998).
The studies reviewed here indicate, at best, that no scientific consensus exists on the meso- and local-scale impacts of forest cover and deforestation on climate and that the topic remains a subject of ongoing research, pointing to the need for region-specific empirical data and further research. The natural high forests of Ethiopia, which were estimated to have once covered 40% of the country, declined to only 13.7% in the 1990s and to 11.5% in 2010 (FAO 2010). Today, Ethiopian forests disappear at a rate of 1.1% (140,000 ha) per year (FAO 2010). The continuing deforestation in Ethiopia thus makes such a study crucial. The Central Rift Valley is, among other Ethiopian areas, affected by a continuous forest and woodland decline.
Our hypothesis is thus that long-term deforestation changes rainfall and temperature patterns and that even small-scale remnant forests have a beneficial effect on rainfall; one of the recommended impact-reducing strategies, therefore, is the protection and planting of small forests. To test this hypothesis, the study was conducted in two ways. First, the effect of existing remnant forests on rainfall at the landscape scale was studied by installing automatic rain gauges in forests and open areas along a transect line. Second, the relationship between long-term deforestation and changes in rainfall and temperature patterns was studied to determine the effect of deforestation on climate. The specific objectives of the study were: (i) to evaluate the long-term rainfall (1970–2009) and temperature (1981–2009) trends and their relationships with the long-term forest and woodland decline (deforestation), and (ii) to assess the influence of remnant forests and topographical factors on the spatial variability of long-term (1970–2009) and short-term (2012–2013) annual rainfall and number of rainy days.
The study area
The Central Rift Valley covers an area of about 13,000 km2 at approximately 38°00′-39°30′E, 7°00′-8°30′N (Fig. 1). It is a sub-basin of the Rift Valley Lakes Basin and is part of the Great East African Rift Valley, which covers the major dryland portion of the country, and has three landscape units (physiographic regions): the valley floor, escarpments, and highlands. The altitude is 1600 m above mean sea level (a.s.l) around the rift lakes and ranges from about 2000 to 3200 m a.s.l in the eastern and western highlands.
Location of the study area and the meteorological stations, lakes, and remnant forests in the Central Rift Valley of Ethiopia
The climate of the Central Rift Valley is classified as semi-arid, dry sub-humid and humid in different regions. Based on the Central Rift Valley climate data analysis (1970–2009) mean annual rainfall and mean annual temperature range from 737 to 955 mm and 17 to 20 °C, respectively (Muluneh et al. 2017). The region has three main seasons. A long rainy season (Kiremt) extends between June and September and represents 50–70% of the total annual rainfall. Kiremt rainfall is mostly controlled by the seasonal migration of the inter-tropical convergence zone (ITCZ). A dry period extends between October and February, with occasional rains that account for about 10–20% of the total annual rainfall. The dry period occurs when the ITCZ lies south of Ethiopia, during which time the north-easterly trade winds traversing Arabia dominate the region (Muchane 1996). A short rainy season (Belg) occurs during March to May, with 20–30% of the total annual rainfall. The Belg rainfall is caused by humid easterly and south-easterly winds from the Indian Ocean (Seleshi & Zanke 2004).The intense heating of the high plateau causes the convergence of the wet monsoonal currents from the southern Indian and Atlantic Oceans, bringing rain to the region (Griffiths 1972). The pattern of rainfall on the valley floor is mostly from relatively intense (up to 100 mm/h) storms compared to the highlands with highest intensities only up to 70 mm/h (Makin et al. 1975).
The soils of the Central Rift Valley are mainly derived from young volcanic rocks, with textures ranging from sandy loam to clay loam with varying levels of fertility and degradation.
The distribution of plants in the study area is highly influenced by elevation, which also dictates the rainfall pattern (Musein 2006). The floor of the valley is largely dominated by deciduous acacia woodland and wooded grassland that are increasingly becoming more open (Feoli and Zerihun 2000), whereas deciduous woodlands (Olea europaea, Celtis, Dodonaea viscosa, and Euclea) occupy the escarpments (Mohammed and Bonnefille 1991). Montane forests dominated by Podocarpus gracilior grow between 2000 and 3000 m a.s.l on the eastern plateaus bordering the rift (Abate 2004).
Land-cover change in the Central Rift Valley commenced before the 1970s (Makin et al. 1975), but a significant amount of forest cover was lost between the 1970s and the early 1990s due to increasing pressure from the growing population and an unstable political system (Seifu 1998, Bekele 2003). Increasing and progressive settlement since the 1970s has replaced the rangelands around the lakes and the montane forests on the escarpments and plateaus with small- to medium-scale farms, some of which are mechanised (Woldu and Tadesse 1990, Kindu et al. 2013). In the Central Rift Valley, woodlands, forested areas and water bodies decreased by 69, 66 and 15%, respectively, between 1973 and 2006, mainly due to their conversion to agricultural land (Meshesha et al. 2012) (Fig. 2).
Changes in land use and cover in the Central Rift Valley, 1973–2006 (adapted from Meshesha et al. 2012)
The Munessa-Shashemene forest, a conspicuous remnant of the once dense dry tropical Afromontane vegetation, is considered as remnant forest for this study as well. It is located on the south-eastern escarpment of the Central Rift Valley (Fig. 1) and is comprised of natural woody vegetation such as podocarps, junipers, and forest plantations dominated by a few exotic species such as eucalyptus, cypresses, and pines. The forest is now designated as a High Priority Forest Area protected by the government. The Munessa-Shashemene forest has been increasingly deforested for a long time, a process that is still ongoing mainly due to commercial logging and agricultural expansion (Seifu 1998, Kindu et al. 2013). The natural forest cover declined from 21,723.3 ha in 1973 to 9588 ha in 2012, a loss of nearly 56% in four decades (Kindu et al. 2013). The woodland area around Munessa-Shashemene forest has also significantly decreased. Only 650.6 ha (5.4%) of 11,832.4 ha of woodlands in 1973 remained unchanged in 2012 (Kindu et al. 2013).
The Central Rift Valley encompasses the four major lakes Ziway, Abiyata, Langano, and Shala with areas of 440, 180, 230, and 370 km2, respectively (Ayenew 2003), i.e. the lakes together occupy an area of about 1220 km2 on the floor of the valley (Fig. 1). The valley also contains streams and wetlands with unique hydrological and ecological characteristics. For example, Lake Ziway receives most of its water from two tributaries (the Meki and Ketar Rivers) from the western and eastern escarpments, and it is connected with Lake Abiyata through the Bulbula River. Lake Langano is mainly maintained by five major rivers (the Huluka, Lepis, Gedemso, Kersa and Jirma Rivers) and is connected with Lake Abiyata through the Horakelo River. The surface inflows to Lake Shala come from two main sources (the Dadaba and Gidu Rivers), which enter from the southeastern and western escarpments.
Rainfall and temperature data
Two sets of rainfall data were used. (i) Long-term (1970–2009) daily rainfall data were collected at 16 meteorological stations (five on the valley floor and 11 in the escarpments/highlands) by the National Meteorological Agency of Ethiopia (Table 1). (ii) Short-term (2012–2013) rainfall data were directly collected from a network of 15 watchdog tipping bucket rain gauges systematically installed along transects of approximately 60 km traversing both forested and open areas (Fig. 1). The distance between neighbouring rain gauges was <5 km, as suggested by Hubbard (1994) for explaining at least 90% of the variation between sites. Increasing the density of the monitoring network can also improve the quality of spatial rainfall estimation. Rainfall and the number of rainy days were the two important variables used for the subsequent analysis.
Table 1 Characteristics of the meteorological stations used in the study
The temperature trends in the region were analysed using data for the maximum and minimum temperatures from the six meteorological stations for which quality temperature data were available for 1981–2009 (Table 2).
Table 2 The six meteorological stations with temperature data set (maximum & minimum temperature) used in the study
Forest data
Two types of forest data were used. (i) The long-term changes in forest and woodland cover were required to assess the effect of deforestation on rainfall patterns. A continuous decline in forest and woodland cover, described as the annual percentage of remaining forest and woodland area over 40 years (1970–2009), was determined using exponential interpolation based on measurements of existing forest and woodland cover from an analysis of remotely sensed data for 1973, 1985 and 2006 (Landsat multiresolution and multispectral data for 1973 and Landsat Thematic Mapper data for 1985 and 2006 were used to classify land use and cover) (Fig. 2) (Meshesha et al. 2012). This type of interpolation has been used previously by Gebrehiwot et al. (2010). (ii) The distances between the remnant forests and each of the rainfall stations were used as independent variables to assess the influence of the remnant forests on rainfall distribution. Euclidean distances were computed to determine the distance of each rain gauge from the forest.
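The exact fitting procedure is not given in the paper; the sketch below (Python, with illustrative variable names) shows one plausible way to obtain such an annual series by fitting an exponential decline through the three observed cover percentages quoted later for Fig. 4 (44% in 1973, 34% in 1985 and 14% in 2006) and evaluating it for every year from 1970 to 2009.

```python
import numpy as np

# Forest and woodland cover (%) from the three satellite-image classifications
# (values as quoted for Fig. 4; Meshesha et al. 2012).
obs_years = np.array([1973, 1985, 2006])
obs_cover = np.array([44.0, 34.0, 14.0])

# Fit cover(t) = a * exp(b * t), with t in years since 1970, by least squares
# on the log-transformed cover values.
b, log_a = np.polyfit(obs_years - 1970, np.log(obs_cover), 1)
a = np.exp(log_a)

# Continuous annual series for the whole study period 1970-2009.
years = np.arange(1970, 2010)
annual_cover = a * np.exp(b * (years - 1970))

for yr, cov in zip(years, annual_cover):
    if yr in (1970, 1973, 1985, 2006, 2009):
        print(f"{yr}: {cov:.1f}% forest and woodland cover")
```

Fitting on the log scale keeps the interpolated series strictly positive and roughly reproduces the "less than 15% in 2009" figure quoted in the Results.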
Topographical variables (elevation, slope, and slope aspect)
Data for elevation, slope, and slope aspect were collected at each of the meteorological stations. These topographic variables represent the explanatory variables in our analysis. The mean values of the topographical variables within a radius of 2 km were used rather than only the value at the station to normalize local effects (Daly et al. 1994). Large-scale topographical features at a resolution of 2–15 km yield a high correlation with precipitation (Daly et al. 1994). Aspect is a circular variable, so the vector was decomposed into two orthogonal components: sin (aspect) and cos (aspect). Sin (aspect) yields a measure of east/west exposure (+1 represents due east, −1 represents due west), and cos (aspect) yields a north/south exposure (+1 represents due north, −1 represents due south) (Hession and Moore 2011).
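As a small illustration of the circular decomposition described above (not the authors' code), the two exposure components can be computed as follows:

```python
import numpy as np

def aspect_components(aspect_deg):
    """Decompose a circular slope aspect (degrees clockwise from north) into
    the two orthogonal exposure predictors used in the regression analysis."""
    a = np.radians(np.asarray(aspect_deg, dtype=float))
    east_west = np.sin(a)    # +1 = due east, -1 = due west
    north_south = np.cos(a)  # +1 = due north, -1 = due south
    return east_west, north_south

ew, ns = aspect_components([0, 90, 180, 270])
print(np.round(ew, 3), np.round(ns, 3))  # approximately [0 1 0 -1] and [1 0 -1 0]
```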
Long-term trends of rainfall and temperature
The long-term rainfall data set was used (i) to assess the long-term rainfall and temperature trends in the Central Rift Valley and compare them to the forest and woodland cover decline, and (ii) as dependent variables for analysing the effect of forest cover on local rainfall distribution.
The effects of the forest and woodland cover decline (deforestation) on rainfall distribution were assessed with the following two approaches (Gadgil 1978; Meher-Homji 1980). (i) The rainfall pattern at the same station was analysed over a long period during which deforestation occurred. (ii) Rainfall was compared between areas within the same climatic type, one area forested and the other without natural vegetation. The annual rainfall trends were spatially distinct between the valley floor and the adjoining highlands (Muluneh et al. 2017), so the relationship between forest depletion and rainfall pattern was determined separately for the valley floor and the escarpments/highlands.
The long-term temperature trends were analysed by categorising the meteorological stations with temperature data into their respective landscape units: valley floor and escarpments/highlands. Based on this categorisation, each landscape unit had three meteorological stations with temperature data for the trend analysis.
The trends of rainfall and temperature at the station and regional levels were investigated using the Mann-Kendall (MK) and Regional Kendall (RK) tests, respectively (Helsel and Frans 2006). The MK test was used together with Sen's slope estimator to determine the trend magnitude. The MK test is especially suitable for non-normally distributed data, data containing outliers, and non-linear trends (Helsel and Hirsch 2002). The RK test is applicable to data from numerous locations, and one overall test can determine whether the same trend is evident across those locations (Helsel and Frans 2006). The station-level trend for rainfall was not analysed, because this analysis for most of the stations in this study has been published previously (Muluneh et al. 2017).
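For readers unfamiliar with these tests, a minimal, self-contained sketch of a station-level MK test with Sen's slope is given below (normal approximation with the standard tie correction); the Regional Kendall test of Helsel and Frans (2006) essentially sums the S statistics and their variances across stations before standardising. The rainfall series used here is hypothetical and the code is illustrative only, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(y):
    """Mann-Kendall trend test (normal approximation, tie-corrected variance)
    and Sen's slope estimate for an equally spaced annual series y."""
    y = np.asarray(y, dtype=float)
    n = len(y)

    # S statistic: sum of signs of all forward differences.
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))

    # Variance of S with a correction for tied values.
    _, ties = np.unique(y, return_counts=True)
    var_s = (n * (n - 1) * (2 * n + 5)
             - np.sum(ties * (ties - 1) * (2 * ties + 5))) / 18.0

    # Standardised statistic and two-sided p-value.
    z = 0.0 if s == 0 else (s - np.sign(s)) / np.sqrt(var_s)
    p = 2.0 * (1.0 - norm.cdf(abs(z)))

    # Sen's slope: median of all pairwise slopes (units per year).
    slopes = [(y[j] - y[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return s, z, p, float(np.median(slopes))

# Hypothetical annual rainfall series (mm) for one station, 1970-2009.
rng = np.random.default_rng(0)
rain = 800.0 + 3.0 * np.arange(40) + rng.normal(0.0, 60.0, 40)

s, z, p, slope = mann_kendall_sen(rain)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.4f}, Sen's slope = {10 * slope:.1f} mm/decade")
```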
Effect of deforestation on rainfall and temperature
A simple linear regression was used to determine the relationships between deforestation (described as the percentage of remaining forest and woodland cover each year) and annual rainfall and number of rainy days. Similarly, the effect of deforestation on temperature was determined using a linear regression model over the period of available temperature data (1981–2009).
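A minimal sketch of this step, assuming the interpolated annual cover series and a station-mean temperature series are already available as arrays (the values below are placeholders, not the study data):

```python
import numpy as np
from scipy.stats import linregress

# Placeholder series for 1981-2009: interpolated forest/woodland cover (%)
# and a station-mean annual maximum temperature (deg C).
cover = np.linspace(38.0, 13.0, 29)
rng = np.random.default_rng(1)
tmax = 27.0 + 0.04 * (38.0 - cover) + rng.normal(0.0, 0.2, 29)

fit = linregress(cover, tmax)
print(f"slope = {fit.slope:.3f} degC per % cover, "
      f"R2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.4f}")
```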
Influence of remnant forests and topographical variables on the spatial variability of rainfall
Stepwise multiple regression was used to select significant predictive variables. Annual rainfall and the number of rainy days were used as dependent variables, and distance from the forests, elevation, slope, and slope aspect as explanatory variables. These topographic and geographic features were derived from Landsat Thematic Mapper (TM) satellite imagery (data available from the U.S. Geological Survey) using the UTM-WGS 1984 Zone 37 N map projection. The slope was derived from the ASTER global digital elevation model (GDEM, 2011) with a pixel size of 30 m (available at https://asterweb.jpl.nasa.gov/gdem.asp).
Sixteen rainfall datasets were used for the multiple regression. Distance to the lakes was another candidate predictor, but it was not included in the analysis because the stations on the valley floor were at similar proximities to the lakes, the stations in the escarpments/highlands were in different climatic zones, and most stations in the highlands lay beyond the range of lake penetration distances of 15–45 km (e.g. Estoque et al. 1976; Ryznar and Touma 1981).
The multiple regression equations have the form:
$$ \mathrm{Y}=\alpha +{\beta}_1{X}_1+{\beta}_2{X}_2+\dots +{\beta}_p{X}_p $$
where Y (the dependent variable) is the annual rainfall or the number of rainy days, X1, …, Xp are a selected subset of p explanatory variables, β1, …, βp are the corresponding regression coefficients, and α is the intercept. The confidence level for the multiple linear regression is 95%.
Many studies have used stepwise regression to examine the relationship between rainfall and topographical variables (Agnew and Palutikof 2000; Marquínez et al. 2003; Oettli and Camberlin, 2005; Moliba Bankanza 2013). The method applied here began by identifying the 'best' explanatory variable and incorporating it into the model and then iteratively identifying the next 'best' predictor until the model could no longer be improved. Two criteria were used to select the 'best' explanatory variables: statistical significance (at P < 0.05) and the tolerance criterion for evaluating the underlying assumption of independence between explanatory variables. If two variables were significantly alike, their contribution to the variance in the dependent variable becomes impossible to determine. The problem primarily occurs when predictor variables are more strongly correlated with each other than with the response variable.
The tolerance of a variable Xj, Tolj, with respect to the other variables is defined as:
$$ {Tol}_j=1-{R}_j^2 $$
where Rj is the multiple correlation coefficient between the variable Xj and X1, …, Xj-1, Xj+1, …, Xn. If the tolerance is close to 0, the variable Xj is a linear combination of the others and is removed from the equation, and tolerances close to 1 indicate independence. Tolerances and P-values were calculated for each independent variable at each step in the process. Independent variables with associated tolerances ≥0.1 and P-values ≤0.05 were entered stepwise into the model.
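A minimal sketch of one way to implement the forward selection with the two entry criteria described above (P ≤ 0.05 and tolerance ≥ 0.1), using statsmodels; the data frame rainfall_df and its column names are hypothetical stand-ins for the 16-station table, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(df, response, predictors, p_enter=0.05, tol_min=0.1):
    """Forward selection: at each step add the predictor with the smallest
    significant p-value whose tolerance (1 - R^2 against the variables
    already selected) is at least tol_min."""
    selected = []
    while True:
        best = None
        for cand in [v for v in predictors if v not in selected]:
            # Tolerance of the candidate against the already-selected predictors.
            if selected:
                r2 = sm.OLS(df[cand], sm.add_constant(df[selected])).fit().rsquared
            else:
                r2 = 0.0
            if 1.0 - r2 < tol_min:
                continue
            fit = sm.OLS(df[response], sm.add_constant(df[selected + [cand]])).fit()
            p = fit.pvalues[cand]
            if p <= p_enter and (best is None or p < best[1]):
                best = (cand, p)
        if best is None:
            break
        selected.append(best[0])
    return sm.OLS(df[response], sm.add_constant(df[selected])).fit() if selected else None

# Hypothetical station table with the predictors used in the paper.
rng = np.random.default_rng(2)
rainfall_df = pd.DataFrame({
    "elevation": rng.uniform(1600, 3200, 16),
    "slope": rng.uniform(0, 15, 16),
    "sin_aspect": rng.uniform(-1, 1, 16),
    "cos_aspect": rng.uniform(-1, 1, 16),
    "dist_forest": rng.uniform(1, 60, 16),
})
rainfall_df["annual_rain"] = 600 + 20 * rainfall_df["slope"] + rng.normal(0, 80, 16)

model = forward_stepwise(rainfall_df, "annual_rain",
                         ["elevation", "slope", "sin_aspect", "cos_aspect", "dist_forest"])
if model is not None:
    print(model.params)
    print(f"R2 = {model.rsquared:.2f}")
```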
Rainfall and temperature trends
Rainfall trends
The Regional Kendall test indicated that the general trend of annual rainfall and number of rainy days tended to increase significantly on the valley floor and to decrease significantly in the escarpments/highlands (Table 3). The decadal increase in rainfall on the valley floor was approximately 38 mm, and the decrease in the escarpments/highlands was approximately 29 mm. Annual rainfall for the entire region decreased significantly, and the number of rainy days tended to decrease, although not significantly (Table 3).
Table 3 Trends in annual rainfall and number of rainy days for the rift valley floor and escarpments/highlands
The time-series analysis showed similar trends of increasing annual rainfall on the valley floor and decreasing annual rainfall and number of rainy days in the escarpments/highlands (Fig. 3).
Time series analysis of (a, c) annual rainfall and (b, d) number of rainy days for (a, b) the rift valley floor and (c, d) escarpments/highlands using simple linear regression from the mean of five stations on the valley floor and 11 stations in the escarpments/highlands. The lines indicate the linear fitting of the series for 1970–2009
Table 4 presents the trends of mean annual maximum and minimum temperatures for 1981–2009. The mean maximum temperature increased significantly at all three stations on the valley floor that recorded temperatures (Ziway, Langano, and Adami Tulu). All three stations recorded an increasing tendency in mean minimum temperature, but the increase was statistically significant at only one station (Langano). A significant increase in maximum temperature was recorded in the escarpments/highlands only at the Butajira station. The maximum temperature increased significantly both on the valley floor landscape unit and in the escarpments/highlands landscape units.
Table 4 Trends in annual maximum and minimum temperatures on the rift valley floor and in the escarpments/highlands
The mean minimum temperature decreased significantly at two of the three highland stations (Kulumsa and Butajira). The minimum temperature increased significantly on the valley floor landscape unit but not significantly in the escarpments/highlands units.
The mean maximum temperature in the Central Rift Valley increased by 0.4 °C/decade during 1981–2009, but the mean minimum temperature remained relatively stable.
Deforestation (Forest and woodland decline)
Figure 4 shows the percentage of forest and woodland cover over time after exponential interpolation between the forest and woodland cover data points obtained from satellite images for three different years. Based on the three deforestation data points from satellite images (1973, 1985 and 2006) and the subsequent interpolation, the forest and woodland cover declined from 44% in 1973 to less than 15% in 2009 (Fig. 4).
Percentage forest and woodland decline over time (curved line) after exponential interpolation between three forest and woodland cover data points obtained from satellite images (forest and woodland cover in the three respective years: 1973 (44%), 1985 (34%) and 2006 (14%))
Effect of deforestation on long-term rainfall pattern
Figure 5 shows the linear relationship between the forest and woodland cover decline and (a) annual rainfall and (b) number of rainy days over 40 years across the Ethiopian Central Rift Valley. The continuous decline of forest and woodland cover over four decades was weakly correlated with the mean annual rainfall and number of rainy days across the 16 stations in the Central Rift Valley. Despite the weak correlation, however, both annual rainfall and number of rainy days decreased consistently during the period of forest and woodland cover decline, and the forest and woodland decline appears more strongly correlated with the decreasing mean number of rainy days than with the mean annual rainfall (Fig. 5).
Simple linear regression between forest and woodland cover and (a) annual rainfall and (b) number of rainy days over 40 years across the Ethiopian Central Rift Valley
Effect of deforestation on temperature
Figure 6 presents the linear relationship between forest and woodland cover decline and maximum and minimum temperatures in the rift valley floor and escarpments/highlands for about three decades (1981–2009). The result showed that the increase in the maximum daily temperature in the rift valley floor was well correlated with forest and woodland cover decline (R2 = 0.62), whereas the maximum daily temperature in the escarpments/highlands was poorly correlated with forest and woodland cover decline (R2 = 0.14). However, the linear relationship between forest and woodland cover decline and the minimum daily temperature was poorly correlated both in the rift valley floor and escarpments/highlands.
The relationship between forest and woodland cover and (a, c) maximum temperature and (b, d) minimum temperature for (a, b) the rift valley floor and (c, d) escarpments/highlands using simple linear regression. The lines indicate the linear fitting of the forest and woodland cover and temperature for 1981–2009
The influence of forests and topographical variables on spatial rainfall distribution
Long-term rainfall (1970–2009)
For the total annual rainfall, slope was the best predictor which explained 29% of the rainfall variability in the Central Rift Valley (Table 5). However, for the annual number of rainy days, both slope and elevation explained most (60%) of the variability in the multiple regression model (Table 5). Elevation was not a significant factor for the spatial variability of total annual rainfall in the Central Rift Valley.
Table 5 Best regression models based on stepwise regression showing the relationships between mean annual rainfall and number of rainy days (1970–2009) as dependent variables and slope and elevation as explanatory variables in the Central Rift Valley
Short-term rainfall (2012–2013)
For a better understanding of the effect of remnant forests on local rainfall distribution, 15 tipping bucket rain gauges were installed systematically along a transect to the forest, where the rainfall data were collected for two consecutive years. All gauges were in the same climatic zone (sub-humid) as the remnant forest. The annual rainfall and number of rainy days from the short-term data were explained by elevation and distance from the remnant forest (R2 of 0.26 and 0.40, respectively, Table 5). Distance from the forest was not significantly correlated with total annual rainfall, but both total annual rainfall and number of rainy days were negatively correlated with distance from the forest (Fig. 7), indicating that both increased closer to the forest.
Simple linear relationships between distance from forest and (a) annual rainfall and (b) number of rainy days for the two-year rainfall data (2012–2013) in the Central Rift Valley
The previous analysis of the station-level rainfall trend (Muluneh et al. 2017) was consistent with our analysis of the regional trend, in which stations on the valley floor showed an increasing trend and stations in the escarpments/highlands showed a decreasing trend. The increased temperature due to high deforestation and the presence of a chain of lakes on the rift valley floor could also have contributed to the increased rainfall on the rift valley floor.
The warming trend indicated in this study is consistent with previous studies that reported a warming trend in the Central Rift Valley (Kassie et al. 2013; Mekasha et al. 2013). However, the highland stations, Butajira and Kulumsa, showed a significant decrease in the annual minimum temperature. A similar result was reported by Mekasha et al. (2013) for Kulumsa station, which showed a decreasing tendency in daily minimum temperature during a similar study period. Generally, most previous studies reported warming trends in Ethiopia over the past few decades for both maximum and minimum temperatures (McSweeney et al. 2008; Taye and Zewdu 2012; Tesso et al. 2012; Jury and Funk 2013).
Regarding the spatial difference in the rate of warming, it can be inferred that the rate of warming was generally higher on the valley floor than in the highland areas. This higher rate of warming on the valley floor could be attributed to the persistent deforestation over the past four decades, which occurred mostly on the rift valley floor of the area.
From the observed data, the forest and woodland cover steadily decreased from 44% in 1973 to 14% in 2006, which increased degraded land by 200% in the Central Rift Valley (Meshesha et al. 2012). Most of the areas highly degraded by deforestation are located on the rift valley floor (Fig. 2). Generally, there is strong evidence of an alarming rate of deforestation in the Central Rift Valley. For example, a recent study by Kindu et al. (2013) reported a 56% natural forest decline between 1973 and 2012 in the Munessa-Shashemene area, a major landscape of the Central Rift Valley. A study of land use and land cover dynamics in the Central Rift Valley by Garedew et al. (2009) documented dramatic trends in deforestation over time, associated with rapid population growth, recurrent drought, rainfall variability and declining crop productivity. Similarly, Dessie and Kleman (2007) reported the conversion of more than 82% of the high forests in the south-central Rift Valley of Ethiopia in about 28 years (1972–2000).
Annual rainfall and number of rainy days over the 40 years period increased on the valley floor but the forest and woodland cover continuously decreased. If this change in land cover plays a negative role in rainfall distribution, then rainfall should have decreased on the valley floor as in the escarpments/highlands, because more of the degraded areas are on the valley floor (Fig. 2), and the valley floor has a lower actual annual evapotranspiration (656 mm) than the escarpments (892 mm) and highlands (917 mm), because it is mostly covered with bare lacustrine soils (Ayenew 2003). The potential evaporation, however, ranges from >2500 mm on the valley floor to <1000 mm in the highlands (Le Turdu et al. 1999). The continuous degradation of the vegetation, intensive cultivation, and low actual evapotranspiration pose a question: why is rainfall increasing on the valley floor? Three possible explanations could be offered.
(i) The increasing temperature on the valley floor over the last 40 years has increased evaporation, mainly from the four lakes (Ziway, Langano, Abiyata, and Shala) that occupy roughly 11% of the total area of the Central Rift Valley (Ayenew 2003), to the extent that the increased evaporation could significantly alter the water cycle and lead to an increase in rainfall. The five stations that recorded increasing rainfalls (Langano, Bulbula, Ziway, Adami Tulu, and Meki) are also close to the lakes (within 7 km, Fig. 1). Existing studies indicated that Mesoscale (1–30 km radius) and local scale (300 m to 2 km radius) climate is influenced by proximity and size of water surfaces (Aguilar et al. 2003). For example, Nieuwolt (1977) reported that Lakes Abaya and Chamo on the valley floor farther south in the rift valley produce large amounts of water vapour and also create local disturbances that are conducive to the production of rain. Similarly, Haile et al. (2009) reported the development of high and thick clouds over Lake Tana in north-western Ethiopia and frequent rains heavier than 10 mm/h at stations relatively close to the lake. Lauwaet et al. (2012) found differences in rainfall patterns with distance from Lake Chad, but large-scale atmospheric processes were not affected. There is also further evidence elsewhere that showed the construction of small artificial lakes augmented rainfall in semi-arid Mexico (Jauregui 1991). Generally, large inland lakes together with highly variable topography and vegetation can cause significant spatial variability in the rainfall pattern in eastern Africa (Nicholson 1998).
(ii) Deforestation in areas close to water bodies such as lakes leads to lake breezes that in turn are favourable for moisture transport and increased rainfall (Mawalagedara and Oglesby 2012).
(iii) The expansion of irrigation in the study area since 1973, particularly around the lakes, could likely have contributed to increased evapotranspiration, which may have contributed to the rainfall increase. Segal et al. (1998) found that irrigation did indeed alter rainfall in a mesoscale model.
In any case, our findings suggest that increasing temperatures and the presence of lakes affect rainfall distribution more strongly than change in vegetative cover on the valley floor. A similar argument was suggested by Meher-Homji (1980) in India, where coastal stations did not record declining rainfall despite high deforestation in the area.
A better relationship is observed between the forest and woodland cover decline and the maximum temperature on the rift valley floor than in the escarpments/highlands, which is consistent with the higher increase in the maximum daily temperature on the rift valley floor. Generally, there is a consensus that a daytime (maximum) temperature increase is consistently associated with local deforestation (Castillo and Gurney 2013, Houspanossian et al. 2013). For example, the most recent study by Gourdij et al. (2015) found a daytime temperature increase of 0.4 °C per decade in areas that have experienced rapid deforestation within a 50 km radius since 1983, a rate about three times the global average increase, whereas the night-time minimum temperature increased by 0.18 °C per decade, a rate consistent with the global average temperature increase. Similarly, Lejeune et al. (2017) reported higher values of Tmax over open land compared to forests, indicating a daytime warming impact of deforestation of almost 1.5 °C. Zhang et al. (2014) also found a much higher Tmax increase (by about 2.4 °C) over open land than over forested land, while Tmin over open land was almost identical to that over forested land. Alkama and Cescatti (2016) and Li et al. (2015) also found higher daytime temperatures over open land than over surrounding forests. Modelling studies of deforestation have similarly predicted that reductions in evaporative cooling associated with the loss of vegetation will increase regional air temperatures (Snyder 2010). Thus, the decrease in transpiration, combined with the reduction of surface roughness due to deforestation, suppresses the flux of sensible heat from the surface, which in turn increases the surface temperature.
The influence of remnant forests and topographical variables on spatial rainfall distribution
The significant effect of slope on the spatial rainfall variability in the Central Rift Valley could be attributed to the role of steeper slopes in providing stronger orographic lifting and hence higher rainfall (Buytaert et al. 2006). The relatively low R2 value of 0.29 may seem a less satisfactory fit, but other studies with similarly low R-squared values have reported such regression models as the best predictors. For example, Basist et al. (1994) studied the statistical relationship between topography and precipitation patterns in Hawaii using multiple regression and found the model with an R2 value of 0.31 (with slope orientation as the independent variable) to be statistically significant and the best predictor of the annual precipitation pattern. The same study reported an R2 value of 0.39 (with elevation as the independent variable) in Kenya as the only significant topographic predictor of mean annual precipitation. Similarly, with an R2 value of 0.22, they found a statistically significant bivariate relationship between slope and mean annual precipitation in Norway.
Elevation greatly influences the climate of Ethiopia but explained less of the spatial variability of total annual rainfall in the Central Rift Valley. The pattern of increasing rainfall associated with increasing altitude in the Central Rift Valley is modified at high altitudes by the influence of the high mountains, which may cause either rain shadows or areas of heavy orographic rainfall (Makin et al. 1975). The orographic effect on the spatial distribution of rainfall over the area is substantial. Drier pockets occur in rain shadows. Areas close to the eastern highlands receive more rain annually than areas farther from the mountainous region even if the latter are higher (Makin et al. 1975). For example, Assela (2390 m a.s.l) receives a mean annual rainfall of 1118 mm, but Kulumsa (2200 m a.s.l), just 11 km to the north, receives only 810 mm, and Sagure (2480 m a.s.l) south of Assela receives only 776 mm. Topographical variables such as slope and aspect and characteristics of the dominant air masses in the highlands of Ethiopia are generally more important than elevation in explaining the variability of annual rainfall (Krauer 1988). The absence of significant correlations between remnant forest (stated as distance from forest) and long-term annual rainfall and number of rainy days may partly be attributable to the non-systematic location of the meteorological stations relative to the remnant forest (Munessa-Shashemene forest). Most of the stations are not near this remnant forest. Furthermore, the meteorological stations in this study are distributed across different climatic zones (semi-arid, sub-humid, and humid climates), but the remnant forest mostly has a sub-humid climate. Thus, it is worth exploring additional metrics to measure rainfall (and relate it to local forest cover) than a point-average - in part as a sensitivity analysis but also to find out more about possible relationships between the presence of forest and local rainfall. Therefore, further study using spatially averaged rainfall estimates in relation to (equally spatially explicit) land cover characteristics is important.
Unlike the long-term rainfall, in the short-term rainfall analysis the annual rainfall was significantly explained by elevation (R2 = 0.26). This could be attributed to the consistent difference in elevation amongst the gauging sites. Forests were good predictors of the short-term annual number of rainy days, consistent with the findings of other studies that found better correlations of rainy days with forests than with total rainfall (Meher-Homji 1980, 1991; Wilk et al. 2001). The presence of significant correlations between the remnant forest (expressed as distance from the forest) and the short-term number of rainy days may partly be attributable to the systematic location of the gauges relative to the remnant forest (Munessa-Shashemene forest): our tipping bucket rain gauges were systematically installed along transects of approximately 60 km traversing both forested and open areas (Fig. 1).
This study did not find a significant correlation between the long-term decline in forest and woodland cover and long-term rainfall in the Central Rift Valley. However, there is a strong relationship between the long-term forest and woodland cover decline and the maximum temperature on the rift valley floor. The remnant forests had a significant effect on the spatial variability of the number of rainy days as observed from the short-term (two-year) rainfall data. Topographic factors play a more significant role than forest cover in explaining the spatial variability of annual rainfall on both the long-term and the short-term time scales in the Central Rift Valley. Slope is the most important factor in explaining the long-term spatial variability of rainfall, while elevation and slope together are the two most important topographic factors in explaining the variability in the annual number of rainy days.
Generally, our hypothesis that long-term deforestation affects rainfall and temperature patterns and that small-scale remnant forests have a beneficial effect on rainfall could not be fully confirmed because of the complication introduced by the presence of a chain of lakes in the Central Rift Valley. The analysis of the role of the lakes in increasing rainfall in the surrounding area, through moisture transport from the lakes to the nearby land surface (lake breeze), was carried out by indirect methods and by reviewing the existing literature. However, the limits of our dataset prevent any further analysis to draw a robust conclusion about the role of these lakes in affecting the surrounding rainfall pattern. Despite such limitations, this study is an important step toward improving our understanding of the relationship between forests and rainfall variability on smaller spatial scales, given Ethiopia's diverse topography and climate.
Abate A (2004) Biomass and nutrient studies of selected tree species of natural and plantation forests: Implications for a sustainable management of the Munesa- Shashemene forest, Ethiopia. Ph.D. Thesis. University of Bayreuth. pp150
Agnew MD, Palutikof JP (2000) GIS- based construction of baseline climatologies for the Mediterranean using terrain variables. Clim Res 14:115–127
Aguilar E, Auer I, Brunet M, Peterson TC, Wieringa J (2003) Guidelines on climate metadata and homogenization. WCDMP No. 53 – WMO/TD-No. 1186
Alkama R, Cescatti A (2016) Biophysical climate impacts of recent changes in global forest cover. Science 351:600–604
Ayenew T (2003) Evapotranspiration estimation using thematic mapper spectral satellite data in the Ethiopian rift and adjacent highlands. J Hydrol 279:83–93
Ba MB, Nicholson SE (1998) Analysis of convective activity and its relationship to the rainfall over the Rift Valley lakes of East Africa during 1983–1990 using Meteosat Infrared channel. J Appl Meteorol 37:1250–1264
Badger AM, Dirmeyer PA (2016) Remote tropical and subtropical responses to Amazon deforestation. Clim Dyn 46:3057–3066
Bagley JE, Desai AR, Harding KJ, Snyder PK, Foley JA (2014) Drought and deforestation: has land cover change influenced recent precipitation extremes in the Amazon? J Clim 27:345–361
Barry RG (1992) Mountain Weather and Climate. Routledge, London
Basist A, Bell GD, Meentemeyere V (1994) Statistical relationships between topography and precipitation patterns. J Clim 7:1305–1315
Bekele M (2003) Forest Property Rights, the Role of the State, and Institutional Exigency: The Ethiopian Experience. Swedish University of Agricultural Sciences, Department of Rural Development Studies, Uppsala, Sweden, Doctoral thesis
Bolstad PV, Swift L, Collins F, Regniere J (1998) Measured and predicted air temperatures at basin to regional scales in the southern Appalachian mountains. Agric For Meteorol 91:161–176
Bonan GB (2008) Forests and climate change: Forcing feedbacks and the climate benefits of forests. Science 320:1444–1449
Bright RM et al (2015) Metrics for biogeophysical climate forcings from land use and land cover changes and their inclusion in life cycle assessment: a critical review. Environ Sci Technol 49:3291–3303
Buytaert W, Celleri R, Willems P, De B'e B, Wyseure G (2006) Spatial and temporal rainfall variability in mountainous areas: a case study from the south Ecuadorian Andes. J Hydrol 329:413–421
Cao Q, Yu D, Georgescu M, Han Z, Wu J (2015) Impacts of land use and land cover change on regional climate: a case study in the agro-pastoral transitional zone of China. Environ Res Lett 10:124025. https://doi.org/10.1088/1748-9326/10/12/124025
Castillo CKG, Gurney KR (2013) A sensitivity analysis of surface biophysical, carbon, and climate impacts of tropical deforestation rates in CCSM4CNDV. J Clim 26:805–821
Dallmeyer A, Claussen M (2011) The influence of land cover change in the Asian monsoon region on present-day and mid-Holocene climate. Biogeosciences 8:1499–1519
Daly C, Neilson RP, Phillips DL (1994) A statistical topographic model for mapping climatological precipitation over mountainous terrain. J Appl Meteorol 33:140.158
Davies-Barnard T, Valdes PJ, Singarayer JS, Pacifico FM, Jones CD (2014) Full effects of land use change in the representative concentration pathways Environ. Res Lett 9:114014
de Sherbinin A, Balk D, Yager K, Jaiteh M, Pozzi F, Giri C. and Wannebo, A (2002) "Land-Use and Land-Cover Change," A CIESIN Thematic Guide, Palisades, NY: Center for International Earth Science Information Network of Columbia University. Available on-line at http://sedac.ciesin.columbia.edu/tg/guide_main.jsp
Dessie G, Kleman J (2007) Pattern and magnitude of deforestation in the South Central Rift Valley Region of Ethiopia. Mt Res Dev 27:162–168
Dirmeyer PA, Shukla J (1994) Albedo as a modulator of climate response to tropical deforestation. Geophys Res:20863–20877
Estoque MA, Gross J, Lai HW (1976) A lake breeze over southern Lake Ontario. Mon Weather Rev 104:386–396
FAO (2010) Global forest resources assessment main report. Food and Agriculture Organization of the United Nations, Rome
Feoli E, Zerihun W (2000) Fuzzy set analysis of the Ethiopian rift valley vegetation in relation to anthropogenic influences. Plant Ecol 1147:219–225
Gadgil S (1978) Lectures on Fundamentals of Climatology. UGC Sp1. Training Programme in Wildlife Biology. Indian Inst. Sci., Bangalore
Garedew E, Sandewall M, Soderberg U, Campbell BM (2009) Land-use and land cover dynamics in the Central Rift Valley of Ethiopia. Environ Manag 44:683–694
Gebrehiwot SG, Taye A, Bishop K (2010) Forest Cover and Stream Flow in a Headwater of the Blue Nile: Complementing Observational Data Analysis with Community Perception. Ambio 39:284–294
Gourdij S, Läderach P, Valle AM, Martinez CZ, Lobell DB (2015) Historical climate trends, deforestation, and maize and bean yields in Nicaragua. Agric For Meteorol 200:270–281
Griffiths JF (1972) Climates of Africa. World Survey of Climatology, 10. Elsevier
Haile AT, Rientjes THM, Gieske, ASM (2009) Rainfall Variability over Mountainous and Adjacent Lake Areas: The Case of Lake Tana Basin at the Source of the Blue Nile River. J Appl Meteorol Climatol. 48:1696–1717
Hanif M F, Mustafa M R, Hashim A M, Yusof K W (2016) Deforestation alters rainfall: a myth or reality. Earth and Environmental Science, 37, 012029. oi:https://doi.org/10.1088/1755-1315/37/1/012029
Helsel DR, Frans L (2006) Regional Kendall test for trend. Environ Sci Technol 40:4066–4073
Helsel DR, Hirsch RM (2002) Statistical Methods in Water Resources. US Department of the Interior. http://water.usgs.gov/pubs/twri/twri4a3/
Hession S, Moore N (2011) A spatial regression analysis of the influence of topography on monthly rainfall in East Africa. Int J Climatol 31:1440–1456
Houspanossian J, Nosetto M, Jobbagy EG (2013) Radiation budget changes with dry forest clearing in temperate Argentina. Glob Change Biol 19:1211–1222
Hubbard K (1994) Spatial variability of daily weather variables in the high plains of the USA. Agric For Meteorol 68:29–41
Jauregui E (1991) Effects of Revegetation and New Artificial Water Bodies on the Climate of Northeast Mexico City. Energy and Buildings, 15–16, 447–455
Jury MR, Funk C (2013) Climatic trends over Ethiopia: regional signals and drivers. Int J Climatol 33:1924–1935
Kassie BT, Rötter P, Hengsdijk H, Asseng S, Van Ittersum MK, Kahiluto H, Van Keulen H (2013) Climate variability and change in the Central Rift Valley of Ethiopia: challenges for rainfed crop production. J Agric Sci 152:58–74
Kindu M, Schneider T, Teketay D, Knoke T (2013) Land Use/Land Cover Change Analysis Using Object-Based Classification Approach in Munessa-Shashemene Landscape of the Ethiopian Highlands. Remote Sens 5:2411–2435. https://doi.org/10.3390/rs5052411
Kramer R, Richter D, Pattanayak S, Sharma N (1997) Economic and ecological analysis of watershed protection in eastern Madagascar. J Environ Manag 49:277–295
Krauer J (1988) Rainfall, erosivity and isoerodent map of Ethiopia. Soil Conservation Research Project, Research Report 15. University of Berne, Switzerland, p 132
Lauwaet D, van Lipzig NPM, Van Weverberg K, De Ridder K, Goyens C (2012) The precipitation response to the desiccation of Lake Chad. Q J R Meteorol Soc 138:707–719
Lawrence D, Vandecar K (2015) Effects of tropical deforestation on climate and agriculture. Nat Clim Chang 5:27–36
Lawrence PJ, Chase TN (2010) Investigating the climate impacts of global land cover change in the community climate system model. Int J Climatol 30:2066–2087
Le Turdu C, Coauthors (1999) The Ziway–Shala lake basin system, Main Ethiopian Rift: influence of volcanism, tectonics, and climatic forcing on basin formation and sedimentation. Palaeogeography, Palaeoclimatology and Palaeoecology, 150, 135–177
Lejeune Q, Davin EL, Guillod BP, Seneviratne SI (2015) Influence of Amazonian deforestation on the future evolution of regional surface fluxes, circulation, surface temperature and precipitation. Clim Dyn. 44:2769–2786
Lejeune Q, Sonia IS, Edouard LD (2017) Historical Land-Cover Change Impacts on Climate: Comparative Assessment of LUCID and CMIP5 Multi-model Experiments. J Clim. https://doi.org/10.1175/JCLI-D-16-0213.1.
Li Y, Zhao M, Motesharrei S, Mu Q, Kalnay E, Li S (2015) Local cooling and warming effects of forests based on satellite observations. Nat Commun 6:6603
Lucia et al (2017) Biophysical effects on temperature and precipitation due to land cover change. Environ Res Lett 12:053002
Makarieva AM, Gorshkov VG, Bai-Lian L (2009) Precipitation on land versus distance from the ocean: evidence for a forest pump of atmospheric moisture. Ecol Complex 6:302–307
Makin MJ, Kingham TJ, Waddams AE, Birchall CR, Teferra T (1975) Development prospects in the southern Rift Valley. Ethiopia. Land Resources Study 21, Land Resources Division, UK Min. Overseas Development, Tolworth, UK
Marquínez J, Lastra J, García P (2003) Estimation models for precipitation in mountainous regions: the use of GIS and multivariate analysis. J Hydrol 270:1–11
Mawalagedara R, Oglesby J (2012) The Climatic Effects of Deforestation in South and Southeast Asia, Deforestation Around the World, Moutinho, P (Ed.), ISBN: 978-953-51-0417-9. INTECH Available from: http://www.intechopen.com/books/deforestation-around-the-world/the-climatic-effects-ofdeforestation-in-south-and-southeast-asia
McSweeney C, New M, Lizcano G (2008) UNDP Climate Change Country Profiles: Ethiopia, Available at http://www.geog.ox.ac.uk/research/climate/projects/undp-cp/index.html?country=Ethiopia&d1=Reports. Accessed 5 Mar 2017.
Meher-Homji VM (1980) Repercussions of deforestation on precipitation in Western Karnatakum, (and Kerala) India. Archiv fur Meteorologie, Geophysik und Bioklimatologie 28B:385–400
Meher-homji VM (1991) Probable impact of deforestation on hydrological processes. Clim Chang 19:163–173
Mekasha A, Tesfaye K, Duncan AJ (2013) Trends in daily observed temperature and precipitation extremes over three Ethiopian eco-environments. Int J Climatol 34:1990–1999
Meshesha DT, Tsunekawa A, Tsubo M (2012) Continuing land degradation: cause–effect in Ethiopia's central rift valley. Land Degrad Develop 23:130–143. https://doi.org/10.1002/ldr.1061
Mi Z et al (2014) Response of surface air temperature to small-scale land clearing across latitudes Environ. Res. Lett 9:034002
Mohammed MU, Bonnefille R (1991) The recent history of vegetation and climate around Lake Langano. Paleoecology of. Africa 22:275–286
Moliba Bankanza JC (2013) Spatial Modeling of Summer Precipitation over the Czech Republic Using Physiographic Variables. Geogr Res 52:85–105
Muchane MW (1996) Comparison of the isotope record in Micrite, Lake Turkana, with the historical weather record over the last century. In: Johnson TC, Odada EO (eds) The Limnology, Climatology and Pale climatology of the East African Lakes, Gordon and Breach Publishers, OPA, The Netherlands, pp 431–441
Muluneh A, Bewket W, Keesstra SD, Stroosnijder L (2017) Searching for evidence of changes in extreme rainfall indices in the Central Rift Valley of Ethiopia. Theor Appl Climatol 128:795–809
Musein BS (2006) Remote sensing and GIS for land cover/land use change detection and analysis in the semi-natural ecosystems and agriculture landscapes of the central Ethiopian Rift Valley. Ph.D. Dissertation, Techniche Universität Dresden. Dresden, Germany, p 2006
Nicholson SE (1998) Historical Fluctuations of Lake Victoria and other lakes in the northern rift valley of east Africa. Environmental change and response in east African lakes:7–35
Nieuwolt S (1977) Tropical Climatology: An Introduction to the Climate of Low Latitude. John Wiley & Sons, Ltd 207 pp
Nobre P, Malagutti M, Urbano DF, de Almeida RA, Giarolla E (2009) Amazon deforestation and climate change in a coupled model simulation. J Clim 22:5686–5697
Nogherotto R, Coppola E, Giorgi F, Mariotti L (2013) Impact of Congo Basin deforestation on the African monsoon. Atmos Sci Lett 14:45–51
Oettli P, Camberlin P (2005) Influence of topography on monthly rainfall distribution over East Africa. Clim Res 28:199–212
Pattanayak S, Kramer R (2001) Worth of watersheds: a producer surplus approach for valuing drought mitigation in Eastern Indonesia. Environ Dev Econ 6(1):123–146
Pitman AJ, Lorenz R (2016) Scale dependence of the simulated impact of Amazonian deforestation on regional climate. Environ Res Lett 11:094025. https://doi.org/10.1088/1748-9326/11/9/094025
Ryznar E, Touma JS (1981) Characteristics of true lake breezes along the eastern shore of Lake Michigan. Atmos Environ 15:1201–1205
Sanderson M, Pope E, Santini M, Mercogliano P, Montesarchio M (2012) Influences of EU forests on weather patterns: Final report European Commission (DG Environment)
Savenije HHG (1996) Does moisture feedback affect rainfall significantly? Phys Chem Earth 20:507–513
Segal M, Pan Z, Turner RW, Takle ES (1998) On the potential impact of irrigated areas in North America summer rainfall caused by large-scale systems. J Appl Meteorol 37:325–331
Seifu K (1998) Estimating Land Cover/Land Use Changes in Munessa Forest Area using Remote Sensing Techniques. MSc thesis Report No. 1998: 32, Swedish University of Agricultural Sciences, Skinn skatteberg
Seleshi Y, Zanke U (2004) Recent changes in rainfall and rainy days in Ethiopia. Int J Climatol 24:973–983
Snyder PK (2010) The influence of tropical deforestation on the northern hemisphere climate by atmospheric teleconnections. Earth Interact 14. https://doi.org/10.1175/2010EI280.1
Spracklen DV, Garcia-Carreras L (2015) The impact of Amazonian deforestation on Amazon basin rainfall. Geophys Res Lett 42:9546–9552
Taye M, Zewdu F (2012) Spatio-temporal Variability and Trend of Rainfall and Temperature in Western Amhara, Ethiopia: a GIS approach. Glo Adv Res J Geogr Reg Plann 1:65–82
Tesso G, Emana B, Ketema M (2012) A time series analysis of climate variability and its impacts on food production in North Shewa zone in Ethiopia. Afr Crop Sci J 20:261–274
Wilk J, Andersson L, Plermkamon V (2001) Hydrological impacts of forest conversion to agriculture in a large river basin in northeast Thailand. Hydrol Process 15:2729–2748
Woldu Z, Tadesse M (1990) The vegetation in the lakes region of the rift valley of Ethiopia and the possibility of its recoveries. SINET: Ethiopian. Journal of Science 13:97–120
This study was funded by the Netherlands Organization for International Cooperation in Higher Education (Nuffic). The authors thank the Royal Dutch Embassy in Ethiopia for facilitating the deployment of the rain gauges to Ethiopia. We also acknowledge the primary schools in West Arsi zone in Ethiopia who willingly allowed us to install rain gauges in the school compounds.
Wageningen University, Soil Physics and Land Management Group, Droevendaalsesteeg 4, 6708 PB Wageningen, Netherlands (Alemayehu Muluneh, Saskia Keesstra & Leo Stroosnijder)
Hawassa University, School of Bio-systems and Environmental Engineering, P.O. Box 05, Hawassa, Ethiopia
Hawassa University, Wondo Genet College of Forestry and Natural Resources, P.O. Box 128, Shashemene, Ethiopia (Ashenafi Burka)
Department of Geography & Environmental Studies, Addis Ababa University, P.O. Box 150372, Addis Ababa, Ethiopia (Woldeamlak Bewket)
Amsterdam University, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands (Emiel van Loon)
AM wrote most of the text and conducted most of the data gathering. AM, EvL and AB performed the data analysis. WB, SK and LS helped to draft the manuscript. Each co-author provided expert insights and recommendations on the text and wrote certain paragraphs. All authors read and approved the final manuscript.
Correspondence to Alemayehu Muluneh.
Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing (1708.01530)
DES Collaboration: T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, S. Allen, A. Amara, J. Annis, J. Asorey, S. Avila, D. Bacon, E. Balbinot, M. Banerji, N. Banik, W. Barkhouse, M. Baumer, E. Baxter, K. Bechtol, M. R. Becker, A. Benoit-Lévy, B. A. Benson, G. M. Bernstein, E. Bertin, J. Blazek, S. L. Bridle, D. Brooks, D. Brout, E. Buckley-Geer, D. L. Burke, M. T. Busha, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. Chang, N. Chen, M. Childress, A. Choi, C. Conselice, R. Crittenden, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, T. M. Davis, C. Davis, J. De Vicente, D. L. DePoy, J. DeRose, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, T. F. Eifler, A. E. Elliott, F. Elsner, J. Elvin-Poole, J. Estrada, A. E. Evrard, Y. Fang, E. Fernandez, A. Ferté, D. A. Finley, B. Flaugher, P. Fosalba, O. Friedrich, J. Frieman, J. García-Bellido, M. Garcia-Fernandez, M. Gatti, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, M. S. S. Gill, K. Glazebrook, D. A. Goldstein, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, S. Hamilton, W. G. Hartley, S. R. Hinton, K. Honscheid, B. Hoyle, D. Huterer, B. Jain, D. J. James, M. Jarvis, T. Jeltema, M. D. Johnson, M. W. G. Johnson, T. Kacprzak, S. Kent, A. G. Kim, A. King, D. Kirk, N. Kokron, A. Kovacs, E. Krause, C. Krawiec, A. Kremin, K. Kuehn, S. Kuhlmann, N. Kuropatkin, F. Lacasa, O. Lahav, T. S. Li, A. R. Liddle, C. Lidman, M. Lima, H. Lin, N. MacCrann, M. A. G. Maia, M. Makler, M. Manera, M. March, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, F. Menanteau, R. Miquel, V. Miranda, D. Mudd, J. Muir, A. Möller, E. Neilsen, R. C. Nichol, B. Nord, P. Nugent, R. L. C. Ogando, A. Palmese, J. Peacock, H.V. Peiris, J. Peoples, W. J. Percival, D. Petravick, A. A. Plazas, A. Porredon, J. Prat, A. Pujol, M. M. Rau, A. Refregier, P. M. Ricker, N. Roe, R. P. Rollins, A. K. Romer, A. Roodman, R. Rosenfeld, A. J. Ross, E. Rozo, E. S. Rykoff, M. Sako, A. I. Salvador, S. Samuroff, C. Sánchez, E. Sanchez, B. Santiago, V. Scarpine, R. Schindler, D. Scolnic, L. F. Secco, S. Serrano, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Smith, J. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, G. Tarle, D. Thomas, M. A. Troxel, D. L. Tucker, B. E. Tucker, S. A. Uddin, T. N. Varga, P. Vielzeuf, V. Vikram, A. K. Vivas, A. R. Walker, M. Wang, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, B. Yanny, F. Yuan, A. Zenteno, B. Zhang, Y. Zhang, J. Zuntz
March 1, 2019 astro-ph.CO
We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM; for $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...
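The parameter combination constrained above, $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5}$, is straightforward to evaluate. A minimal Python sketch, using only the central values quoted in the abstract and ignoring the asymmetric, correlated error bars, is:

```python
# Minimal sketch of the S_8 parameter combination quoted in the abstract,
# S_8 = sigma_8 * (Omega_m / 0.3)**0.5, evaluated at the quoted central values.
# The published errors are asymmetric and correlated, so this is illustrative only.

def s8(sigma8: float, omega_m: float) -> float:
    """S_8 = sigma_8 * (Omega_m / 0.3)**0.5."""
    return sigma8 * (omega_m / 0.3) ** 0.5

def sigma8_from_s8(s8_value: float, omega_m: float) -> float:
    """Invert the definition to recover sigma_8 at fixed Omega_m."""
    return s8_value * (0.3 / omega_m) ** 0.5

if __name__ == "__main__":
    # Central values quoted for LambdaCDM in the abstract above.
    s8_lcdm, om_lcdm = 0.783, 0.264
    print(f"sigma_8 implied at the central values: {sigma8_from_s8(s8_lcdm, om_lcdm):.3f}")
```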
Dark Energy Survey Year 1 Results: Weak Lensing Shape Catalogues (1708.01533)
J. Zuntz, E. Sheldon, S. Samuroff, M. A. Troxel, M. Jarvis, N. MacCrann, D. Gruen, J. Prat, C. Sánchez, A. Choi, S. L. Bridle, G. M. Bernstein, S. Dodelson, A. Drlica-Wagner, Y. Fang, R. A. Gruendl, B. Hoyle, E. M. Huff, B. Jain, D. Kirk, T. Kacprzak, C. Krawiec, A. A. Plazas, R. P. Rollins, E. S. Rykoff, I. Sevilla-Noarbe, B. Soergel, T. N. Varga, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, J. Estrada, A. E. Evrard, A. Fausti Neto, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, J. Gschwend, G. Gutierrez, W. G. Hartley, K. Honscheid, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, T. S. Li, M. Lima, M. A. G. Maia, M. March, P. Martini, P. Melchior, F. Menanteau, C. J. Miller, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, R. L. C. Ogando, N. Roe, A. K. Romer, A. Roodman, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, R. H. Wechsler, Y. Zhang
Sept. 7, 2018 astro-ph.CO
We present two galaxy shape catalogues from the Dark Energy Survey Year 1 data set, covering 1500 square degrees with a median redshift of $0.59$. The catalogues cover two main fields: Stripe 82, and an area overlapping the South Pole Telescope survey region. We describe our data analysis process and in particular our shape measurement using two independent shear measurement pipelines, METACALIBRATION and IM3SHAPE. The METACALIBRATION catalogue uses a Gaussian model with an innovative internal calibration scheme, and was applied to $riz$-bands, yielding 34.8M objects. The IM3SHAPE catalogue uses a maximum-likelihood bulge/disc model calibrated using simulations, and was applied to $r$-band data, yielding 21.9M objects. Both catalogues pass a suite of null tests that demonstrate their fitness for use in weak lensing science. We estimate the 1$\sigma$ uncertainties in multiplicative shear calibration to be $0.013$ and $0.025$ for the METACALIBRATION and IM3SHAPE catalogues, respectively.
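The quoted multiplicative calibration uncertainties (0.013 and 0.025) refer to a linear shear-bias model of the form $\gamma_{\rm obs} = (1+m)\gamma_{\rm true} + c$, the standard convention in weak lensing. The sketch below only illustrates how a 1-sigma error on $m$ propagates to the measured shear; it is not the catalogue pipelines' actual calibration code.

```python
import numpy as np

# Sketch of the standard linear shear-bias model, g_obs = (1 + m) * g_true + c.
# The 1-sigma multiplicative-calibration uncertainties quoted above (0.013 for
# METACALIBRATION, 0.025 for IM3SHAPE) enter cosmology pipelines as priors on m;
# the exact treatment in the DES pipelines is more involved, so treat this
# purely as an illustration.

def biased_shear(g_true, m=0.0, c=0.0):
    return (1.0 + m) * np.asarray(g_true) + c

def calibrated_shear(g_obs, m=0.0, c=0.0):
    return (np.asarray(g_obs) - c) / (1.0 + m)

if __name__ == "__main__":
    g_true = np.array([0.01, 0.02, -0.015])
    for sigma_m, name in [(0.013, "METACALIBRATION"), (0.025, "IM3SHAPE")]:
        g_obs = biased_shear(g_true, m=sigma_m)   # bias the shears at the 1-sigma level
        frac = np.abs(g_obs / g_true - 1.0).max()
        print(f"{name}: a 1-sigma calibration error shifts shears by {frac:.1%}")
```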
Dark Energy Survey Year 1 Results: Galaxy-Galaxy Lensing (1708.01537)
J. Prat, C. Sánchez, Y. Fang, D. Gruen, J. Elvin-Poole, N. Kokron, L. F. Secco, B. Jain, R. Miquel, N. MacCrann, M. A. Troxel, A. Alarcon, D. Bacon, G. M. Bernstein, J. Blazek, R. Cawthon, C. Chang, M. Crocce, C. Davis, J. De Vicente, J. P. Dietrich, A. Drlica-Wagner, O. Friedrich, M. Gatti, W. G. Hartley, B. Hoyle, E. M. Huff, M. Jarvis, M. M. Rau, R. P. Rollins, A. J. Ross, E. Rozo, E. S. Rykoff, S. Samuroff, E. Sheldon, T. N. Varga, P. Vielzeuf, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, H. T. Diehl, S. Dodelson, T. F. Eifler, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, D. Kirk, E. Krause, K. Kuehn, S. Kuhlmann, O. Lahav, T. S. Li, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, J. J. Mohr, R. C. Nichol, B. Nord, A. A. Plazas, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, R. H. Wechsler, B. Yanny, Y. Zhang
We present galaxy-galaxy lensing measurements from 1321 sq. deg. of the Dark Energy Survey (DES) Year 1 (Y1) data. The lens sample consists of a selection of 660,000 red galaxies with high-precision photometric redshifts, known as redMaGiC, split into five tomographic bins in the redshift range $0.15 < z < 0.9$. We use two different source samples, obtained from the Metacalibration (26 million galaxies) and Im3shape (18 million galaxies) shear estimation codes, which are split into four photometric redshift bins in the range $0.2 < z < 1.3$. We perform extensive testing of potential systematic effects that can bias the galaxy-galaxy lensing signal, including those from shear estimation, photometric redshifts, and observational properties. Covariances are obtained from jackknife subsamples of the data and validated with a suite of log-normal simulations. We use the shear-ratio geometric test to obtain independent constraints on the mean of the source redshift distributions, providing validation of those obtained from other photo-$z$ studies with the same data. We find consistency between the galaxy bias estimates obtained from our galaxy-galaxy lensing measurements and from galaxy clustering, therefore showing the galaxy-matter cross-correlation coefficient $r$ to be consistent with one, measured over the scales used for the cosmological analysis. The results in this work present one of the three two-point correlation functions, along with galaxy clustering and cosmic shear, used in the DES cosmological analysis of Y1 data, and hence the methodology and the systematics tests presented here provide a critical input for that study as well as for future cosmological analyses in DES and other photometric galaxy surveys.
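The abstract notes that covariances are obtained from jackknife subsamples of the data. The generic delete-one jackknife estimator is sketched below on toy data; the DES measurement removes spatial patches of sky in turn, which is not reproduced here.

```python
import numpy as np

# Minimal sketch of a delete-one jackknife covariance estimate, the generic
# resampling technique referenced above.  The toy below deletes one mock
# "patch" of objects at a time from a simple mean statistic.

def jackknife_covariance(realisations: np.ndarray) -> np.ndarray:
    """realisations: (N_jk, N_bins) array of the statistic with one patch removed."""
    n_jk = realisations.shape[0]
    delta = realisations - realisations.mean(axis=0)
    return (n_jk - 1.0) / n_jk * delta.T @ delta   # standard jackknife prefactor

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_patch, n_bins = 100, 5
    data = rng.normal(1.0, 0.1, size=(n_patch, n_bins))   # one toy data vector per patch
    # Statistic recomputed with each patch deleted in turn (here: the mean over patches).
    loo = np.array([np.delete(data, i, axis=0).mean(axis=0) for i in range(n_patch)])
    cov = jackknife_covariance(loo)
    print("jackknife errors:", np.round(np.sqrt(np.diag(cov)), 4))
    print("expected sigma/sqrt(N):", round(0.1 / np.sqrt(n_patch), 4))
```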
Dark Energy Survey Year 1 Results: Galaxy clustering for combined probes (1708.01536)
J. Elvin-Poole, M. Crocce, A. J. Ross, T. Giannantonio, E. Rozo, E. S. Rykoff, S. Avila, N. Banik, J. Blazek, S. L. Bridle, R. Cawthon, A. Drlica-Wagner, O. Friedrich, N. Kokron, E. Krause, N. MacCrann, J. Prat, C. Sanchez, L. F. Secco, I. Sevilla-Noarbe, M. A. Troxel, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, J. Asorey, K. Bechtol, M. R. Becker, A. Benoit-Levy, G. M. Bernstein, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, D. Carollo, M. Carrasco Kind, J. Carretero, F. J. Castander, C. E. Cunha, C. B. DAndrea, L. N. da Costa, T. M. Davis, C. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, T. F. Eifler, A. E. Evrard, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, K. Glazebrook, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, S. R. Hinton, K. Honscheid, J. K. Hoormann, B. Jain, D. J. James, M. Jarvis, T. Jeltema, M. W. G. Johnson, M. D. Johnson, A. King, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, G. Lewis, T. S. Li, C. Lidman, M. Lima, H. Lin, E. Macaulay, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, A. Moller, R. C. Nichol, B. Nord, C. R. ONeill, W.J. Percival, D. Petravick, A. A. Plazas, A. K. Romer, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, E. Sheldon, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, N. E. Sommer, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, B. E. Tucker, D. L. Tucker, S. A. Uddin, V. Vikram, A. R. Walker, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, F. Yuan, B. Zhang, J. Zuntz (DES Collaboration)
Aug. 28, 2018 astro-ph.CO
We measure the clustering of DES Year 1 galaxies that are intended to be combined with weak lensing samples in order to produce precise cosmological constraints from the joint analysis of large-scale structure and lensing correlations. Two-point correlation functions are measured for a sample of $6.6 \times 10^{5}$ luminous red galaxies selected using the redMaGiC algorithm over an area of $1321$ square degrees, in the redshift range $0.15 < z < 0.9$, split into five tomographic redshift bins. The sample has a mean redshift uncertainty of $\sigma_{z}/(1+z) = 0.017$. We quantify and correct spurious correlations induced by spatially variable survey properties, testing their impact on the clustering measurements and covariance. We demonstrate the sample's robustness by testing for stellar contamination, for potential biases that could arise from the systematic correction, and for the consistency between the two-point auto- and cross-correlation functions. We show that the corrections we apply have a significant impact on the resultant measurement of cosmological parameters, but that the results are robust against arbitrary choices in the correction method. We find the linear galaxy bias in each redshift bin in a fiducial cosmology to be $b(z=0.24)=1.40 \pm 0.08$, $b(z=0.38)=1.61 \pm 0.05$, $b(z=0.53)=1.60 \pm 0.04$ for galaxies with luminosities $L/L_* > 0.5$, $b(z=0.68)=1.93 \pm 0.05$ for $L/L_* > 1$ and $b(z=0.83)=1.99 \pm 0.07$ for $L/L_* > 1.5$, broadly consistent with expectations for the redshift and luminosity dependence of the bias of red galaxies. We show these measurements to be consistent with the linear bias obtained from tangential shear measurements.
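The abstract does not state which pair-count estimator is used; purely as an illustration of how an angular two-point correlation function is measured, the sketch below applies the widely used Landy-Szalay estimator $w(\theta) = (DD - 2DR + RR)/RR$ on a toy flat patch of sky (small angles, degrees treated as Euclidean).

```python
import numpy as np

# Sketch of the Landy-Szalay estimator w(theta) = (DD - 2 DR + RR) / RR.
# This is a generic illustration on a toy flat patch, not the DES pipeline.

def pair_counts(a, b, bins, cross=False):
    """Histogram of pairwise separations between point sets a and b, each of shape (N, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if not cross:                          # auto-counts: keep each pair once
        iu = np.triu_indices(len(a), k=1)
        d = d[iu]
    return np.histogram(d.ravel(), bins=bins)[0].astype(float)

def landy_szalay(data, randoms, bins):
    dd = pair_counts(data, data, bins)
    rr = pair_counts(randoms, randoms, bins)
    dr = pair_counts(data, randoms, bins, cross=True)
    n_d, n_r = len(data), len(randoms)
    # Normalise the raw counts by the total number of pairs in each sample.
    dd /= 0.5 * n_d * (n_d - 1)
    rr /= 0.5 * n_r * (n_r - 1)
    dr /= n_d * n_r
    return (dd - 2.0 * dr + rr) / rr

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.uniform(0, 5, size=(500, 2))       # unclustered toy "galaxies" (deg)
    randoms = rng.uniform(0, 5, size=(2000, 2))   # random catalogue on the same patch
    bins = np.linspace(0.05, 1.0, 11)
    print(np.round(landy_szalay(data, randoms, bins), 3))  # ~0 for a random field
```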
Dark Energy Survey Year 1 Results: Cosmological Constraints from Cosmic Shear (1708.01538)
M. A. Troxel, N. MacCrann, J. Zuntz, T. F. Eifler, E. Krause, S. Dodelson, D. Gruen, J. Blazek, O. Friedrich, S. Samuroff, J. Prat, L. F. Secco, C. Davis, A. Ferté, J. DeRose, A. Alarcon, A. Amara, E. Baxter, M. R. Becker, G. M. Bernstein, S. L. Bridle, R. Cawthon, C. Chang, A. Choi, J. De Vicente, A. Drlica-Wagner, J. Elvin-Poole, J. Frieman, M. Gatti, W. G. Hartley, K. Honscheid, B. Hoyle, E. M. Huff, D. Huterer, B. Jain, M. Jarvis, T. Kacprzak, D. Kirk, N. Kokron, C. Krawiec, O. Lahav, A. R. Liddle, J. Peacock, M. M. Rau, A. Refregier, R. P. Rollins, E. Rozo, E. S. Rykoff, C. Sánchez, I. Sevilla-Noarbe, E. Sheldon, A. Stebbins, T. N. Varga, P. Vielzeuf, M. Wang, R. H. Wechsler, B. Yanny, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, E. Fernandez, B. Flaugher, P. Fosalba, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, J. Gschwend, G. Gutierrez, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, S. Kent, K. Kuehn, S. Kuhlmann, N. Kuropatkin, T. S. Li, M. Lima, H. Lin, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, B. Nord, D. Petravick, A. A. Plazas, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, J. Weller, Y. Zhang
April 30, 2018 astro-ph.CO
We use 26 million galaxies from the Dark Energy Survey (DES) Year 1 shape catalogs over 1321 deg$^2$ of the sky to produce the most significant measurement of cosmic shear in a galaxy survey to date. We constrain cosmological parameters in both the flat $\Lambda$CDM and $w$CDM models, while also varying the neutrino mass density. These results are shown to be robust using two independent shape catalogs, two independent photo-$z$ calibration methods, and two independent analysis pipelines in a blind analysis. We find a 3.5\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.782^{+0.027}_{-0.027}$ at 68\% CL, which is a factor of 2.5 improvement over the fractional constraining power of our DES Science Verification results. In $w$CDM, we find a 4.8\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.777^{+0.036}_{-0.038}$ and a dark energy equation-of-state $w=-0.95^{+0.33}_{-0.39}$. We find results that are consistent with previous cosmic shear constraints in $\sigma_8$ -- $\Omega_m$, and see no evidence for disagreement of our weak lensing data with data from the CMB. Finally, we find no evidence preferring a $w$CDM model allowing $w\ne -1$. We expect further significant improvements with subsequent years of DES data, which will more than triple the sky coverage of our shape catalogs and double the effective integrated exposure time per galaxy.
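A quick arithmetic check of the quoted fractional uncertainties, assuming symmetric errors (the published intervals are mildly asymmetric):

```python
# Quick check of the fractional uncertainties quoted above.
for label, value, err in [("LCDM S_8", 0.782, 0.027),
                          ("wCDM S_8", 0.777, 0.037)]:   # ~mean of +0.036/-0.038
    print(f"{label}: {err / value:.1%} fractional uncertainty")
```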
Survey geometry and the internal consistency of recent cosmic shear measurements (1804.10663)
M. A. Troxel, E. Krause, C. Chang, T. F. Eifler, O. Friedrich, D. Gruen, N. MacCrann, A. Chen, C. Davis, J. DeRose, S. Dodelson, M. Gatti, B. Hoyle, D. Huterer, M. Jarvis, F. Lacasa, H. V. Peiris, J. Prat, S. Samuroff, C. Sánchez, E. Sheldon, P. Vielzeuf, M. Wang, J. Zuntz, F. B. Abdalla, S. Allam, J. Annis, S. Avila, E. Bertin, D. Brooks, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, J. De Vicente, H. T. Diehl, P. Doel, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, D. L. Hollowood, K. Honscheid, D. J. James, D. Kirk, K. Kuehn, N. Kuropatkin, T. S. Li, M. Lima, M. March, F. Menanteau, R. Miquel, J. J. Mohr, R. L. C. Ogando, A. A. Plazas, A. Roodman, E. Sanchez, V. Scarpine, R. Schindler, I. Sevilla-Noarbe, M. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, D. Thomas, A. R. Walker, R. H. Wechsler
We explore the impact of an update to the typical approximation for the shape noise term in the analytic covariance matrix for cosmic shear experiments that assumes the absence of survey boundary and mask effects. We present an exact expression for the number of galaxy pairs in this term based on the survey mask, which leads to more than a factor of three increase in the shape noise on the largest measured scales for the Kilo-Degree Survey (KiDS-450) real-space cosmic shear data. We compare the result of this analytic expression to several alternative methods for measuring the shape noise from the data and find excellent agreement. This update to the covariance resolves any internal model tension evidenced by the previously large cosmological best-fit $\chi^2$ for the KiDS-450 cosmic shear data. The best-fit $\chi^2$ is reduced from 161 to 121 for 118 degrees of freedom. We also apply a correction to how the multiplicative shear calibration uncertainty is included in the covariance. This change, along with a previously known update to the reported effective angular values of the data vector, jointly shifts the inferred amplitude of the correlation function to higher values. We find that this improves agreement of the KiDS-450 cosmic shear results with Dark Energy Survey Year 1 and Planck results.
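The quoted goodness-of-fit values translate directly into p-values; a small sketch with scipy, taking the 118 degrees of freedom from the abstract:

```python
from scipy.stats import chi2

# Sketch: goodness-of-fit p-values for the chi^2 values quoted above
# (161 before and 121 after the covariance update, with 118 degrees of freedom).
dof = 118
for label, chi2_val in [("old covariance", 161.0), ("updated covariance", 121.0)]:
    p = chi2.sf(chi2_val, df=dof)
    print(f"{label}: chi2/dof = {chi2_val / dof:.2f}, p-value = {p:.3f}")
```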
DES Y1 Results: Validating cosmological parameter estimation using simulated Dark Energy Surveys (1803.09795)
N. MacCrann, J. DeRose, R. H. Wechsler, J. Blazek, E. Gaztanaga, M. Crocce, E. S. Rykoff, M. R. Becker, B. Jain, E. Krause, T. F. Eifler, D. Gruen, J. Zuntz, M. A. Troxel, J. Elvin-Poole, J. Prat, M. Wang, S. Dodelson, A. Kravtsov, P. Fosalba, M. T. Busha, A. E. Evrard, D. Huterer, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, S. Avila, G. M. Bernstein, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, J. De Vicente, H. T. Diehl, P. Doel, J. Frieman, J. García-Bellido, D. W. Gerdes, R. A. Gruendl, G. Gutierrez, W. G. Hartley, D. Hollowood, K. Honscheid, B. Hoyle, D. J. James, T. Jeltema, D. Kirk, K. Kuehn, N. Kuropatkin, M. Lima, M. A. G. Maia, J. L. Marshall, F. Menanteau, R. Miquel, A. A. Plazas, A. Roodman, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, A. R. Walker, J. Weller
March 26, 2018 astro-ph.CO
We use mock galaxy survey simulations designed to resemble the Dark Energy Survey Year 1 (DES Y1) data to validate and inform cosmological parameter estimation. When similar analysis tools are applied to both simulations and real survey data, they provide powerful validation tests of the DES Y1 cosmological analyses presented in companion papers. We use two suites of galaxy simulations produced using different methods, which therefore provide independent tests of our cosmological parameter inference. The cosmological analysis we aim to validate is presented in DES Collaboration et al. (2017) and uses angular two-point correlation functions of galaxy number counts and weak lensing shear, as well as their cross-correlation, in multiple redshift bins. While our constraints depend on the specific set of simulated realisations available, for both suites of simulations we find that the input cosmology is consistent with the combined constraints from multiple simulated DES Y1 realizations in the $\Omega_m-\sigma_8$ plane. For one of the suites, we are able to show with high confidence that any biases in the inferred $S_8=\sigma_8(\Omega_m/0.3)^{0.5}$ and $\Omega_m$ are smaller than the DES Y1 $1-\sigma$ uncertainties. For the other suite, for which we have fewer realizations, we are unable to be this conclusive; we infer a roughly 70% probability that systematic biases in the recovered $\Omega_m$ and $S_8$ are sub-dominant to the DES Y1 uncertainty. As cosmological analyses of this kind become increasingly more precise, validation of parameter inference using survey simulations will be essential to demonstrate robustness.
A Measurement of CMB Cluster Lensing with SPT and DES Year 1 Data (1708.01360)
E. J. Baxter, S. Raghunathan, T. M. Crawford, P. Fosalba, Z. Hou, G. P. Holder, Y. Omori, S. Patil, E. Rozo, T. M. C. Abbott, J. Annis, K. Aylor, A. Benoit-Lévy, B. A. Benson, E. Bertin, L. Bleem, E. Buckley-Geer, D. L. Burke, J. Carlstrom, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. L. Chang, H-M. Cho, A. T. Crites, M. Crocce, C. E. Cunha, L. N. da Costa, C. B. D'Andrea, C. Davis, T. de Haan, S. Desai, M. A. Dobbs, S. Dodelson, J. P. Dietrich, P. Doel, A. Drlica-Wagner, J. Estrada, W. B. Everett, A. Fausti Neto, B. Flaugher, J. Frieman, J. García-Bellido, E. M. George, E. Gaztanaga, T. Giannantonio, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, N. W. Halverson, N. L. Harrington, W. G. Hartley, W. L. Holzapfel, K. Honscheid, J. D. Hrubes, B. Jain, D. J. James, M. Jarvis, T. Jeltema, L. Knox, E. Krause, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, A. T. Lee, E. M. Leitch, T. S. Li, M. Lima, D. Luong-Van, A. Manzotti, M. March, D. P. Marrone, J. L. Marshall, P. Martini, J. J. McMahon, P. Melchior, F. Menanteau, S. S. Meyer, C. J. Miller, R. Miquel, L. M. Mocanu, J. J. Mohr, T. Natoli, B. Nord, R. L. C. Ogando, S. Padin, A. A. Plazas, C. Pryke, D. Rapetti, C. L. Reichardt, A. K. Romer, A. Roodman, J. E. Ruhl, E. Rykoff, M. Sako, E. Sanchez, J. T. Sayre, V. Scarpine, K. K. Schaffer, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, E. Shirokoff, M. Smith, R. C. Smith, M. Soares-Santos, F. Z. Staniszewski, A. Stark, K. Story, E. Suchyta, G. Tarle, D. Thomas, M. A. Troxel, K. Vanderlinde, J. D. Vieira, A. R. Walker, R. Williamson, Y. Zhang, J. Zuntz
Feb. 16, 2018 astro-ph.CO
Clusters of galaxies gravitationally lens the cosmic microwave background (CMB) radiation, resulting in a distinct imprint in the CMB on arcminute scales. Measurement of this effect offers a promising way to constrain the masses of galaxy clusters, particularly those at high redshift. We use CMB maps from the South Pole Telescope Sunyaev-Zel'dovich (SZ) survey to measure the CMB lensing signal around galaxy clusters identified in optical imaging from first year observations of the Dark Energy Survey. The cluster catalog used in this analysis contains 3697 members with mean redshift of $\bar{z} = 0.45$. We detect lensing of the CMB by the galaxy clusters at $8.1\sigma$ significance. Using the measured lensing signal, we constrain the amplitude of the relation between cluster mass and optical richness to roughly $17\%$ precision, finding good agreement with recent constraints obtained with galaxy lensing. The error budget is dominated by statistical noise but includes significant contributions from systematic biases due to the thermal SZ effect and cluster miscentering.
Dark Energy Survey Year 1 Results: Methodology and Projections for Joint Analysis of Galaxy Clustering, Galaxy Lensing, and CMB Lensing Two-point Functions (1802.05257)
E. J. Baxter, Y. Omori, C. Chang, T. Giannantonio, D. Kirk, E. Krause, J. Blazek, L. Bleem, A. Choi, T. M. Crawford, S. Dodelson, T. F. Eifler, O. Friedrich, D. Gruen, G. P. Holder, B. Jain, M. Jarvis, N. MacCrann, A. Nicola, S. Pandey, J. Prat, C. L. Reichardt, S. Samuroff, C. Sánchez, L. F. Secco, E. Sheldon, M. A. Troxel, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, J. Annis, S. Avila, K. Bechtol, B. A. Benson, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, J. De Vicente, D. L. DePoy, H. T. Diehl, P. Doel, J. Estrada, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, R. A. Gruendl, J. Gschwend, G. Gutierrez, W. G. Hartley, D. Hollowood, B. Hoyle, D. J. James, S. Kent, K. Kuehn, N. Kuropatkin, O. Lahav, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Melchior, F. Menanteau, R. Miquel, A. A. Plazas, A. Roodman, E. S. Rykoff, E. Sanchez, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, A. R. Walker, W. L. K. Wu, J. Weller
Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.
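For concreteness, the "five two-point functions" are the auto- and cross-correlations of galaxy density, galaxy shear, and CMB lensing convergence, with the CMB lensing auto-spectrum left out of this analysis; a short enumeration:

```python
from itertools import combinations_with_replacement

# Sketch: which two-point functions make up the "5x2pt" data vector described
# above.  Galaxy density (g) and galaxy shear (gamma) give the 3x2pt set;
# adding their cross-correlations with CMB lensing convergence (kappa) gives
# five in total, with the kappa auto-spectrum excluded from this analysis.
fields = ["g", "gamma", "kappa"]
all_pairs = list(combinations_with_replacement(fields, 2))
five_by_two = [p for p in all_pairs if p != ("kappa", "kappa")]
print(five_by_two)   # [('g','g'), ('g','gamma'), ('g','kappa'), ('gamma','gamma'), ('gamma','kappa')]
```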
The Dark Energy Survey Data Release 1 (1801.03181)
T. M. C. Abbott, F. B. Abdalla, S. Allam, A. Amara, J. Annis, J. Asorey, S. Avila, O. Ballester, M. Banerji, W. Barkhouse, L. Baruah, M. Baumer, K. Bechtol, M. R. Becker, A. Benoit-Lévy, G. M. Bernstein, E. Bertin, J. Blazek, S. Bocquet, D. Brooks, D. Brout, E. Buckley-Geer, D. L. Burke, V. Busti, R. Campisano, L. Cardiel-Sas, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. Chang, C. Conselice, G. Costa, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, G. Daues, T. M. Davis, C. Davis, J. De Vicente, D. L. DePoy, J. DeRose, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, T. F. Eifler, A. E. Elliott, A. E. Evrard, A. Farahi, A. Fausti Neto, E. Fernandez, D. A. Finley, M. Fitzpatrick, B. Flaugher, R. J. Foley, P. Fosalba, D. N. Friedel, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, M. S. S. Gill, K. Glazebrook, D. A. Goldstein, M. Gower, D. Gruen, R. A. Gruendl, J. Gschwend, R. R. Gupta, G. Gutierrez, S. Hamilton, W. G. Hartley, S. R. Hinton, J. M. Hislop, D. Hollowood, K. Honscheid, B. Hoyle, D. Huterer, B. Jain, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, S. Juneau, T. Kacprzak, S. Kent, G. Khullar, M. Klein, A. Kovacs, A. M. G. Koziol, E. Krause, A. Kremin, R. Kron, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, J. Lasker, T. S. Li, R. T. Li, A. R. Liddle, M. Lima, H. Lin, P. López-Reyes, N. MacCrann, M. A. G. Maia, J. D. Maloney, M. Manera, M. March, J. Marriner, J. L. Marshall, P. Martini, T. McClintock, T. McKay, R. G. McMahon, P. Melchior, F. Menanteau, C. J. Miller, R. Miquel, J. J. Mohr, E. Morganson, J. Mould, E. Neilsen, R. C. Nichol, D. Nidever, R. Nikutta, F. Nogueira, B. Nord, P. Nugent, L. Nunes, R. L. C. Ogando, L. Old, K. Olsen, A. B. Pace, A. Palmese, F. Paz-Chinchón, H. V. Peiris, W. J. Percival, D. Petravick, A. A. Plazas, J. Poh, C. Pond, A. Porredon, A. Pujol, A. Refregier, K. Reil, P. M. Ricker, R. P. Rollins, A. K. Romer, A. Roodman, P. Rooney, A. J. Ross, E. S. Rykoff, M. Sako, E. Sanchez, M. L. Sanchez, B. Santiago, A. Saro, V. Scarpine, D. Scolnic, A. Scott, S. Serrano, I. Sevilla-Noarbe, E. Sheldon, N. Shipp, M. L. Silveira, R. C. Smith, J. A. Smith, M. Smith, M. Soares-Santos, F. Sobreira, J. Song, A. Stebbins, E. Suchyta, M. Sullivan, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, R. C. Thomas, M. A. Troxel, D. L. Tucker, V. Vikram, A. K. Vivas, A. R. Walker, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, H. Wu, B. Yanny, A. Zenteno, Y. Zhang, J. Zuntz
Jan. 9, 2018 astro-ph.CO, astro-ph.GA, astro-ph.SR, astro-ph.IM
We describe the first public data release of the Dark Energy Survey, DES DR1, consisting of reduced single epoch images, coadded images, coadded source catalogs, and associated products and services assembled over the first three years of DES science operations. DES DR1 is based on optical/near-infrared imaging from 345 distinct nights (August 2013 to February 2016) by the Dark Energy Camera mounted on the 4-m Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. We release data from the DES wide-area survey covering ~5,000 sq. deg. of the southern Galactic cap in five broad photometric bands, grizY. DES DR1 has a median delivered point-spread function of g = 1.12, r = 0.96, i = 0.88, z = 0.84, and Y = 0.90 arcsec FWHM, a photometric precision of < 1% in all bands, and an astrometric precision of 151 mas. The median coadded catalog depth for a 1.95" diameter aperture at S/N = 10 is g = 24.33, r = 24.08, i = 23.44, z = 22.69, and Y = 21.44 mag. DES DR1 includes nearly 400M distinct astronomical objects detected in ~10,000 coadd tiles of size 0.534 sq. deg. produced from ~39,000 individual exposures. Benchmark galaxy and stellar samples contain ~310M and ~ 80M objects, respectively, following a basic object quality selection. These data are accessible through a range of interfaces, including query web clients, image cutout servers, jupyter notebooks, and an interactive coadd image visualization tool. DES DR1 constitutes the largest photometric data set to date at the achieved depth and photometric precision.
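A quick, order-of-magnitude consequence of the quoted catalogue size and survey area (ignoring masked regions and depth variations):

```python
# Quick arithmetic from the numbers quoted above: the implied mean surface
# density of catalogued objects in DES DR1.
N_OBJECTS = 4.0e8          # "nearly 400M distinct astronomical objects"
AREA_SQ_DEG = 5000.0       # "~5,000 sq. deg." wide-area survey
SQ_ARCMIN_PER_SQ_DEG = 3600.0

density = N_OBJECTS / (AREA_SQ_DEG * SQ_ARCMIN_PER_SQ_DEG)
print(f"mean object density ~ {density:.0f} per sq. arcmin")
```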
CMB Polarization B-mode Delensing with SPTpol and Herschel (1701.04396)
A. Manzotti, K. T. Story, W. L. K. Wu, J. E. Austermann, J. A. Beall, A. N. Bender, B. A. Benson, L. E. Bleem, J. J. Bock, J. E. Carlstrom, C. L. Chang, H. C. Chiang, H-M. Cho, R. Citron, A. Conley, T. M. Crawford, A. T. Crites, T. de Haan, M. A. Dobbs, S. Dodelson, W. Everett, J. Gallicchio, E. M. George, A. Gilbert, N. W. Halverson, N. Harrington, J. W. Henning, G. C. Hilton, G. P. Holder, W. L. Holzapfel, S. Hoover, Z. Hou, J. D. Hrubes, N. Huang, J. Hubmayr, K. D. Irwin, R. Keisler, L. Knox, A. T. Lee, E. M. Leitch, D. Li, J. J. McMahon, S. S. Meyer, L. M. Mocanu, T. Natoli, J. P. Nibarger, V. Novosad, S. Padin, C. Pryke, C. L. Reichardt, J. E. Ruhl, B. R. Saliwanchik, J.T. Sayre, K. K. Schaffer, G. Smecher, A. A. Stark, K. Vanderlinde, J. D. Vieira, M. P. Viero, G. Wang, N. Whitehorn, V. Yefremenko, M. Zemcov
Sept. 18, 2017 astro-ph.CO
We present a demonstration of delensing the observed cosmic microwave background (CMB) B-mode polarization anisotropy. This process of reducing the gravitational-lensing generated B-mode component will become increasingly important for improving searches for the B modes produced by primordial gravitational waves. In this work, we delens B-mode maps constructed from multi-frequency SPTpol observations of a 90 deg$^2$ patch of sky by subtracting a B-mode template constructed from two inputs: SPTpol E-mode maps and a lensing potential map estimated from the $\textit{Herschel}$ $500\,\mu m$ map of the CIB. We find that our delensing procedure reduces the measured B-mode power spectrum by 28% in the multipole range $300 < \ell < 2300$; this is shown to be consistent with expectations from theory and simulations and to be robust against systematics. The null hypothesis of no delensing is rejected at $6.9 \sigma$. Furthermore, we build and use a suite of realistic simulations to study the general properties of the delensing process and find that the delensing efficiency achieved in this work is limited primarily by the noise in the lensing potential map. We demonstrate the importance of including realistic experimental non-idealities in the delensing forecasts used to inform instrument and survey-strategy planning of upcoming lower-noise experiments, such as CMB-S4.
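A common forecasting rule of thumb, not the paper's pipeline, relates delensing performance to the correlation coefficient $\rho$ between the tracer and the true lensing potential: the residual lensing B-mode power is roughly $(1-\rho^2)$ of the original. Note that the 28% figure above refers to the measured B-mode power, which also contains instrument noise, so the implied $\rho$ in the sketch below is only indicative.

```python
import numpy as np

# Standard forecasting rule (an assumption here, not the paper's analysis):
# delensing with a tracer whose cross-correlation coefficient with the true
# lensing potential is rho leaves roughly (1 - rho^2) of the lensing-B power.

def residual_fraction(rho: float) -> float:
    """Fraction of lensing B-mode power left after template delensing."""
    return 1.0 - rho ** 2

def rho_for_reduction(reduction: float) -> float:
    """Correlation coefficient needed for a given fractional power reduction."""
    return np.sqrt(reduction)

if __name__ == "__main__":
    print(f"rho needed for a 28% reduction (noise-free limit): {rho_for_reduction(0.28):.2f}")
    for rho in (0.4, 0.6, 0.8):
        print(f"rho = {rho:.1f} -> residual lensing B fraction ~ {residual_fraction(rho):.2f}")
```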
Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses (1706.09359)
E. Krause, T. F. Eifler, J. Zuntz, O. Friedrich, M. A. Troxel, S. Dodelson, J. Blazek, L. F. Secco, N. MacCrann, E. Baxter, C. Chang, N. Chen, M. Crocce, J. DeRose, A. Ferte, N. Kokron, F. Lacasa, V. Miranda, Y. Omori, A. Porredon, R. Rosenfeld, S. Samuroff, M. Wang, R. H. Wechsler, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Levy, G. M. Bernstein, D. Brooks, D. L. Burke, D. Capozzi, M. Carrasco Kind, J. Carretero, C. B. D'Andrea, L. N. da Costa, C. Davis, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, T. Giannantonio, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, K. Honscheid, D. J. James, T. Jeltema, K. Kuehn, S. Kuhlmann, O. Lahav, M. Lima, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, F. Menanteau, R. Miquel, R. C. Nichol, A. A. Plazas, A. K. Romer, E. S. Rykoff, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, I. Sevilla-Noarbe, M. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. L. Tucker, V. Vikram, A. R. Walker, J. Weller
June 28, 2017 astro-ph.CO
We present the methodology for and detail the implementation of the Dark Energy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines configuration-space two-point statistics from three different cosmological probes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data from the first year of DES observations. We have developed two independent modeling pipelines and describe the code validation process. We derive expressions for analytical real-space multi-probe covariances, and describe their validation with numerical simulations. We stress-test the inference pipelines in simulated likelihood analyses that vary 6-7 cosmology parameters plus 20 nuisance parameters and precisely resemble the analysis to be presented in the DES 3x2pt analysis paper, using a variety of simulated input data vectors with varying assumptions. We find that any disagreement between pipelines leads to changes in assigned likelihood $\Delta \chi^2 \le 0.045$ with respect to the statistical error of the DES Y1 data vector. We also find that angular binning and survey mask do not impact our analytic covariance at a significant level. We determine lower bounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and galaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling uncertainties in the non-linear regime is well below statistical errors, and show that our analysis choices are robust against a variety of systematics. These tests demonstrate that we have a robust analysis pipeline that yields unbiased cosmological parameter inferences for the flagship 3x2pt DES Y1 analysis. We emphasize that the level of independent code development and subsequent code comparison as demonstrated in this paper is necessary to produce credible constraints from increasingly complex multi-probe analyses of current data.
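The quoted scale cuts are comoving; converting them to per-bin angular cuts uses the small-angle relation $\theta_{\min} \approx R_{\min}/\chi(z_{\rm lens})$. The sketch below assumes an illustrative flat $\Lambda$CDM cosmology and the lens-bin redshifts quoted earlier in this listing; the exact conversion and fiducial cosmology used in the DES pipeline may differ in detail.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Sketch: converting the comoving scale cuts quoted above (8 Mpc/h for galaxy
# clustering, 12 Mpc/h for galaxy-galaxy lensing) into angular scale cuts via
# theta_min ~ R / chi(z_lens).  The cosmology below is an assumed illustration.

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
h = cosmo.H0.value / 100.0

def theta_min_arcmin(r_mpc_over_h: float, z_lens: float) -> float:
    chi = cosmo.comoving_distance(z_lens).value   # comoving distance in Mpc
    theta_rad = (r_mpc_over_h / h) / chi          # small-angle approximation
    return np.degrees(theta_rad) * 60.0

if __name__ == "__main__":
    for z in (0.24, 0.38, 0.53, 0.68, 0.83):      # redMaGiC lens-bin redshifts quoted above
        print(f"z = {z:.2f}: clustering cut ~ {theta_min_arcmin(8.0, z):.1f} arcmin, "
              f"lensing cut ~ {theta_min_arcmin(12.0, z):.1f} arcmin")
```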
Cosmology from Cosmic Shear with DES Science Verification Data (1507.05552)
The Dark Energy Survey Collaboration, T. Abbott, F. B. Abdalla, S. Allam, A. Amara, J. Annis, R. Armstrong, D. Bacon, M. Banerji, A. H. Bauer, E. Baxter, M. R. Becker, A. Benoit-Lévy, R. A. Bernstein, G. M. Bernstein, E. Bertin, J. Blazek, C. Bonnett, S. L. Bridle, D. Brooks, C. Bruderer, E. Buckley-Geer, D. L. Burke, M. T. Busha, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, C. Chang, J. Clampitt, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, G. Efstathiou, T. F. Eifler, B. Erickson, J. Estrada, A. E. Evrard, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, P. Fosalba, O. Friedrich, J. Frieman, C. Gangkofner, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, W. Hartley, M. Hirsch, K. Honscheid, E. M. Huff, B. Jain, D. J. James, M. Jarvis, T. Kacprzak, S. Kent, D. Kirk, E. Krause, A. Kravtsov, K. Kuehn, N. Kuropatkin, J. Kwan, O. Lahav, B. Leistedt, T. S. Li, M. Lima, H. Lin, N. MacCrann, M. March, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, C. J. Miller, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, A. Nicola, B. Nord, R. Ogando, A. Palmese, H.V. Peiris, A. A. Plazas, A. Refregier, N. Roe, A. K. Romer, A. Roodman, B. Rowe, E. S. Rykoff, C. Sabiu, I. Sadeh, M. Sako, S. Samuroff, C. Sánchez, E. Sanchez, H. Seo, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, M. A. Troxel, V. Vikram, A. R. Walker, R. H. Wechsler, J. Weller, Y. Zhang, J. Zuntz
May 3, 2017 astro-ph.CO
We present the first constraints on cosmology from the Dark Energy Survey (DES), using weak lensing measurements from the preliminary Science Verification (SV) data. We use 139 square degrees of SV data, which is less than 3\% of the full DES survey area. Using cosmic shear 2-point measurements over three redshift bins we find $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5} = 0.81 \pm 0.06$ (68\% confidence), after marginalising over 7 systematics parameters and 3 other cosmological parameters. We examine the robustness of our results to the choice of data vector and systematics assumed, and find them to be stable. About $20$\% of our error bar comes from marginalising over shear and photometric redshift calibration uncertainties. The current state-of-the-art cosmic shear measurements from CFHTLenS are mildly discrepant with the cosmological constraints from Planck CMB data; our results are consistent with both datasets. Our uncertainties are $\sim$30\% larger than those from CFHTLenS when we carry out a comparable analysis of the two datasets, which we attribute largely to the lower number density of our shear catalogue. We investigate constraints on dark energy and find that, with this small fraction of the full survey, the DES SV constraints make negligible impact on the Planck constraints. The moderate disagreement between the CFHTLenS and Planck values of $\sigma_8 (\Omega_{\rm m}/0.3)^{0.5}$ is present regardless of the value of $w$.
The Dark Energy Survey view of the Sagittarius stream: discovery of two faint stellar system candidates (1608.04033)
E. Luque, A. Pieres, B. Santiago, B. Yanny, A. K. Vivas, A. Queiroz, A. Drlica-Wagner, E. Morganson, E. Balbinot, J. L. Marshall, T. S. Li, A. Fausti Neto, L. N. da Costa, M. A. G. Maia, K. Bechtol, A. G. Kim, G. M. Bernstein, S. Dodelson, L. Whiteway, H. T. Diehl, D. A. Finley, T. Abbott, F. B. Abdalla, S. Allam, J. Annis, A. Benoit-Lévy, E. Bertin, D. Brooks, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. E. Cunha, C. B. D'Andrea, S. Desai, P. Doel, A. E. Evrard, B. Flaugher, P. Fosalba, D. W. Gerdes, D. A. Goldstein, D. Gruen, R. A. Gruendl, G. Gutierrez, D. J. James, K. Kuehn, N. Kuropatkin, O. Lahav, P. Martini, R. Miquel, B. Nord, R. Ogando, A. A. Plazas, A. K. Romer, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, A. R. Walker
April 11, 2017 astro-ph.GA, astro-ph.SR
We report the discovery of two new candidate stellar systems in the constellation of Cetus using the data from the first two years of the Dark Energy Survey (DES). The objects, DES J0111-1341 and DES J0225+0304, are located at a heliocentric distance of ~ 25 kpc and appear to have old and metal-poor populations. Their distances to the Sagittarius orbital plane, ~ 1.73 kpc (DES J0111-1341) and ~ 0.50 kpc (DES J0225+0304), indicate that they are possibly associated with the Sagittarius dwarf stream. The half-light radius (r_h ~ 4.55 pc) and luminosity (M_V ~ +0.3) of DES J0111-1341 are consistent with it being an ultrafaint stellar cluster, while the half-light radius (r_h ~ 18.55 pc) and luminosity (M_V ~ -1.1) of DES J0225+0304 place it in an ambiguous region of size-luminosity space between stellar clusters and dwarf galaxies. Determinations of the characteristic parameters of the Sagittarius stream, metallicity spread (-2.18 < [Fe/H] < -0.95) and distance gradient (23 kpc < D_sun < 29 kpc), within the DES footprint in the Southern hemisphere, using the same DES data, also indicate a possible association between these systems. If these objects are confirmed through spectroscopic follow-up to be gravitationally bound systems and to share a Galactic trajectory with the Sagittarius stream, DES J0111-1341 and DES J0225+0304 would be the first ultrafaint stellar systems associated with the Sagittarius stream. Furthermore, DES J0225+0304 would also be the first confirmed case of an ultrafaint satellite of a satellite.
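The quoted absolute magnitudes translate directly into V-band luminosities via $L_V/L_\odot = 10^{-0.4(M_V - M_{V,\odot})}$; the solar value $M_{V,\odot} \approx 4.83$ used below is an assumed constant, not taken from the abstract.

```python
# Sketch: converting the quoted absolute magnitudes to V-band luminosities.
M_V_SUN = 4.83   # assumed solar absolute V magnitude (not from the abstract)

def v_band_luminosity(m_v: float) -> float:
    """L_V in solar units from an absolute V magnitude."""
    return 10.0 ** (-0.4 * (m_v - M_V_SUN))

for name, m_v in [("DES J0111-1341", +0.3), ("DES J0225+0304", -1.1)]:
    print(f"{name}: M_V = {m_v:+.1f} -> L_V ~ {v_band_luminosity(m_v):.0f} L_sun")
```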
Imprint of DES super-structures on the Cosmic Microwave Background (1610.00637)
A. Kovács, C. Sánchez, J. García-Bellido, S. Nadathur, R. Crittenden, D. Gruen, D. Huterer, D. Bacon, J. DeRose, S. Dodelson, E. Gaztañaga, D. Kirk, O. Lahav, R. Miquel, K. Naidoo, J. A. Peacock, B. Soergel, L. Whiteway, F. B. Abdalla, S. Allam, J. Annis, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, D. L. DePoy, S. Desai, T. F. Eifler, D. A. Finley, B. Flaugher, P. Fosalba, J. Frieman, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, G. Gutierrez, D. J. James, K. Kuehn, N. Kuropatkin, J. L. Marshall, P. Melchior, F. Menanteau, B. Nord, R. Ogando, A. A. Plazas, A. K. Romer, E. Sanchez, V. Scarpine, I. Sevilla-Noarbe, F. Sobreira, E. Suchyta, M. Swanson, G. Tarle, D. Thomas, A. R. Walker
Nov. 15, 2016 astro-ph.CO
Small temperature anisotropies in the Cosmic Microwave Background can be sourced by density perturbations via the late-time integrated Sachs-Wolfe effect. Large voids and superclusters are excellent environments to make a localized measurement of this tiny imprint. In some cases excess signals have been reported. We probed these claims with an independent data set, using the first year data of the Dark Energy Survey in a different footprint, and using a different super-structure finding strategy. We identified 52 large voids and 102 superclusters at redshifts $0.2 < z < 0.65$. We used the Jubilee simulation to a priori evaluate the optimal ISW measurement configuration for our compensated top-hat filtering technique, and then performed a stacking measurement of the CMB temperature field based on the DES data. For optimal configurations, we detected a cumulative cold imprint of voids with $\Delta T_{f} \approx -5.0\pm3.7~\mu K$ and a hot imprint of superclusters $\Delta T_{f} \approx 5.1\pm3.2~\mu K$ ; this is $\sim1.2\sigma$ higher than the expected $|\Delta T_{f}| \approx 0.6~\mu K$ imprint of such super-structures in $\Lambda$CDM. If we instead use an a posteriori selected filter size ($R/R_{v}=0.6$), we can find a temperature decrement as large as $\Delta T_{f} \approx -9.8\pm4.7~\mu K$ for voids, which is $\sim2\sigma$ above $\Lambda$CDM expectations and is comparable to previous measurements made using SDSS super-structure data.
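The stacking uses a compensated top-hat filter: the mean temperature within a radius $R$ minus the mean in a surrounding annulus. A common convention, assumed here since the abstract does not give the exact definition, takes the outer radius $\sqrt{2}R$ so that disc and annulus have equal area.

```python
import numpy as np

# Sketch of a compensated top-hat filter of the kind described above, applied
# to a toy flat-sky Gaussian "CMB" map.  The sqrt(2)*R outer radius is an
# assumed equal-area convention, not necessarily the paper's exact choice.

def compensated_tophat(temp_map, center, r_pix):
    """Filtered temperature at `center` (x, y in pixels) with inner radius r_pix pixels."""
    ny, nx = temp_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - center[0], y - center[1])
    inner = temp_map[r < r_pix].mean()
    annulus = temp_map[(r >= r_pix) & (r < np.sqrt(2.0) * r_pix)].mean()
    return inner - annulus

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cmb = rng.normal(0.0, 20.0, size=(512, 512))   # toy Gaussian map, micro-K
    values = [compensated_tophat(cmb, (rng.integers(100, 400), rng.integers(100, 400)), 40)
              for _ in range(50)]
    # Stacking random positions on a pure Gaussian field should average to ~0.
    print(f"stacked filtered temperature: {np.mean(values):+.2f} micro-K")
```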
The Dark Energy Survey: more than dark energy - an overview (1601.00329)
Dark Energy Survey Collaboration: T. Abbott, F. B. Abdalla, J. Aleksic, S. Allam, A. Amara, D. Bacon, E. Balbinot, M. Banerji, K. Bechtol, A. Benoit-Levy, G. M. Bernstein, E. Bertin, J. Blazek, C. Bonnett, S. Bridle, D. Brooks, R. J. Brunner, E. Buckley-Geer, D. L. Burke, G. B. Caminha, D. Capozzi, J. Carlsen, A. Carnero-Rosell, M. Carollo, M. Carrasco-Kind, J. Carretero, F. J. Castander, L. Clerkin, T. Collett, C. Conselice, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, T. M. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, J. Estrada, J. Etherington, A. E. Evrard, J. Fabbri, D. A. Finley, B. Flaugher, R. J. Foley, P. Rosalba, J. Frieman, J. Garcia-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, D. Gruen, R. A. Gruendl, P. Guarnieri, G. Gutierrez, W. Hartley, K. Honscheid, B. Jain, D. J. James, T. Jeltema, S. Jouvel, R. Kessler, A. King, D. Kirk, R. Kron, K. Kuehn, N. Kuropatkin, O. Lahav, T. S. Li, M. Lima, H. Lin, M. A. G. Maia, M. Makler, M. Manera, C. Maraston, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, A. Merson, C. J. Miller, R. Miquel, J. J. Mohr, X. Morice-Atkinson, K. Naidoo, E. Neilsen, R. C. Nichol, B. Nord, R. Ogando, F. Ostrovski, A. Palmese, A. Papadopoulos, H. Peiris, J. Peoples, W. J. Percival, A. A. Plazas, S. L. Reed, A. Refregier, A. K. Romer, A. Roodman, A. Ross, E. Rozo, E. S. Rykoff, I. Sadeh, M. Sako, C. Sanchez, E. Sanchez, B. Santiago, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, E. Sheldon, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, M. Soumagnac, E. Suchyta, M. Sullivan, M. Swanson, G. Tarle, J. Thaler, D. Thomas, R. C. Thomas, D. Tucker, J. D. Vieira, V. Vikram, A. R. Walker, R. H. Wechsler, J. Weller, W. Wester, L. Whiteway, H. Wilcox, B. Yanny, Y. Zhang, J. Zuntz
Aug. 19, 2016 astro-ph.CO, astro-ph.GA
This overview article describes the legacy prospect and discovery potential of the Dark Energy Survey (DES) beyond cosmological studies, illustrating it with examples from the DES early data. DES is using a wide-field camera (DECam) on the 4m Blanco Telescope in Chile to image 5000 sq deg of the sky in five filters (grizY). By its completion the survey is expected to have generated a catalogue of 300 million galaxies with photometric redshifts and 100 million stars. In addition, a time-domain survey search over 27 sq deg is expected to yield a sample of thousands of Type Ia supernovae and other transients. The main goals of DES are to characterise dark energy and dark matter, and to test alternative models of gravity; these goals will be pursued by studying large scale structure, cluster counts, weak gravitational lensing and Type Ia supernovae. However, DES also provides a rich data set which allows us to study many other aspects of astrophysics. In this paper we focus on additional science with DES, emphasizing areas where the survey makes a difference with respect to other current surveys. The paper illustrates, using early data (from `Science Verification', and from the first, second and third seasons of observations), what DES can tell us about the solar system, the Milky Way, galaxy evolution, quasars, and other topics. In addition, we show that if the cosmological model is assumed to be Lambda + Cold Dark Matter (LCDM) then important astrophysics can be deduced from the primary DES probes. Highlights from DES early data include the discovery of 34 Trans-Neptunian Objects, 17 dwarf satellites of the Milky Way, one published z > 6 quasar (and more confirmed) and two published superluminous supernovae (and more confirmed).
Inference from the small scales of cosmic shear with current and future Dark Energy Survey data (1608.01838)
N. MacCrann, J. Aleksić, A. Amara, S. L. Bridle, C. Bruderer, C. Chang, S. Dodelson, T. F. Eifler, E. M. Huff, D. Huterer, T. Kacprzak, A. Refregier, E. Suchyta, R. H. Wechsler, J. Zuntz, T. M. C. Abbott, S. Allam, J. Annis, R. Armstrong, A. Benoit-Lévy, D. Brooks, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, L. N. da Costa, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, A. E. Evrard, B. Flaugher, P. Fosalba, D. W. Gerdes, D. A. Goldstein, D. Gruen, R. A. Gruendl, G. Gutierrez, K. Honscheid, D. J. James, M. Jarvis, E. Krause, K. Kuehn, N. Kuropatkin, M. Lima, J. L. Marshall, P. Melchior, F. Menanteau, R. Miquel, A. A. Plazas, A. K. Romer, E. S. Rykoff, E. Sanchez, V. Scarpine, I. Sevilla-Noarbe, E. Sheldon, M. Soares-Santos, M. E. C. Swanson, G. Tarle, D. Thomas, V. Vikram
Aug. 5, 2016 astro-ph.CO
Cosmic shear is sensitive to fluctuations in the cosmological matter density field, including on small physical scales, where matter clustering is affected by baryonic physics in galaxies and galaxy clusters, such as star formation, supernova feedback and AGN feedback. While these processes muddy the cosmological information contained in small-scale cosmic shear measurements, they also mean that cosmic shear has the potential to constrain baryonic physics and galaxy formation. We perform an analysis of the Dark Energy Survey (DES) Science Verification (SV) cosmic shear measurements, now extended to smaller scales, and using the Mead et al. 2015 halo model to account for baryonic feedback. While the SV data have limited statistical power, we demonstrate using a simulated likelihood analysis that the final DES data will have the statistical power to differentiate among baryonic feedback scenarios. We also explore some of the difficulties in interpreting the small scales in cosmic shear measurements, presenting estimates of the size of several other systematic effects that make inference from small scales difficult, including uncertainty in the modelling of intrinsic alignment on nonlinear scales, `lensing bias', and shape measurement selection effects. For the latter two, we make use of novel image simulations. While future cosmic shear datasets have the statistical power to constrain baryonic feedback scenarios, there are several systematic effects that require improved treatments in order to make robust conclusions about baryonic feedback.
Cosmic Shear Measurements with DES Science Verification Data (1507.05598)
M. R. Becker, M. A. Troxel, N. MacCrann, E. Krause, T. F. Eifler, O. Friedrich, A. Nicola, A. Refregier, A. Amara, D. Bacon, G. M. Bernstein, C. Bonnett, S. L. Bridle, M. T. Busha, C. Chang, S. Dodelson, B. Erickson, A. E. Evrard, J. Frieman, E. Gaztanaga, D. Gruen, W. Hartley, B. Jain, M. Jarvis, T. Kacprzak, D. Kirk, A. Kravtsov, B. Leistedt, E. S. Rykoff, C. Sabiu, C. Sanchez, H. Seo, E. Sheldon, R. H. Wechsler, J. Zuntz, T. Abbott, F. B. Abdalla, S. Allam, R. Armstrong, M. Banerji, A. H. Bauer, A. Benoit-Levy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, P. Fosalba, D. W. Gerdes, R. A. Gruendl, G. Gutierrez, K. Honscheid, D. J. James, K. Kuehn, N. Kuropatkin, O. Lahav, T. S. Li, M. Lima, M. A. G. Maia, M. March, P. Martini, P. Melchior, C. J. Miller, R. Miquel, J. J. Mohr, R. C. Nichol, B. Nord, R. Ogando, A. A. Plazas, K. Reil, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, V. Vikram, A. R. Walker, The DES Collaboration
July 27, 2016 astro-ph.CO
We present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We furthermore use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7$\sigma$. Cosmological constraints from the measurements in this work are presented in a companion paper (DES et al. 2015).
Detection of the kinematic Sunyaev-Zel'dovich effect with DES Year 1 and SPT (1603.03904)
B. Soergel, S. Flender, K. T. Story, L. Bleem, T. Giannantonio, G. Efstathiou, E. Rykoff, B. A. Benson, T. Crawford, S. Dodelson, S. Habib, K. Heitmann, G. Holder, B. Jain, E. Rozo, A. Saro, J. Weller, F. B. Abdalla, S. Allam, J. Annis, R. Armstrong, A. Benoit-Lévy, G. M. Bernstein, J. E. Carlstrom, A. Carnero Rosell, M. Carrasco Kind, F. J. Castander, I. Chiu, R. Chown, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, T. de Haan, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, J. Estrada, A. E. Evrard, B. Flaugher, P. Fosalba, J. Frieman, E. Gaztanaga, D. Gruen, R. A. Gruendl, W. L. Holzapfel, K. Honscheid, D. J. James, R. Keisler, K. Kuehn, N. Kuropatkin, O. Lahav, M. Lima, J. L. Marshall, M. McDonald, P. Melchior, C. J. Miller, R. Miquel, B. Nord, R. Ogando, Y. Omori, A. A. Plazas, D. Rapetti, C. L. Reichardt, A. K. Romer, A. Roodman, B. R. Saliwanchik, E. Sanchez, M. Schubnell, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Soares-Santos, F. Sobreira, A. Stark, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, J. D. Vieira, A. R. Walker, N. Whitehorn
We detect the kinematic Sunyaev-Zel'dovich (kSZ) effect with a statistical significance of $4.2 \sigma$ by combining a cluster catalogue derived from the first year data of the Dark Energy Survey (DES) with CMB temperature maps from the South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) Survey. This measurement is performed with a differential statistic that isolates the pairwise kSZ signal, providing the first detection of the large-scale, pairwise motion of clusters using redshifts derived from photometric data. By fitting the pairwise kSZ signal to a theoretical template we measure the average central optical depth of the cluster sample, $\bar{\tau}_e = (3.75 \pm 0.89)\cdot 10^{-3}$. We compare the extracted signal to realistic simulations and find good agreement with respect to the signal-to-noise, the constraint on $\bar{\tau}_e$, and the corresponding gas fraction. High-precision measurements of the pairwise kSZ signal with future data will be able to place constraints on the baryonic physics of galaxy clusters, and could be used to probe gravity on scales $ \gtrsim 100$ Mpc.
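For orientation, the differential statistic referred to above is commonly implemented as the pairwise kSZ estimator with a geometric pair weight (e.g. Hand et al. 2012). The sketch below is a minimal Python version under that assumption; it is not the DES x SPT pipeline, which additionally involves aperture photometry, photo-z corrections and covariance estimation, and the example inputs are random placeholders.

```python
import numpy as np

def pairwise_ksz(T, r, nhat, bins):
    """Standard pairwise kSZ estimator binned in comoving cluster separation.

    T    : temperatures extracted at the cluster positions (micro-K)
    r    : comoving distances to the clusters
    nhat : (N, 3) unit vectors towards the clusters
    bins : edges of the comoving-separation bins
    """
    T, r = np.asarray(T, float), np.asarray(r, float)
    cos_th = nhat @ nhat.T                                   # cosine of the pair angle
    i, j = np.triu_indices(len(T), k=1)                      # all unique pairs i < j
    sep = np.sqrt(r[i]**2 + r[j]**2 - 2 * r[i] * r[j] * cos_th[i, j])
    c_ij = (r[i] - r[j]) * (1 + cos_th[i, j]) / (2 * sep)    # geometric pair weight
    dT = T[i] - T[j]
    idx = np.digitize(sep, bins) - 1
    est = np.full(len(bins) - 1, np.nan)
    for b in range(len(bins) - 1):
        m = idx == b
        if m.any():
            est[b] = -np.sum(dT[m] * c_ij[m]) / np.sum(c_ij[m] ** 2)
    return est

# Toy usage with random cluster positions and temperatures (illustrative only):
rng = np.random.default_rng(1)
nhat = rng.normal(size=(300, 3))
nhat /= np.linalg.norm(nhat, axis=1, keepdims=True)
r = rng.uniform(1000.0, 2000.0, 300)      # comoving distances (toy units)
T = rng.normal(0.0, 10.0, 300)            # extracted temperatures (toy micro-K)
print(pairwise_ksz(T, r, nhat, bins=np.linspace(20, 300, 8)))
```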
Joint Measurement of Lensing-Galaxy Correlations Using SPT and DES SV Data (1602.07384)
E. J. Baxter, J. Clampitt, T. Giannantonio, S. Dodelson, B. Jain, D. Huterer, L. E. Bleem, T. M. Crawford, G. Efstathiou, P. Fosalba, D. Kirk, J. Kwan, C. Sánchez, K. T. Story, M. A. Troxel, T. M. C. Abbott, F. B. Abdalla, R. Armstrong, A. Benoit-Lévy, B. A. Benson, G. M. Bernstein, R. A. Bernstein, E. Bertin, D. Brooks, J. E. Carlstrom, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, R. Chown, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, A. E. Evrard, A. Fausti Neto, B. Flaugher, J. Frieman, D. Gruen, R. A. Gruendl, G. Gutierrez, T. de Haan, G. P. Holder, K. Honscheid, Z. Hou, D. J. James, K. Kuehn, N. Kuropatkin, M. Lima, M. March, J. L. Marshall, P. Martini, P. Melchior, C. J. Miller, R. Miquel, J. J. Mohr, B. Nord, Y. Omori, A. A. Plazas, C. L. Reichardt, A. K. Romer, E. S. Rykoff, E. Sanchez, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, A. A. Stark, M. E. C. Swanson, G. Tarle, D. Thomas, A. R. Walker, R. H. Wechsler
We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimeter-wave data from the 2500 square degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. The two lensing-galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favored $\Lambda$CDM cosmological model. It also demonstrates that the joint lensing-galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.
Optical-SZE Scaling Relations for DES Optically Selected Clusters within the SPT-SZ Survey (1605.08770)
A. Saro, S. Bocquet, J. Mohr, E. Rozo, B. A. Benson, S. Dodelson, E. S. Rykoff, L. Bleem, T. M. C. Abbott, F. B. Abdalla, S. Allen, J. Annis, A. Benoit-Levy, D. Brooks, D. L. Burke, R. Capasso, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, I. Chiu, T. M. Crawford, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, J. P. Dietrich, A. E. Evrard, A. Fausti Neto, B. Flaugher, P. Fosalba, J. Frieman, C. Gangkofner, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, S. Grandis, D. Gruen, R. A. Gruendl, N. Gupta, G. Gutierrez, W. L. Holzapfel, D. J. James, K. Kuehn, N. Kuropatkin, M. Lima, J. L. Marshall, M. McDonald, P. Melchior, F. Menanteau, R. Miquel, R. Ogando, A. A. Plazas, D. Rapetti, C. L. Reichardt, K. Reil, A. K. Romer, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, R. C. Smith, M. Soares-Santos, B. Soergel, V. Strazzullo, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, V. Vikram, A. R. Walker, A. Zenteno
May 27, 2016 astro-ph.CO
We study the Sunyaev-Zel'dovich effect (SZE) signature in South Pole Telescope (SPT) data for an ensemble of 719 optically identified galaxy clusters selected from 124.6 deg$^2$ of the Dark Energy Survey (DES) science verification data, detecting a stacked SZE signal down to richness $\lambda\sim20$. The SZE signature is measured using matched-filtered maps of the 2500 deg$^2$ SPT-SZ survey at the positions of the DES clusters, and the degeneracy between SZE observable and matched-filter size is broken by adopting as priors SZE and optical mass-observable relations that are either calibrated using SPT selected clusters or through the Arnaud et al. (2010, A10) X-ray analysis. We measure the SPT signal-to-noise $\zeta$-$\lambda$ relation and two integrated Compton-$y$ $Y_\textrm{500}$-$\lambda$ relations for the DES-selected clusters and compare these to model expectations accounting for the SZE-optical center offset distribution. For clusters with $\lambda > 80$, the two SPT calibrated scaling relations are consistent with the measurements, while for the A10-calibrated relation the measured SZE signal is smaller by a factor of $0.61 \pm 0.12$ compared to the prediction. For clusters at $20 < \lambda < 80$, the measured SZE signal is smaller by a factor of $\sim$0.20-0.80 (between 2.3 and 10~$\sigma$ significance) compared to the prediction, with the SPT calibrated scaling relations and larger $\lambda$ clusters showing generally better agreement. We quantify the required corrections to achieve consistency, showing that there is a richness dependent bias that can be explained by some combination of contamination of the observables and biases in the estimated masses. We discuss possible physical effects, such as contamination from line-of-sight projections or from point sources, larger offsets in the SZE-optical centering or larger scatter in the $\lambda$-mass relation at lower richnesses.
CMB lensing tomography with the DES Science Verification galaxies (1507.05551)
T. Giannantonio, P. Fosalba, R. Cawthon, Y. Omori, M. Crocce, F. Elsner, B. Leistedt, S. Dodelson, A. Benoit-Levy, E. Gaztanaga, G. Holder, H. V. Peiris, W. J. Percival, D. Kirk, A. H. Bauer, B. A. Benson, G. M. Bernstein, J. Carretero, T. M. Crawford, R. Crittenden, D. Huterer, B. Jain, E. Krause, C. L. Reichardt, A. J. Ross, G. Simard, B. Soergel, A. Stark, K. T. Story, J. D. Vieira, J. Weller, T. Abbott, F. B. Abdalla, S. Allam, R. Armstrong, M. Banerji, R. A. Bernstein, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, D. Capozzi, J. E. Carlstrom, A. Carnero Rosell, M. Carrasco Kind, F. J. Castander, C. L. Chang, C. E. Cunha, L. N. da Costa, C. B. D'Andrea, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, A. E. Evrard, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, J. Frieman, D. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, W. L. Holzapfel, K. Honscheid, D. J. James, K. Kuehn, N. Kuropatkin, O. Lahav, T. S. Li, M. Lima, M. March, J. L. Marshall, P. Martini, P. Melchior, R. Miquel, J. J. Mohr, R. C. Nichol, B. Nord, R. Ogando, A. A. Plazas, A. K. Romer, A. Roodman, E. S. Rykoff, M. Sako, B. R. Saliwanchik, E. Sanchez, M. Schubnell, I. Sevilla-Noarbe, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, D. Thomas, V. Vikram, A. R. Walker, R. H. Wechsler, J. Zuntz
Jan. 18, 2016 astro-ph.CO
We measure the cross-correlation between the galaxy density in the Dark Energy Survey (DES) Science Verification data and the lensing of the cosmic microwave background (CMB) as reconstructed with the Planck satellite and the South Pole Telescope (SPT). When using the DES main galaxy sample over the full redshift range $0.2 < z < 1.2$, a cross-correlation signal is detected at $6 \sigma$ and $4\sigma$ with SPT and Planck respectively. We then divide the DES galaxies into five photometric redshift bins, finding significant ($>$$2 \sigma$) detections in all bins. Comparing to the fiducial Planck cosmology, we find the redshift evolution of the signal matches expectations, although the amplitude is consistently lower than predicted across redshift bins. We test for possible systematics that could affect our result and find no evidence for significant contamination. Finally, we demonstrate how these measurements can be used to constrain the growth of structure across cosmic time. We find the data are fit by a model in which the amplitude of structure in the $z<1.2$ universe is $0.73 \pm 0.16$ times as large as predicted in the LCDM Planck cosmology, a $1.7\sigma$ deviation.
Cross-correlation of gravitational lensing from DES Science Verification data with SPT and Planck lensing (1512.04535)
D. Kirk, Y. Omori, A. Benoit-Lévy, R. Cawthon, C. Chang, P. Larsen, A. Amara, D. Bacon, T. M. Crawford, S. Dodelson, P. Fosalba, T. Giannantonio, G. Holder, B. Jain, T. Kacprzak, O. Lahav, N. MacCrann, A. Nicola, A. Refregier, E. Sheldon, K. T. Story, M. A. Troxel, J. D. Vieira, V. Vikram, J. Zuntz, T. M. C. Abbott, F. B. Abdalla, M. R. Becker, B. A. Benson, G. M. Bernstein, R. A. Bernstein, L. E. Bleem, C. Bonnett, S. L. Bridle, D. Brooks, E. Buckley-Geer, D. L. Burke, D. Capozzi, J. E. Carlstrom, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, A. E. Evrard, B. Flaugher, J. Frieman, D. W. Gerdes, D. A. Goldstein, D. Gruen, R. A. Gruendl, K. Honscheid, D. J. James, M. Jarvis, S. Kent, K. Kuehn, N. Kuropatkin, M. Lima, M. March, P. Martini, P. Melchior, C. J. Miller, R. Miquel, R. C. Nichol, R. Ogando, A. A. Plazas, C. L. Reichardt, A. Roodman, E. Rozo, E. S. Rykoff, M. Sako, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, G. Simard, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, R. H. Wechsler, J. Weller
Dec. 14, 2015 astro-ph.CO
We measure the cross-correlation between weak lensing of galaxy images and of the cosmic microwave background (CMB). The effects of gravitational lensing on different sources will be correlated if the lensing is caused by the same mass fluctuations. We use galaxy shape measurements from 139 deg$^{2}$ of the Dark Energy Survey (DES) Science Verification data and overlapping CMB lensing from the South Pole Telescope (SPT) and Planck. The DES source galaxies have a median redshift of $z_{\rm med} {\sim} 0.7$, while the CMB lensing kernel is broad and peaks at $z{\sim}2$. The resulting cross-correlation is maximally sensitive to mass fluctuations at $z{\sim}0.44$. Assuming the Planck 2015 best-fit cosmology, the amplitude of the DES$\times$SPT cross-power is found to be $A = 0.88 \pm 0.30$ and that from DES$\times$Planck to be $A = 0.86 \pm 0.39$, where $A=1$ corresponds to the theoretical prediction. These are consistent with the expected signal and correspond to significances of $2.9 \sigma$ and $2.2 \sigma$ respectively. We demonstrate that our results are robust to a number of important systematic effects including the shear measurement method, estimator choice, photometric redshift uncertainty and CMB lensing systematics. Significant intrinsic alignment of galaxy shapes would increase the cross-correlation signal inferred from the data; we calculate a value of $A = 1.08 \pm 0.36$ for DES$\times$SPT when we correct the observations with a simple IA model. With three measurements of this cross-correlation now existing in the literature, there is not yet reliable evidence for any deviation from the expected LCDM level of cross-correlation, given the size of the statistical uncertainties and the significant impact of systematic errors, particularly IAs. We provide forecasts for the expected signal-to-noise of the combination of the five-year DES survey and SPT-3G.
Eight Ultra-faint Galaxy Candidates Discovered in Year Two of the Dark Energy Survey (1508.03622)
The DES Collaboration: A. Drlica-Wagner, K. Bechtol, E. S. Rykoff, E. Luque, A. Queiroz, Y.-Y. Mao, R. H. Wechsler, J. D. Simon, B. Santiago, B. Yanny, E. Balbinot, S. Dodelson, A. Fausti Neto, D. J. James, T. S. Li, M. A. G. Maia, J. L. Marshall, A. Pieres, K. Stringer, A. R. Walker, T. M. C. Abbott, F. B. Abdalla, S. Allam, A. Benoit-Levy, G. M. Bernstein, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, M. Crocce, L. N. da Costa, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, A. E. Evrard, D. A. Finley, B. Flaugher, P. Fosalba, J. Frieman, E. Gaztanaga, D. W. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, K. Honscheid, K. Kuehn, N. Kuropatkin, O. Lahav, P. Martini, R. Miquel, B. Nord, R. Ogando, A. A. Plazas, K. Reil, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Tucker, V. Vikram, W. Wester, Y. Zhang, J. Zuntz
Nov. 6, 2015 hep-ph, astro-ph.GA, astro-ph.IM, astro-ph.HE
We report the discovery of eight new ultra-faint dwarf galaxy candidates in the second year of optical imaging data from the Dark Energy Survey (DES). Six of these candidates are detected at high confidence, while two lower-confidence candidates are identified in regions of non-uniform survey coverage. The new stellar systems are found by three independent automated search techniques and are identified as overdensities of stars, consistent with the isochrone and luminosity function of an old and metal-poor simple stellar population. The new systems are faint ($M_V$ > -4.7 mag) and span a range of physical sizes (17 pc < $r_{1/2}$ < 181 pc) and heliocentric distances (25 kpc < D < 214 kpc). All of the new systems have central surface brightnesses consistent with known ultra-faint dwarf galaxies ($\mu$ < 27.5 mag arcsec$^{-2}$). Roughly half of the DES candidates are more distant, less luminous, and/or have lower surface brightnesses than previously known Milky Way satellite galaxies. Most of the candidates are found in the southern part of the DES footprint close to the Magellanic Clouds. We find that the DES data alone exclude (p < 0.001) a spatially isotropic distribution of Milky Way satellites and that the observed distribution can be well, though not uniquely, described by an association between several of the DES satellites and the Magellanic system. Our model predicts that the full sky may hold ~100 ultra-faint galaxies with physical properties comparable to the DES satellites and that 20-30% of these would be spatially associated with the Magellanic Clouds.
Joint Analysis of Galaxy-Galaxy Lensing and Galaxy Clustering: Methodology and Forecasts for DES (1507.05353)
Y. Park, E. Krause, S. Dodelson, B. Jain, A. Amara, M. R. Becker, S. L. Bridle, J. Clampitt, M. Crocce, P. Fosalba, E. Gaztanaga, K. Honscheid, E. Rozo, F. Sobreira, C. Sánchez, R. H. Wechsler, T. Abbott, F. B Abdalla, S. Allam, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, L. N. da Costa, D. L. DePoy, S. Desai, J. P. Dietrich, P. Doel, T. F. Eifler, A. Fausti Neto, E. Fernandez, D. A. Finley, B. Flaugher, D. W. Gerdes, D. Gruen, R. A. Gruendl, G. Gutierrez, D. J. James, S. Kent, K. Kuehn, N. Kuropatkin, M. Lima, M. A. G. Maia, J. L. Marshall, P. Melchior, C. J. Miller, R. Miquel, R. C. Nichol, R. Ogando, A. A. Plazas, N. Roe, A. K. Romer, E. S. Rykoff, E. Sanchez, V. Scarpine, M. Schubnell, I. Sevilla-Noarbe, M. Soares-Santos, E. Suchyta, M. E. C. Swanson, G. Tarle, J. Thaler, V. Vikram, A. R. Walker, J. Weller, J. Zuntz
The joint analysis of galaxy-galaxy lensing and galaxy clustering is a promising method for inferring the growth function of large scale structure. This analysis will be carried out on data from the Dark Energy Survey (DES), with its measurements of both the distribution of galaxies and the tangential shears of background galaxies induced by these foreground lenses. We develop a practical approach to modeling the assumptions and systematic effects affecting small scale lensing, which provides halo masses, and large scale galaxy clustering. Introducing parameters that characterize the halo occupation distribution (HOD), photometric redshift uncertainties, and shear measurement errors, we study how external priors on different subsets of these parameters affect our growth constraints. Degeneracies within the HOD model, as well as between the HOD and the growth function, are identified as the dominant source of complication, with other systematic effects sub-dominant. The impact of HOD parameters and their degeneracies necessitate the detailed joint modeling of the galaxy sample that we employ. We conclude that DES data will provide powerful constraints on the evolution of structure growth in the universe, conservatively/optimistically constraining the growth function to 7.9%/4.8% with its first-year data that covered over 1000 square degrees, and to 3.9%/2.3% with its full five-year data that will survey 5000 square degrees, including both statistical and systematic uncertainties.
Antioxidant activity of unmodified kraft and organosolv lignins to be used as sustainable components for polyurethane coatings
Stephanie E. Klein,
Jessica Rumpf,
Abla Alzagameem,
Matthias Rehahn &
Margit Schulze (ORCID: orcid.org/0000-0002-8975-1753)
Journal of Coatings Technology and Research, volume 16, pages 1543–1552 (2019)
The antioxidative capacity of four different kraft lignins (KL) and one additional organosolv lignin (OSL) was studied using the Folin–Ciocalteu (FC) assay. To do so, the FC assay procedure was adapted and optimized for lignin analysis. Different solvents and bases were tested; DMSO as solvent and saturated sodium carbonate (Na2CO3) for pH adjustment gave the best results. An absorption wavelength of 740 nm was chosen because it yielded the highest coefficients of determination. Daily calibration is recommended to guarantee accuracy. The antioxidant capacity and related radical scavenging activity of the various lignins were correlated with the biomass nature (softwood vs grasses) and the pulping method (kraft vs organosolv). The results show higher antioxidant activity for kraft than for organosolv lignins. First lignin-derived polyurethane coatings were prepared using the unmodified kraft lignin. The films, prepared via spin coating, show high flexibility and transparency.
The use of lignocellulosic feedstock (LCF) in biorefineries has been recognized as a promising approach to produce valuable products such as fuels, power, and chemicals (Fig. 1).1,2 Today, there is a rapidly increasing interest in lignin as a substitute for fossil-based phenol derivatives in polyurethanes and phenol-based resins for applications in construction3 and packaging, but also as hydrogels for biomedicine.4 In addition to the utilization of unmodified lignins, various chemical modification methods are currently under investigation including oxidative and reductive depolymerization and fragmentation methods, comprehensively reviewed by Schutyser and Laurichesse.5,6
Schematic process of lignocellulosic feedstock exploitation (C: cellulose, HC: hemicellulose, L: lignin) Copyright 2018 MDPI International2
Due to their polyphenolic structure consisting of three monolignols and corresponding units (H, G, and S), lignins possess a distinct antioxidant activity (Fig. 2).
Structural monolignol-derived units H, G, and S of lignin
Thus, kraft lignin from wood sources in the pulp industry was reported to be as efficient as vitamin E in protecting corn oil from oxidation.7 Most antioxidant effects of lignins are considered to derive from the scavenging action of their phenolic structures on oxygen-containing reactive free radicals. As their free radical scavenging ability is facilitated by their phenolic OH groups, the total phenolic content can be used as a basis for rapid screening of antioxidant activity.8,9 Very recently, we reported an antioxidative capacity study using the DPPH (2,2-diphenyl-1-picrylhydrazyl) assay showing correlations between antioxidant activity and minor structural differences of kraft lignins purified via selective extraction.10 The highest activity was found for lignin fractions with the narrowest molecular weight distribution according to SEC analysis. The antioxidant activity measured via DPPH inhibition of the unmodified kraft lignin fractions was above the values reported in the literature, including commercial BHT, confirming that technical black liquor can be used without further modification.10
Here, we present antioxidant activity studies of kraft lignins isolated at different pH values and one organosolv lignin using the Folin–Ciocalteu assay. Results are discussed regarding the biomass source (wood vs grass) and pulping process (kraft vs organosolv). Based on the studied kraft lignin, first polyurethane coatings were prepared showing high homogeneity and transparency.
Chemicals and reagents
Black liquor obtained from kraft pulping (according to the supplier using a mixture of soft and hard wood) was obtained from Zellstoff-und Papierfabrik Rosenthal GmbH (Blankenstein, Germany, MERCER group).
Sulfuric acid was purchased from Fisher Scientific in Loughborough, UK. Absolute ethanol and sodium carbonate were obtained from VWR Chemicals, Germany. The Folin–Ciocalteu reagent, gallic acid, DMSO, sodium hydroxide, and 4,4-diphenylmethanediisocyanate (MDI) were received from Merck in Darmstadt, Germany. Methanol, tetrahydrofuran (THF), triethylamine (TEA), and potassium bromide were obtained from Carl Roth GmbH in Karlsruhe, Germany. PEG425 was obtained from Sigma-Aldrich in Steinheim, Germany. All chemicals were of reagent grade and used without further purification.
Lignin isolation
Black liquor and kraft lignin: The kraft lignin (KL) was extracted by acidic precipitation from black liquor. First, about 450 mL of black liquor was filtered with a vacuum filter and the filter cake was discarded. The 400 mL of filtrate obtained was heated to 50–60°C. While stirring, 160 mL sulfuric acid (25 vol%) was added. The solution was stirred for another hour without further heating and then vacuum filtered. The filter cake was washed with distilled water and sulfuric acid (25 vol%) until the desired pH value was reached (pH 2 to pH 5). Finally, the precipitated lignin was dried in a freeze dryer for 48 h. The four kraft lignins used were precipitated at pH 2, pH 3, pH 4, and pH 5. The organosolv lignin sample (OSL) used for comparison was prepared according to an earlier published procedure.11,12 Briefly, approx. 50 g Miscanthus × giganteus passing a 0.5-mm sieve and 400 mL ethanol (80% υ/υ) were mixed and heated at 170°C for 90 min under continuous stirring in a Parr reactor with a Parr 4848 reactor controller. Afterward, the Miscanthus biomass was vacuum filtered and washed five times with 50 mL ethanol (80% υ/υ). Three volumes of water were added to the filtrate to precipitate the organosolv lignin (OSL), which was collected by centrifugation at 3500 rpm for 5 min and washed three times with distilled water. Finally, the OSL was freeze-dried for 72 h.
Lignin–polyurethane synthesis
Lignin is used to partially replace the polyol component. Lignin–polyurethane (LPU) films were prepared using different NCO/OH ratios (1.2–2.5) according to Pan and Saddler.13 Lignin-based polyurethane coatings were prepared by dissolving lignin in THF and PEG425 in an ultrasonic bath, followed by the addition of 4,4-diphenylmethanediisocyanate (MDI) and triethylamine (TEA). Thus, 1 g lignin was dissolved in 6 mL THF, MDI was added, and the mixture was transferred onto a PE transparency and dried for 1 h at room temperature. Finally, the prefilms were cured at 35°C for 3 h to obtain the final lignin PU films.14
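The NCO/OH index fixes how much isocyanate is added relative to the total hydroxyl content contributed by lignin and PEG425. The sketch below is a minimal stoichiometry calculation, not the authors' recipe: the 4,4'-MDI molar mass (250.25 g mol−1, two NCO groups per molecule) is standard, while the example OH numbers (4.0 mmol g−1 for the lignin, within the 2.67–5.35 mmol g−1 range reported in this work, and ~4.7 mmol g−1 for PEG425 as a diol of Mn ≈ 425) are illustrative assumptions.

```python
MDI_MOLAR_MASS = 250.25   # g/mol for 4,4'-MDI
NCO_PER_MDI = 2           # isocyanate groups per MDI molecule

def mdi_mass_for_index(m_lignin_g, oh_lignin_mmol_g,
                       m_polyol_g, oh_polyol_mmol_g, nco_oh_index):
    """Mass of MDI (g) needed to reach a target NCO/OH index."""
    oh_total_mmol = m_lignin_g * oh_lignin_mmol_g + m_polyol_g * oh_polyol_mmol_g
    nco_needed_mol = nco_oh_index * oh_total_mmol / 1000.0
    return nco_needed_mol * MDI_MOLAR_MASS / NCO_PER_MDI

# Example: 1 g lignin (assumed 4.0 mmol OH/g) + 1 g PEG425 (assumed 4.7 mmol OH/g)
# at an NCO/OH index of 1.7 (within the 1.2-2.5 range used for the films).
print(f"{mdi_mass_for_index(1.0, 4.0, 1.0, 4.7, 1.7):.2f} g MDI")
```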
FTIR and UV–Vis spectroscopy
FTIR spectra of lignin were recorded on a JASCO FTIR 410 spectrometer in the range of 4000–400 cm−1 using a KBr disk containing 1% finely ground sample. Each spectrum was recorded over 64 scans with a resolution of 4 cm−1. A PerkinElmer Lambda 35 UV–Vis spectrophotometer was used for recording the UV–Vis spectra. To obtain a concentration of 50 µg mL−1, 5 mg lignin was dissolved in 100 mL of 0.1 mol L−1 NaOH solution.
The weight average (Mw) and number average (Mn) molecular weights of the lignins as well as their polydispersity (PDI) were determined by size exclusion chromatography (PSS SECurity2 GPC System). Tetrahydrofuran (THF) was used as the mobile phase with a run time of 30 min and an injection volume of 100 µL. Polystyrene standards were used for the calibration at different molecular weights.
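For reference, the averages reported here follow directly from the calibrated SEC slice data: with detector heights proportional to the mass concentration in each slice, Mn = sum(h_i) / sum(h_i/M_i) and Mw = sum(h_i*M_i) / sum(h_i). The Python sketch below uses invented slice values purely for illustration; it is not the output of the PSS SECurity2 software.

```python
import numpy as np

def sec_averages(heights, molar_masses):
    """Number- and weight-average molar mass and PDI from SEC slice data.

    heights      : detector signal of each elution slice (proportional to mass concentration)
    molar_masses : molar mass assigned to each slice via the polystyrene calibration
    """
    h = np.asarray(heights, dtype=float)
    M = np.asarray(molar_masses, dtype=float)
    Mn = h.sum() / (h / M).sum()        # number average
    Mw = (h * M).sum() / h.sum()        # weight average
    return Mn, Mw, Mw / Mn              # PDI = Mw/Mn

# Toy chromatogram (hypothetical slices, not measured data):
Mn, Mw, pdi = sec_averages([1, 4, 8, 5, 2], [800, 1500, 3000, 6000, 12000])
print(f"Mn = {Mn:.0f} g/mol, Mw = {Mw:.0f} g/mol, PDI = {pdi:.2f}")
```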
OH number analysis
The determination of hydroxyl number was carried out with lignins following ISO 14900:2001(E) developed for polyether polyols with steric hindrance.15 Each lignin sample was boiled under reflux in 25 mL of acetylation reagent solution with a blank sample simultaneously under the same conditions.
After 3 h at reflux, the flasks were left to cool down to room temperature. Twenty-five milliliters each of sample and blank were made up with water to 100 mL and titrated with sodium hydroxide (0.5 mol L−1). The split-up of the acetylated samples and blanks allowed a triple determination via titration. Sample and blank consumed different volumes of titrant, and this difference was used to determine the total hydroxyl content, as sketched below.
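Under the usual acetylation back-titration relation, hydroxyl number (mg KOH g−1) = (V_blank − V_sample) · c_NaOH · 56.1 / m_sample, the calculation looks as follows. The function name and the example volumes are illustrative assumptions, not values from this work; only the 0.5 mol L−1 NaOH concentration is taken from the text.

```python
MM_KOH = 56.1  # g/mol, used to express the result in mg KOH per g

def hydroxyl_number(v_blank_ml, v_sample_ml, c_naoh_mol_l, m_sample_g):
    """Hydroxyl number in mg KOH per g from acetylation back-titration.

    v_blank_ml / v_sample_ml : NaOH volume consumed by the blank and the acetylated sample
    c_naoh_mol_l             : NaOH concentration (0.5 mol/L in this work)
    m_sample_g               : lignin mass contained in the titrated aliquot
    """
    return (v_blank_ml - v_sample_ml) * c_naoh_mol_l * MM_KOH / m_sample_g

# Illustrative numbers only (not measured values):
oh_mg_koh = hydroxyl_number(v_blank_ml=28.0, v_sample_ml=22.6,
                            c_naoh_mol_l=0.5, m_sample_g=0.5)
print(f"{oh_mg_koh:.0f} mg KOH/g = {oh_mg_koh / MM_KOH:.2f} mmol OH/g")
```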
Folin–Ciocalteu (FC) assay
The FC assay was used to determine the total phenolic content based on a redox reaction. In our studies, the literature procedures reported by García, dos Santos, and Kiewning, respectively, were optimized and adapted to obtain a test method (Method 3) suitable for kraft and organosolv lignins, see Table 1.16–18 In detail, the methods differ in the amounts of the individual chemicals, the base used for pH adjustment, the incubation temperature, and the absorption wavelength. In all cases, gallic acid is used as reference.19
Table 1 Parameters used for the antioxidant activity measurements according to the FC assay
The measuring procedure was the following: Reagents and samples according to Table 1 were kept either in the drying oven at 40°C or at room temperature before measuring the absorbance with a UV–Vis DR6000 Hach spectrometer against a blank (the same reagents but solvent instead of sample). The results were expressed as gallic acid equivalents (GAE), calculated from a calibration curve with gallic acid in DMSO with six different concentrations. Afterward, the total phenol content (TPC) of the lignins was calculated according to equation (1):
$$\text{TPC}\ (\%) = \frac{\text{GAE}\ (\text{mg L}^{-1})}{c_{\text{Lignin}}\ (\text{mg L}^{-1})} \times 100\%$$
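Equation (1) is evaluated after converting the measured absorbance of a lignin solution into a gallic acid equivalent via the linear calibration at 740 nm. The Python sketch below shows the full chain (calibration fit, GAE, TPC with triplicate statistics); all absorbance values in it are invented placeholders, not the measured data of this work.

```python
import numpy as np

# Gallic acid calibration: six levels, 10-100 mg/L in DMSO (absorbances are illustrative).
conc = np.array([10, 25, 40, 60, 80, 100], dtype=float)     # mg/L
absorbance = np.array([0.11, 0.27, 0.43, 0.65, 0.86, 1.07])

slope, intercept = np.polyfit(conc, absorbance, 1)           # linear calibration at 740 nm

def tpc_percent(a_sample, c_lignin_mg_l):
    """Equation (1): GAE from the calibration line, then TPC in % of the lignin mass."""
    gae_mg_l = (a_sample - intercept) / slope
    return gae_mg_l / c_lignin_mg_l * 100.0

# Triplicate absorbances of a 100 mg/L lignin solution (again illustrative):
replicates = [tpc_percent(a, 100.0) for a in (0.36, 0.38, 0.37)]
print(f"TPC = {np.mean(replicates):.1f} +/- {np.std(replicates, ddof=1):.1f} %")
```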
Atomic force microscopy (AFM)
Imaging measurements were taken on the PU films with a ThermoMicroscopes scanning probe microscope driven by SPM software (ThermoMicroscopes ProScan version 2.1) in tapping mode at scan rates of 0.3 Hz and 0.5 Hz, using commercially available silicon contact MPP-Rotated probes purchased from Bruker (Bruker AFM probes; nominal tip radius 8 nm, front angle 15°, back angle 25°) at room temperature (20–22°C).
Transmission electron microscopy (TEM) experiments were performed using a Zeiss EM 10 electron microscope (Oberkochen, Germany) operating at 60 kV, equipped with a CCD camera (Tröndle, Moorenweis, Germany), and a bright-field microscope (JEOL, Tokyo, Japan). The field emission gun was operated at a nominal acceleration voltage of 200 kV.
Characterization of lignins
FTIR spectra (Fig. 3) show significant differences in lignin structure for kraft (KL) vs organosolv-derived (OSL) lignins. In detail, kraft lignins display a stronger C=O stretch (1704 cm−1), while in OSL the O–CH3 stretching (2849 cm−1) is more pronounced, which supports the observation of more signals belonging to S units (1329, 1124, 833 cm−1) in the OSL (isolated from Miscanthus). KL consists mainly of G units (1271, 857, 815 cm−1), as expected for softwood.2,5,10 The assignment of the most important IR absorption bands is summarized in Table 2.
Comparison of the FTIR spectra of the investigated lignins with red lines indicating the major differences
Table 2 Summary of the most important FTIR signals. Comparison of kraft lignin (KL) isolated at pH 2 and organosolv lignin (OSL)
The UV–Vis spectra of the different lignins are very similar to each other, showing two absorption maxima (Fig. 4). One significant band appears at 220–230 nm, representing aromatic groups in general. The second maximum occurs at 280–300 nm, indicating nonconjugated phenolic fragments such as sinapyl, coniferyl, and p-coumaryl alcohols. Moreover, there is a shoulder between 240 and 250 nm assigned to conjugated phenolic groups. The UV absorption bands, in agreement with the FTIR and SEC results, indicate a softwood lignin containing mainly G units. In the case of hardwood lignins with a higher ratio of S units, the UV–Vis absorption bands are shifted to lower wavelengths. In principle, the results confirm the literature data for kraft lignin absorption.20–22 Slight differences in the absorption curves are attributed to the different isolation procedures working in aqueous media.
UV–Vis spectra of lignins precipitated at different pH values
Weight average (Mw) and number average (Mn) molecular weights and corresponding polydispersity (Mw/Mn) determined via size exclusion chromatography (SEC) as well as the OH content according to ISO 14900 are summarized in Table 3.
Table 3 Characterization of the kraft lignins: molecular weight Mw, Mn and polydispersity (PDI), and OH content according to ISO 14900
The hydroxyl numbers for the four lignins varied between 2.67 mmol g−1 (150 mg KOH g−1) for KL pH 2 and 5.35 mmol g−1 (300 mg KOH g−1). In addition, Fig. 5 shows the relationship between hydroxyl number and pH value of the isolated lignins: The higher the pH value, the higher the hydroxyl number. We observed the optimal PU crosslinking for lignin precipitated at pH 4.
Dependence of the hydroxyl number (mmol g−1) of kraft lignin on the pH value at which the lignin was precipitated
Folin–Ciocalteu assay
According to published studies so far, the antioxidant capacity of lignins has mainly been investigated using the DPPH assay and the Folin–Ciocalteu method: The DPPH assay uses the redox reaction of 2,2-diphenyl-1-picrylhydrazyl (DPPH) with an antioxidant, resulting in reduced color intensity proportional to the antioxidant concentration.23 The Folin–Ciocalteu assay (or total phenolics assay) is used to measure the TPC of natural products. In detail, the Folin–Ciocalteu reagent (a mixture of phosphomolybdate and phosphotungstate) reacts with an antioxidant, changing the color intensity proportionally to the antioxidant concentration. Gallic acid is used as a reference compound, and results are expressed as GAE or TPC.24
Thus, Dizhbite et al. developed structure–property relationships for antioxidant activity, proposing that the π-conjugation systems of lignin operate as catalysts/activators in the interaction with DPPH radicals, while heterogeneity and polydispersity critically decrease the antioxidant efficiency.25 Using electron paramagnetic resonance (EPR) spectroscopy to characterize paramagnetic polyconjugated clusters in lignin samples, they confirmed that these clusters lead to a linear increase in antioxidant capacity, whereas the aromatic OH and OCH3 contents were less influential. Santos et al. studied isolation and purification effects including the solvent influence, comparing water and organic solvents vs alkaline solution. They found that lignins with a low percentage of phenols showed the highest elimination of DPPH radicals.26
Here, the antioxidant activity of kraft and organosolv lignins is studied using the FC assay, following a modified procedure that adapts the assay (originally developed for small phenolic derivatives) to rather complex macromolecules such as kraft and organosolv lignins.17–19
According to solubility tests (Table 4), DMSO was chosen as suitable solvent for KL, OSL, and gallic acid. Furthermore, the absorption maxima reported in the literature were verified to find the most appropriate absorption wavelength.
Table 4 Solubility of lignins in suggested solvents
The results are shown in Table 5. For method 1, the curves are similar, but differ significantly in their slope (Fig. 6). For gallic acid, the absorption maximum is 742 nm, for KL it is between 737 and 746 nm, and the OSL has a maximum in the range of 751–768 nm. The resulting absorptions are in line with data reported for lignin studies using the FC assay.27
Table 5 Absorption maxima of lignins studied via different methods
Wavelength maxima of gallic acid, KL, and OSL according to Method 1
The recorded UV–Vis spectra from Method 2 are shown in Fig. 7. Significant changes were observed compared to Method 1: The absorption maxima of both lignins were shifted to lower wavelengths (450–460 nm for KL, 460–464 nm for OSL). For both lignins, the absorption increases at higher wavelengths (> 700 nm), resulting in a greenish color. Due to the different curves of gallic acid and the lignin, this method is not suitable for measuring the absorption, as there is no consistent maximum.
Thus, Method 3 was developed based on these preliminary results. The quantities of chemicals were kept identical to Method 2, as well as the order in which they were added. Instead of 0.1 molar NaOH, a saturated solution of Na2CO3 was added. The solutions obtained after incubation at room temperature are shown in Fig. 8, in comparison with solutions with NaOH addition: It can be seen that the blank is again colorless, while both KL and gallic acid have a dark blue color when adding Na2CO3. With NaOH, the blank is yellow, while gallic acid and KL have a green-yellowish color. This yellowish color indicates that no reaction of the FC reagent took place, as a blank solution quickly fades from yellow to colorless unless the reagent is partially reduced. However, if a reaction with a reducing agent takes place, the originally yellow color turns blue.28
Comparison of samples (from left to right): blank, gallic acid, and KL in NaOH (1) and Na2CO3 (2)
The recorded spectra from Method 2 and Method 3 are shown in Fig. 9. There is a certain similarity of the curves for Method 1 and Method 3: The absorption maxima are just slightly shifted (bathochromic shift). Both methods use Na2CO3 as base, but in different amounts. Since the absorption maxima are generally very broad, it can be concluded that the results are comparable despite the lower batch volume and the different proportions of the chemicals used.
Comparison of the UV–Vis absorption maxima of gallic acid and kraft lignin in DMSO in Method 2 (yellow and red) vs Method 3 (gray and blue) (Color figure online)
The curves of the solutions according to Method 2 with NaOH have a different course with no maximum in the expected range. Based on these observations, Na2CO3 appears to be the better choice resulting in absorption maxima for both gallic acid and lignins.
The differences in curves and colors are due to different bases and corresponding pH. The solutions with NaOH have a pH of approx. 12.8, while solutions with Na2CO3 have a pH of approx. 10.5. The optimum pH of 10 for the reaction is only achieved by the addition of Na2CO3.29,30 Since the reaction is faster in the alkaline than in the acidic medium, it can be assumed that a reaction time of 30 min is insufficient for Method 2.19,31
To determine the TPC of the lignins, six calibration levels in the range of 10–100 mg L−1 gallic acid in DMSO and one sample solution of each lignin with a concentration of 100 mg L−1 were measured at 740 nm, both after incubation at room temperature and at 40°C, as this wavelength gave the highest R2 values for the calibration curve. All measurements were taken in triplicate. Standard deviations of the TPC values for the different lignins are summarized in Table 6 and illustrated in Fig. 10. It is noticeable that the TPCs measured after incubation at 40°C show a lower standard deviation for the majority of the lignins compared to the measurements after incubation at room temperature. One explanation might be that the samples in the drying cabinet are exposed to a more constant temperature, whereas the room temperature varies with weather and time of day. Nevertheless, the same tendencies can be observed both at room temperature and at 40°C: KL pH 2 has the highest TPC, followed by KL pH 5, KL pH 4, and KL pH 3. Given the standard deviations, however, statistically significant differences cannot be established between the KLs. The OSL has by far the lowest TPC and thus the weakest antioxidant capacity.
Table 6 Mean and standard deviation of the TPC of the investigated lignins for measurements at 40°C and at room temperature
Comparison of the TPC of the investigated lignins for measurement at 40°C and at room temperature
Furthermore, there is no correlation between the TPC and the pH value of KL precipitation. Similar results are reported by dos Santos et al. and Faustino et al., who also could not identify any relationship between pH and TPC.17,32 Evidently, the precipitation pH does not affect the total phenol content or the antioxidant capacity of the lignin.
Comparing the TPCs with the literature values of dos Santos et al., whose original method was modified in this work, slight differences can be detected.17 In detail, those authors precipitated KL from black liquor by means of H2SO4 at pH 2, pH 4, and pH 6. Their TPC is (29.61 ± 1.61)% for KL pH 2 and (19.95 ± 0.74)% for KL pH 4, compared to (33.5 ± 1.9)% for KL pH 2 and (30.4 ± 2.6)% for KL pH 4 determined here. In both cases, the TPC of KL pH 2 is higher than the TPC of KL pH 4, but the difference between the TPCs determined here is significantly smaller. These differences might be caused by the raw materials used, as our biomass is a mixture of hard and soft wood according to the supplier, but small differences in the black liquor treatment and further lignin processing steps also influence the lignin purity and thus the TPC.
According to García et al., lignins precipitated under acidic conditions should have a higher percentage of total phenols, as they contain fewer impurities and a higher proportion of phenolic components resulting from cleavage of β–O–4 bonds during the kraft process.16 However, the delignification via the kraft process also causes the depolymerization of polysaccharides, which can lead to interferences by also reducing the FC reagent. Therefore, results should only be considered as relative values in order to compare the various lignins to each other.
Lignin–polyurethane coatings
Finally, first lignin-derived polyurethane coatings were successfully produced in an efficient one-step synthesis using unmodified KL as an environmentally benign component at contents of up to 80 wt% (Fig. 11).
Schematic urethane linkage formation for the reaction of 4,4-diphenylmethanediisocyanate (MDI) and lignin (with R = monolignol functionalities)
Transparent, homogeneous films of high flexibility resulted from lignins isolated at pH 4, possessing a temperature resistance up to 160°C. Swelling tests revealed resistance against water and THF. Swelling in DMSO depends on the NCO/OH index, the pH of precipitation, and the catalyst used for PU preparation.14 According to AFM studies, the surface roughness is between 10 and 28 nm (Fig. 12).
Left: AFM image of the coating surface. Right: flexible LPU coating; according to scanning electron microscopy, the thickness of the spin-coated films ranges between 150 and 160 µm14
Lignin-derived polyurethane coatings were prepared on different substrate materials: metal, wood, and glass (Fig. 13). All samples were prepared with lignin isolated at pH 4 (see Table 3). In addition, LPU coatings were applied onto a structured surface (Fig. 14). Transmission electron microscopy confirmed the homogeneity of the lignin-derived films. In ongoing studies, the antioxidant and antimicrobial capacity of lignin-derived polyurethanes is being investigated.
Lignin-derived polyurethane coatings on different surfaces: metal, wood, and glass (from left to right). Samples prepared with lignin isolated at pH 4 (see Table 3; NCO/OH 1.7 using Desmodur® and 20 wt% Lupranol®)
TEM figure of a LPU coating prepared on a structured surface (bar 2500 nm)
The structure of four kraft lignins (KL), isolated at different pH values, and one organosolv lignin (OSL) was analyzed and compared using FTIR and UV–Vis spectroscopy. Molar masses and polydispersities differ depending on the biomass (wood vs grass) and the pulping process (kraft vs organosolv). The total phenolic content was determined using an optimized Folin–Ciocalteu assay. The results reveal higher TPC values for kraft lignin compared to organosolv lignin, supporting recently published results for lignins purified via selective solvent extraction. Additional studies using alternative test methods (e.g., DPPH) are ongoing to confirm and verify the obtained results. Using the kraft lignin, first polyurethane coatings of high transparency and flexibility were prepared. In contrast to published studies, all KLs were isolated at room temperature from aqueous solution. The lignin content in the LPU coatings could be increased up to 80 wt%. Ongoing studies focus on the antioxidant behavior and mechanical properties of the LPU coatings.
AFM:
Atomic force microscopy
BHT:
Butyl hydroxy toluene
DPPH:
2,2-Diphenyl-1-picrylhydrazyl
FC:
Folin–Ciocalteu
GAE:
Gallic acid equivalent
KL:
Kraft lignin
LCF:
Lignocellulose feedstock
MDI:
4,4-Diphenylmethanediisocyanate
OSL:
Organosolv lignin
TEM:
Transmission electron microscopy
TPC:
Total phenol content
Kamm, B, Kamm, M, Hirth, T, Schulze, M, "Lignocelluloses Based Chemical Products and Product Family Trees." In: Kamm, M, Kamm, B, Gruber, PC (eds.) Biorefineries-Industrial Processes and Products, pp. 97–150. Wiley-VCH, Weinheim, Germany (2006)
Alzagameem, A, El Khaldi-Hansen, B, Kamm, B, Schulze, M, "Lignocellulosic Biomass for Energy, Biofuels, Biomaterials, and Chemicals." In: Vaz, S, Jr (ed.) Biomass and Green Chemistry, pp. 95–132. Springer, Basel (2018) https://doi.org/10.1007/978-3-319-66736-2
Rinaldi, R, Jastrzebski, R, Clough, MT, "Paving the Way for Lignin Valorisation: Recent Advances in Bioengineering, Biorefining and Catalysis." Angew. Chem. Int. Ed., 55 2–54 (2016). https://doi.org/10.1002/anie.201510351
Witzler, M, Alzagameem, A, Bergs, M, ElKhaldi-Hansen, B, Klein, SE, Hielscher, D, Kamm, B, Kreyenschmidt, J, Tobiasch, E, Schulze, M, "Lignin-Derived Biomaterials for Drug Release and Tissue Engineering." Molecules, 23 1885 (2018). https://doi.org/10.3390/molecules23081885
Schutyser, W, Renders, T, Van den Bosch, S, Koelewijn, SF, Beckham, GT, Sels, BF, "Chemicals from Lignin: An Interplay of Lignocellulose Fractionation, Depolymerisation, and Upgrading." Chem. Soc. Rev., 47 852–908 (2018). https://doi.org/10.1039/c7cs00566k
Laurichesse, S, Avérous, L, "Chemical Modification of Lignins: Towards Biobased Polymers." Prog. Polym. Sci., 39 1266–1290 (2014). https://doi.org/10.1016/j.progpolymsci.2013.11.004
Dong, X, Dong, M, Lu, Y, Turley, A, Jin, T, Wu, C, "Antimicrobial and Antioxidant Activities of Lignin from Residue of Corn Stover to Ethanol Production." Ind. Crop. Prod., 34 1629–1634 (2011). https://doi.org/10.1016/j.indcrop.2011.06.002
Baba, SA, Malik, SA, "Evaluation of Antioxidant and Antibacterial Activity of Methanolic Extracts of Gentiana Kurroo Royle." J. Biol. Sci., 21 (5) 493–498 (2014). https://doi.org/10.1016/j.sjbs.2014.06.004
Amzad Hossain, M, Shah, M, "A Study on the Total Phenols Content and Antioxidant Activity of Essential Oil and Different Solvent Extracts of Endemic Plant Merremia Borneensis." Arab. J. Chem., 8 (1) 66–71 (2015). https://doi.org/10.1016/j.arabjc.2011.01.007
Alzagameem, A, ElKhaldi-Hansen, B, Büchner, D, Larkins, M, Kamm, B, Witzleben, S, Schulze, M, "Lignocellulosic Biomass as Source for Lignin-Based Environmentally Benign Antioxidants." Molecules, 23 2664–2688 (2018). https://doi.org/10.3390/molecules23102664
Obama, P, Ricochon, G, Muniglia, L, Brosse, N, "Combination of Enzymatic Hydrolysis and Ethanol Organosolv Pretreatments: Effect of Lignin Structures, Delignification Yields and Cellulose-to-Glucose Conversion." Bioresource Technol., 112 156 (2012). https://doi.org/10.1016/j.biortech.2012.02.080
Hansen, B, Kamm, B, Schulze, M, "Qualitative and Quantitative Analysis of Lignin Produced from Beech Wood by Different Conditions of the Organosolv Process." J. Polym. Environ., 24 85–97 (2016). https://doi.org/10.1007/s10924-015-0746-3
Pan, X, Saddler, JN, "Effect of Replacing Polyol by Organosolv and Kraft Lignin on the Property and Structure of Rigid Polyurethane Foam." Biotechnol. Biofuels, 6 12–21 (2013). https://doi.org/10.1186/1754-6834-6-12
Klein, SE, Rumpf, J, Kusch, P, Albach, R, Rehahn, M, Witzleben, S, Schulze, M, "Unmodified Kraft Lignin isolated at room temperature from aqueous solution for Preparation of Highly Flexible Transparent Polyurethane Coatings." RSC Adv, 8 40765 (2018). https://doi.org/10.1039/c8ra08579j
ISO International Standard 14900 (2001), Plastics—Polyols for use in the Production of polyurethane—Determination of hydroxyl number, Reference number ISO 14900:2001 (E)
García, A, González Alriols, M, Spigno, G, Labidi, J, "Lignin as Natural Radical Scavenger. Effect of the Obtaining and Purification Processes on the Antioxidant Behaviour of Lignin." Biochem. Eng., 67 173–185 (2012). https://doi.org/10.1016/j.bej.2012.06.013
dos Santos, PSB, Erdocia, X, Gatto, DA, Labidi, J, "Characterisation of Kraft Lignin Separated by Gradient Acid Precipitation." Ind. Crops Prod., 55 149–154 (2014). https://doi.org/10.1016/j.indcrop.2014.01.023
Kiewning, D, Wollseifen, R, Schmitz-Eiberger, M, "The Impact of Catechin and Epicatechin, Total Phenols and PPO Activity on the Mal d 1 Content in Apple Fruit." Food Chem., 140 99–104 (2013). https://doi.org/10.1016/j.foodchem.2013.02.045
Apak, R, Capanoglu, E, Shahidi, F (eds.), Measurement of Antioxidant Activity & Capacity, 1st ed. Hoboken, Wiley (2017). https://doi.org/10.1002/9781119135388. ISBN 9781119135388
Vivekanand, V, Chawade, A, Larsson, M, Larsson, A, Olsson, O, "Identification and Qualitative Characterization of High and Low Lignin Lines from an Oat TILLING Population." Ind. Crops Prod., 59 1–8 (2014). https://doi.org/10.1016/j.indcrop.2014.04.019
Azadi, P, Inderwildi, OR, Farnood, R, King, DA, "Liquid Fuels, Hydrogen and Chemicals from Lignin: A Critical Review." Renew. Sustain. Energ. Rev., 21 506–523 (2013). https://doi.org/10.1016/j.rser.2012.12.022
Ponomarenko, J, Lauberts, M, Dizhbite, T, Lauberte, L, Jurkjane, V, Telysheva, G, "Antioxidant Activity of Various Lignins and Lignin-Related Phenylpropanoid Units with High and Low Molecular Weight." Holzforschung, 69 1–12 (2015). https://doi.org/10.1515/hf-2014-0280
Sanchez-Rangel, JC, Benavides, J, Heredia, JB, Cisneros-Zevallosc, L, Jacobo-Velázquez, DA, "The Folin–Ciocalteu Assay Revisited: Improvement of Its Specificity for Total Phenolic Content Determination." Anal. Methods, 5 5990–5999 (2013). https://doi.org/10.1039/C3AY41125G
Tai, A, Sawano, T, Yazama, F, Ito, H, "Evaluation of Antioxidant Activity of Vanillin by Using Multiple Antioxidant Assays." Biochim. Biophys. Acta, 1810 170–177 (2011). https://doi.org/10.1016/j.bbagen.2010.11.004
Dizhbite, T, Telysheva, G, Jurkjane, V, Viesturs, U, "Characterization of the Radical Scavenging Activity of Lignins—Natural Antioxidants." Bioresour. Technol., 95 309–317 (2004). https://doi.org/10.1016/j.biortech.2004.02.024
Agbor, GA, Vinson, JA, Donnelly, PE, "Folin–Ciocalteu Reagent for Polyphenolic Assay." IJFS, 3 (8) 147–156 (2014), (ISSN 2326-3350)
Rover, MR, Brown, RC, "Quantification of Total Phenols in Bio-oil Using Folin–Ciocalteu Method." J. Anal. Appl. Pyrol., 104 366–371 (2013). https://doi.org/10.1021/acs.energyfuels.6b01242
Duval, A, Lawoko, M, "A Review on Lignin-Based Polymeric, Micro- and Nano-structured Materials." Reactive Funct. Polym., 85 78–96 (2014), (ISSN 1381-5148, E-ISSN 1873-166X)
Patil, ND, Tanguy, NR, Yan, N, In: Sain, M, Faruk, O (eds.) Lignin in Polymer Composites, pp. 27-47, Elsevier, Kidlington, Oxford, UK, Waltham, MA (2016) ISBN: 978-0-08-096532-1
Huang, D, Ou, B, Prior, RL, "The Chemistry Behind Antioxidant Capacity Assays." J. Agricult. Food Chem., 53 1841–1856 (2005). https://doi.org/10.1021/jf030723c
Lupoi, JS, Singh, S, Parthasarathi, R, Simmons, BA, Henry, RJ, "Recent Innovations in Analytical Methods for the Qualitative and Quantitative Assessment of Lignin." Renew. Sust. Energ. Rev., 49 871–906 (2015). https://doi.org/10.1016/j.rser.2015.04.091
Faustino, H, Gil, N, Baptista, C, Duarte, AP, "Antioxidant Activity of Lignin Phenolic Compounds Extracted from Kraft and Sulphite Black Liquors." Molecules, 15 9308–9322 (2010). https://doi.org/10.3390/molecules15129308
Ashoke Sen and tachyon condensation
Some CERN and FNAL news: The latest Higgs paper by ATLAS has the significance level of 5.9 sigma. I find it silly to write articles about every new increased number of this sort. The significance level will clearly keep on increasing with the square root of the number of collisions. You could have learned about this simple fact as well as the proportionality factor on TRF since December 2011.
More interestingly, the Tevatron has published evidence for the Higgs boson in events involving bottom quark-antiquark pairs, a rare discipline in which the Tevatron could have competed with the LHC (or beat it for a while). This signal pretty much erases speculations that the July 4th Higgs-like particle refuses to interact with bottom quarks etc. It does interact and the available data make its properties more or less identical to the Standard Model Higgs predictions.
Ashoke Sen (yup, I took this Wikipedia picture of him in front of ex-Glashow, ex-my, and now-Randall office) is one of the nine winners of the inaugural Milner fundamental physics prize. And he deserves it a great deal.
You will find something like 250 papers he has written and 20,000 citations those papers have accumulated. The list includes 22 papers with at least 250 citations – and his most famous papers talk about the counting of black hole microstates in many contexts, D-brane actions, Matrix theory, S-duality, subtleties of heterotic string theory, and others.
Still, despite the fact that strong-weak duality was quoted as the main justification of his $3 million award (Sen figured out that the Montonen-Olive duality had to apply e.g. to heterotic compactifications to 4 dimensions as well), there's another theme in string theory he is the true father and main stockholder of: tachyon condensation. He started this minirevolution by his visionary 1998 paper Tachyon condensation on the brane-antibrane system.
Lots of technology and detailed insights were later added by others as well as Sen himself, but the change of paradigm he brought in the late 1990s was a testimony to his ingenuity. Edward Witten, who later refined some of those insights, once stated that he has no idea about the divine interventions that led Sen to his prophetic insights. (OK, I improved Witten's words a bit.)
Tachyons: the ugly guys
To appreciate how much he changed our perceptions of tachyons, we must recall some of the history of tachyons in physics. Einstein's special theory of relativity allows us to divide directions in spacetime (and energy-momentum vectors) into timelike ones, spacelike ones, and the intermediate null, i.e. lightlike, ones.
A basic consequence of special relativity is that no particle can move faster than light. Equivalently, \[
p^\mu p_\mu = E^2 - p^2 c^2 \gt 0
\] and all world lines of actual particles must be timelike (or at least null). This requirement arises from causality and the Lorentz symmetry: if the world lines were spacelike, these particles would be moving forward in time according to some reference frames and backward in time according to others. In the latter reference frames, the effect would precede the cause which shouldn't happen in a logically consistent world.
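Explicitly – this is standard special relativity, nothing specific to strings – a boost with velocity \(v\) acts on the separation between two events as\[
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right),
\] so if the separation is spacelike, \(|\Delta x| \gt c\,\Delta t\), one may always choose a subluminal \(v\) for which \(\Delta t'\) is negative even though \(\Delta t\) is positive. The time ordering of the two events is frame-dependent, so a tachyon connecting them would travel backward in time according to some observers – which is exactly the causality problem described above.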
The word "tachyon" is derived from Greek τάχος ("tachos") for speed. That refers to the huge, superluminal speed of those particles. Some laymen like to say that special relativity predicts these particles. Well, it introduces the concept of tachyons but it also instantly says something about the new concept and it is not flattering: they aren't allowed to exist.
Tachos is not directly related to tacos.
If we switch from classical mechanics to quantum physics, we need to work with relativistic quantum physics to see what happens with the tachyons. Relativistic quantum physics requires quantum field theory or its refinements (such as string theory). What happens with tachyons in quantum field theory? Well, consider spinless tachyons (and for some technical reasons, both string theory and informal consistency arguments rooted in effective field theory actually allow spinless tachyons only). Their Lagrangian is a Klein Gordon Lagrangian,\[
\LL = \frac 12 \partial_\mu T \cdot \partial^\mu T + \frac 12 \mu^2 T^2.
\] The coefficient in front of the mass term is positive which actually means that the potential is \(-\mu^2 T^2/2\): it is negatively definite. This unusual sign of the mass term is exactly what is needed to switch from timelike vectors \(p^\mu\) to spacelike ones, assuming the picture with classical world lines. The squared mass of the tachyon is negative; you could say that the mass itself is imaginary.
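To see the instability in the mode language (a standard exercise, using the conventions above), note that the equation of motion following from this Lagrangian is\[
\partial_\mu \partial^\mu T - \mu^2 T = 0\quad\Rightarrow\quad E^2 = \vec p^{\,2} - \mu^2
\] for a plane wave \(T\sim\exp(-{\rm i}Et+{\rm i}\vec p\cdot\vec x)\). For long enough wavelengths, \(\vec p^{\,2}\lt\mu^2\), the frequency \(E\) is imaginary and the mode grows like \(\exp(+|E|t)\): this exponential growth is what the statement \(m^2=-\mu^2\lt 0\) means physically.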
The point \(T=0\) of the potential \[
V(T) = -\frac 12 \mu^2 T^2
\] is a stationary point but it is no longer a minimum: it is a maximum. So if Nature sits there for a while, it won't last forever. It will prefer to roll down and cause a catastrophic instability. Needless to say, the Higgs field with \(H\equiv T\) behaves as a tachyon near the point \(H=0\). That's the ultimate reason why the electroweak symmetry is broken: the Higgs field rolls down to another place with a lower value of \(V(H)\) than the value \(V(0)\).
This Higgs-like analogy sounds natural and it could lead us to look for a better point where the tachyons could live. But that wasn't how string theorists would be thinking about tachyons until the late 1990s.
Tachyons as an illness
Instead, string theorists would immediately declare every theory with tachyons to be sick. They would never ask further questions. The existence of a tachyon in a theory meant that a good physicist should never study such a theory in detail.
The old, bosonic string theory required \(D=26\) dimensions. It didn't contain any fermions – which was already bad enough – but it had one more annoying feature. The lowest-squared-mass particle actually had a negative squared mass; it was a tachyon. The squared mass was\[
m_\text{open tachyon}^2 =-\frac{1}{\alpha'},\quad m_\text{closed tachyon}^2 = -\frac{4}{\alpha'}.
\] Note that due to the factor of four, the (not squared) mass of the closed string tachyon is exactly equal to twice the (not squared) mass of the open string tachyon. You may literally think that the closed string tachyon is a package with one open-string tachyon for the left-moving excitations on the closed string; and one open-string tachyon for the right-movers. The constant \(\alpha'\), the Regge slope, is inversely proportional to the string tension \(T\) via \(2\pi T\alpha'=1\).
For decades, it would seem that this tachyon was a pathology of the bosonic string theory. It manifested itself by horrible infrared divergences in the loop diagrams. Consequently, bosonic string theory was just a toy model to learn some techniques and conformal field theory. But the truly physically consistent theory was superstring theory which contained fermions but no tachyon and which cancelled all those infrared divergences.
String theorists would instinctively dismiss any theory or any vacuum with any tachyons, too. But Ashoke Sen was able to think different. In some sense, it is not surprising that his first "modern understanding of a stringy tachyon" occurred in the brane-antibrane system. It's because it's the situation in which it seems most obvious that we can't deny that the building blocks may be arranged in this way. Type II string theory implies the existence of D-branes – and of course the existence of D-antibranes or anti-D-branes (which one do you prefer?), too. They're separately stable and consistent. And by the clustering property, it must be possible to place them near one another. So we must have an answer to what happens if those D-branes and anti-D-branes are close to each other.
Tachyons in brane-antibrane systems
However, the brane-antibrane system looked exactly as ill as other systems with tachyons. Why?
If you derive the usual superstring theory in the RNS (Ramond-Neveu-Schwarz) variables, you must impose something known as the GSO projections to get a spacetime-supersymmetric spectrum. All physical states in the single-string Hilbert space have to obey\[
(-1)^{GSO_{\rm left}} \ket\psi = +\ket\psi,\quad
(-1)^{GSO_{\rm right}} \ket\psi = +\ket\psi.
\] In a suitable basis of eigenstates of these two parity-like operators, the two projections above reduce the number of basis vectors to one-half per projection, i.e. to one quarter in total. Only one quarter of the basis vectors survive this GSO projection. The ground state tachyon is filtered out, the first excited state is allowed, and every other excitation (if we only count the fermionic excitations) is kept. The overall, "diagonal" GSO projection is needed for the spacetime fields to obey the spin-statistics relation (integer spin states must respect the Bose-Einstein statistics; half-integer spin states must respect the Fermi-Dirac statistics). Both projections are needed for the spacetime supersymmetry to exist.
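For concreteness – these are the standard open-superstring NS-sector values, quoted here only to make "filtered out" tangible – the NS ground state \(\ket{0}_{\rm NS}\) has \(m^2=-1/(2\alpha')\) and the wrong GSO parity, so it is removed, while the first excited state \(\psi^\mu_{-1/2}\ket{0}_{\rm NS}\) has \(m^2=0\), survives the projection, and gives the massless gauge field living on the D-brane.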
There is also a world sheet consistency explanation of why such projections exist. The reason is that we need R-R, R-NS, NS-R, and NS-NS sectors (the latter is the most natural one) in which the left-moving fermions (first label before the hyphen) and the right-moving fermions (second label after the hyphen) may each be periodic or antiperiodic. To allow the world sheet to preserve the signs of the fermions or flip them (independently for left-movers and right-movers; in "type 0" theories, only the "synchronized" sign flips are allowed), you must declare the sign-flipping operator to be a gauge symmetry on the world sheet. But if it is a gauge symmetry, physical states have to be invariant under it, and that's where the GSO projections come from.
Amusingly enough, the GSO projections have reduced the number of basis vectors to one quarter because the two parities were random numbers from the set \(\{+1,-1\}\). This is beautifully compensated by the fact that we have four sectors, NS-NS and the three others. So in some rough counting, the combination of "many sectors" and the "corresponding GSO projections" doesn't change the number of states. This holds for any "orbifolds" and "compactifications producing winding numbers", too. The GSO projections may be understood as a special case of the orbifolding procedure.
Wrong signs
The spectrum of open-string excitations arising from open strings attached to an anti-D-brane by both end points is the same as the spectrum for a D-brane. After all, an anti-D-brane may be understood as a D-brane rotated by 180 degrees (although spacetime-filling D-branes don't leave you with transverse dimensions in which you could rotate them; and this picture of rotation also fails for D-instantons which are just points and can't be nontrivially rotated, either, because one needs at least one transverse and one longitudinal direction to perform such a rotation).
However, if you study open strings one of whose endpoints is attached to a D-brane while the other endpoint is attached to an otherwise parallel anti-D-brane (in the intermediate cases, such an anti-D-brane may be described as an antiparallel D-brane with the opposite orientation to the first one), you will find out that the diagonal GSO projection is turned upside down. For brane-antibrane systems, the states that were kept in brane-brane systems are killed, and the states that used to be killed are preserved.
In particular, the tachyonic ground state \(\ket 0\) is preserved and it is a part of the physical spectrum. So if your D-brane and anti-D-brane are (anti)parallel and coincide, string theory predicts that the effective field theory will contain spinless fields that are tachyons arising from open strings stretched between a brane and an antibrane. (Well, the ground state is tachyonic only for small enough transverse separations. The transverse distance \(a\) adds \((Ta)^2\) to the squared mass \(m^2\); for large separation, the mass itself goes like \(Ta\). Relative angles and internal excitations along the open strings raise the squared mass, too.)
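A quick check of that parenthetical rule – using the standard superstring value \(m^2=-1/(2\alpha')\) for the coincident brane-antibrane tachyon and the tension \(T=1/(2\pi\alpha')\), both of which are imported from the usual conventions rather than derived above – gives\[
m^2(a) = \left(\frac{a}{2\pi\alpha'}\right)^2 - \frac{1}{2\alpha'},
\] which vanishes at the critical separation \(a_c=\pi\sqrt{2\alpha'}\), roughly the string length. For \(a\gt a_c\), the would-be tachyon is an ordinary massive scalar and the perturbative instability disappears.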
The fate of the tachyon
As discussed earlier, such a pathological, supersymmetry-breaking, tachyon-infected configuration of D-branes and anti-D-branes may occur in otherwise consistent type II string theories with otherwise stable, beautiful, and supersymmetric D-branes that happen to meet their antiobjects (yes, the "anti" is the same "anti" as the "anti" in "antimatter" – at least, it's the most straightforward generalization of antimatter from pointlike particles to extended objects – and it has almost nothing to do with "anti-Semites" and other things).
If we trust the perturbative string theory, and we surely should if the coupling constant is low, it seems obvious that there is a tachyon. Well, it's a complex one because it transforms as a bifundamental representation under \(U(M)\times U(N)\) if we deal with a stack of \(M\) D-branes and \(N\) coincident anti-D-branes. There is an instability. Ashoke Sen wasn't scared of such an instability. He wouldn't scream "Look at the instability, catastrophe, planetary or cosmic emergency: abandon all fossil fuels, don't look at this theory, it's scary" or "get used to it, natural catastrophes will be here constantly" as Michio Kaku, a co-father of string field theory.
The potential energy curve for the complex tachyon is nothing but a Mexican hat potential of sorts. The minima correspond to a complete destruction of the D-brane and the anti-D-brane.
Instead, he calmly analyzed what the outcome of this instability could be and he realized that the final outcome had to be completely peaceful. After all, the D-brane and anti-D-brane carry the opposite charges. They're antimatter to each other. So they may annihilate with each other and the tachyon is just a sign of this looming annihilation. When they annihilate, however, the potential energy for the tachyon rolls down lower. How much lower? Well, the decrease of the potential energy is exactly what you would expect from the complete destruction of the latent energy \(E=mc^2\) carried by the brane and the anti-brane. Because this energy is uniform and its density is given by the tension, Sen realized that it had to be the case that the energy densities obeyed\[
V(T=0) - V(T=T_{\rm min}) = 2\cdot {\rm Tension}_\text{D-brane}.
\] Because the point \(T=T_{\rm min}\) where the potential is minimized was rather far from \(T=0\), the difference above couldn't have been calculated in a straightforward fashion. Nevertheless, it was in principle possible to calculate the energy difference from string theory and Sen's formula above was therefore a conjecture about a particular result.
Various formal proofs of the conjecture were soon given – in boundary string field theory, by informal string arguments etc. – but the most explicit and indisputable proof, one that led to interesting new mathematical identities, was given in the framework of Witten's cubic string field theory by Martin Schnabl in 2005, in a paper that turned lots of assorted numerical data from level-truncated string field theory (which had already pushed the validity of the conjecture beyond all reasonable doubt) into the final analytic proof.
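To give a flavor of those numerical checks, here is a minimal sketch – not anyone's actual research code; the normalization (the cubic coefficient \(K=3\sqrt 3/4\) and units in which the conjectured depth is exactly \(-1\)) is the commonly used level-zero convention rather than something derived in this text:

import math

K = 3 * math.sqrt(3) / 4        # cubic-vertex coefficient in the assumed convention

def V(t):
    # level-zero tachyon potential, normalized so that Sen's conjecture predicts min(V) = -1
    return 2 * math.pi**2 * (-0.5 * t**2 + (K**3 / 3) * t**3)

t_min = 1 / K**3                # solves V'(t) = 0, the nontrivial stationary point
print(t_min, V(t_min))          # ~0.456 and ~-0.68, i.e. about 68% of the conjectured depth -1

Higher levels of the truncation push that 68 percent towards 100 percent, and Schnabl's solution finally reproduces the full brane tension analytically.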
Generalization to other branes, bosonic string theory, SFT
Sen's insight meant one of those "unification moments" in which previously separated things were found to be two states in the same theory. Previously, one would either have D-branes, or no D-branes. There was no relation between such configurations. Sen realized that one could annihilate or recreate brane-antibrane pairs and this process could be described as the evolution of some totally standard fields that follow from string theory.
This realization had many consequences. First of all, one could discuss not just brane-antibrane tachyons but also other tachyons that were arising on a single, non-supersymmetric D-brane. Such D-branes exist in string theory as well. After all, all D-branes of bosonic string theory are examples. (Additional examples of such non-supersymmetric branes are even-dimensional D-branes in type IIB and odd-dimensional branes in type IIA superstring theory, the "wrong parities".) It meant that open string tachyons lost their "catastrophe status". Whenever people encountered an open string tachyon before 1998, they would say "phew, horrible, catastrophe, let's go away from this mess". After 1998, they would say "nice, what an interesting instability, let's look where the instability ends". Of course, it's some mundane annihilation of objects in which most of the latent energy is released.
Needless to say, all the comments about string field theory above referred primarily to bosonic open string field theory. That's where the processes predicted by Sen may be studied most explicitly. The love is mutual: tachyon condensation became the main process that may be more naturally studied by string field theory than by other approaches to string theory. Various previous expectations that string field theory would be the answer to all open questions in string theory turned out to be wrong – unjustified wishful thinking. It has pretty much all the limitations we see in any perturbative formulation of string theory. However, tachyon condensation is a discipline in which string field theory continues to shine.
Lower-dimensional branes as solitons
Aside from the complete annihilation of a brane-antibrane pair or a complete destruction of an unstable D-brane in various theories, one could study more subtle processes. The complex tachyon field \(T\) may have many values and in fact, only something like \(|T|^2\) has to have the right value for the potential energy to be minimized. The phase isn't determined and it may be nontrivial.
So one may actually start with a brane-antibrane system in which \(V(T)=V_{\rm min}\) almost everywhere but there is a localized topological obstruction in the configuration of the tachyon field (some kind of a kink, vortex, monopole, or instanton) that prevents the branes from disappearing completely. If that is the case, what is the annihilation going to end up with? Besides the light string radiation that carries the energy away, we are left with a lower-dimensional D-brane (or many such D-branes).
It can be shown that the topologically nontrivial vortex-like configurations of the tachyon fields (together with the natural gauge field that lowers the energy of the configuration as much as possible by making it "nearly pure gauge") defined on coincident D-branes and anti-D-branes carry the same Ramond-Ramond charges as lower-dimensional D-branes. Because those charges are preserved, the lower-dimensional D-branes must survive the annihilation process. They may be identified with topological defects of the tachyon fields!
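As a toy numerical illustration of "the leftover brane charge is just a winding number" – a generic exercise, not tied to any particular string background – one can read off the winding of the complex tachyon's phase along a loop surrounding the defect:

import numpy as np

def winding_number(samples):
    # total phase change of the complex samples along a closed loop, in units of 2*pi
    dphi = np.diff(np.angle(np.concatenate([samples, samples[:1]])))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap each jump back to the principal branch
    return int(round(dphi.sum() / (2 * np.pi)))

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
T_loop = np.exp(2j * theta)                       # a vortex with winding number 2, |T| at its vacuum value
print(winding_number(T_loop))                     # -> 2

The integer that comes out is the conserved charge that forbids the configuration from decaying to nothing, which is why a lower-dimensional D-brane has to be left over.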
This general insight became a whole mathematical subindustry of string theory in the case of D-branes wrapped on complicated Calabi-Yau-like manifolds. Because the lower-dimensional D-branes are given by possible "knots" on the tachyon field, one may describe in a completely new way how the branes may be wrapped on the manifold. Instead of "homology" which would be the previously believed mathematical structure classifying "how branes may be wrapped on cycles" (homology directly talks about topologically nontrivial "infinitely thin" submanifolds wrapping the big manifold), people started to talk about K-theory (the set of topologically inequivalent knots on the complex tachyon fields). Edward Witten would be the main industrialist promoting this new mathematical description of the wrapped D-branes.
K-theory sounds much like M-theory and F-theory but unlike M-theory or F-theory, it isn't any theory in a physics sense at all. The similarity in the names is misleading. Instead, it is a notion analogous to homology which is in many situations fully equivalent to homology but it may differ by some discrete subtleties, "torsion" etc., the information saying that twice or \(n\) times wrapped D-brane of some kind may unwind, and so on. K-theory (which naturally leads you to think that the lower-dimensional D-branes are "fat" and spread in the transverse dimensions a bit) began to be considered a more accurate description of the allowed D-brane charges than homology. After K-theory, certain heavily mathematical string theorists started to work with a more detailed mathematical structure knowing about the D-brane charges (plus other things), the so-called derived categories. As far as I understand it, it's the classification of the allowed D-brane charges together with some part of the dynamical information, something that allows the folks to study the lines of stability in new ways, too.
It's cool that those physics processes match some math definitions but I still think that if a physicist didn't know the concept of K-theory at the beginning, he could still be able to discuss the general processes first envisioned by Sen and he may decide to never invent the term "K-theory". In some sense, we're only getting what we're inserting. K-theory and its concise notions are important if you want to "mass produce" results about D-branes wrapped in general ways on complicated and general manifolds. But if you only know the rules of the game that involve the complexified tachyons and Sen's processes, you could be satisfied, after all. So I believe that K-theory – and derived categories – were almost exclusively promoted by folks trained as mathematicians who had been brainwashed to think that things like categories and K-theory were cool and important; and they were hardwired to look for their physics applications which they did find. But if they hadn't been brainwashed at the beginning, they could very well agree that those new mathematical concepts were not needed to understand the "laws of physics".
I think that a conceptually newer result was the research of tachyon condensation in non-commutative spacetimes, i.e. in the presence of a nonzero \(B\)-field. Gopakumar, Minwalla, and Strominger had the great idea to study the "unusual but cool" limit of a very large non-commutativity – in some sense, the analogue of the \(\hbar\to \infty\) limit. Well, Seiberg and Witten and others studied the same limit. But GMS looked at the tachyon condensation in that limit and realized that lower-dimensional branes become nothing but individual cells in the quantum phase space identified with a subspace of the D-brane dimensions. The annihilating higher-dimensional D-brane is literally a "phase space composed of many cells" and one may annihilate them one by one, if you wish. At least that's what the mathematical analogy says (the interpretation of the D-brane is of course different from the interpretation of a phase space).
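Schematically, and assuming the standard GMS conventions rather than anything derived above: at infinite non-commutativity the kinetic term drops out, so one only has to solve \(V'(T)=0\) with the star product, and the solutions are sums \(\sum_i\lambda_i P_i\) where the \(\lambda_i\) are stationary points of the potential and the \(P_i\) are mutually orthogonal projectors, \(P\star P=P\). The simplest rank-one projector corresponds to the Gaussian profile\[
P(r) = 2\,e^{-r^2/\theta},
\] a single cell of area \(2\pi\theta\) in the non-commutative plane; a leftover lower-dimensional D-brane is the configuration in which the tachyon stays at the original, unstable stationary point inside such a cell while it condenses everywhere else.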
In the new unusual limit, the tachyon field may jump between the individual stationary points, whether they are maxima or minima of the potential, kind of "directly" and "seemingly discretely", without complicated questions about what the interpolating curves exactly look like (which normally depends on the precise shape of the potential energy in between the stationary points, too).
Twisted closed strings and closed string tachyons in general
The number of fantastic results that emerged thanks to Sen's visions was huge. One could perhaps say that the qualitative behavior of all open string tachyons in string theory has been understood.
This progress didn't immediately generalize to closed string tachyons. Why? Well, D-branes are objects whose tensions or masses scale like \(1/g\) where \(g\) is the closed-string coupling constant. However, the tree-level energy densities coming from closed strings – and therefore the tensions of more ordinary closed-string "solitons" – scale like \(1/g^2\) which is parametrically larger than \(1/g\) for a small \(g\). In this sense, D-branes are heavy because \(1/g\) diverges in the \(g\to 0\) limit but they're still infinitely lighter than what is needed for a significant change of the surrounding geometry.
If you study the gravitational acceleration around a D-brane, you know it is proportional to the mass which goes like \(1/g\) but also to Newton's constant which actually scales like \(g^2\). The latter wins so the "backrea**ion" (**=ct; no relation to the blog of that name) of a single D-brane is still negligible in the weakly coupled limit.
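In scaling form – just restating the counting above – this is\[
G_{\rm Newton}\cdot {\rm Tension}_\text{D-brane}\sim g^2\cdot \frac 1g = g\to 0,\qquad
G_{\rm Newton}\cdot {\rm Tension}_\text{closed-string soliton}\sim g^2\cdot \frac{1}{g^2} = O(1),
\] which is why a few D-branes barely curve the space around them while the objects relevant for closed-string tachyons deform the geometry at order 100 percent.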
We may annihilate several D-branes and study the process as tachyon condensation. Enormous energies going like \(1/g\) for \(g\to 0\) are produced but in some sense, these are still minor processes. The energy densities needed to stabilize closed-string tachyons would have to scale like \(1/g^2\) and such huge changes affect the spacetime geometry by something of order 100 percent.
For this reason, closed string tachyons are still viewed almost in the same way as they were viewed before 1998, at least in practice. No canonical "stabilized minimum" of the potential energy curve is known and the closed string tachyon condensation is still a catastrophic process we had better avoid.
(Well, I was corrected by an expert that the previous paragraph is really no longer the case. Simeon Hellerman and Ian Swanson studied bulk closed-string tachyon condensation in various string theories, including supercritical ones, and found out it has an interesting and well-defined endpoint – in fact, these end points are apparently able to connect string vacua in different dimensionalities of spacetime in the same sense as the open string tachyon condensation merges configurations with and without D-branes. I won't go into these cool, relatively new, and advanced mechanisms in this introductory text.)
However, there's one class of exceptions pointed out and first studied by Adams, Polchinski, and Silverstein (APS, no relationship to the American Physical Society). They noticed that there can be closed-string tachyons in twisted sectors of various orbifold compactifications in string theory. Because states in the twisted sectors must be localized to the fixed points of the orbifold, the impact of these tachyons has to be spatially limited. In this respect, these particular closed-string tachyons have to resemble the open-string tachyons that had already been understood.
For example, if you study a simple \(\RR^2/\ZZ_n\) orbifold, the geometry looks like a cone with a deficit angle. Some tachyonic fields live at the tip of the cone and APS collected quite some evidence in favor of the proposition that the condensation of the closed string tachyon is physically interpreted as the smoothing of the original sharp tip of the cone. The sharp iceberg is peacefully melting, which means that there's no reason for fearmongering about the lethal twisted-sector tachyons. That's why APS named their paper Don't panic! Closed string tachyons in ALE spacetimes. I am sure that these three politically correct physicists were deeply sorry about this title a few years later, once the climate hysteria became a dominant symbol of their political allies and it turned the commandment "Do panic" into the most important insight of all of science. ;-)
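For orientation – the geometry is standard, even though the numbers below are not spelled out in the paragraph above – the \(\RR^2/\ZZ_n\) cone has the deficit angle\[
\delta = 2\pi\left(1-\frac 1n\right),
\] and the condensation of the twisted-sector tachyons living at the tip makes the cone decay towards cones with smaller \(n\), i.e. smaller deficit angle, and ultimately towards smooth flat space – the quantitative version of the "melting iceberg".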
OK, so open-string tachyons and closed-string tachyons in twisted sectors are more or less understood by now. Physicists no longer panic or abandon the theory when they see a tachyon. Instead, they calmly interpret these tachyons as sources of instabilities – instabilities that annihilate objects or whole chunks of spacetime and that may (but don't necessarily have to) lead to a new stable world with some interesting objects that may be left over.
For this reason, it was Ashoke Sen and his apprentices such as Edward Witten who unified nothingness and somethingness in physics – i.e. in string theory (because no other theory can achieve similar unifications) – and who discovered new perspectives on the somethingness in between and that's my explanation why he deserves his $3 million.
And that's the memo.
Gene Day Aug 1, 2012, 6:00:00 PM
In addition to the photo, did you write the Wikipedia article on Sen?
Luboš Motl Aug 1, 2012, 6:19:00 PM
Dear Gene, dozens of users have edited the article in a hundred of edits
http://en.wikipedia.org/w/index.php?title=Ashoke_Sen&offset=&limit=500&action=history
or so, but yes, I founded the article on May 9th, 2004 (see the list above, the 128.* IP address at the bottom was my feynman.harvard.edu workstation, Lumidek is my Wikipedia user name). It started as a very short article.
Anon Aug 1, 2012, 6:35:00 PM
Quite a lot of money for all those science-fiction works with nothing or little to do with experimental reality. Also your posts are really well-written science-fiction stories.
As a proof of a concept, I approved your comment but a few seconds later, I also added your identity to the black list because your comment has been given quite a lot of space on my blog relatively to your intelligence and what you have to say.
Dilaton Aug 1, 2012, 7:00:00 PM
I hope the other eight fundamental physics prize winners will get such nice articles too in the course of time ... ;-)
acdc Aug 1, 2012, 10:52:00 PM
why isnt the GSO projection an indication of string theory being an artificial construct?
on the same line, the chan paton factors of open string also seem very artificial . one can argue that certain coincidences indicate string theory must be right, but there are quite a few ad-hoc constructions.
no susy . no dark matter, hep was hijacked by maths dorks like witten and you. answer my previous comment.
ps: why not double susy on worldsheet and specetime
there is little beauty and little phenomenological appeal to strings.
Physics Junkie Aug 2, 2012, 5:41:00 AM
These brain stems who think string theory is science fiction and a waste of time for talented physicists are full of HaterAid. Why so hateful? For one thing, they are trying to tell people how to think, sounds like socialism. Second and more importantly, the pace of scientific discovery is slowing down. In the early 20th century discoveries were made on the tabletop by one person with little money, contrasted with the Higgs Boson which took many nations, billions of dollars, thousands of people and 50 years to find. It is no wonder that the pace of theoretical thinking, mainly being constant, has far outstripped the pace of making discoveries, i.e. string/M theory. It will take a while to produce predictions that can be experimentally verified by today's technology. We must pity these people for their limited inherent intellectual abilities.
Luboš Motl Aug 2, 2012, 5:59:00 AM
"Why isnt the GSO projection an indication of string theory being an artificial construct?"
1) the same superstring theory may also be defined using the Green-Schwarz variables with spinor-like fermions on the world sheet. In this formulation, there is no GSO projection, spacetime SUSY is manifest, and this formulation is exactly equivalent to the RNS-GSO formulation.
2) another reason that there is nothing unnatural about the GSO projections is that they're just a reflection of a discrete symmetry's being a gauge symmetry. When something is a gauge symmetry, it means that Nature imposes that states have to be invariant under it - there's nothing artificial about it, it is mandatory - and one must also allow the fields to return to themselves after trips around noncontractible loops *up to* these gauge symmetries. The latter is why Nature forces one to include the sectors with different (a)periodicities on the closed string.
The very fact that Nature uses theories that are most comfortably described by formalisms with *various gauge symmetries* is an established fact. Gauge symmetries are therefore absolutely natural, GSO/sectors in the RNS superstring are an example, and your intuition on what is natural and what is artificial is therefore 100% upside down.
A huge percentage, if not the majority, of the work by these scientists depends on SUSY, and many of them have something to say about dark matter, too.
"hep was hijacked by maths dorks like witten and you"
The first "math dork" who "hijacked" physics was math dork named Isaac Newton. Since that time, physics was increasingly dependent of maths, and it is more and more complicated maths, and the fact that Edward Witten plays the role of Isaac Newton is called progress.
"ps: why not double susy on worldsheet and specetime"
N=2 in spacetime prohibits the chiral, left-right-asymmetric interactions of fermions we know from the weak nuclear interactions; N=2 on the world sheet implies the critical spacetime dimension D=2 (complex dimensions) with severely constrained, almost topological, dynamics. More importantly, it's clear that the 10D superstring theory which may be written as an N=1 world sheet theory is both fully consistent and qualitatively agrees with everything in the real world and these are the two main reasons that decide whether a physicist studies a theory in detail or not. If you would prefer any other rules, then it is due to your mental inadequacies.
"There is little beauty and little phenomenological appeal to strings."
A much more convincing explanation of your emotions is that there is very little functional material inside your skull.
Now I've read the second part of this tutorial and I really like these additional explanations about one of the cool Higgs mechanism examples in string theory :-)
Of course concerning the details of some "technicalities" such as the GSO projection for example, I could understand only roughly how it works. But this made me pull out my most decorative demystified book at 00.15 today to see if it is contained therein (it is) :-D
Since this article is so well written I think (hope...) I got at least the gist of it.
Ashoke Sen really deserves his prize for doing such cool things :-)
The title of the APS paper made me LOL :-D, and it was my pleasure to help tame this kind of closed string tachyons ... :-P
(just joking from reading the abstract ...)
JollyJoker Aug 2, 2012, 3:37:00 PM
"maths dorks like witten and you"
How can something intended as an insult be such incredibly high praise? :D
Exactly, JollyJoker. I am SOOOO insulted of being a maths dork just like Witten. ;-)
Poor, poor Lumo ... :-P
acdc Aug 3, 2012, 10:36:00 AM
nice explanation of gso via discrete gauge symm. thanks. i would have liked to see the gauge symmetries of standard model emerging purely from the 10-dim geometry , and if one embraces susy&superspace why nothing comes out from embedding a superworldsheet in a supertargetspace.
Luboš Motl Aug 3, 2012, 10:45:00 AM
Dear acdc, in string theory, all gauge symmetries emerge from the string-generalized geometry. There are different ways how the gauge symmetries emerge in heterotic string theory, heterotic M-theory, braneworlds, and singularities in F-theory, among others.
The idea that the gauge symmetries "must" emerge as Kaluza-Klein isometries of a hidden manifold is just a preconception, a stupid dogma, a sign of a naive way of thinking about the matters. The world just doesn't work like that - it's easy to see that 6 extra dimensions is far from being enough to produce the right rank of the Standard Model gauge group - and there is nothing "uglier" about the stringy geometry that actually produces the gauge symmetry in the real world.
You suffer from the kind of fundamentally unscientific thinking that many non-scientists do. You declare "I would have liked something" where something is a demonstrably wrong proposition that is justified by some vague silly emotions based on complete ignorance of all the vital technical facts. And then you spend most of your time by assuring yourself that your wrong preconceptions aren't that wrong.
That's just not a rational approach to physics. The rational approach to physics is to be open-minded about how Nature works and look for the actual possible mechanisms - hypotheses - and actual evidence that favors one mechanism or another. In this process, one is learning tons of things he couldn't have preconceived in the first minute of thinking about these matters and one is liking lots of things he couldn't have thought about at all. You just apparently never want to learn anything new, you always want to be frozen with your kindergarten-level flawed intuition about how the world should work to be pretty. But your ideas about the world aren't pretty. They are obsolete and, as more modern perspectives on the space of ideas in physics show, unnatural and contrived.
Dilaton Aug 3, 2012, 11:09:00 AM
Nice explanations, for example about how the winding modes (which I know from Lenny Susskind) are needed to get the right gauge symmetries. I like that :-)
Mikael Aug 3, 2012, 4:05:00 PM
Dear Lubos,
what about a human in this brane-antibrane system? Will he be able to kill his grand mother?
Rob M. Aug 3, 2012, 4:36:00 PM
You have the most famous mid-2000s digital camera in the world!
It survived a minor accident on my moped but a year later, the LCD display stopped working. It still makes pictures but you see no image on the LCD. ;-)
I bought a new one and everyone may buy my FinePix for $200. Yes, I've seen about 7 different pictures taken by the camera on Google News recently. :-)
acdc Aug 3, 2012, 7:45:00 PM
yes, it is possible (even likely) what i mentioned is only "idiotic prejudices", but it is also possible that string theory has just too many ways to be modified to fit the desired outcome.
to make an analogy with statistics and forecasting : given enough degrees of freedom you can always fit the in-sample data perfectly , obviously such model would not be good at forecasting out of sample.
"fBefore I ban you because you have become a truly obnoxious crank, let me react to this nonsense of yours.
String theory offers absolutely no freedom to be modified. It is not possible to modify string theory at all without spoiling its consistency. What you wanted to refer to is that string theory has a large number of solutions. But the number of solutions is just a fact we must live with. It is absolutely irrational to "criticize" a theory for having one number of solutions or another. It's exactly like saying that there are infinitely many primes so the concept of primes and number theory - which is the discipline that studies them - is bad. It's not bad. It's you who sucks if you think that you are allowed to command Nature how many solutions her equations should have.
Your comments about predictions are nonsensical at all levels. There isn't any other theory than string theory that is capable of producing consistent predictions of the fundamental interactions at all so it's obviously not as easy to "adjust" things as you suggest. In fact, it's been the most difficult problem in the history of science.
M87 Aug 4, 2012, 4:50:00 AM
Hi Lubos,
I want to thank you for your work in the dissemination of Physics and fundamental Physics to the larger lay audience. I wish that there was a much larger Indian readership of this blog because in conservative societies like India, people tend to copy 'national heroes' a lot more. This might be a lamentable reason to decide to take Physics seriously but it's a strategy that works where parents/society makes most decisions for people. I cannot explain to you just how important therefore your work is in establishing who our intellectual heroes ought to be. In India everyone knows Stephen Hawking but not everyone has the courage to defy societal norms and emulate Prof. Hawking and his career path. Parents will now take Ashoke Sen seriously because he is rich and so their children no longer need to justify their intellectual pursuits. You see, you may not have realized that making a wiki page for Ashoke Sen could have been significant but I can guarantee you that it had tens of millions of hits from India.
It might seem ironic to you because Physics is not a nationalistic arena but for cultures that can't look beyond the nation, people from outside that nation often have to establish and explain to the culture that "this is who you should be rewarding!".
Thanks, once again, your contributions are more than worth 3m.
P.S. Funny quote by Sen
But the big question the physicist is struggling with right now has nothing to do with string theory. "I'm not sure what to do with this money," Sen said. http://goo.gl/DnM4Y
I think it's great if physics and Ashoke gain some kind of a peaceful cult in India.
And I completely understand if he has trouble deciding what to do with those $3M, having considered Ashoke a physics-oriented Mahatma Gandhi for years. ;-)
Thank you for replying. Razib Khan discusses the issue here and I have a comment there as well, if you don't mind reading.
http://blogs.discovermagazine.com/gnxp/2012/08/irony-sky/#comments
James Aug 6, 2012, 10:54:00 PM
Does tachyonic condensation in QFT actually give rise to conventional tachyons (i.e. superluminal particles)? A normal Lagrangian has a harmonic potential and the interpretation is that particles correspond to oscillations of the field about the minimum. With a tachyonic mass, there are no oscillations because it's a maximum, so we just get exponential decay of the field, don't we? So how do we get particles?
Hi James, around the unstable point, there are tachyonic excitations - particles, see below; around the point with a tachyon condensate, all the excitations are non-negative-mass particles.
Concerning the first point, you're half-right. The tachyonic wave equation has solutions
\[\exp(ipx - i\omega t)\]
for complex \(p,\omega\) obeying \(\omega^2-p^2=m^2<0\), the squared tachyon mass. You mentioned that a solution is obtained for an imaginary \(\omega\); one may get an exponentially decreasing wave. However, it's more important that there's also the opposite solution, the exponentially *increasing* wave (flip the sign of \(\omega\)). This is the most "realistic" solution that will dictate the behavior of the system as it manifests its instability - it exponentially quickly deviates away from the unstable stationary point.
However, mathematically, one may also study solutions with a large enough spatial momentum \(p^2\). For those waves, \(\omega\) may be real and positive while \(\omega^2-p^2\) is still equal to the negative squared tachyon mass; it's possible whenever \(|p|\) is greater than or equal to the absolute value of the tachyon mass. In fact, these solutions are "equally good" - it's always possible for a field to have any momentum. Those solutions with real \(\omega,p\) are superluminal but they're the most direct counterparts of the usual de Broglie waves for massive particles.
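To make the dispersion relation above tangible, here is a tiny numerical check (a sketch I'm adding; the chosen value of the squared tachyon mass and the variable names are my own illustrative assumptions): for \(\omega^2=p^2+m^2\) with \(m^2<0\), \(\omega\) comes out purely imaginary (a growing or decaying mode) for \(|p|<|m|\), and real for \(|p|\geq|m|\).

```python
import numpy as np

m2 = -1.0  # squared tachyon mass, negative by assumption (arbitrary units)

for p in [0.0, 0.5, 1.0, 2.0]:
    omega2 = p**2 + m2                # omega^2 - p^2 = m^2 < 0
    omega = np.sqrt(complex(omega2))  # complex sqrt handles negative omega^2
    kind = "real frequency" if omega2 >= 0 else "imaginary frequency (unstable mode)"
    print(f"p = {p:3.1f}  ->  omega = {omega}  ({kind})")
```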
James Aug 7, 2012, 2:37:00 PM
Thanks. On a similar note, what should be the interpretation of a negative mass for a Dirac mass? It's not a tachyon, since that requires imaginary mass, but it certainly seems like it should be an instability.
Absolutely, James. The negative-energy solutions to the Dirac equation, which is what you apparently meant by "negative mass for Dirac mass", are a sign of instability of the configuration in which those excitations occur. And just like in the tachyon case, Nature chooses to lower the energy as much as it can. It lowers the energy by filling all the states with negative energy with 1 electron per state; the Pauli exclusion principle doesn't allow you to place more than 1 electron into the same state. Once it's done, the negative-energy solutions no longer provide you with room for extra particles. This filled set of negative-energy states is known as the Dirac sea; it is analogous to the "tachyon condensation" from this article except that these things aren't tachyons. Any hole in the Dirac sea manifests itself as an antiparticle to the electron, i.e. a positron whose energy as well as charge is positive.
Rosy Mota Mar 30, 2013, 7:13:00 PM
The tachyons must exist in nature as real functions derived from both Lorentz transformations (orthochronous and antichronous). The Lorentz invariance is generated in the substratum of the violation of PT symmetry, which generates the speed of light as the constant and limiting speed of different metrics of different spacetime continua, each one with a velocity constant equal to the velocity of light. The universe contains in itself an antisymmetric metric tensor generated together with the symmetric metric tensor; the future and the past are the same interface of spacetime. Causality in the universe doesn't exist; there is just a flux of time for the universe as a whole. The topology of spacetime implies the existence of several curvatures of spacetime with equal metrics but with different structures; noncommutative geometry is contained in that process.
Data-Driven Policy Impact Evaluation, pp. 237–248
Does the Road to Happiness Depend on the Retirement Decision? Evidence from Italy
Claudio Deiana
Gianluca Mazzarella
First Online: 03 October 2018
This study estimates the causal effect of retirement decision on well-being in Italy. To do so, the authors exploit the exogenous variation provided by the changes in the eligibility criteria for pensions that were enacted in Italy in 1995 (Dini's law) and in 1997 (Prodi's law, from the names of the prime ministers at the time of their introduction). A sizeable and positive impact of retirement decision is found on satisfaction with leisure time and on frequency of meeting friends. Furthermore, the results are generalized, allowing for the estimation of different moments from different data sources.
Keywords: Retirement decisions · Retirement probability · Seniority pensions · First-stage coefficients · Involuntary retirement
Opinions expressed herein are those of the authors only. They do not necessarily reflect the views of, or involve any responsibility for, the institutions with which they are affiliated. Any errors are the fault of the authors.
1 Introduction
Retirement is a fundamental decision in the life-cycle of a person. For this reason many studies try to assess the effect of retirement on outcomes such as consumption, health, and well-being. Due to Italy's aging population, a trend which started in the second half of the twentieth century, it is fundamental to understand the effect of retirement on mental health and well-being. Charles (2004), among others, focuses on the effect of retirement in the United States (US), finding a positive effect of retirement on well-being. Using Canadian data, Gall et al. (1997) provide some empirical evidence in support of the theory first proposed by Atchley (1976), in which there is a positive short-term effect of retirement (defined as the honeymoon period) on well-being, but a negative mid- to long-term effect. Heller-Sahlgren (2017) identifies the short-term and longer-term effects of retirement on mental health across Europe. He shows that there is no effect in the short term, but that there is a strong negative effect in the long term. Bonsang and Klein (2012) and Hershey and Henkens (2013), in Germany and the Netherlands respectively, try to disentangle the effects of voluntary retirement from those of involuntary retirement, finding that involuntary retirement has strong negative effects that are absent in the case of voluntary retirement. Börsch-Supan and Jürges (2006) analyze the German context and find a negative effect of early retirement on subjective well-being. In contrast, Coe and Lindeboom (2008) do not find any effect of early retirement on health in the US. The paper by Fonseca et al. (2015) shows a negative relationship between retirement and depression, but a positive relationship with life satisfaction. Finally, Bertoni and Brunello (2017) analyze the so-called Retired Husband Syndrome in Japan, finding a negative effect of the husband's retirement on the wife's health.
This chapter provides some new evidence of the effect of retirement on well-being, an effect which is characterized by self-reported satisfaction with the economic situation, health, family relationships, relationships with friends and leisure time, and by the probability of meeting friends at least once a week.
The remainder of the chapter is organized as follows: Sect. 2 illustrates the institutional background of the Italian pension reforms. Section 3 details the data sources and provides some descriptive statistics. Section 4 illustrates the identification strategy and the empirical specification. Section 5 shows the effect of retirement on well-being, obtained using standard instrumental variables (IV) regression. Section 6 briefly discusses the Two-Sample Instrumental Variables (TSIV) estimator that is applied to generalize the results in Sect. 7. Conclusions follow.
2 Pension Reforms in Italy
Among developed economies, increasing life expectancy and reduced birth rates in the second half of the twentieth century have led to aging populations. In addition, empirical findings suggest an increase in anticipated retirement and consequently a reduction in the participation of elderly people at work (see, for example, Costa, 1998). These two trends have progressively unbalanced the ratio between retired and working people, compromising the financial sustainability of the social security system (see Galasso and Profeta, 2004).
This is primarily why policy-makers typically decide to increase the retirement age. Like many industrialized countries, Italy has experienced many pension reforms since the early 1990s. In Italy the first change in regulation was put in place in 1992, with the so-called Amato's law, which modified the eligibility criteria for the old age pension. Three years later, in 1995, a new regulation was introduced under the name of Dini's law. After a short period, the Italian government approved a new version (Prodi's law) in 1997. Finally, Maroni's law was implemented in 2004, which changed all of the eligibility criteria for the seniority pension.
This paper focuses on the changes made during the 1990s to the 'seniority pension' scheme. In particular, Dini's law introduced two alternative rules regulating pension eligibility, stating that the pension can be drawn either (1) at the age of 57, after 35 years of contributions, or (2) after 40 years of contributions regardless of age. As with Amato's law, the introduction of Dini's law was gradual, with age and contribution criteria increasing from 1996 to 2006, and with further evolution of the contribution criteria from 1996 to 2008. Prodi's law, in 1997, anticipated the changes of age and contribution criteria set by Dini's law. Table 1 summarizes the changes in the eligibility criteria provided by these laws.
Table 1  Seniority pension: evolution of eligibility rules. For each year of the phase-in, the table reports the two alternative requirements: a minimum age combined with a minimum number of years of contributions (e.g. age 52 and 35 years), or a number of years of contributions only, regardless of age.
The progressive tightening of the pension requirements is associated with a decreasing retirement probability, which is evident when comparing different cohorts at a given age. However, neither law causes a drastic change in the retirement likelihood, and there is no expectation of a discontinuity at the threshold point, but rather a gradual decrease driven by the progression of the law.
The individuals most likely to be affected by the reforms are those aged 52, so we compare individuals of the same age but in different cohorts. Table 1 summarizes the issue: before the reforms an individual was eligible to draw a pension after 35 years of contributions (having started work at 17, for example), but for the next 2 years would need to have 36 years' worth of contributions, and from 1999 would need 37 years' worth (which in the case of someone aged 52 would mean that they started work at 15, i.e. the minimum working age, and had no interruptions in their working career). Furthermore, workers cannot retire at any time of the year because Dini's law also introduced the so-called retirement windows, fixed periods in which it is possible to stop working. For this reason most retirements are on 31 December and the first day of retirement is 1 January of the following year. So one would expect the first reduction in the number of retired workers to be in 1997. Due to differences in career paths, this study concentrates on male workers because females usually register more labor market interruptions to their working careers than men, and are therefore less affected by pension reforms.
3 Data
This section introduces the data sources used to obtain the two sets of results that are shown in Sects. 5 and 7. Furthermore, it provides some descriptive statistics on the variables considered.
3.1 Survey Data: AVQ
The study exploits a survey called Aspetti della Vita Quotidiana ('Aspects of Daily Life') (AVQ) carried out by the Italian Bureau of Statistics (Istat). It is an annual survey that each year involves about 50,000 individuals belonging to about 20,000 households, and it is part of an integrated system of social surveys called Indagine Multiscopo sulle Famiglie ("Multipurpose Surveys on Households"). The first wave of the survey took place in 1993 and it includes information about individual quality of life, satisfaction with living conditions, financial situation, area of residence, and the functioning of all public utilities and services.
All males aged 52 in the waves between 1993 and 2000 are selected, to give four cohorts from the pre-reform period and four from the post-reform period, for a total sample of 3143 individuals.
Table 2 presents the descriptive statistics of the outcomes involved in the analysis. Five outcome variables related to individual satisfaction were extracted from the AVQ, across various surveys. Respondents could choose from a Likert scale of four values, where a value of 1 means Very much and a value of 4 means Not at all. The authors created a set of dummy variables that are equal to 1 if the original variable is equal to 1, and 0 otherwise. A final dummy variable relates to the frequency with which individuals meet their friends, and takes a value equal to 1 if the answer is at least once a week. It is observed that almost 3% and 17% of the sample are satisfied with their economic situation and their health, respectively. More than 37% and 24% of the individuals are satisfied with their relationships with family and friends, respectively. The percentage of people who report satisfaction with leisure and meeting friends is 11% and about 70%, respectively.
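As a concrete sketch of this recoding step, the following snippet builds such dummies from hypothetical 1–4 Likert items (the column names are placeholders of mine, not the actual AVQ variable names):

```python
import pandas as pd

# Hypothetical extract of the AVQ survey: 1 = "Very much", ..., 4 = "Not at all"
avq = pd.DataFrame({
    "satisf_econ":   [1, 3, 4, 2, 1],
    "satisf_health": [2, 1, 1, 4, 3],
    "meet_friends":  ["weekly", "monthly", "weekly", "never", "weekly"],
})

# Outcome dummies: 1 if the respondent picked the top category ("Very much"), 0 otherwise
avq["d_econ"] = (avq["satisf_econ"] == 1).astype(int)
avq["d_health"] = (avq["satisf_health"] == 1).astype(int)

# Meeting friends at least once a week
avq["d_friends_weekly"] = (avq["meet_friends"] == "weekly").astype(int)

print(avq[["d_econ", "d_health", "d_friends_weekly"]].mean())  # sample shares, as reported in Table 2
```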
Table 2  Descriptive statistics (mean and standard error) of the outcome variables: satisfaction with the economic situation; satisfaction with health; satisfaction with family relationships; satisfaction with relationships with friends; satisfaction with leisure; meeting friends at least once a week.
3.2 Administrative Data: WHIP
The Work Histories Italian Panel (WHIP) is a statistical database constructed from administrative data from the National Institute of Social Security (INPS). It includes the work histories of private sector workers. INPS has extracted all the records contained in its administrative archives relating to individuals born on 24 selected dates (irrespective of the year of birth), creating a sample size of about 1∕15 of the entire INPS population. The dataset is mainly structured in three different sections: the first relates to employment records, including yearly wage and type of contract; the second collects information on unemployment periods; and the third part is wholly dedicated to pensions, including first day of retirement, initial income from pension, etc.
The full sample initially included all male individuals aged 52 in the years covered by the AVQ survey, but the sample was not comparable with the survey data, mainly because the administrative data include individuals who cannot be included in the survey data (such as foreign citizens who have worked in Italy for just a few months, Italian citizens who have moved abroad and are therefore not represented in the survey data, etc.). For this reason, all individuals who worked fewer than 12 months in the 4 years between 1987 and 1990 were excluded from the sample (these years were selected to obtain a window that is removed from the years used in the analysis). The final sample includes 90,891 individuals.
4 Empirical Strategy
Retirement is a complex choice that involves multiple factors. This is an obvious reason why it is not possible simply to compare retired people with individuals who are not retired. These two groups are probably not comparable in terms of observed and unobserved characteristics. Hence, one needs to look for exogenous variation to help identify the effect of the retirement decision on well-being. In this context, this study exploits the changes to the pension rules instigated by Dini's and Prodi's laws to instrument the retirement decision.
As summarized in Table 1, the progression provided by the two reforms does not allow identification of the retirement effects using a standard regression discontinuity design (see Hahn et al., 2001; Lee and Lemieux, 2010, for reviews). This is the reason why the effect of retirement is identified using the change of slope (kink) in the retirement probability. The identification strategy was first proposed by Dong (2016) and it mimics, in a binary treatment setting (where some individuals can be considered as treated and others as not treated), the Regression Kink Design (see Card et al., 2015; Nielsen et al., 2010; Simonsen et al., 2015). This allows the identification of the local average response for a continuous treatment setting (in which all the individuals can be considered as treated, but the amount of the treatment changes following certain predetermined rules). In this setting the change in slope at the threshold point becomes the additional instrument for the endogenous treatment decision (in this case the retirement choice). Then, the first-stage regression can be illustrated as follows:
$$\displaystyle \begin{aligned} D_i= \alpha_0 + \alpha_1 (X_i-1997) + \alpha_2 (X_i-1997)Z_i + \upsilon_i, \end{aligned} $$
where \(D_i\) is a variable that is equal to 1 if individual \(i\) is retired and 0 otherwise; \(X\) indicates the year in which individual \(i\) reached age 52, and \(Z=\mathbb{1}_{\left\{X\geq 1997\right\}}\). The structural equation becomes:
$$\displaystyle \begin{aligned} Y_i= \beta_0 + \beta_1 (X_i-1997) + \beta_2 D_i + \epsilon_i,\end{aligned} $$
where \(Y\) is the outcome of interest. The coefficient \(\beta_2\) that comes from this specification corresponds to the ratio \(\gamma_2/\alpha_2\), where \(\gamma_2\) is the coefficient related to \((X - 1997)Z\) in the intention-to-treat equation:
$$\displaystyle \begin{aligned} Y_i= \gamma_0 + \gamma_1 (X_i-1997) + \gamma_2 (X_i-1997)Z_i + \zeta_i,\end{aligned} $$
and \(\alpha_2\) is as in Eq. (1) (see Appendix A in Mazzarella, 2015, for a formal proof). In this setting one can estimate Eq. (1) using both data sources, but the outcomes of interest \(Y\) are observed only in the survey data, so Eq. (2) can be computed only using AVQ data. The next sections present the results using the standard IV and TSIV estimators and then compare the empirical evidence from the two. Specifically, we study the precision of the estimates obtained from the survey and administrative data.
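As an illustration of this setup, the ratio \(\hat\beta_2=\hat\gamma_2/\hat\alpha_2\) can be computed from two OLS regressions. The sketch below does this on simulated data; the data-generating process, the parameter values, and all variable names are illustrative assumptions of mine, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohorts: the year in which each individual turned 52
year = rng.integers(1993, 2001, size=5_000)
x = (year - 1997).astype(float)      # running variable, centered at the 1997 kink
z = (x >= 0).astype(float)           # post-reform indicator Z = 1{X >= 1997}

# Retirement with a change in slope (kink) at 1997, and an outcome with true effect 0.4
d = (rng.random(5_000) < 0.6 - 0.05 * x * z).astype(float)   # first stage: alpha_2 = -0.05
y = 0.2 + 0.01 * x + 0.4 * d + rng.normal(0, 1, 5_000)

def ols(target, X):
    """OLS coefficients of target on the columns of X (intercept already included)."""
    return np.linalg.lstsq(X, target, rcond=None)[0]

X = np.column_stack([np.ones_like(x), x, x * z])
alpha = ols(d, X)                    # first stage, Eq. (1)
gamma = ols(y, X)                    # intention to treat
beta2 = gamma[2] / alpha[2]          # kink ratio estimator of the retirement effect
print(f"alpha_2 = {alpha[2]:.3f}, gamma_2 = {gamma[2]:.3f}, beta_2 = {beta2:.3f}")
```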
5 Results Using Survey Data: IV Estimates
This section discusses the main empirical results obtained using survey data. The first-stage coefficient (reported in the bottom row of Table 3) is equal to −0.0485 and is statistically significant at any level. This is consistent with the hypothesis that the reforms have progressively reduced the retirement probability. The F-statistic is equal to 18.60, which is larger than the threshold value of 10, so one can reject the hypothesis of weakness of the instrument.
Table 3  Effect of retirement on each outcome (satisfaction with the economic situation, with health, with family relationships, with relationships with friends, with leisure; meeting friends at least once a week), together with the first-stage coefficient and the weak-instrument F test, estimated by IV (AVQ) and by TSIV (AVQ + WHIP). Standard errors in parentheses using bootstrap. ∗p < 0.10; ∗∗p < 0.05; ∗∗∗p < 0.01
The discussion now turns to the second-stage results. Table 3 shows the main findings; each row refers to a different outcome. The results in the first two rows show that the retirement decision is positively associated with economic and health satisfaction, even though statistical significance is not reached at any conventional level. On the one hand, one can observe a decrease in satisfaction with family relationships, but here too the estimate is not significant (third row). On the other hand, there is a positive relationship between the retirement decision and satisfaction with relationships with friends, but again this is not significant.
That the decision to retire is generally associated with increased quality of relationships with friends can be related to better use of time—leisure versus work. In fact, the fifth row shows a positive relationship between retirement and satisfaction with leisure, and the sixth row reveals that retirement is associated with a higher probability of meeting friends at least once a week. Both coefficients are significant at 10%.
6 The Two-Sample Instrumental Variables Estimator
This section explains how the two-sample instrumental variables (TSIV) estimator can be implemented, since it is used to improve the precision of the estimates presented in Sect. 5.
The TSIV estimator was first proposed by Angrist and Krueger (1992) and, more recently, improved by Inoue and Solon (2010). It allows estimation of different moments from diverse data sources which are representative of the same population but which cannot be directly merged due to the lack of a unique cross-database identifier. The idea behind the TSIV estimator is to estimate the first-stage regression using one sample, then use the coefficients estimated from this sample to compute the fitted values of the endogenous variables in the second sample. Finally, it exploits the fitted values to estimate the structural equation in the second sample.
Here this is briefly discussed in formal terms. \(Y^{(s)}\) is defined as the \((n^{(s)} \times 1)\) outcome, \(\mathcal{X}^{(s)}\) as the \((n^{(s)} \times (p + k))\) matrix which includes the set of the endogenous (\(p\)) and exogenous (\(k\)) variables, and lastly \(\mathcal{Z}^{(s)}\) as the \((n^{(s)} \times (q + k))\) (with \(q \geq p\)) matrix which comprises the set of additional instruments (\(q\)) and the exogenous variables, where \(s = 1, 2\) denotes whether they belong to the first or to the second sample. The first-stage equations estimated with the second sample are, in matrix form:
$$\displaystyle \begin{aligned}\mathcal{X}^{(2)}=\boldsymbol{\alpha}\mathcal{Z}^{(2)} + \boldsymbol{\upsilon},\end{aligned}$$
where \(\boldsymbol{\upsilon}\) is an \((n^{(2)} \times (p + k))\) matrix with the last \(k\) columns identically equal to zero. The previous equations could be estimated using standard ordinary least squares (OLS) to recover the value \(\hat{\boldsymbol{\alpha}}\), which serves to obtain the fitted values of the endogenous variables in the first sample as:
$$\displaystyle \begin{aligned}\hat{\mathcal{X}}^{(1)}=\hat{\boldsymbol{\alpha}}\mathcal{Z}^{(1)}.\end{aligned}$$
Finally, the structural equation could be estimated with the regression:
$$\displaystyle \begin{aligned}Y^{(1)}=\beta\hat{\mathcal{X}}^{(1)}+\epsilon.\end{aligned}$$
The previous equations show how it is necessary to observe \((Y;\mathcal {Z})\) in the first sample and \((\mathcal {X};\mathcal {Z})\) in the second sample, so \(\mathcal {Z}\) has to be observed in both samples.
The TSIV estimator was originally proposed to allow the estimation of an IV regression when no single sample contains all the required information. In contrast, this study sheds some light on how the TSIV estimator can be used to improve the efficiency of the IV coefficient by estimating the first-stage regression with administrative data, even though the investigator can obtain the same information from survey data.
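To fix ideas, here is a minimal sketch of the three steps on simulated data; the sample sizes, the data-generating process, and all names are illustrative assumptions of mine (in the chapter, the first stage comes from WHIP and the structural equation from AVQ, and standard errors are obtained by bootstrap).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n):
    """One sample following the kink design, with a true retirement effect of 0.4."""
    x = (rng.integers(1993, 2001, size=n) - 1997).astype(float)
    z = (x >= 0).astype(float)
    d = (rng.random(n) < 0.6 - 0.05 * x * z).astype(float)
    y = 0.2 + 0.01 * x + 0.4 * d + rng.normal(0, 1, n)
    return x, z, d, y

def design(x, z):
    return np.column_stack([np.ones_like(x), x, x * z])

# Sample 1: "survey" (small, observes the outcome Y); Sample 2: "administrative" (large, observes D)
x1, z1, _, y1 = simulate(3_000)
x2, z2, d2, _ = simulate(90_000)

# Step 1: first-stage coefficients estimated on the administrative sample
alpha_hat = np.linalg.lstsq(design(x2, z2), d2, rcond=None)[0]

# Step 2: fitted retirement probabilities in the survey sample
d1_hat = design(x1, z1) @ alpha_hat

# Step 3: structural equation in the survey sample, using the fitted values
X1 = np.column_stack([np.ones_like(x1), x1, d1_hat])
beta_hat = np.linalg.lstsq(X1, y1, rcond=None)[0]
print(f"TSIV estimate of the retirement effect: {beta_hat[2]:.3f}")
```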
7 Results Combining Administrative and Survey Data: TSIV Estimates
This section presents results obtained by combining the survey data and the administrative data, in comparison with the standard IV results. The first-stage equation is estimated using WHIP, and the coefficients obtained with WHIP are then exploited to predict the fitted values of the endogenous retirement probability in AVQ. Finally the structural equation is estimated using AVQ. Standard errors are computed using the bootstrap method.
In general the estimates in Table 3 show how the TSIV estimator works with respect to the standard IV strategy. The first-stage coefficient is equal to −0.042 and it is highly statistically significant, with an associated F-statistic of 369.13. This F-statistic is roughly 20 times larger than the one obtained with survey data. The improvement in the precision of the first-stage estimates is also shown in Fig. 1, which compares the fitted values of the two samples, and their confidence intervals. All the sizes of the coefficients are almost unchanged, and they fall within the estimated confidence intervals of those calculated using survey data. The effects of retirement on satisfaction with the economic situation, health, and relationships with family and friends are still not sizeable, while the effects on satisfaction with leisure and on the probability of meeting friends at least once a week increase in significance from 10% to 5%, due to the increase in first-stage precision.
Fig. 1  First-stage comparison. (a) First stage with survey data. (b) First stage with administrative data
8 Conclusions
This study analyzes the retirement effect using as an exogenous variation the pension reforms that took place in Italy in the mid-1990s. It explains how to integrate survey and administrative data that are not directly linkable, estimating different moments from different data sources to obtain the two-sample instrumental variables estimator. In the empirical analysis all the required information is available in the survey data, but administrative data guaranteed a considerable improvement in the precision of the first-stage regression. The results from survey data are compared with those obtained by integrating the two data sources. The study shows that men increase their life satisfaction when they retire, providing further evidence that some men were adjusting their retirement decision, and that pension regulations prevented some men from locating precisely at the kink.
These results also have important implications. Administrative data have the advantage of giving detailed and precise information on large sample characteristics—in this case, retired men—over repeated periods of time. This chapter provides relevant evidence that the estimates' precision strongly depends on big data availability. This implies that policy-makers and politicians in general should foster access to administrative data to make the policy evaluation more systematic and estimates more accurate.
Appendix: Asymptotic Variance Comparison Using Delta Method
This appendix describes the conditions under which the estimator based on TSIV is more efficient than the simple IV estimator.
The approach is intentionally simplified and is based on the delta method. Furthermore, it is assumed that the two samples are representative of the same population, and so the estimators are both unbiased for the parameter of interest.
The IV estimator \(\beta_{\mathrm{IV}}\) could be defined as:
$$\displaystyle \begin{aligned}\beta_{\mathrm{IV}}=\frac{\gamma_S}{\alpha_S},\end{aligned}$$
where \(\gamma_S\) is the coefficient of the intention-to-treat regression and \(\alpha_S\) the coefficient of the first-stage regression, both computed using survey data (represented by the subscript \(S\)). They are asymptotically distributed as:
$$\displaystyle \begin{aligned} \begin{pmatrix} \hat\gamma_S \\ \hat\alpha_S \end{pmatrix}\overset{.}{\sim} \mathcal{N} \left(\begin{pmatrix} \gamma \\ \alpha \end{pmatrix}; \begin{pmatrix} \frac{\sigma^2_{\gamma}}{n_S} & \\ \frac{\sigma_{\gamma,\alpha}}{n_S} & \frac{\sigma^2_{\alpha}}{n_S} \end{pmatrix} \right).\end{aligned}$$
So the asymptotic variance of \(\hat \beta _{\mathrm {IV}}\) is equal to:
$$\displaystyle \begin{aligned} \mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{IV}}\right]=\mathbb{A}\mathbb{V}\text{ar}\left[\frac{\hat\gamma_S}{\hat\alpha_S}\right]=\frac{\sigma^2_\alpha/n_S}{\alpha^2}-\frac{\gamma^2 \sigma^2_\gamma/n_S}{\alpha^4} -\frac{\gamma \sigma_{\gamma,\alpha}/n_S}{\alpha^3} \end{aligned} $$
Similarly, \(\beta_{\mathrm{TSIV}}\) could be defined as \(\gamma_S/\alpha_A\) (where the subscript \(A\) denotes the fact that it is computed with administrative data), and:
$$\displaystyle \begin{aligned} \begin{pmatrix} \hat\gamma_S \\ \hat\alpha_A \end{pmatrix}\overset{.}{\sim} \mathcal{N} \left(\begin{pmatrix} \gamma \\ \alpha \end{pmatrix}; \begin{pmatrix} \frac{\sigma^2_{\gamma}}{n_S} & \\ 0 & \frac{\sigma^2_{\alpha}}{n_A} \end{pmatrix} \right),\end{aligned}$$
where the correlation between the two estimates is equal to 0 because they come from different samples.
Using similar arguments one can establish that the asymptotic variance of \(\hat \beta _{\mathrm {TSIV}}\) is equal to:
$$\displaystyle \begin{aligned} \mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{TSIV}}\right]=\mathbb{A}\mathbb{V}\text{ar}\left[\frac{\hat\gamma_S}{\hat\alpha_A}\right]=\frac{\sigma^2_\alpha/n_S}{\alpha^2}-\frac{\gamma^2 \sigma^2_{\gamma}/n_S}{\alpha^4}. \end{aligned} $$
From Eqs. (3) and (4) one can obtain that:
$$\displaystyle \begin{aligned} \mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{TSIV}}\right]<\mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{IV}}\right] \leftrightarrow \frac{\sigma^2_\alpha(n_A-n_S)}{\alpha^2 n_A}>2\frac{\gamma \sigma_{\gamma,\alpha}}{\alpha^3}. \end{aligned} $$
From Eq. (5) one can obtain the following conclusions:
If the policy has no effect (i.e., \(\gamma = 0\)), the TSIV estimator is even more efficient than the IV estimator (provided, obviously, that the sample size of the administrative data is bigger than that of the survey data).
Even if \(n_A \to \infty\) it does not imply that \(\mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{TSIV}}\right]<\mathbb{A}\mathbb{V}\text{ar}\left[\hat\beta_{\mathrm{IV}}\right]\); as a matter of fact, Eq. (5) reduces to:
$$\displaystyle \begin{aligned}\frac{\sigma^2_\alpha}{\alpha^2}>2\frac{\gamma \sigma_{\gamma,\alpha}}{\alpha^3}, \end{aligned}$$
so the comparison still depends on quantities that could be either positive or negative (such as \(\gamma\), \(\alpha\), \(\sigma_{\gamma,\alpha}\)).
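One way to get a feeling for this comparison is a small Monte Carlo check. The sketch below (entirely my own, with arbitrary parameter values) repeatedly draws a small 'survey' sample and a large 'administrative' sample from the same kink design and compares the empirical dispersion of the IV ratio \(\hat\gamma_S/\hat\alpha_S\) with that of the TSIV ratio \(\hat\gamma_S/\hat\alpha_A\).

```python
import numpy as np

rng = np.random.default_rng(2)
n_s, n_a, reps = 2_000, 40_000, 300   # survey size, admin size, Monte Carlo replications
true_beta = 0.4

def draw(n):
    """One sample from a kink design like the one in the main text."""
    x = rng.integers(-4, 4, size=n).astype(float)
    z = (x >= 0).astype(float)
    d = (rng.random(n) < 0.6 - 0.05 * x * z).astype(float)
    y = 0.2 + 0.01 * x + true_beta * d + rng.normal(0, 1, n)
    return np.column_stack([np.ones(n), x, x * z]), d, y

iv, tsiv = [], []
for _ in range(reps):
    Xs, ds, ys = draw(n_s)            # survey sample
    Xa, da, _ = draw(n_a)             # administrative sample
    gamma_s = np.linalg.lstsq(Xs, ys, rcond=None)[0][2]
    alpha_s = np.linalg.lstsq(Xs, ds, rcond=None)[0][2]
    alpha_a = np.linalg.lstsq(Xa, da, rcond=None)[0][2]
    iv.append(gamma_s / alpha_s)      # IV: both moments from the survey
    tsiv.append(gamma_s / alpha_a)    # TSIV: first stage from the admin data

print(f"empirical sd of the IV estimator:   {np.std(iv):.4f}")
print(f"empirical sd of the TSIV estimator: {np.std(tsiv):.4f}")
```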
References
Angrist JD, Krueger AB (1992) The effect of age at school entry on educational attainment: an application of instrumental variables with moments from two samples. J Am Stat Assoc 87(418):328–336
Atchley RC (1976) The sociology of retirement. Halsted Press, New York
Bertoni M, Brunello G (2017) Pappa ante portas: the effect of the husband's retirement on the wife's mental health in Japan. Soc Sci Med 175(2017):135–142
Bonsang E, Klein TJ (2012) Retirement and subjective well-being. J Econ Behav Organ 83(3):311–329
Börsch-Supan A, Jürges H (2006) Early retirement, social security and well-being in Germany. NBER Technical Report No. 12303. National Bureau of Economic Research, Cambridge
Card D, Lee DS, Pei Z, Weber A (2015) Inference on causal effects in a generalized regression kink design. Econometrica 83(6):2453–2483
Charles KK (2004) Is retirement depressing?: Labor force inactivity and psychological well-being in later life. Res Labor Econ 23:269–299
Coe NB, Lindeboom M (2008) Does retirement kill you? Evidence from early retirement windows. IZA Discussion Papers No. 3817. Institute for the Study of Labor (IZA), Bonn
Costa DL (1998) The evolution of retirement. In: Costa DL (ed) The evolution of retirement: an American economic history, 1880–1990. University of Chicago Press, Chicago, pp 6–31
Dong Y (2016) Jump or kink? Regression probability jump and kink design for treatment effect evaluation. Technical report, Working paper, University of California, Irvine
Fonseca R, Kapteyn A, Lee J, Zamarro G (2015) Does retirement make you happy? A simultaneous equations approach. NBER Technical report No. 13641. National Bureau of Economic Research, Cambridge
Galasso V, Profeta P (2004) Lessons for an ageing society: the political sustainability of social security systems. Econ Policy 19(38):64–115
Gall TL, Evans DR, Howard J (1997) The retirement adjustment process: changes in the well-being of male retirees across time. J Gerontol B Psychol Sci Soc Sci 52(3):P110–P117
Hahn J, Todd P, Van der Klaauw W (2001) Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69(1):201–209
Heller-Sahlgren G (2017) Retirement blues. J Health Econ 54:66–78
Hershey DA, Henkens K (2013) Impact of different types of retirement transitions on perceived satisfaction with life. Gerontologist 54(2):232–244
Inoue A, Solon G (2010) Two-sample instrumental variables estimators. Rev Econ Stat 92(3):557–561
Lee DS, Lemieux T (2010) Regression discontinuity designs in economics. J Econ Lit 48:281–355
Mazzarella G (2015) Combining jump and kink ratio estimators in regression discontinuity designs, with an application to the causal effect of retirement on well-being. PhD thesis, University of Padova
Nielsen HS, Sørensen T, Taber C (2010) Estimating the effect of student aid on college enrollment: evidence from a government grant policy reform. Am Econ J Econ Policy 2(2):185
Simonsen M, Skipper L, Skipper N (2015) Price sensitivity of demand for prescription drugs: exploiting a regression kink design. J Appl Econ 2(32):320–337
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
1. European Commission, Joint Research Centre, Directorate I – Competences, Competence Centre on Microeconomic Evaluation (CC-ME), Ispra, Italy
Deiana C., Mazzarella G. (2019) Does the Road to Happiness Depend on the Retirement Decision? Evidence from Italy. In: Crato N., Paruolo P. (eds) Data-Driven Policy Impact Evaluation. Springer, Cham. https://doi.org/10.1007/978-3-319-78461-8_15
First Online 03 October 2018
DOI https://doi.org/10.1007/978-3-319-78461-8_15
Publisher Name Springer, Cham
Tagged: diagonalization of a matrix
Solve the Linear Dynamical System $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =A\mathbf{x}$ by Diagonalization
(a) Find all solutions of the linear dynamical system
\[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =\begin{bmatrix}
1 & 0\\
0& 3
\end{bmatrix}\mathbf{x},\] where $\mathbf{x}(t)=\mathbf{x}=\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}$ is a function of the variable $t$.
(b) Solve the linear dynamical system
\[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}=\begin{bmatrix}
2 & -1\\
-1& 2
\end{bmatrix}\mathbf{x}\] with the initial value $\mathbf{x}(0)=\begin{bmatrix}
1 \\
\end{bmatrix}$.
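One way to check the computation is the diagonalization approach in SymPy, sketched below (this is just a verification aid, not the intended hand-worked solution): for the matrix of part (b) write $A=PDP^{-1}$, so that $\mathbf{x}(t)=Pe^{Dt}P^{-1}\mathbf{x}(0)$; the initial value is kept symbolic here.

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')      # time and the (symbolic) components of x(0)

A = sp.Matrix([[2, -1],
               [-1, 2]])

P, D = A.diagonalize()                  # A = P D P^{-1}; D holds the eigenvalues 1 and 3
expDt = sp.diag(sp.exp(D[0, 0] * t), sp.exp(D[1, 1] * t))

x0 = sp.Matrix([c1, c2])                # generic initial condition x(0)
x_t = sp.simplify(P * expDt * P.inv() * x0)
sp.pprint(x_t)                          # general solution x(t) = P e^{Dt} P^{-1} x(0)
```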
Diagonalize a 2 by 2 Symmetric Matrix
Diagonalize the $2\times 2$ matrix $A=\begin{bmatrix}
\end{bmatrix}$ by finding a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
Diagonalize the $2\times 2$ Hermitian Matrix by a Unitary Matrix
Consider the Hermitian matrix
\[A=\begin{bmatrix}
1 & i\\
-i& 1
\end{bmatrix}.\]
(a) Find the eigenvalues of $A$.
(b) For each eigenvalue of $A$, find the eigenvectors.
(c) Diagonalize the Hermitian matrix $A$ by a unitary matrix. Namely, find a diagonal matrix $D$ and a unitary matrix $U$ such that $U^{-1}AU=D$.
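A quick numerical check with NumPy (a sketch, not a replacement for the hand computation the problem asks for): numpy.linalg.eigh is specialized for Hermitian matrices and returns real eigenvalues together with a unitary matrix of eigenvectors.

```python
import numpy as np

A = np.array([[1, 1j],
              [-1j, 1]])

eigvals, U = np.linalg.eigh(A)   # real eigenvalues; the columns of U are orthonormal eigenvectors
D = np.diag(eigvals)

print("eigenvalues:", eigvals)                                   # expected: 0 and 2
print("U^{-1} A U == D ?", np.allclose(U.conj().T @ A @ U, D))   # U^{-1} = U^H since U is unitary
print("U unitary ?", np.allclose(U.conj().T @ U, np.eye(2)))
```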
A Diagonalizable Matrix which is Not Diagonalized by a Real Nonsingular Matrix
Prove that the matrix
\end{bmatrix}\] is diagonalizable.
Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix.
That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
Diagonalize the Complex Symmetric 3 by 3 Matrix with $\sin x$ and $\cos x$
Consider the complex matrix
\[A=\begin{bmatrix}
\sqrt{2}\cos x & i \sin x & 0 \\
i \sin x &0 &-i \sin x \\
0 & -i \sin x & -\sqrt{2} \cos x
\end{bmatrix},\] where $x$ is a real number between $0$ and $2\pi$.
Determine for which values of $x$ the matrix $A$ is diagonalizable.
When $A$ is diagonalizable, find a diagonal matrix $D$ so that $P^{-1}AP=D$ for some nonsingular matrix $P$.
Top 10 Popular Math Problems in 2016-2017
It's been a year since I started this math blog!!
More than 500 problems were posted during a year (July 19th 2016-July 19th 2017).
I made a list of the 10 math problems on this blog that have the most views.
Can you solve all of them?
The level of difficulty of each of the top 10 problems is indicated as follows.
【★★★】 Difficult (Final Exam Level)
【★★☆】 Standard(Midterm Exam Level)
【★☆☆】 Easy (Homework Level)
A Positive Definite Matrix Has a Unique Positive Definite Square Root
Prove that a positive definite matrix has a unique positive definite square root.
Find All the Square Roots of a Given 2 by 2 Matrix
Let $A$ be a square matrix. A matrix $B$ satisfying $B^2=A$ is called a square root of $A$.
Find all the square roots of the matrix
Diagonalize a 2 by 2 Matrix $A$ and Calculate the Power $A^{100}$
(a) Find eigenvalues of the matrix $A$.
(b) Find eigenvectors for each eigenvalue of $A$.
(c) Diagonalize the matrix $A$. That is, find an invertible matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
(d) Diagonalize the matrix $A^3-5A^2+3A+I$, where $I$ is the $2\times 2$ identity matrix.
(e) Calculate $A^{100}$. (You do not have to compute $5^{100}$.)
(f) Calculate
\[(A^3-5A^2+3A+I)^{100}.\] Let $w=2^{100}$. Express the solution in terms of $w$.
Diagonalize the 3 by 3 Matrix if it is Diagonalizable
Determine whether the matrix
\[A=\begin{bmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 2
\end{bmatrix}\] is diagonalizable.
If it is diagonalizable, then find the invertible matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
If Two Matrices Have the Same Eigenvalues with Linearly Independent Eigenvectors, then They Are Equal
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that $A$ and $B$ have the same eigenvalues $\lambda_1, \dots, \lambda_n$ with the same corresponding eigenvectors $\mathbf{x}_1, \dots, \mathbf{x}_n$.
Prove that if the eigenvectors $\mathbf{x}_1, \dots, \mathbf{x}_n$ are linearly independent, then $A=B$.
Determinant of Matrix whose Diagonal Entries are 6 and 2 Elsewhere
What is the Probability that All Coins Land Heads When Four Coins are Tossed If…?
Find the Rank of a Matrix with a Parameter
Quiz 12. Find Eigenvalues and their Algebraic and Geometric Multiplicities
Vector Space of Polynomials and a Basis of Its Subspace