Brandts, Jan ; Korotov, Sergey ; Křížek, Michal
Simplicial finite elements in higher dimensions. (English). Applications of Mathematics, vol. 52 (2007), issue 3, pp. 251-265
MSC: 51M20, 65N12, 65N30 | MR 2316155 | Zbl 1164.65493 | DOI: 10.1007/s10492-007-0013-6
$n$-simplex; finite element method; superconvergence; strengthened Cauchy-Schwarz inequality; discrete maximum principle
Over the past fifty years, finite element methods for the approximation of solutions of partial differential equations (PDEs) have become a powerful and reliable tool. Theoretically, these methods are not restricted to PDEs formulated on physical domains up to dimension three. Although at present there does not seem to be a very high practical demand for finite element methods that use higher dimensional simplicial partitions, there are some advantages in studying the methods independent of the dimension. For instance, it provides additional insights into the structure and essence of proofs of results in one, two and three dimensions. In this survey paper we review some recent progress in this direction.
Do Leonardo da Vinci's drawings, room acoustics and radio astronomy have anything in common?
Andrzej Kulowski (ORCID: orcid.org/0000-0001-7003-9515)
Heritage Science volume 10, Article number: 104 (2022)
After introducing Leonardo da Vinci's (LdV) predecessors in the field of light propagation research, his drawings on the topic of the reflection of light by a spherical mirror are analysed. LdV's discovery is presented, according to which, for an infinitely distant source of rays, a small fragment of the canopy is enough to generate a focus, while the rest of the mirror forms a caustic for which LdV did not indicate an application. An analytical description of the energy concentration in the focus and on the caustic is given, together with its reference to the geometric representation of the acoustic field in rooms. Based on the general principles of wave motion, symmetry is shown in the description of energy relations in acoustics and electromagnetism. It is explained why in the sound field in existing halls, instead of a whole caustic, only its cusp is observed, which is perceived as a point-like sound focus. The size of the mirror aperture, shown graphically by LdV, is determined. It is also shown how the development of receiving techniques increased the mirror aperture compared to LdV's estimate. The implementation of these improvements is presented via the example of the Arecibo and FAST radio telescopes.
Early considerations about the propagation of light lie at the beginning of the research discipline that has developed into today's physics. One of the earliest accounts on optics, i.e. the use of instruments interfering with the course of light, is the story from ancient times about Archimedes setting fire to Roman ships besieging Syracuse with the use of mirrors reflecting sunlight [1]. A later treatise by Ptolemy from the second century AD is another significant work of the antiquity period concerning the study of optics [2]. The scientific considerations he initiated were continued in the Middle Ages in the Islamic world. Leonardo da Vinci (LdV) carried out his works in reference to this tradition [3]. He paid particular attention to the application of the rules of geometric optics in architecture, painting and graphics, including studies in the field of perspective and chiaroscuro.
Leonardo's sketches on optics include studies of a particular form of focusing rays of light, nowadays known as caustics (Fig. 1). In the convention of geometric optics, a caustic is a surface formed by rays tangent to it after reflection from a concave surface or as a result of propagation in an inhomogeneous medium. Under certain circumstances, a cusp may form on the caustic. In the mathematical description of caustics, the cusp corresponds to a singularity, i.e. the parameters of the field of rays at this point tend to infinity. The physical counterpart of the caustic cusp is the focus of the mirror. The focus can also form without a caustic accompanying it, but this only applies to a few specific cases, among them a source in the centre of a spherical reflector, a parabolic reflector with an infinitely distant source on its geometric axis, or an ellipsoidal reflector with a source in one of its foci.
a Drawing of caustic from Leonardo da Vinci's notebook. The note below the picture in Fig. 1a, made in Leonardo's famous reverse script, says that in concave mirrors of equal diameter, the one that has the shallower curve will concentrate the largest number of reflected rays onto a focal point, and "as a consequence, it will kindle a fire with greater rapidity and force" [9] © British Library Board, Source: Arundel MS 263. b Details of Leonardo's drawing, (c, d) 3D views of the caustics created by spherical and cylindrical concave mirrors [10]
Leonardo showed that with an infinitely distant light source, only a small part of the spherical mirror is involved in creating the focus. The remainder of the mirror only forms caustics and is useless for focal formation, leading to important practical conclusions. The idea of focusing light in this way is attributed to Archimedes, who lived many centuries earlier, but the quantitative analysis shown graphically in Fig. 2 is Leonardo's personal contribution to the study of the principle of the operation of a concave mirror.
Illustration of Leonardo's concept, in which a shallower mirror (bottom) concentrates a larger number of rays than the mirror with a deeper bowl of the same diameter (top) [9]. © British Library Board, Source: Arundel MS 263
Against this historical background, the article discusses the formation of foci and caustics in the acoustic field and in the electromagnetic field. Observations of room acoustics indicate that in the audible frequency range, caustics are so blurred by diffuse sound, wave reflections and interference that the caustic is reduced to its singularity. The observed form of the caustic is then a compact area of increased sound pressure with a size dependent on the wavelength, which is the result of a diffractive broadening of the point-like focus known from graphical analysis. This occurs when the wavelength is of the same order as, or slightly shorter than, the dimensions of the objects in the acoustic field, which is typical for rooms.
When the wavelength is much shorter than the objects in the wave field, the blur effect is much smaller and the caustic acts as a clearly identifiable area of energy concentration. Caustics formed in this way are present in many fields of technology and science concerning the propagation of light, ultrasound and electromagnetic waves, e.g. hydroacoustics, aeroacoustics, laser technology, and even radio astronomy [4,5,6].
Based on the inspiration of caustic drawings in LdV's works and his considerations on reflecting light by a concave mirror, the article presents a mathematical description of the effect of focusing rays. Against this background, the phenomena occurring on caustics in acoustic and electromagnetic fields are presented, taking into account their wave nature.
The main goal of this paper is to investigate the extent to which LdV's observations of the formation of caustics and foci are present in modern technology. The article shows how LdV's estimate of a mirror's aperture has expanded as radio wave receiving techniques have developed. The presence of Leonardo's thinking in these activities is demonstrated through the example of the large radio telescopes in Arecibo (Puerto Rico) and Dawodang (China).
Caustics in the legacy of Leonardo da Vinci
The drawings of caustics in Leonardo da Vinci's notes refer to his research in the field of optics in the years 1510–1515 [7, 8]. They contain many sketches of caustics at different stages of their formation; the most complete drawing of a caustic is shown in Fig. 1. Leonardo made his drawing 500 years ago with such competence that it is quoted in this article as a perfectly valid example of applying the principles of geometric optics to the formation of caustics.
Leonardo was interested in the potential utility of concave mirrors as sources of heat, and the purpose of his research was to assess the focusing properties of a spherical mirror. Figure 2 shows the two mirrors differing in the depth of the canopy referred to in his reverse script in Fig. 1a. In his later works, Leonardo also planned to use the effect of focusing sunlight to heat or even boil water [11].
In light of today's level of knowledge, Leonardo's concept is obvious. However, he lived 500 years ago and the accuracy of his explanations must be considered admirable. The further part of the article shows that even in areas as distant from optics as room acoustics and radio astronomy, Leonardo da Vinci's concept can be found.
According to modern technical terminology, the fraction of the total energy incident on the mirror that is available at the receiver is called the mirror aperture. For the purposes of this article, the ratio of this area to the area of a full hemispherical mirror was adopted as the relative measure of aperture. Assuming the propagation and reflection of the rays are lossless, the relative aperture of the lower mirror shown in Fig. 2 is approximately 0.4% (Eqs. 1, 2).
For the opening angle φ = 10° (Fig. 3), the arc length r is
$$r = \frac{\varphi/2}{180}\,\pi R = \frac{\pi R}{36}$$
Aperture of the shallower mirror from Fig. 2. Sm, R: area and radius of the full hemispherical mirror; Sa, r: area and radius of the aperture; φ: opening angle read from Fig. 2, φ = 10°
and the aperture in relation to the surface of the full hemispherical mirror is
$$\frac{S_a}{S_m} = \frac{\pi r^2}{2\pi R^2} = \frac{1}{2}\left(\frac{\pi}{36}\right)^2 = 0.0038 \cong 0.4\%$$
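The arithmetic behind Eqs. (1)–(2) is easy to reproduce. The sketch below (Python, a minimal illustration; the 10° opening angle is the value read off LdV's drawing, and the aperture is treated as a flat disc of radius r, as in the article) computes the relative aperture for an arbitrary opening angle.

```python
import math

def relative_aperture(phi_deg: float, R: float = 1.0) -> float:
    """Relative aperture S_a/S_m of a spherical mirror, Eqs. (1)-(2)."""
    r = (phi_deg / 2.0) / 180.0 * math.pi * R   # arc length of the aperture, Eq. (1)
    S_a = math.pi * r ** 2                      # aperture area (treated as a flat disc)
    S_m = 2.0 * math.pi * R ** 2                # area of the full hemispherical mirror
    return S_a / S_m

print(f"phi = 10 deg -> S_a/S_m = {relative_aperture(10.0):.4f}")  # ~0.0038, i.e. ~0.4 %
```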
Analytic description of energy concentration on the caustic
The LdV sketches present the effect of the energy concentration on a caustic in a graphical form. This section gives a quantitative assessment of this effect in an analytical form, using the original LdV drawing.
Consider the rays coming from an infinitely distant source and falling on a hemispherical mirror as a collimated beam (Fig. 1a). After the reflection, the rays form the caustic described by Eq. (3) [12].
$$\begin{cases} x(\theta) = R\cos^3\theta \\[2pt] y(\theta) = \dfrac{R}{2}\left(2\sin^3\theta - 3\sin\theta\right) \end{cases} \qquad 0 \le \theta \le \pi$$
To calculate the density of rays on a caustic, consider two rings inside the reflector's bowl: dS on its surface and dSc on a caustic (Fig. 4) [12, 13]. Since all rays reflected from dS are tangent to dSc, the relative density of rays on dSc is dS/dSc, where dS and dSc are the surfaces of the rings.
Caustic formed by rays incident on a hemispherical reflector. R: radius of the reflector; dS, dSc: rings on the reflector and on the caustic; x(θ), y(θ): Cartesian coordinates of the caustic (Eq. (3)); dlc: the element of the section of the caustic [13]
The circumference and the width of the ring dS are 2ΠRcos(θ) and Rsin(θ)dθ, so
$$dS = 2\pi R^2 \cos\theta\,\sin\theta\,d\theta$$
Likewise, the circumference and the width of the ring dSc are 2Πx(θ) and dlc, so
$$dS_c = 2\pi\,x(\theta)\,dl_c$$
where dlc is the element of a section of a caustic
$$dl_c = \sqrt{\left(\frac{dx(\theta)}{d\theta}\right)^2 + \left(\frac{dy(\theta)}{d\theta}\right)^2}\,d\theta$$
and the derivatives over θ of x(θ), y(θ) are
$$\begin{cases} \dfrac{dx}{d\theta} = -3R\cos^2\theta\,\sin\theta \\[4pt] \dfrac{dy}{d\theta} = 3R\sin^2\theta\,\cos\theta - \dfrac{3}{2}R\cos\theta \end{cases}$$
An elementary transformation gives
$$\sqrt{\left(\frac{dx(\theta)}{d\theta}\right)^2 + \left(\frac{dy(\theta)}{d\theta}\right)^2} = \frac{3}{2}R\cos\theta$$
Substitution of Eq. (7) to Eq. (6) yields
$$dl_c = \frac{3}{2}R\cos\theta\,d\theta$$
$$dS_c = 3\pi R^2\cos^4\theta\,d\theta$$
Finally, if the rays incident on the mirror are distributed evenly on the y = 0 plane (Fig. 4), the relative density of rays C (θ) over the caustic is
$$C(\theta) = \frac{dS}{dS_c} = \frac{2\sin\theta}{3\left|\cos^3\theta\right|}$$
As θ tends to π/2, C(θ) tends to infinity, which corresponds to the cusp formation on the caustic (Fig. 4). This singularity results from the caustic cross-sectional area dSc tending to zero.
Let us denote the surface density of rays incident on the reflector as Io [W/m2]. The density of rays C(θ) in Eq. (11) multiplied by Io can be interpreted as the surface density of energy over the caustic per unit of time, i.e. the intensity of the rays [W/m2].
Equation (11) shows the surface power density on the surface of the caustic. It is a finite value except for the caustic cusp, where C(θ) goes to infinity. However, when the energy density is considered on a cross-sectional plane of the caustic, a different result is obtained. For example, following the dashed line from point M towards the origin of the XY coordinate system (Fig. 6), the surface power density increases and reaches infinity at the caustic. In this case, the singularity applies to the entire caustic [6]. However, the article adopts the analysis of the surface power density not on a cross section of the caustic but on its surface.
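The caustic of Eq. (3) and the ray density of Eq. (11) can be cross-checked numerically. The sketch below (Python with NumPy; the unit mirror radius and the θ range are arbitrary choices) compares the closed-form C(θ) with the ratio dS/dSc obtained by numerical differentiation of the caustic coordinates.

```python
import numpy as np

R = 1.0                                              # mirror radius (arbitrary units)
theta = np.linspace(0.05, np.pi / 2 - 0.05, 500)     # keep away from the singular ends

# Caustic coordinates, Eq. (3)
x = R * np.cos(theta) ** 3
y = 0.5 * R * (2.0 * np.sin(theta) ** 3 - 3.0 * np.sin(theta))

# Closed-form relative ray density on the caustic, Eq. (11)
C_closed = 2.0 * np.sin(theta) / (3.0 * np.abs(np.cos(theta)) ** 3)

# Numerical check: C = dS/dS_c with dS from Eq. (4) and dS_c from Eq. (5)
dtheta = theta[1] - theta[0]
dl_c = np.sqrt(np.gradient(x, dtheta) ** 2 + np.gradient(y, dtheta) ** 2)  # Eq. (6), per unit dtheta
dS = 2.0 * np.pi * R ** 2 * np.cos(theta) * np.sin(theta)                  # Eq. (4), per unit dtheta
dS_c = 2.0 * np.pi * x * dl_c                                              # Eq. (5)
C_numeric = dS / dS_c

print("max relative error:", np.max(np.abs(C_numeric - C_closed) / C_closed))
```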
When the absorption coefficient α of the reflector is taken into account, where α = 0 and α = 1 relate to a total reflection and total absorption, respectively, the rays relative intensity is
$$I_c(\theta) = I_o\left(1-\alpha\right)C(\theta) = I_o\left(1-\alpha\right)\frac{2\sin\theta}{3\left|\cos^3\theta\right|}\quad[\mathrm{W/m^2}]$$
The total intensity of the rays over the caustic Ic,res(θ) consists of the energy of incident rays Io and the energy of the reflected rays on the caustic.
$$I_{c,res}(\theta) = I_o + I_o\left(1-\alpha\right)\frac{2\sin\theta}{3\left|\cos^3\theta\right|} = I_o\left(1 + \left(1-\alpha\right)\frac{2\sin\theta}{3\left|\cos^3\theta\right|}\right)$$
The intensity level of the rays on the caustic, with Io as the reference intensity, is then Lc,res(θ) (Fig. 5).
$$L_{c,res}(\theta) = 10\log\frac{I_{c,res}(\theta)}{I_o} = 10\log\left(1 + \left(1-\alpha\right)\frac{2\sin\theta}{3\left|\cos^3\theta\right|}\right)\quad[\mathrm{dB}]$$
Rays' intensity level Lc,res(θ) [dB] on the caustic. α absorption coefficient of the reflector [12]
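Equation (14) is straightforward to evaluate; a minimal sketch, using a few example absorption coefficients (the values 0, 0.6 and 0.9 also appear later in Fig. 7), is given below.

```python
import numpy as np

def caustic_level_dB(theta, alpha):
    """Ray intensity level L_c,res(theta) on the caustic, Eq. (14)."""
    gain = 2.0 * np.sin(theta) / (3.0 * np.abs(np.cos(theta)) ** 3)
    return 10.0 * np.log10(1.0 + (1.0 - alpha) * gain)

theta = np.deg2rad([10.0, 30.0, 60.0, 80.0])
for alpha in (0.0, 0.6, 0.9):
    print(f"alpha = {alpha:3.1f}:", np.round(caustic_level_dB(theta, alpha), 1), "dB")
```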
Caustics in the wave field
LdV's concept of caustics and its development, as shown in Fig. 2 and in "Analytic description of energy concentration on the caustic", are based on the geometrical approach. Despite its roots distant in time, this approach is a fully functional model of energy propagation, currently used, for example, in room acoustics, optics and radio communication. This section describes the formation of caustics in a wave field, the manifestation of which is the interference of the incident wave with the wave reflected from the mirror.
Let us consider a plane wave of wavelength λ, much smaller than the diameter of the reflector D, incident on the reflector [12]. At the point in time t = 0, the wave front lies in the plane y = 0 (Fig. 6). Propagating deep into the reflector, the wave interferes with the reflected wave, creating an array of interference fringes filling the mirror's canopy and the space in front of it. The wave nature of the field is also the cause of wave diffraction at the mirror edge [12]. The article concerns the caustic as presented by LdV; therefore, this section is limited to the description of the effect that occurs on the caustic itself, i.e. the interference of the incident and reflected waves.
Directions of the incident and reflected waves KN and LMN overlapping each other on the caustic [12]. The result is the sum of the sound pressures. R: radius of the reflector
According to the law of reflection, the reflected wave is tangent to the caustic. The distances SKN and SLMN travelled by the incident and reflected waves are
$$S_{KN} = \frac{R}{2}\left|2\sin^3\theta - 3\sin\theta\right| = \frac{R}{2}\left(3\sin\theta - 2\sin^3\theta\right),\quad 0 \le \theta \le \pi$$
$$\begin{aligned} S_{LMN} &= R\sin\theta + \sqrt{\left(R\cos\theta - R\cos^3\theta\right)^2 + \left(-R\sin\theta - \frac{R}{2}\left(2\sin^3\theta - 3\sin\theta\right)\right)^2} \\ &= R\sin\theta + \sqrt{\left(\frac{R}{2}\sin\theta\,\sin 2\theta\right)^2 + \left(\frac{R}{2}\sin\theta\,\cos 2\theta\right)^2} \\ &= R\sin\theta + \frac{R}{2}\sin\theta\sqrt{\sin^2 2\theta + \cos^2 2\theta} = \frac{3}{2}R\sin\theta,\quad 0 \le \theta \le \pi \end{aligned}$$
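The closed forms of Eqs. (15)–(16) follow from the geometry of Fig. 6 (reflection point M at (R cosθ, −R sinθ), tangent point N on the caustic). A short numerical verification, under the assumption of exactly that geometry, is sketched below.

```python
import numpy as np

R = 1.0
theta = np.linspace(0.01, np.pi - 0.01, 1000)

# Tangent point N on the caustic, Eq. (3)
xN = R * np.cos(theta) ** 3
yN = 0.5 * R * (2.0 * np.sin(theta) ** 3 - 3.0 * np.sin(theta))

# Reflection point M on the mirror; the incident ray enters at L = (R cos(theta), 0)
xM, yM = R * np.cos(theta), -R * np.sin(theta)

S_KN_geom = np.abs(yN)                                        # straight drop from the y = 0 plane to N
S_LMN_geom = R * np.sin(theta) + np.hypot(xM - xN, yM - yN)   # L -> M plus M -> N

S_KN_eq15 = 0.5 * R * (3.0 * np.sin(theta) - 2.0 * np.sin(theta) ** 3)
S_LMN_eq16 = 1.5 * R * np.sin(theta)

print(np.allclose(S_KN_geom, S_KN_eq15), np.allclose(S_LMN_geom, S_LMN_eq16))  # True True
```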
In spite of the essentially different nature of acoustic and electromagnetic waves, the general principles of wave motion describe the energetic relationships of acoustic and electromagnetic waves using the same equations, differing only in the physical interpretation of the individual components. A substantial difference concerns the superposition of waves, which is scalar in nature for acoustic waves and vectorial for electromagnetic waves. In order to facilitate the comparison of the energetic equations for both types of field, they are presented side by side.
The sound intensity Is and the surface power density of the electromagnetic field Ie, both in [W/m2], are proportional to the squared sound pressure p² [Pa] and the squared amplitude of the electric field E [V/m], respectively
$$I_s = p^2/\left(\rho c_s\right)$$
$$I_e = E^2/\left(\mu_o c_l\right)$$
where ρ: density of the medium, [kg/m3], cs: speed of sound (in the air at atmospheric pressure and a temperature of 15 °C, cs = 331 m/s, ρcs = 415[kg/(m2s)]), μo: vacuum permeability (μo = 4Π10−7, [H/m]), cl: speed of light (cl = 3*108 [m/s]).
So the amplitudes pc(θ) and Ec(θ) of the sound pressure and the electric field, respectively, on the caustic are
$$p_c(\theta) = \sqrt{I_{c,s}(\theta)\,\rho c_s} = \sqrt{I_o\,\rho c_s}\,\sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}$$
$$E_c(\theta) = \sqrt{I_{c,e}(\theta)\,\mu_0 c_l} = \sqrt{I_e\,\mu_0 c_l}\,\sqrt{\frac{2R\sin\theta}{3\left|\cos^3\theta\right|}}$$
where Ic,s(θ): sound intensity on the caustic, [W/m2], Ic,e(θ): surface power density of the electromagnetic field on the caustic, [W/m2], Io: intensity of the incident sound, [W/m2], Ie: surface power density of the incident wave, electromagnetic field, [W/m2]. α: sound absorption coefficient, R: reflection coefficient of the electric component of the electromagnetic wave.
$$\alpha = \frac{I_{abs}}{I_i} = \left(\frac{\bar{p}_{abs}}{\bar{p}_i}\right)^2$$
$$R = \left(\frac{\bar{E}_{refl}}{\bar{E}_i}\right)^2$$
$I_{abs}$, $I_i$: intensity of the absorbed and incident acoustic wave; $\bar{p}_i$, $\bar{p}_{abs}$: sound pressure amplitude of the incident and absorbed acoustic wave, averaged over a spherical angle of 2π steradians. $\bar{E}_i$, $\bar{E}_{refl}$: amplitude of the incident and reflected electric field, averaged over a spherical angle of 2π steradians. For the sake of brevity, the total reflection of the electromagnetic wave, i.e. R = 1, was adopted in the article.
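As a numerical illustration of Eqs. (17)–(22), the sketch below converts a given incident intensity into the pressure and field amplitudes on the caustic; the values of Io, Ie and θ are arbitrary example inputs, not results from the article.

```python
import math

rho_c_s = 415.0                       # specific acoustic impedance of air, rho*c_s [kg/(m^2 s)]
mu0_c_l = 4.0e-7 * math.pi * 3.0e8    # mu_0*c_l, wave impedance of free space, ~377 ohm

def concentration(theta, loss):
    """Factor 2*(1 - loss)*sin(theta)/(3*|cos(theta)|^3) of Eqs. (19)-(20).

    loss = alpha for the acoustic wave, loss = 1 - R for the electromagnetic wave.
    """
    return 2.0 * (1.0 - loss) * math.sin(theta) / (3.0 * abs(math.cos(theta)) ** 3)

theta = math.radians(60.0)
I_o, alpha = 1.0e-8, 0.1     # incident sound intensity [W/m^2] and absorption coefficient
I_e = 1.0e-12                # incident electromagnetic power density [W/m^2]
one_minus_R = 0.0            # total reflection, i.e. R = 1 in Eq. (20)

p_c = math.sqrt(I_o * rho_c_s) * math.sqrt(concentration(theta, alpha))        # Eq. (19)
E_c = math.sqrt(I_e * mu0_c_l) * math.sqrt(concentration(theta, one_minus_R))  # Eq. (20)
print(f"p_c = {p_c:.3e} Pa,   E_c = {E_c:.3e} V/m")
```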
At the point in time t, the sound pressure p(t) and the amplitude of the electric field E(t) of the incident acoustic and electromagnetic wave, respectively, in the plane y = 0 of the reflector are
$$p(t) = \sqrt{I_o\,\rho c_s}\,\sin\omega t$$
$$E(t) = \sqrt{I_e\,\mu_0 c_l}\,\sin\omega t$$
where ω = 2πf, f: frequency [Hz].
Assuming lossless wave propagation, after the acoustic wave has travelled the distance SKN, the incident sound pressure pi(t) is
$$p_i(t) = \sqrt{I_o\,\rho c_s}\,\sin\omega\left(t + \Delta t_1\right)$$
$$\Delta t_1 = S_{KN}/c_s$$
and after travelling the distance SLMN, the sound pressure prefl(t) of the reflected wave on the caustic according to Eq. (19) is
$$p_{refl}(t,\theta) = \sqrt{I_o\,\rho c_s}\,\sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\sin\omega\left(t + \Delta t_2\right)$$
$$\Delta t_2 = S_{LMN}/c_s$$
The sound pressure pres(t,θ) resulting from the scalar summation of the sound waves is
$$p_{res}(t,\theta) = p_i(t) + p_{refl}(t,\theta) = \sqrt{I_o\,\rho c_s}\left(\sin\omega\left(t + \Delta t_1\right) + \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\sin\omega\left(t + \Delta t_2\right)\right)$$
The rest of this section concerns the phenomena occurring in the acoustic field.
The path difference between the direct and the reflected acoustic wave fronts makes pres(t,θ) fluctuate on the caustic over time. The fluctuations are described by Eq. (30), which is obtained by substituting Eqs. (15), (16) into (26), (28) and then into (29).
$$p_{res}(t,\theta) = \sqrt{I_o\,\rho c_s}\left[\sin\omega\left(t + \frac{R}{2c_s}\left(3\sin\theta - 2\sin^3\theta\right)\right) + \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\sin\omega\left(t + \frac{3R}{2c_s}\sin\theta\right)\right]$$
The fluctuations are in the form of amplitude modulation, the maximum range of which results from Eq. (31).
$$\frac{d}{dt}p_{res}(t,\theta) = 0$$
For a given θ, Eq. (30) describes the fluctuations at a given point of the caustic. Solving Eq. (31) i.e. finding the function t(θ), determines the amplitude of the fluctuations on the entire caustic. The solution of Eq. (31) is given in Eq. (32). For details see the Appendix, Eq. (49).
$$t = \frac{1}{\omega}\operatorname{arctg}\left(q\right) - \Delta t_1$$
$$q = \frac{\sqrt{\dfrac{3\left|\cos^3\theta\right|}{2\left(1-\alpha\right)\sin\theta}} + \cos\left(\omega\dfrac{R\sin^3\theta}{c_s}\right)}{\sin\left(\omega\dfrac{R\sin^3\theta}{c_s}\right)}$$
Substituting t into Eq. (30) yields the maximum range of amplitude modulation of sound pressure pres,Max(θ) [Pa] over the whole caustic.
$$p_{res,Max}(\theta) = \sqrt{I_o\,\rho c_s}\left[\sin\left(\operatorname{arctg}\left(q\right)\right) + \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\sin\left(\operatorname{arctg}\left(q\right) + \omega\frac{R\sin^3\theta}{c_s}\right)\right]$$
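Equations (30)–(34) lend themselves to direct evaluation. The sketch below (Python with NumPy) computes the modulation envelope p_res,Max(θ) for parameters matching Fig. 7a (f = 1000 Hz, R = 1 m, α = 0.9) and the incident intensity assumed later in Eqs. (35)–(36); the absolute value is taken because the arctangent branch may return the minimum of the oscillation, whose magnitude equals the envelope.

```python
import numpy as np

R, c_s, rho_c_s = 1.0, 331.0, 415.0     # reflector radius [m], speed of sound [m/s], rho*c_s
f, alpha, I_o = 1000.0, 0.9, 1.0e-8     # frequency [Hz], absorption coefficient, incident intensity
omega = 2.0 * np.pi * f
A = np.sqrt(I_o * rho_c_s)              # amplitude of the incident wave, ~0.002 Pa

theta = np.linspace(0.05, np.pi / 2 - 0.02, 400)
b = np.sqrt(2.0 * (1.0 - alpha) * np.sin(theta) / (3.0 * np.abs(np.cos(theta)) ** 3))
phi = omega * R * np.sin(theta) ** 3 / c_s       # phase lag between direct and reflected waves

q = (1.0 / b + np.cos(phi)) / np.sin(phi)                  # Eq. (33)
wt = np.arctan(q)                                          # omega*(t + dt_1), Eq. (32)
p_max = np.abs(A * (np.sin(wt) + b * np.sin(wt + phi)))    # Eq. (34), envelope magnitude

i = np.argmin(np.abs(theta - np.pi / 4))
print(f"envelope at theta = 45 deg: {p_max[i]:.4f} Pa (incident amplitude {A:.4f} Pa)")
```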
The mechanism of caustic formation and the circumstances of the focal formation, graphically shown by LdV in Fig. 1a, are shown in this section on real objects. The presented examples concern the formation of caustics indoors and in outdoor acoustic installations with a demonstration function. How the development of the receiving technique related to the detection of radio waves extended the mirror aperture estimated by LdV is also shown.
Caustics in sound field
Figure 7 a, b presents the graph of Eqs. (30) and (34) for sound waves with a frequency of f = 1000 Hz and f = 2000 Hz, reflected by a hemispherical reflector with the diameter of D = 2 m. In both cases, the wavelength λ is much smaller than the diameter of the reflector D (λ/D = 0.17 and λ/D = 0.085, respectively). The directions of the waves shown in Fig. 6 therefore meet the principles of geometric optics, and the diffraction of the wave at the reflector's edge can be neglected.
a, b Resultant sound pressure of the plane waves with the frequencies f = 1000 Hz and f = 2000 Hz, respectively, incident on a hemispherical reflector with the radius R = 1 m and interfering with the wave that forms the caustic. $\bar{p}_i$: amplitude of the incident wave, α: sound absorption coefficient of the reflector. Thin black lines: sound pressure of the resultant wave pres(t,θ) at the points in time t = 0, T/8, 7T/8, where T = 1/f, at α = 0.9. Green, red and blue lines: amplitude of fluctuations pres,Max(θ) at α = 0.9, α = 0.6 and α = 0, respectively. Due to symmetry, the range 0 ≤ θ ≤ π/2 is shown. c, d Resultant sound pressure level SPLres(θ) of the interfering waves described above. SPLi: level of the incident wave
Let us assume that the intensity of the incident wave Io is 10⁻⁸ [W/m2], which corresponds to sound pressure with an amplitude of 0.002 [Pa] and a sound pressure level SPLi = 40 dB re 2 × 10⁻⁵ [Pa] (Eqs. 35, 36).
$$\bar{p}_i = \sqrt{I_o\,\rho c_s} = \sqrt{10^{-8} \times 415} \cong 0.002\quad[\mathrm{Pa}]$$
$$SPL_i = 20\log\left(0.002/(2 \times 10^{-5})\right) = 40\quad[\mathrm{dB}]$$
The sound pressure pres (Eq. 30) and the corresponding pressure level SPLres (Eq. 37) fluctuate around these values. The amplitude of the fluctuations increases with the increasing effect of wave concentration on the caustic (Fig. 7 a, b).
$$SPL_{res}(\theta) = 20\log\left(p_{res,Max}(\theta)/(2 \times 10^{-5})\right)$$
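A minimal sketch of the level arithmetic in Eqs. (35)–(37) follows; the 0.36 Pa envelope value is an illustrative number of the order suggested by Fig. 7c, d near the cusp, not a computed result of the article.

```python
import math

p_ref = 2.0e-5                     # reference sound pressure [Pa]
I_o = 1.0e-8                       # incident intensity [W/m^2]

p_i = math.sqrt(I_o * 415.0)                      # Eq. (35): ~0.002 Pa
SPL_i = 20.0 * math.log10(p_i / p_ref)            # Eq. (36): ~40 dB

p_res_max = 0.36                                  # example envelope value near the cusp [Pa]
SPL_res = 20.0 * math.log10(p_res_max / p_ref)    # Eq. (37)
print(f"SPL_i = {SPL_i:.1f} dB,  SPL_res = {SPL_res:.1f} dB")   # ~40 dB and ~85 dB
```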
At the cusp of the caustic, the concentrated energy of the reflected waves significantly exceeds the energy of the incident wave, which reduces the fluctuation effect (Fig. 7 c, d).
The result of interference outside the focus is the arrangement of nodes and antinodes formed by the superimposition of incident and reflected waves on the caustic. In real conditions, its regularity shown in Fig. 7 is disturbed by the broadband nature of the sound and by the reverberant field of the room. This is combined with the diffraction of the incident low frequency wave at the edge of the canopy. As a result, the presence of caustics in the room is usually difficult to detect by hearing, and the audible effect of sound focussed by acoustic mirrors is reduced to a compact area of increased sound pressure.
Figure 7 shows how much sound amplification can be expected at the cusp of the caustic. When the level of the sound incident on the mirror is about 40 dB, which corresponds to e.g. a quiet conversation (Fig. 7 c, d), the sound level felt in the focus is so high that this phenomenon can be used for acoustic demonstrations or for the eavesdropping of conversations practised in historical times. Figure 7 shows that in the focus, the surface sound power density may increase by approx. 45 dB or more, produced by only a small part of the canopy. It is a computational illustration of LdV's concept, as shown in Fig. 2. The opening angle of the canopy in the original LdV drawing, for obvious reasons not supported by calculations, is approx. 10°.
The field installations found in educational parks are a contemporary implementation of LdV's observation (Fig. 8a). The whisper caves shown in Fig. 8b, apart from demonstrating the echo effect [10], also serve as an element of historical park architecture and a place of shelter from rain. Therefore, their shape is wider than required by the demonstrated phenomenon of reflection.
Outdoor installations erected to demonstrate acoustic curiosities. a Contemporary outdoor installation made by 3D printing [15,16,17], photo courtesy of M. Kladeftira. b Eighteenth-century Whispering Grottoes in Oliwa Park, Gdańsk, Poland [10, 14], photo courtesy of T. Strug
There are numerous architectural objects in which caustics are formed in the form shown in LdV's drawings. These are primarily historical interiors with a sacred and ceremonial function, containing large areas in the form of a dome (Fig. 9). Caustics in the described form were also created in nineteenth-century theaters and concert halls with concave vaults, which were then the canon of the neo-Renaissance style (Fig. 10) [12].
Historical rooms with large dome-shaped areas. a Hagia Sophia in Istanbul, Turkey, photo courtesy of Keep & Share, [18]. b Dome of the Rock in Jerusalem, Israel, ©CC by 2.0 [30]
a Assembly Hall of Poznań University. This neo-renaissance building was erected according to the design of Edward Fürstenau in 1905–1910, photo courtesy of Poznań Film Commission [19]. b Cross-sections of a 3D caustic as predicted by LdV [12]
Caustics in electromagnetic fields
The Arecibo radio telescope in Puerto Rico (Fig. 11) was put into operation in 1963, and initially the reflector aperture was small. It was significantly upgraded in 1997 by the use of the Gregorian subreflector system, which concentrates the energy of the caustic sections adjacent to the cusp into a single focal point (Fig. 12). The subreflector system consists of two shaped surfaces, called the secondary and tertiary reflectors, hidden inside the geodetic dome. The first is a parabolic reflector and the second is formed by a pair of elliptic reflectors (Fig. 13) [20].
Spherical radio telescope in Arecibo, Puerto Rico, photo taken before 01.12.2020 [22] Photo by the Public Domain of the National Space Foundation Multimedia Gallery. Below: diagram of the Arecibo telescope [20]
Bowl of the Arecibo reflector and the caustic it forms. The active part of the reflector and caustic is shown (red). a, b, c Directions of wave arrival −20°, 0°, +20° relative to the zenith. Illustrative sketch based on [20]
Ray tracing of the secondary and tertiary reflectors of the Arecibo Gregorian Optics [23]. In addition to the energy of the focus, the Gregorian subreflector system also uses the energy concentrated on the part of caustic marked in red. The pair of ellipses defining the shape of the tertiary reflector is shown
The upgraded aperture of the Arecibo radio telescope is approx. 30,000 m2, which is approx. 7% of the surface of a hemisphere with a radius of RArecibo = 265 m (Eq. (38)) [21].
$$\frac{A_{Arecibo}}{2\pi R_{Arecibo}^2} = \frac{30000}{2\pi\,265^2} = 0.068 = 6.8\%$$
Compared to the reflector analysed by LdV with an aperture of approx. 0.4% of the hemisphere area (see Fig. 3, Eqs. 1 and 2), the relative aperture of the Arecibo radio telescope reflector is 17 times greater (0.068/0.004 = 17), i.e. by 1 order of magnitude. This is due to the fact that, according to LdV, the energy concentrated by the reflector is contained in its focus, while the Arecibo radio telescope enlarges it by the energy of a significant part of the caustic.
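The comparison with LdV's estimate can be reproduced with a few lines of Python; the sketch below uses Eq. (38) and the rounded 0.4% value of Eq. (2).

```python
import math

A_arecibo = 30000.0    # effective aperture after the Gregorian upgrade [m^2]
R_arecibo = 265.0      # radius of the spherical primary reflector [m]

rel_arecibo = A_arecibo / (2.0 * math.pi * R_arecibo ** 2)   # Eq. (38)
rel_ldv = 0.004                                              # ~0.4 %, the rounded value of Eq. (2)

print(f"Arecibo relative aperture: {rel_arecibo:.3f} ({100 * rel_arecibo:.1f} %)")
print(f"Gain over LdV's focus-only aperture: {rel_arecibo / rel_ldv:.0f}x")
```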
Prior to upgrading the Arecibo telescope, the receiver recorded both the reflected wave concentrated in the focus and the wave coming directly from space. After the telescope was upgraded with the Gregorian subreflectors, the receiver, hidden inside the geodetic dome, is shielded by the secondary reflector as well as by the geodetic dome itself (Fig. 13) and does not record the direct space signal. This change, however, is negligible, as the level of the signal concentrated in the focus exceeds the level of the space signal by several dozen dB (see Fig. 5).
On December 1, 2020—a tragic day for the scientific community—the Arecibo radio telescope was destroyed due to the cables breaking and the 900-ton main platform falling onto the radio telescope's canopy.
Parabolic reflector
In 2016, 53 years after the radio telescope in Arecibo was launched, the FAST radio telescope (Five-hundred-meter Aperture Spherical radio Telescope) was put into operation in Dawodang (China). Its antenna, with a diameter of 520 m, is a section of a sphere with a radius of RFAST = 300 m (Fig. 14). A receiver weighing 3 tons, moved over the dish by a system of cables, enables the observation of radio sources contained in a cone with an opening angle of 80 degrees. The FAST radio telescope is a receiving device, while the Arecibo radio telescope was a transmitting and receiving device [25].
Five-hundred-metre antenna of the FAST radio telescope [24]. Photo by permission from Springer Nature, license number 5315900703664. Below: a diagram of the FAST telescope [27]
The canopy of the FAST radio telescope consists of 4500 movable elements, the position of which can be corrected in such a way that a selected part of the spherical reflector is transformed into a paraboloid segment (Fig. 15), covering a circle with a diameter of 300 m. The corrected part of the reflector thus creates an aperture with a diameter of AFAST = 300 m and a depth of DFAST = 40.2 m [26], which gives an area of approx. 38,000 m2, i.e. approx. 6.7% of the hemisphere area (Eqs. 39, 40). This shows a different direction of development of LdV's concept than at Arecibo: it involves manipulating the curvature of the reflector, while at Arecibo the useful range of the caustic was manipulated.
$$2\pi \times \tfrac{1}{2}A_{FAST} \times D_{FAST} = 2\pi \times 150 \times 40.2 \approx 38000\ \mathrm{m^2}$$
$$\frac{38000}{2\pi R_{FAST}^2} = \frac{38000}{2\pi\,300^2} = 0.067 = 6.7\%$$
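The same arithmetic for FAST, following the article's expressions in Eqs. (39)–(40), is sketched below; the depth DFAST is also checked against the spherical geometry.

```python
import math

R_fast = 300.0   # radius of the spherical cap [m]
A_fast = 300.0   # diameter of the paraboloid-corrected (illuminated) area [m]
D_fast = 40.2    # depth of that area [m]

# Depth consistent with the spherical geometry: R - sqrt(R^2 - (A/2)^2) ~ 40.2 m
depth_check = R_fast - math.sqrt(R_fast ** 2 - (A_fast / 2.0) ** 2)

area = 2.0 * math.pi * 0.5 * A_fast * D_fast        # Eq. (39), ~38,000 m^2
rel = area / (2.0 * math.pi * R_fast ** 2)           # Eq. (40)

print(f"depth check: {depth_check:.1f} m, area ~ {area:.0f} m^2, "
      f"relative aperture = {rel:.3f} ({100 * rel:.1f} %)")
```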
One of the tested solutions, showing how to adapt a section of the spherical mirror of the FAST radio telescope to the curvature of the paraboloid [26]
The caustics present in the LdV sketches, also known as spherical aberration, are treated as a limitation in the use of a spherical mirror. In the case of a parabolic mirror, such a limitation is coma aberration. It occurs when the observed object is located off the mirror axis and consists in blurring the focus into a loop caustic called a coma (Fig. 16). In a spherical mirror, the aberration is an irremovable element of its functioning, while in a parabolic mirror it disappears completely with the axial incidence of the rays. In the FAST radio telescope, the coma aberration is limited by positioning the receiver with a few-millimetre accuracy, with a deviation from the paraboloid axis not exceeding 8 arc seconds [27].
Cross-section of a parabolic mirror with its typical distortion in the form of comatic aberration. When the source is located off the mirror axis, the focal point takes the form of a loop caustic, known as a coma [28]
In the achievements of many leading fields of science, one can find ideas from hundreds of years ago, often coming from areas unrelated to the field. The article shows the presence of the concept of reflecting light by a spherical mirror, formulated by Leonardo da Vinci about 500 years ago, in the development of seemingly distant fields of science and technology, such as acoustics and radio astronomy.
Leonardo conducted his theoretical research using a spherical mirror. He showed that less than 0.5% of the hemisphere area is enough to concentrate energy coming from an infinitely distant source, e.g. from the Sun. With the application of this mirror, the rest of the canopy is useless. This idea, obvious from the point of view of modern knowledge, but formulated 500 years ago, is present today in many areas of technology and science—the aperture of modern spherical mirrors is only a small part of the hemisphere. Their functioning in the field of architectural acoustics, in optical instruments, as antennas in radio telescopes, etc., is fully in line with the LdV predictions.
During the research on the phenomenon of light concentration, LdV showed a method of graphically determining the surface accompanying the focus, on which the reflected rays are concentrated. This surface is known today as a caustic and is present in many fields of technology and science. LdV, however, did not develop the idea of caustics, being apparently unaware of the importance of his discovery. In modern technical knowledge, one can encounter both of the above-mentioned elements of the functioning of mirrors discovered by LdV, i.e. foci and caustics. In the example shown in the article, when the caustic energy is added to the focal energy, the aperture increases from the approx. 0.5% predicted by LdV to approx. 5–7%. After local adjustment of the spherical mirror surface to the curvature of a paraboloid, the aperture increases to a similar extent. The aperture of the mirror determined by LdV has therefore undergone a significant upgrade as a result of the development of receiving techniques. The technical implementations of the described improvements are the 300 m radio telescope in Arecibo (Puerto Rico) and the 500 m FAST radio telescope in Dawodang (China). Internet reports describe the concept of building a 1000 m radio telescope located on the far side of the Moon, away from the Earth's electromagnetic smog, but due to the early stage of this idea, it is not discussed in the article [29].
The caustics found in LdV's drawings are also formed in acoustic field indoors. However, wave phenomena occurring in a room, reverberation and noise floor make it difficult to audibly identify the caustics. As a result, the effect of sound focusing by large curved surfaces in rooms, e.g. arched vaults and concave walls, is reduced to a point focus at the caustic cusp, and the rest of the caustic becomes inaudible. For this reason, the concept of caustics is almost unknown in the field of architectural acoustics.
LdV: Leonardo da Vinci
FAST: Five-hundred-meter Aperture Spherical radio Telescope
http://www.unmuseum.org/burning_mirror.htm. Accessed 29 Sept 2021.
https://en.wikipedia.org/wiki/Ptolemy. Accessed 29 Sept 2021.
Raynaud Dominique. The aerial perspective of Leonardo da Vinci and his origins in the optics of Ibn al-haytham (de aspectibus, III, 7). Arab Sci Philos. 2009;19(2):225–46.
Ivanov VP, Ivanova GK. Caustic structure of the underwater sound channel. Open Journal of Acoustics, 4, 26–37. 2014. http://file.scirp.org/pdf/OJA_2014032115460011.pdf. Accessed 29 Sept 2021.
Khatkevich AG, Khatkevich LA. Propagation of laser beams and caustics in crystals. Journal of applied spectroscopy, 74, No. 4. 2007. https://link.springer.com/article/10.1007/s10812-007-0086-8. Accessed 29 Sept 2021.
Skowron J. Analiza niestandardowych zjawisk mikrosoczewkowania grawitacyjnego gwiazd Galaktyki. Rozprawa doktorska (Analysis of non-standard phenomena of gravitational microlensing of Galactic stars. PhD dissertation, in Polish). Uniwersytet Warszawski. 2009. http://www.astrouw.edu.pl/~jskowron/PhD/thesis/phd.pdf. Accessed 13 Nov 2021.
http://www.bl.uk/turning-the-pages/?id=cb4c06b9-02f4-49af-80ce540836464a46&type=book. The Leonardo notebook, pp. 8–15. Accessed 29 Sept 2021.
http://www.bl.uk/turning-the-pages/?id=cb4c06b9-02f4-49af-80ce540836464a46&type=book,%20p.%208%E2%80%9314, Leonardo da Vinci's Codex Arundel, pp. 224–226, 414. Accessed 29 Sept 2021.
Leonardo da Vinci's studies of reflections from concave mirrors ff.86v-87. pages11and12.html. http://www.bl.uk/onlinegallery/ttp/leonardo/accessible/. Accessed 08 June 2022.
Kulowski A. The caustic in the acoustics of historic interiors. Appl Acoust. 2018;133:82–90. https://doi.org/10.1016/j.apacoust.2017.12.008.
Leonardo da Vinci's studies of reflections from concave mirrors ff.87v-88. pages13and14.html. https://www.bl.uk/onlinegallery/ttp/leonardo/accessible/. Accessed 08 June 2022.
Kulowski A. Analysis of a caustic formed by a spherical reflector: impact of a caustic on architectural acoustics. Appl Acoust. 2020. https://doi.org/10.1016/j.apacoust.2020.107333.
Burkhard DG, Shealy DL. Formula for the density of tangent rays over a caustic surface. Appl Opt. 1982;21(18):3299–306.
https://staraoliwa.pl/. Accessed 31 May 2022.
https://dbt.arch.ethz.ch/project/acoustic-mirrors/. Accessed 16 Aug 2020.
Kladeftira M. et al. Design strategies for a 3D printed acoustic Mirror Proc of the 24th CAADRIA conference-Vol 1. Wellington: Victoria University of Wellington; 2019. p. 123–132.
Kladeftira M et al. Printing whisper dishes. Large-scale binder jetting for outdoor installations. Proc of the ACADIA 2018 recalibration: on imprecision and infidelity, Proc of the 38th annual conference of the association for computer aided design in architecture. Mexico City; 2018. p. 328–35.
https://www.keepandshare.com/photo/382500/hagia-sophia?ifr=y. Accessed 31 May 2022.
http://poznanfilmcommission.pl/lokacja/aula-uam. Accessed 21 Sept 2019.
https://www.researchgate.net/figure/A-diagram-of-the-Arecibo-telescope_fig1_2209614. Accessed 29 Sept 2021.
Magnani L. The arecibo 5GHz mini-gregorian feed system; spectral line performance. Publ Astron Soc Pac. 1993;105(690):894–901.
https://www.space.com/38217-arecibo-observatory-puerto-rico-telescope-photos.html. Accessed 29 Sept 2021.
Cortés-Medellín G. AOPAF: arecibo observatory phased array feed. Internet publication by national astronomy and ionosphere center cornell university. 2010. https://www.naic.edu/~phil/hardware/byuPhasedAr/logs/Cortes%20AOPAF_short%20report%20Sept%202010-1.pdf. Accessed 29 Sept 2021.
https://www.nature.com/articles/d41586-019-02790-3. Accessed 29 Sept 2021.
Mathews JD. A short history of geophysical radar at Arecibo Observatory. Hist Geo- Sp Sci. 2013;4(1):19–33. https://doi.org/10.5194/hgss-4-19-2013.
Nan R, Li D et al. The five-hundred-meter aperture spherical radio telescope (FAST) project, international journal of modern physics D, ©World Scientific Publishing Company. https://arxiv.org/ftp/arxiv/papers/1105/1105.3794.pdf. Accessed 29 Sept 2021.
Williams II RL. "Five-Hundred Meter Aperture Spherical Radio Telescope (FAST) Cable-Suspended Robot Model and Comparison with the Arecibo Observatory". www.ohio.edu/people/williar4/html/pdf/FAST.pdf. Accessed 29 Sept 2021.
Schmidt RF. Analytical caustic surfaces. NASA technical memorandum 87805. NASA technical reports server. 1987. https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19880001678.pdf. Accessed 29 Sept 2021.
https://www.nasa.gov/directorates/spacetech/niac/2020_Phase_I_Phase_II/lunar_crater_radio_telescope/. Accessed 26 Oct 2021.
Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Al_Aqsa_(6888221391).jpg (cropped). Accessed 31 May 2022.
Faculty of Architecture, Gdańsk University of Technology, ul. Gabriela Narutowicza 11/12, 80-233, Gdańsk, Poland
Andrzej Kulowski
Correspondence to Andrzej Kulowski.
The author declare that he has no competing interests.
$$\begin{aligned} \frac{d}{dt}p_{c,res}(t,\theta) &= \frac{d}{dt}\left[\sqrt{I_o\,\rho c_s}\left(\sin\left(\omega\left(t + \Delta t_1\right)\right) + \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\sin\left(\omega\left(t + \Delta t_2\right)\right)\right)\right] \\ &= \sqrt{I_o\,\rho c_s}\,\omega\left(\cos\left(\omega\left(t + \Delta t_1\right)\right) + \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}\,\cos\left(\omega\left(t + \Delta t_2\right)\right)\right) = 0 \end{aligned}$$
Substituting,
$$\tau = t + \Delta t_1$$
$$b = \sqrt{\frac{2\left(1-\alpha\right)\sin\theta}{3\left|\cos^3\theta\right|}}$$
yields,
$$\cos\omega\tau = -b\cos\omega\left(\tau - \left(\Delta t_1 - \Delta t_2\right)\right)$$
and after expansion,
$$\cos\omega\tau = -b\left[\cos\left(\omega\tau\right)\cos\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right) + \sin\left(\omega\tau\right)\sin\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right)\right]$$
regrouping yields,
$$\frac{\sin\left(\omega\tau\right)}{\cos\left(\omega\tau\right)} = \frac{\dfrac{1}{b} + \cos\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right)}{-\sin\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right)}$$
and then,
$$\omega\left(t + \Delta t_1\right) = \operatorname{arctg}\left(\frac{\dfrac{1}{b} + \cos\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right)}{-\sin\left(\omega\left(\Delta t_1 - \Delta t_2\right)\right)}\right)$$
since,
$$\Delta t_1 - \Delta t_2 = \frac{\dfrac{R}{2}\left(3\sin\theta - 2\sin^3\theta\right)}{c_s} - \frac{\dfrac{3R}{2}\sin\theta}{c_s} = -\frac{R\sin^3\theta}{c_s}$$
$$t = \frac{1}{\omega}\operatorname{arctg}\left(\frac{\sqrt{\dfrac{3\left|\cos^3\theta\right|}{2\left(1-\alpha\right)\sin\theta}} + \cos\left(\omega\dfrac{R\sin^3\theta}{c_s}\right)}{\sin\left(\omega\dfrac{R\sin^3\theta}{c_s}\right)}\right) - \Delta t_1$$
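The stationary time of Eq. (49) can be sanity-checked numerically: for sample parameters (illustrative values, not taken from the article) the derivative of p_res(t, θ) evaluated at the returned t should vanish.

```python
import math

R, c_s, f, alpha, theta = 1.0, 331.0, 1000.0, 0.9, math.radians(50.0)
omega = 2.0 * math.pi * f

dt1 = 0.5 * R * (3.0 * math.sin(theta) - 2.0 * math.sin(theta) ** 3) / c_s   # Eq. (26)
dt2 = 1.5 * R * math.sin(theta) / c_s                                        # Eq. (28)
b = math.sqrt(2.0 * (1.0 - alpha) * math.sin(theta) / (3.0 * abs(math.cos(theta)) ** 3))

phi = omega * R * math.sin(theta) ** 3 / c_s
t = math.atan((1.0 / b + math.cos(phi)) / math.sin(phi)) / omega - dt1       # Eq. (49)

def dp_dt(t):
    """Time derivative of p_res; the common factor sqrt(I_o*rho*c_s)*omega is dropped."""
    return math.cos(omega * (t + dt1)) + b * math.cos(omega * (t + dt2))

print(f"dp/dt at the stationary time: {dp_dt(t):.2e}")   # ~0 up to rounding
```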
Kulowski, A. Do Leonardo da Vinci's drawings, room acoustics and radio astronomy have anything in common?. Herit Sci 10, 104 (2022). https://doi.org/10.1186/s40494-022-00713-6
Spherical reflector
Why is one of Maxwell's equations named after Ampère? Who first named it after Ampère?
Ampère never wrote down what is confusingly called "Ampère's circuital law," not even the form without the displacement current term, as Ampère never dealt with the field concept.* Maxwell derived
$$\nabla \times \mathbf{B} = \mu_0\mathbf{J}\qquad(1)$$
in his 1855 paper On Faraday's Lines of Force, based on analogies to hydrodynamics, which he corrected to be
$$\nabla \times \mathbf{B} = \mu_0\left(\mathbf{J} + \varepsilon_0 \dfrac{\partial \mathbf{E}} {\partial t} \right)\qquad(2)$$
in his 1861 paper On Physical Lines of Force; he never wrote down Ampère's force law in either paper.
Ampère's force law is completely different from any of Maxwell's equations. It gives the force that current elements $I_1 d\vec {\ell }_1$ and $I_2 d\vec {\ell }_2$ exert on one another to be:
$$d^2\vec{F_{21}^A} = - \frac{\mu _0 }{4\pi }I_1 I_2 \frac{\hat {r}_{12} }{r_{12}^2 }\left[2(d\vec {\ell }_1 \cdot d\vec {\ell }_2) - 3({\hat {r}_{12} \cdot d\vec {\ell }_1 })({\hat {r}_{12} \cdot d\vec {\ell }_2 })\right] = - d^2\vec{F_{12}^A}.$$
Thus, it is appropriate that Equation (2) is one of Maxwell's equations. Gauss and Faraday utilized the field concept, thus Equation (2) is the most "Maxwellian" of the four Maxwell's equations.
So, why are Equations (1) & (2) above named after Ampère? Who first named them after Ampère?
*cf. Assis, André Koch Torres; Chaib, J. P. M. C; Ampère, André-Marie (2015). Ampère's electrodynamics: analysis of the meaning and evolution of Ampère's force between current elements, together with a complete translation of his masterpiece: Theory of electrodynamic phenomena, uniquely deduced from experience (PDF). Montreal: Apeiron. ISBN 978-1-987980-03-5. ch. 15 pp. 221ff.
electromagnetism electricity terminology
Geremia
Crossposted from physics.stackexchange.com/q/270767/2451
– Qmechanic
Here is an explanation given, in German though: lp.uni-goettingen.de/get/text/6627
@Claus Thanks, but this is wrong, as Ampère never dealt with fields: "Ampère had found empirically that the magnetic field satisfies $\oint\limits_{\partial A} \vec{B}\,\text{d}\vec{s}=\mu_0\cdot I = \mu_0\int\limits_A\vec{j}\,\text{d}\vec{A}$."
– Geremia
Oliver Heaviside's 1893 Electromagnetic Theory (vol. 1) mentions "Ampere's Rule [or 'formula' or 'law'] for deriving the magnetic force from the current" in a handful of places (cf. p. 64). He calls it "Ampère's 'dodge'" in his 1892 Electrical Papers (vol. 1) p. 261.
Probably the most curious statement by Heaviside on Ampère is in his paper "The Mutual Action of a Pair of Rational Current-Elements" (The Electrician, Dec. 28, 1888 (written: 25 Nov. 1888), p. 230 = Electrical Papers (vol. 2), p. 501); Heaviside ends the short paper with:
It has been stated, on no less authority than that of the great Maxwell[Treatise §528], that Ampère's law of force between a pair of current-elements is the cardinal formula of electrodynamics. If so, should we not be always using it? Do we ever use it? Did Maxwell, in his treatise? Surely there is some mistake. I do not in the least mean to rob Ampère of the credit of being the father of electrodynamics; I would only transfer the name of cardinal formula to another due to him, expressing the mechanical force on an element of a conductor supporting current in any magnetic field; the vector product of current and induction. There is something real about it; it is not like his force between a pair of unclosed elements; it is fundamental; and, as everybody knows, it is in continual use, either actually or virtually (through electromotive force) both by theorists and practicians.
Journal of the American Mathematical Society
Square function/non-tangential maximal function estimates and the Dirichlet problem for non-symmetric elliptic operators
by Steve Hofmann, Carlos Kenig, Svitlana Mayboroda and Jill Pipher
J. Amer. Math. Soc. 28 (2015), 483-529
We consider divergence form elliptic operators $L= {-}\mathrm {div} A(x) \nabla$, defined in the half space $\mathbb {R}^{n+1}_+$, $n\geq 2$, where the coefficient matrix $A(x)$ is bounded, measurable, uniformly elliptic, $t$-independent, and not necessarily symmetric. We establish square function/non-tangential maximal function estimates for solutions of the homogeneous equation $Lu=0$, and we then combine these estimates with the method of "$\epsilon$-approximability" to show that $L$-harmonic measure is absolutely continuous with respect to surface measure (i.e., $n$-dimensional Lebesgue measure) on the boundary, in a scale-invariant sense: more precisely, that it belongs to the class $A_\infty$ with respect to surface measure (equivalently, that the Dirichlet problem is solvable with data in $L^p$, for some $p<\infty$). Previously, these results had been known only in the case $n=1$.
Steve Hofmann
Affiliation: Department of Mathematics, University of Missouri, Columbia, Missouri 65211
MR Author ID: 251819
Email: [email protected]
Carlos Kenig
Affiliation: Department of Mathematics, University of Chicago, Chicago, Illinois, 60637
Email: [email protected]
Svitlana Mayboroda
Affiliation: School of Mathematics, University of Minnesota, Minneapolis, Minnesota, 55455
Email: [email protected]
Jill Pipher
Affiliation: Department of Mathematics, Brown University, Providence, Rhode Island 02912
Email: [email protected]
Received by editor(s): February 10, 2012
Received by editor(s) in revised form: February 11, 2014
Published electronically: May 21, 2014
Additional Notes: Each of the authors was supported by the NSF
This work has been possible thanks to the support and hospitality of the University of Chicago, the University of Minnesota, the University of Missouri, Brown University, and the BIRS Centre in Banff (Canada). The authors would like to express their gratitude to these institutions.
Journal: J. Amer. Math. Soc. 28 (2015), 483-529
MSC (2010): Primary 42B99, 42B25, 35J25, 42B20
Joint congestion control and resource allocation for energy-efficient transmission in 5G heterogeneous networks
Jain-Shing Liu, Chun-Hung Lin (ORCID: orcid.org/0000-0003-0840-394X) & Heng-Chih Huang
The deployment of small cells with carrier aggregation (CA) is a significant feature of fifth-generation (5G) mobile communication systems, which are characterized by multi-dimensional heterogeneity in their diversified requirements on different resources. Taking this heterogeneity into account, we consider a joint optimization problem in which multiple kinds of resources are allocated concurrently to optimize the system throughput utility while enhancing the network energy efficiency (EE) and maintaining system stability. In particular, for the embedded high-dimensional non-deterministic polynomial (NP)-hard allocation problem, we formulate a mathematical programming model involving nonlinear integer constraints to seek the long-term stable throughput utility, and we introduce an iterative optimal modulation and coding scheme-based (optimal MCS-based) heuristic algorithm as an effective solver. In addition, as data traffic and channel conditions are time-varying in the real world, an admission control scheme based on the Lyapunov technique, which requires no prior knowledge of channel information, is proposed to reduce the system overhead. Finally, not only is the performance bound derived in theory, but numerical experiments are also conducted to reveal its characteristics with respect to the system parameter V and the EE requirement.
For the next generation of mobile internet connectivity, 5G networks aim to offer increased data rates, shortened latency, improved energy efficiency, reduced cost, and other desired features. To this end, the communication community has proposed many techniques from different aspects, such as dense heterogeneous networks, cloud-based radio access networks, energy-aware communications, and wireless energy harvesting [1]. Among these, dense heterogeneous networks (HetNets) based on Long Term Evolution-Advanced (LTE-A), with carrier aggregation (CA) as a key feature, are particularly useful since the aggregation can achieve wider bandwidth and better energy efficiency (EE) [2]. In particular, with the aid of the 4G framework, small cells (SCs), which represent pico-cells, femto-cells, etc., can be more easily deployed to improve the 5G capacity by offloading traffic from a macro cell (MC) to SCs [3].
Providing these benefits, designing HetNets is, however, a challenging task. One of the hardest challenges arises from resource and interference management, because both MCs and SCs in a 5G network tend to utilize radio resources from the same service provider. To reduce the resulting overhead, the cells are arranged under the so-called co-channel deployment, i.e., by spatially reusing the available spectrum or, specifically, by using different sets of channels or resource blocks (RBs) for macro base stations (MBSs) and small base stations (SBSs), as noted, e.g., in [4]. However, this is only a step toward a solution for the CA-capable LTE system that allows several component carriers (CCs) to be aggregated. That is, given CA, the system is further complicated by the need to modify the radio resource management (RRM) entity, including CC selection, RB allocation, modulation and coding scheme (MCS) assignment, and power allocation. Because of this complexity, much research has been devoted to RRM approaches that properly allocate RBs, CCs [5–7], and even MCSs [8] to increase performance. Now, as the standard evolves, more attention is paid to heterogeneous networks wherein multiple types of resources are allocated between MCs and SCs connected by backhaul links in a multi-tier sense [9, 10]. In such networks, high-capacity fiber backhaul (e.g., IEEE 802.3av 10G-EPON) will play a major role, consistently providing data rates 100 times higher than cellular networks to help reach the envisioned 10 Gbps peak data rates required by 5G [10]. Here, we focus on multi-cell multi-tier networks equipped with high-capacity backhaul and introduce a solution based on discrete power control, reflecting the fact that 3GPP LTE cellular networks only support discrete power levels in the downlink via a user-specific data-to-pilot-power offset parameter [13].
Given that, a joint congestion control and downlink resource allocation problem is particularly considered, with the objective of maximizing a long-term throughput utility subject to a system-wide EE requirement. The major challenge of this optimization problem comes from the various constraints specific to the LTE-A system with CA. For this, the high-dimensional allocation problem involved is first formulated as a programming model whose constraints involve integer variables coupled in a nonlinear form, and optimally solving such a model at each transmission time interval (TTI) is impractical. In addition, because the data traffic and channel conditions involved are both time-varying, an admission control is usually required to stabilize the data queue of each user equipment (UE). Thus, to address the combinatorial problem together with queueing stability, an iterative optimal MCS-based heuristic algorithm inspired by the iterative linear programming-based heuristic [14, 15] is proposed to resolve the NP-hard allocation problem in the low layer. Then, as LTE is a stochastic system with time-varying traffic and channels, we further address the queueing stability problem at the high layer. This is challenging because, unlike deterministic optimization, stochastic optimization is usually hard to solve, and even harder than most well-known combinatorial optimization problems [16]. Given that, Lyapunov-based optimization is considered to be a very useful technique to enable constrained optimization of time averages in general stochastic systems [17]. Accordingly, a Lyapunov optimization framework is developed to address the high-layer problem, focusing on the time-varying data traffic and channel conditions without a priori knowledge of arrivals. By combining the solutions from the two layers, we are able to approach the optimal tradeoff with a control parameter V and satisfy the long-term EE requirement simultaneously. More specifically, the characteristics of this work can be summarized as follows:
For the high-dimensional resource allocation optimization problem in 5G LTE-A multi-tier multi-cell heterogeneous wireless networks, which is an NP-hard combinatorial problem, we first transform the corresponding nonlinear integer programming model into a linear counterpart that can be solved by conventional techniques.
Then, an iterative optimal MCS-based heuristic algorithm (IOMHA for short), inspired by the iterative linear programming-based heuristic, is developed to approach the optimum within a time limit. Given that, a two-layer method is proposed for the stochastic programming problem so that the data queue of each UE can be stabilized in the high layer based on the resources efficiently allocated in the low layer.
Using the Lyapunov optimization framework, we realize a formulation for the heterogeneous networks that strikes a balance between average throughput and average delay while guaranteeing the required EE performance and accommodating both traffic variations in the long term and channel fading in the short term.
Through our simulation study, we show that, with the EE constraint enforced, the proposed algorithm has a performance advantage, especially on EE. In the study, by gradually improving its result, our IOMHA is also shown to resolve the complex allocation problem effectively, trading the optimality of the NP-hard optimization problem off against a lower and controllable complexity to approach the optimal solution iteratively, in contrast to the other algorithms shown in, e.g., [5, 8], which are run only once to obtain suboptimal solutions to their allocation problems in LTE without a chance for further improvement.
The remainder of this paper is organized as follows. First, the related works are summarized in Section 2. Then, the scheduling constraints and queueing dynamics of the joint optimization problem are formulated in Section 3. The online control method based on Lyapunov drift-plus-penalty technique for this problem is proposed in Section 4, and the iterative optimal-MCS-based heuristic algorithm involved is introduced in Section 5. Given that, the performance bounds and evaluations of this work are presented in Sections 6 and 7, respectively. Finally, conclusions are drawn in Section 8.
For 5G networking, many kinds of networks are continuously explored with various aims on different performance metrics. Among these, energy efficiency (EE) plays a vital role in 5G, as future networks should effectively reduce the overall carbon footprint for the world to be sustainable. With respect to this issue, the authors in [18] studied the energy efficiency of resource allocation in orthogonal frequency division multiple access (OFDMA) downlink networks, where the circuit power consumption and the minimum required data rate were both considered. More recently, the authors in [9] investigated energy-efficient power allocation and wireless backhaul bandwidth allocation in OFDMA heterogeneous small cell networks. Specifically, they proposed a near-optimal iterative resource algorithm to solve the power and bandwidth allocation problem and also suggested a suboptimal low-complexity algorithm to this end.
Apart from the above, downlink radio resource allocation methods in LTE system with CA are particularly noted here for their potential on EE even without direct objectives for this aim. As surveyed in [19], a two-step allocation method was considered in [5–7] that first uses a load balancing scheme to assign CCs to UEs, and then schedules RBs of these CCs to reduce the computational complexity for the NP-hard RB/CC allocation problem. In addition, different joint allocation approaches had also been done with various efforts to reduce the time complexity. For example, the work in [20] divided the optimization problem into a number of subproblems for each CC to optimize its RB allocation independently. Then, after RB assignment, an iterative resource adjustment algorithm was performed to meet the CA capability requirement for UEs. Despite their differences, these approaches mainly focus on RB/CC allocation and pay no attention to the other constraints specific to LTE/LTE-A.
In addition, if categorized by the number of cells, the authors in [21] have recently proposed, for a single-cell scenario, a downlink scheduling algorithm aiming to maximize the weighted sum of throughput constrained by the allocation rules of LTE. Similarly, the authors in [8] have addressed a downlink resource scheduling problem that also takes into account the MCS constraint of LTE, with a greedy-based algorithm to maximize the system throughput. Despite the notable performance gains obtained, these algorithms consider no queueing dynamics resulting from the dynamic traffic that should also be involved. Next, as another category, for a multi-cell scenario, the authors in [22] proposed a resource allocation algorithm that accounts for MCS, RB, and transmit power, with inter-cell interference coordination, but ignores the MCS constraint, CA, and queueing dynamics. In addition, the previous work in [23] considered a dynamic resource allocation algorithm for downlink transmission in a multi-cell network. However, it considered no discrete power allocation in the downlink and ignored the EE performance, which is one of the most important factors impacting the system. Here, for 5G multi-tier multi-cell networks based on LTE-A with discrete power levels, we first transform the nonlinear integer scheduling constraints involved into their linear counterparts. Then, an optimal MCS-based heuristic algorithm inspired by the iterative linear programming-based heuristic is proposed to approach the optimum within a time limit. Finally, a drift-plus-penalty approach for joint admission control and resource allocation with requirements on EE and queueing stability is constructed, which iteratively resolves the stochastic optimization problem involved for the long-term optimal throughput utility.
System model and problem formulation
In the sequel, we consider a multi-tier multi-cell heterogeneous network as exemplified in Fig. 1, consisting of \(\mathbf{s}\) base stations (including an MBS and \(\mathbf{s}-1\) SBSs) and \(\mathbf{u}\) UEs located in the service area of these cells. In addition, the network is equipped with \(\mathbf{c}\) CCs. Each CC has \(\mathbf{b}\) RBs, and each RB can use one of \(\mathbf{l}\) MCSs for transmission. Further, there are \(\mathbf{p}\) discrete power levels (PLs), and the MBS/SBSs can choose among \({\mathcal{P}}=\{\sigma_{1} P_{\max},\sigma_{2} P_{\max},\ldots,\sigma_{\mathbf{p}=|{\mathcal{P}}|} P_{\max}\}\) to transmit, where \(0<\sigma_{1}<\sigma_{2}<\cdots<\sigma_{\mathbf{p}=|{\mathcal{P}}|}=1\) and \(P_{\max}\) denotes the maximum power as in [12]. In summary, \({\mathcal {U}}, {\mathcal {C}}, {\mathcal {B}}, {\mathcal {L}}, {\mathcal {S}}\), and \({\mathcal {P}}\) represent the set of UEs, the set of CCs, the set of RBs per CC, the set of MCSs per RB, the set of base stations (BSs), and the set of power levels (PLs), with u, c, b, l, s, and p as their indices, and \({\mathbf {u}} = |{\mathcal {U}}|, {\mathbf {c}} = |{\mathcal {C}}|, {\mathbf {b}} = |{\mathcal {B}}|, {\mathbf {l}} = |{\mathcal {L}}|, {\mathbf {s}} = |{\mathcal {S}}|\), and \({\mathbf {p}} = |{\mathcal {P}}|\) as their cardinalities, respectively. Given that, we focus on downlink transmission in the 5G heterogeneous network based on LTE-A and consider a stochastic communication system whose traffic load changes from time to time, requiring an online admission algorithm for its stability. Further, its channel condition is also time-varying. Under this condition, a UE inspects the reference signals currently transmitted from the MBS or SBSs to estimate the channel quality of each RB [24]. After that, it sends a feedback report with the channel quality indicator (CQI), whose value is then mapped to the highest-rate MCS adoptable by the UE for receiving the corresponding RB from the MBS/SBSs [25]. Then, with the information from UEs and SBSs, the MBS is responsible for admission control, resource scheduling, and link adaptation. For easy reference, the important symbols used in the formulation are summarized in Table 1.
An example of the heterogeneous network
Table 1 A list of important symbols used in the problem formulation
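To make the notation concrete, the following minimal sketch (in Python) instantiates the resource sets and the discrete power-level set \({\mathcal{P}}\) defined above, using the set sizes and the 29 dBm maximum transmit power from the experiment section later; the \(\sigma_i\) values are placeholders, not parameters prescribed by the model.

from itertools import product

# Set sizes taken from the experiment section (s = 4, u = 3 per cell, c = 5,
# b = 10, l = 29, p = 3); any other sizes would work equally well here.
num_ue, num_cc, num_rb, num_mcs, num_bs = 3, 5, 10, 29, 4

P_max = 10 ** ((29 - 30) / 10)     # 29 dBm expressed in watts (about 0.794 W)
sigmas = [0.25, 0.5, 1.0]          # placeholders, 0 < sigma_1 < ... < sigma_p = 1
power_levels = [sigma * P_max for sigma in sigmas]

# Every candidate allocation variable e_{u,c,b,l,s,p} corresponds to one
# 6-tuple index; the feasible set Xi is a subset of this index space.
index_space = list(product(range(num_ue), range(num_cc), range(num_rb),
                           range(num_mcs), range(num_bs), range(len(power_levels))))
print(len(index_space))            # 52,200 candidate variables already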
Multi-resource allocation
To show the multiple types of resources involved more concisely, we denote by \(\underline {e}\) a binary variable or an element of the feasible set \(\Xi\) representing all possible allocations, where \(\underline {e}_{u,c,b,l,s,p} \stackrel {\triangle }{=} (u_{\underline {e}} = u, c_{\underline {e}} = c, b_{\underline {e}} = b, l_{\underline {e}} = l, s_{\underline {e}} = s, p_{\underline {e}} = p)\) with value of 1 exhibits that RB b of CC c on MCS l at PL p of cell s is assigned to UE u, and 0 otherwise. Further, let \(\Psi_{u,c,b,s,p}\) be the index of the highest-rate MCS allowed among the possible transmissions, \(\underline {\hat {e}}_{u,c,b,s,p} = (u,c,b,\hat {l},s,p), \forall \hat {l} \in {\mathcal {L}}\). Given that, the achieved transmission rate with the allocation, \(v(\underline {e})\), is the data rate of an RB on MCS l, \(r_{l}\), for \(l \leq \Psi_{u,c,b,s,p}\), and 0 otherwise.
Channel, power, and energy efficiency model
Accordingly, the allocation (or scheduling) algorithm is designed for a slow-fading network wherein the channel condition remains unchanged during the resource allocation period (Ch. 6 of [26]), which corresponds to a high-rate network with a reduced degree of mobility. In this situation, the signal-to-noise ratio (SNR) from BS s to UE u using RB b of CC c at PL p in time t can be represented by
$$ {SNR}_{s,u}^{c,b,p}(t) \stackrel{\triangle}{=} \frac{P^{p}_{s,u}(t) |h_{s,u}^{c,b}(t)|^{2} d_{s,u}^{-\rho}(t)}{N_{s,u}^{c,b}(t)} $$
where \(h^{c,b}_{s,u}\) is the channel gain from transmitter (MBS or SBS) s to receiver (UE) u using RB b of CC c, and \(d_{s,u}\) is the distance from s to u. The channel is considered to be Rayleigh fading, which yields a channel gain following the exponential distribution. In addition, \(\rho\) is the path-loss factor and \(N_{s,u}^{c,b}\) is the noise experienced by u when s transmits to u on RB b of CC c. Provided that, an empirical downlink SNR-to-CQI mapping for LTE such as that in [27, 28] can be used to estimate the CQIs to be returned to the BSs. Then, according to the CQIs collected, the MBS decides each MCS index l for the downlink transmission from BS \(s_{\underline {e}} = s\) to UE \(u_{\underline {e}} = u\) using RB \(b_{\underline {e}} = b\) of CC \(c_{\underline {e}} = c\) at PL \(p_{\underline {e}} = p\), in terms of \(\underline {e}\), and transmits the decisions to all SBSs it associates with via the backhaul network. Consequently, as 3GPP specifies the transmit data rate of each MCS index l using a table representation [24], the data rate \(v(\underline {e})\) is obtained through a function or table mapping, \(r_{l}\), for each RB on MCS l. Given that and the feasible allocation set \(\Xi\), the total data rate can be given by \(R_{\text {tot}}(t) = \sum _{\underline {e} \in \Xi }\left (\underline {e}(t) \times v\left (\underline {e}(t)\right) \right) \). Similarly, the total power consumption can be obtained by \(P_{\text {tot}}(t) = \sum _{\underline {e} \in \Xi } \left (\underline {e}(t) \left (P^{p}_{s,u}(t) + P^{c}_{s,u}\right) \right)\), where \(P^{p}_{s,u}(t)\) is the transmit power from \(s_{\underline {e}} = s\) to \(u_{\underline {e}} = u\) at power level \(p_{\underline {e}} = p\), and \(P^{c}_{s,u}\) is the constant circuit power for this transmission.
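As an illustration of how the per-link SNR in (1) and the instantaneous totals \(R_{\text{tot}}(t)\) and \(P_{\text{tot}}(t)\) might be computed, the sketch below uses a hypothetical linear MCS-rate table in place of the 3GPP tables; only the fields needed for the totals are kept.

import numpy as np

def snr(p_tx, h, d, noise, rho=3.0):
    """Per-link SNR as in Eq. (1): transmit power, channel gain h, distance d,
    noise power, and path-loss factor rho (the default value is a placeholder)."""
    return p_tx * abs(h) ** 2 * d ** (-rho) / noise

# Hypothetical per-MCS rates r_l (bits per RB per TTI); the real values come
# from the 3GPP MCS/TBS tables cited in the text.
r = np.linspace(100.0, 2900.0, 29)

def instantaneous_totals(active, p_circuit=0.1):
    """Total rate and power over the active allocations of one TTI.

    `active` is a list of (mcs_index, p_tx) pairs, one per allocation variable
    with e = 1; p_circuit is the constant circuit power P^c (placeholder value).
    """
    rate_total = sum(r[l] for l, _ in active)
    power_total = sum(p_tx + p_circuit for _, p_tx in active)
    return rate_total, power_total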
Specifically, in the stochastic system, we are interested in the limits of the time-average expectations of the above metrics. That is,
$$ \overline{R}_{\text{tot}} = {\lim}_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau = 0}^{t-1} \mathbb{E}\{R_{\text{tot}}(\tau)\} $$
$$ \overline{P}_{\text{tot}} = {\lim}_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau = 0}^{t-1} \mathbb{E}\{P_{\text{tot}}(\tau)\} $$
In terms of the long-term metrics, the energy efficiency is considered as the ratio of the long-term aggregated rate to the long-term total energy consumption as
$$ \eta_{EE} = \frac{{\lim}_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau = 0}^{t-1} \mathbb{E}\{R_{\mathrm{\text{tot}}}(\tau)\} }{W{\lim}_{t \rightarrow \infty} \frac{1}{t} \sum_{\tau = 0}^{t-1} \mathbb{E}\{P_{\mathrm{\text{tot}}}(\tau)\}} = \frac{\overline{R}_{\text{tot}}}{W\overline{P}_{\text{tot}}} $$
where W is used to accommodate the quantitative difference between the two metrics in the ratio.
Scheduling constraints
For the heterogeneous network with CA, we have the following scheduling constraints. First, as the basic unit for the transmission, each RB can be assigned to a single UE u at most with a certain MCS l. To show this, we let \(\hat {\underline {e}}_{1} \stackrel {\triangle }{=} (\hat {u},c,b,\hat {l},s,p)\) be the binary allocation variables with different \(\hat {u} \in {\mathcal {U}}\) and \(\hat {l} \in {\mathcal {L}}\) while fixing \(c_{\underline {\hat {e}}_{1}} = c, b_{\underline {\hat {e}}_{1}} = b, s_{\underline {\hat {e}}_{1}} = s\), and \(p_{\underline {\hat {e}}_{1}} = p\). Given that, this constraint can be simply shown by
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \hat{\underline{e}}_{1} =(\hat{u},c,b,\hat{l},s,p)} \mathbbm{1} \left\{\hat{\underline{e}}_{1} \right\} \leq 1, \hspace{0pt} \forall c \in {\mathcal{C}}, \forall b \in {\mathcal{B}}, \forall s \in {\mathcal{S}}, \forall p \in {\mathcal{P}} \end{array} $$
where \(\mathbbm {1}\{x\}\) denotes an indicator function whose value is 1 if x is true, and 0 otherwise. In addition, the notations are given without the time index t for brevity. Further, according to LTE-A, it is required that if a UE u is assigned a CC c by a BS s serving it, then all RBs of c allocated to u should use the same MCS l to transmit. More specifically, the MCS constraint based on LTE-A is considered as
$$\begin{array}{@{}rcl@{}} && \sum_{\forall l \in {\mathcal{L}}} \mathbbm{1} \left\{ \sum_{\forall \hat{\underline{e}}_{2} =(u,c,\hat{b},\hat{l},s,p)} \mathbbm{1} \left\{\hat{\underline{e}}_{2}|l_{\hat{\underline{e}}_{2}} = l \right\} \right\} \leq 1, \\ && \hspace{45pt} \forall u \in {\mathcal{U}}, \forall c \in {\mathcal{C}}, \forall s \in {\mathcal{S}}, \forall p \in {\mathcal{P}} \end{array} $$
As noted in Section 3.2, a UE can only use a MCS less than or equal to Ψu,c,b,s,p. If not, it could lead to an unacceptable bit error rate on transmission, and the transmission should be discarded. Thus, we have the following constraint
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \hat{\underline{e}}_{3} =(u,c,b,\hat{l},s,p)} \mathbbm{1} \left\{\hat{\underline{e}}_{3} | l_{\hat{\underline{e}}_{3}} > \Psi_{\text{\textit{u,c,b,s,p}}} \right\} = 0, \\ && \hspace{15pt} \forall u \in {\mathcal{U}}, \forall c \in {\mathcal{C}},\forall b \in {\mathcal{B}}, \forall s \in {\mathcal{S}}, \forall p \in {\mathcal{P}} \end{array} $$
Apart from the above constraints, which make no special assumption on the number of cells involved, we now take into account the constraints specific to the multi-cell environment, as follows. First, to reduce overheads on the backhaul, it is commonly assumed that a UE is served by only a single BS s, which implies a monopoly constraint as
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \hat{\underline{e}}_{4} =(u,\hat{c},\hat{b},\hat{l},s,\hat{p})} \mathbbm{1}\left\{\hat{\underline{e}}_{4} \right\} \hspace{10pt} \times \sum_{\forall \tilde{\underline{e}}_{4} =(u,\tilde{c},\tilde{b},\tilde{l},\tilde{s},\tilde{p})} \mathbbm{1} \left\{\tilde{\underline{e}}_{4} | \tilde{s} \in {\mathcal{S}} \backslash s \right\} \hspace{5pt} = \hspace{5pt} 0, \\ && \hspace{120pt} \forall u \in {\mathcal{U}}, \forall s \in {\mathcal{S}} \end{array} $$
Second, even given the spatial reuse principle, it should still be ensured that an RB b of CC c already allocated to a BS s cannot be assigned to its neighboring BSs \(s' \in {\mathcal {N}}_{s}\), to avoid the leading cause of inter-cell interference. Consequently, it also implies a monopoly constraint as
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \hat{\underline{e}}_{5} =(\hat{u},c,b,\hat{l},s,\hat{p})} \mathbbm{1} \left\{\hat{\underline{e}}_{5} \right\} \hspace{10pt} \times \sum_{\forall \tilde{\underline{e}}_{5} =(\tilde{u},c,b,\tilde{l},\tilde{s},\tilde{p})} \mathbbm{1} \left\{\tilde{\underline{e}}_{5} | \tilde{s} \in {\mathcal{N}}_{s} \right\} \hspace{5pt} = \hspace{5pt} 0, \\ &&\hspace{90pt} \forall c \in {\mathcal{C}}, \forall b \in {\mathcal{B}}, \forall s \in {\mathcal{S}} \end{array} $$
Moreover, two cardinality constraints are involved. First, each UE u has its own limitation on the number of CCs allocated by a BS s, denoted by \(k_{u}\). For example, it can be enforced that an LTE Release 8/9 UE can only use 1 CC while an LTE-A UE can use 2 CCs. In general, such a constraint can be written as
$$ {}\sum_{\forall c \in {\mathcal{C}}} \mathbbm{1} \left\{\sum_{\forall \hat{\underline{e}}_{6} =(u,\hat{c},\hat{b},\hat{l},\hat{s},\hat{p})} {}\mathbbm{1} \left\{\hat{\underline{e}}_{6} |c_{\hat{\underline{e}}_{6}} = c \right\} \right\}\leq k_{u}, \forall u \in {\mathcal{U}} $$
Similarly, a cardinality constraint for each BS s to equip with at most fs CCs for communication can be represented by
$$ {}\sum_{\forall c \in {\mathcal{C}}} \mathbbm{1} \left\{\sum_{\forall \hat{\underline{e}}_{7} =(\hat{u},\hat{c},\hat{b},\hat{l},s,\hat{p})} \mathbbm{1} \left\{\hat{\underline{e}}_{7} |c_{\hat{\underline{e}}_{7}} = c \right\} \right\}\leq f_{s}, \forall s \in {\mathcal{S}} $$
Linear transformation of scheduling constraints
As shown above, the indicator functions with complex conditions can be nonlinear in the binary integer variables involved. For those especially involving logical operations, we refer to the work in [29], which shows that two either-or constraints \(f(x_1,x_2,...,x_n)\leq 0\) and \(g(x_1,x_2,...,x_n)\leq 0\) can be transformed to \(f(x_1,x_2,...,x_n)\leq My\) and \(g(x_1,x_2,...,x_n)\leq M(1-y)\) with a large number \(M\) and an auxiliary binary variable \(y\), such that \(f(x_1,x_2,...,x_n)\leq M\) and \(g(x_1,x_2,...,x_n)\leq M\). Here, given a certain \(l\), the condition in the outer indicator function in (6), which implies a logical operation choosing among the multiple binary variables \(\underline {\hat {e}}_{8} \stackrel {\triangle }{=} (u,c,\hat {b},l,s,p)\) that satisfy the condition \(l_{\hat {\underline {e}}_{8}} = l\) shown in the inner indicator function, can be transformed to \(\sum \underline {\hat {e}}_{8} \leq \mathbf {b}\, \underline {\hat{y}}_{1}\), where \(\underline {\hat {y}}_{1} \stackrel {\triangle }{=} (u, c, \hat {l}, s, p)\) is defined to play the role of the auxiliary variable \(y\), and \(\mathbf {b} = |{\mathcal {B}}|\) defined before plays the role of \(M\). Given that, the requirement that all RBs be assigned only the same MCS in this context can be imposed by \(\sum \underline {\hat {y}}_{1} \leq 1\) on the auxiliary variables. Therefore, (6) can be transformed into the linear counterparts
$$\begin{array}{@{}rcl@{}} && \sum_{\underline{\hat{e}}_{8} = (u,c,\hat{b},l,s,p)} \underline{\hat{e}}_{8} \leq \mathbf{b} \underline{\hat{y}}_{1}, \\ && \hspace{5pt} \forall u \in {\mathcal{U}}, \forall c \in {\mathcal{C}}, \forall l \in {\mathcal{L}}, \forall s \in {\mathcal{S}}, \forall p \in {\mathcal{P}} \end{array} $$
$$\begin{array}{@{}rcl@{}} && \sum_{ \underline{\hat{y}}_{1} = (u,c,\hat{l},s,p)} \underline{\hat{y}}_{1} \leq 1, \hspace{2pt} \forall u \in {\mathcal{U}}, \forall c \in {\mathcal{C}}, \forall s \in {\mathcal{S}}, \forall p \in {\mathcal{P}} \end{array} $$
In addition, the inner indicator functions in (10) and (11) could be also regarded as the logic operations to choose among the binary variables that can satisfy the conditions specified, and can apply a transformation like the above. Specifically, with the aid of the auxiliary variables \(\underline {\hat {y}}_{2}\) in addition to the binary variables \(\underline {\hat {e}}_{9}\), both shown below, (10) can be represented by
$$\begin{array}{@{}rcl@{}} && \sum_{\underline{\hat{e}}_{9} = (u,c,\hat{b},\hat{l},\hat{s},\hat{p})} \underline{\hat{e}}_{9} \leq \mathbf{b} \underline{\hat{y}}_{2}, \hspace{10pt} \forall u \in {\mathcal{U}}, \forall c \in {\mathcal{C}} \end{array} $$
$$\begin{array}{@{}rcl@{}} &&\sum_{\underline{\hat{y}}_{2} = (u,\hat{c})} \underline{\hat{y}}_{2} \leq k_{u}, \hspace{23pt} \forall u \in {\mathcal{U}} \end{array} $$
Similarly, with the auxiliary variables \(\underline {\hat {y}}_{3}\) and the binary variables \(\underline {\hat {e}}_{10}\) shown below, (11) can be transformed to
$$\begin{array}{@{}rcl@{}} && \sum_{\underline{\hat{e}}_{10} = (\hat{u},c,\hat{b},\hat{l},s,\hat{p})} \underline{\hat{e}}_{10} \leq \mathbf{b} \underline{\hat{y}}_{3}, \hspace{10pt} \forall c \in {\mathcal{C}}, \forall s \in {\mathcal{S}} \end{array} $$
$$\begin{array}{@{}rcl@{}} &&\sum_{\underline{\hat{y}}_{3} = (s,\hat{c})} \underline{\hat{y}}_{3} \leq f_{s}, \hspace{20pt} \forall s \in {\mathcal{S}} \end{array} $$
Apart from these, the monopoly constraints shown in (8) can be rewritten in linear forms as well. To this end, let \(\sum _{\hat {\underline {e}}_{4} =(u,\hat {c},\hat {b},\hat {l},s,\hat {p})} \hat {\underline {e}}_{4}\) be the first metric for transforming the logical either-or constraints in [29] (here, \(\hat {\underline {e}}_{4}\) is directly drawn because \(\mathbbm {1}\left \{\hat {\underline {e}}_{4} \right \}= \hat {\underline {e}}_{4}\)) and \(\sum _{\tilde {\underline {e}}_{4} =(u,\tilde {c},\tilde {b},\tilde {l},\tilde {s} \in {\mathcal {S}} \backslash s,\tilde {p})} \tilde {\underline {e}}_{4}\) be the second metric. Then, by also introducing the large number \(M=\mathbf{u}\mathbf{c}\mathbf{b}\mathbf{l}\mathbf{s}\mathbf{p}\) and the auxiliary binary variables \(\underline {\hat {y}}_{4}\), we can transform (8) into its linear counterparts as
$$\begin{array}{@{}rcl@{}} && \sum_{\hat{\underline{e}}_{4} =(u,\hat{c},\hat{b},\hat{l},s,\hat{p})} \hat{\underline{e}}_{4} \leq M \underline{\hat{y}}_{4}, \hspace{10pt} \forall u \in {\mathcal{U}}, \forall s \in {\mathcal{S}} \end{array} $$
$$\begin{array}{@{}rcl@{}} && \sum_{\tilde{\underline{e}}_{4} =(u,\tilde{c},\tilde{b},\tilde{l},\tilde{s} \in {\mathcal{S}} \backslash s,\tilde{p})} \tilde{\underline{e}}_{4} \leq M \left(1- \underline{\hat{y}}_{4}\right), \hspace{5pt} \forall u \in {\mathcal{U}}, \forall s \in {\mathcal{S}} \end{array} $$
Similarly, by introducing the auxiliary binary variables \(\underline {\hat {y}}_{5} \) and M into (9), we have the linear counterparts as
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \hat{\underline{e}}_{5} =(\hat{u},c,b,\hat{l},s,\hat{p})} \hat{\underline{e}}_{5} \leq M \underline{\hat{y}}_{5}, \hspace{10pt} \forall c \in {\mathcal{C}}, \forall b \in {\mathcal{B}}, \forall s \in {\mathcal{S}} \end{array} $$
$$\begin{array}{@{}rcl@{}} && \sum_{\forall \tilde{\underline{e}}_{5} =(\tilde{u},c,b,\tilde{l},\tilde{s} \in {\mathcal{N}}_{s},\tilde{p})} \tilde{\underline{e}}_{5} \leq M(1- \hat{\underline{y}}_{5}), \hspace{0pt} \forall c \in {\mathcal{C}}, \forall b \in {\mathcal{B}}, \forall s \in {\mathcal{S}} \end{array} $$
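To make the big-M, either-or transformation above concrete, the following sketch builds the single-serving-BS constraints (17)–(18) in a toy model using the PuLP modeling library; the index-set sizes, the objective, and the variable names are illustrative only and are not part of the paper's formulation.

import pulp
from itertools import product

# Tiny placeholder index sets (UEs, CCs, RBs, MCSs, BSs, PLs).
U, C, B, L, S, P = range(2), range(2), range(2), range(2), range(2), range(1)
M = len(U) * len(C) * len(B) * len(L) * len(S) * len(P)   # big-M = u*c*b*l*s*p

prob = pulp.LpProblem("monopoly_constraints", pulp.LpMaximize)
e = pulp.LpVariable.dicts("e", list(product(U, C, B, L, S, P)), cat="Binary")
y = pulp.LpVariable.dicts("y", list(product(U, S)), cat="Binary")

# Placeholder objective: maximize the number of assignments.
prob += pulp.lpSum(e.values())

for u in U:
    for s in S:
        served_by_s = pulp.lpSum(e[u, c, b, l, s, p]
                                 for c, b, l, p in product(C, B, L, P))
        served_elsewhere = pulp.lpSum(e[u, c, b, l, s2, p]
                                      for c, b, l, s2, p in product(C, B, L, S, P)
                                      if s2 != s)
        prob += served_by_s <= M * y[u, s]                 # counterpart of (17)
        prob += served_elsewhere <= M * (1 - y[u, s])      # counterpart of (18)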
Stochastic system and queue dynamic
Now, even though the scheduling constraints can be linearly transformed, the design of 5G heterogeneous networks in a dynamic setting is still challenged by stochastic channel conditions and time-varying data traffic. Specifically, the random channel gains are considered to be exponentially distributed, and the downlink traffic to the UEs in time t is represented by a vector \(A(t) \stackrel{\triangle}{=} (A_{1}(t),...,A_{\mathbf{u}}(t))\), following an independently and identically distributed (i.i.d.) distribution over t whose expectation is \(\mathbb {E}\{A(t)\} = \lambda \stackrel {\triangle }{=} (\lambda _{1},...,\lambda _{\mathbf {u}})\). In addition, it is assumed that a maximum \(A_{u}^{\max}\) exists that any non-negative traffic arrival \(A_{u}(t)\) will not exceed. Even so, the statistics of \(A(t)\) are still unknown, and its capacity region is also hard to estimate for a real system. Thus, without flow control, the data queues cannot be stabilized in general. For this issue, an admission control method is proposed here to determine \(R_{u}(t)\) out of \(A_{u}(t)\), followed by an allocation algorithm introduced next to provide link rates \(\mu_{u}(t)\) for serving the admitted traffic. To realize this mechanism, the data queueing dynamic for UE \(u \in {\mathcal {U}}\) is first formulated as
$$ Q_{u}(t+1) = \max\{Q_{u}(t) - \mu_{u}(t), 0\} + R_{u}(t) $$
Then, the average data queue length of each u is required to be strongly stable, i.e.,
$$ \overline{Q} \stackrel{\triangle}{=} {\lim}_{T \rightarrow \infty} \frac{1}{T} \sum_{t=0}^{T-1} \sum_{u \in {\mathcal{U}}}\mathbb{E}\{Q_{u}(t)\} < \infty $$
Note that in (22), the service rate \(\mu_{u}\) defined for a UE u can be obtained by
$$ \mu_{u} = \sum_{\underline{\hat{e}}_{u} = (u,\hat{c},\hat{b},\hat{l},\hat{s},\hat{p})}\left(\underline{\hat{e}}_{u} \times v(\underline{\hat{e}}_{u}) \right),\hspace{10pt} \forall u \in {\mathcal{U}} $$
Similarly, we omit the time index t above for brevity. As a result, \(R_{\text{tot}} = \sum _{u \in {\mathcal {U}}} \mu _{u}\). Moreover, we can see that not only the resource scheduling that provides service, but also the throughput \(r_{u}(t) \stackrel {\triangle }{=} \frac {1}{t} \sum _{\tau =0}^{t-1} \mathbb {E}\{R_{u}(\tau)\}\) representing its performance, contributes to the queueing dynamic (22). Given that, the time-average throughput \(\overline {r}_{u}\), which represents the admitted and transmitted data for u in the long term, is considered as the key metric to optimize in the time-varying system:
$$ \overline{r}_{u} \stackrel{\triangle}{=} {\lim}_{T \rightarrow \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\{R_{u}(t)\} $$
Problem formulation
Taking all the above into account, we can now formulate the joint congestion control and resource allocation with EE-delay tradeoff problem (JCREEP) for the heterogeneous wireless network by the following stochastic programming model:
$$\begin{array}{@{}rcl@{}} \begin{array}{lll} \textbf{Maximize} &\sum_{u \in {\mathcal{U}}} \phi(\overline{r}_{u}) & \\ \textbf{subject\ to} &\text{C1:} \overline{Q} < \infty & \\ &\text{C2:} 0 \leq r_{u} \leq \lambda_{u}, & \forall u \\ &\text{C3:} 0 \leq R_{u}(t) \leq A_{u}(t) \leq A_{u}^{\text{max}}, & \forall u,\forall t \\ &\text{C4:} (5), (7), (12)\text{--}(21), &\forall t \\ &\text{C5:} \eta_{EE} \geq \eta_{EE}^{\text{req}} \end{array} \end{array} $$
In the above, (26-C1) denotes the strong stability of the data queues in the long term. (26-C2) and (26-C3) exhibit the constraints enforcing the average and instantaneous throughput to be feasible. (26-C4) collects the resource scheduling constraints, mostly in the linear forms derived in Section 3.5. Note that, even in linear form, the constraints (5), (7), and (12)–(21) still involve the specific binary integer variables \(\underline {\hat {e}}, \underline {\hat {y}}\), or both, and deciding these binary variables concurrently for the optimization is a combinatorial problem that is NP-hard if no special structures are imposed. Finally, (26-C5) ensures that the EE performance achieves the predefined requirement \(\eta _{EE}^{\text {req}}\). It is worth noting here that, by using EE in (4) as one of the constraints rather than as the objective function, we can maximize the system utility and guarantee the EE of the whole system simultaneously, which may not be achieved by simply optimizing the EE metric as the program objective as in the related works [30, 31].
Optimization for the stochastic system
With the aid of Lyapunov drift-plus-penalty technique and the iterative heuristic algorithm to be introduced, we would next develop an online control framework to resolve (26) composed of the resource allocation problem and the traffic admission control problem in the stochastic system.
Equivalent transformation
As shown in (26), JCREEP involves a function \(\phi (\overline {x})\) with a time-average parameter, say \(\overline {x}\), rather than a time-average function \(\overline {\phi (x)}\) with a pure parameter, say x. To use the Lyapunov drift-plus-penalty technique in the optimization as shown in [17], we reformulate JCREEP to involve the latter by first introducing an infinite sequence of random vectors in \(\mathbb {R}\) as \(\gamma=(\gamma_{1}(t),...,\gamma_{\mathbf{u}}(t))\). Then, we define a time-average metric \(\overline {\gamma }_{u} \stackrel {\triangle }{=} {\lim }_{T \rightarrow \infty } \frac {1}{T} \sum _{t=0}^{T-1} \mathbb {E}\{\gamma _{u}(t)\}\) and a time-average function \(\overline {\phi (\gamma _{u})} \stackrel {\triangle }{=} {\lim }_{T \rightarrow \infty } \frac {1}{T} \sum _{t=0}^{T-1} \mathbb {E}\{\phi (\gamma _{u}(t))\}\). With these, JCREEP can be transformed to an equivalent problem, say eJCREEP, as follows:
$$\begin{array}{@{}rcl@{}} \def\arraystretch{0.8} \begin{array}{lll} \textbf{Maximize} &\sum_{u \in {\mathcal{U}}} \overline{\phi(\gamma_{u})} & \\ \textbf{subject\ to} &\text{C1:} \overline{Q} < \infty & \\ &\text{C2:} 0 \leq r_{u} \leq \lambda_{u}, & \forall u \\ &\text{C3:} 0 \leq R_{u}(t) \leq A_{u}(t) \leq A_{u}^{\text{max}}, & \forall u,\forall t \\ &\text{C4:} (5), (7), (12)\text{--}(21), & \forall t \\ &\text{C5:} \eta_{EE} \geq \eta_{EE}^{\text{req}} \\ &\text{C6:} \overline{\gamma}_{u} \leq \overline{r}_{u}, & \forall u \\ &\text{C7:} 0 \leq \gamma_{u}(t) \leq A_{u}^{\text{max}}, & \forall u,\forall t \end{array} \end{array} $$
Virtual queues
In eJCREEP, (27-C6) denotes the constraints ensuring system stability, representing the fact that the arrivals will eventually be served. To enforce these constraints, we define a virtual queue \(H_{u}\) for each \(u \in {\mathcal {U}}\). Specifically, given an initial value \(H_{u}(0)=0\), such a queue is updated by
$$ H_{u}(t+1) = \max \{H_{u}(t) - R_{u}(t), 0 \} + \gamma_{u}(t) $$
In addition, for the EE performance requirement in (27-C5), we define a virtual queue Z which evolves as
$$ Z(t+1) = \max\left\{ Z(t) - R_{\text{tot}}(t), 0 \right\} + W \eta_{EE}^{\text{req}} P_{\text{tot}}(t) $$
In terms of a queueing dynamic similar to (22), the variables \(\gamma_{u}(t)\) and \(W \eta_{EE}^{\text{req}} P_{\text{tot}}(t)\) can be regarded as the arrivals of the virtual queues in (28) and (29), while \(R_{u}(t)\) and \(R_{\text{tot}}(t)\) serve as the service rates of these virtual queues, respectively.
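For concreteness, the sketch below shows how the data queues (22) and the virtual queues (28)–(29) might be updated per slot; the default values of \(W\) and \(\eta_{EE}^{\text{req}}\) are placeholders.

def update_queues(Q, H, Z, R, mu, gamma, rate_total, power_total,
                  W=1.0, eta_ee_req=2.0):
    """One-slot updates of the data queue (22) and virtual queues (28)-(29).

    Q, H, R, mu, gamma are dicts keyed by UE: Q[u]/H[u] are the data/virtual
    queues, R[u] the admitted traffic, mu[u] the served rate, gamma[u] the
    auxiliary variable. Z is the scalar EE virtual queue; W and eta_ee_req are
    placeholder values for the weight and the EE requirement.
    """
    for u in Q:
        Q[u] = max(Q[u] - mu[u], 0.0) + R[u]
        H[u] = max(H[u] - R[u], 0.0) + gamma[u]
    Z = max(Z - rate_total, 0.0) + W * eta_ee_req * power_total
    return Q, H, Z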
Online control based on Lyapunov drift-plus-penalty
Given \(H_{u}(t)\), \(Z(t)\), and \(Q_{u}(t)\) for the online control method, we define \(\Theta (t)\stackrel {\triangle }{=} \left \{Q_{u}(t), H_{u}(t), Z(t): u \in {\mathcal {U}} \right \}\), a vector concatenating all the data and virtual queues involved. Further, to obtain a scalar metric reflecting the queue congestion, we define a quadratic Lyapunov function corresponding to the system as
$$ L(\Theta(t)) \stackrel{\triangle}{=} \frac{1}{2} \left\{ \sum_{u \in {\mathcal{U}}}{Q_{u}(t)}^{2} + \sum_{u \in {\mathcal{U}}}{H_{u}(t)}^{2} + {Z(t)}^{2} \right\} $$
Here, a small value of \(L(\Theta(t))\) implies that the sizes of data queues and virtual queues are all small and that the queues have strong stability. Given that, the queue stability can be ensured by persistently pushing the Lyapunov function toward a lower congestion state. Thus, to stabilize these queues, a one-slot conditional Lyapunov drift can be defined by
$$ \Delta(\Theta(t)) \stackrel{\triangle}{=} \mathbb{E} [L(\Theta(t+1)) - L(\Theta(t)) | \Theta(t) ] $$
Now, apart from satisfying the average constraints and optimizing the system throughput utility, with this drift, our online dynamic control algorithm can observe the data and virtual queues, the current channel conditions, and the traffic states at each slot t so that \(R_{u}(t)\) can be determined and the resources be allocated to support \(\gamma_{u}(t)\), by minimizing a bound on the following Lyapunov conditional drift-plus-penalty expression:
$$ \Delta(\Theta(t)) - V \mathbb{E} \left\{\sum_{u \in {\mathcal{U}}} \overline{\phi(\gamma_{u}(t))} | \Theta(t) \right\} $$
In above, the system parameter V is a non-negative weight to represent the emphasis on the utility maximization compared with the queue stability and can be flexibly chosen to make a tradeoff between them. More precisely, with the above queueing dynamics, an upper bound for the drift-plus-penalty-based algorithm can be obtained with the following theorem.
Theorem 1. At slot t, for any observed queue state \(\Theta(t)\) and \(V \geq 0\), the Lyapunov drift-plus-penalty algorithm satisfies the following inequality:
$$\begin{array}{@{}rcl@{}} && \Delta(\Theta(t)) - V \mathbb{E} \left\{\sum_{u \in {\mathcal{U}}} \overline{\phi(\gamma_{u}(t))} | \Theta(t) \right\} \leq \\ && \Gamma - V \mathbb{E} \left\{\sum_{u \in {\mathcal{U}}} \overline{\phi(\gamma_{u}(t))} | \Theta(t) \right\} + \\ && \mathbb{E}\left\{ \sum_{u \in {\mathcal{U}}} Q_{u}(t) \left(R_{u}(t) - \mu_{u}(t) \right) | \Theta(t) \right\} + \\ && \mathbb{E}\left\{ \sum_{u \in {\mathcal{U}}} H_{u}(t) \left(\gamma_{u}(t) - R_{u}(t) \right) | \Theta(t) \right\} + \\ && \mathbb{E}\left\{ Z(t) \left(W \eta_{EE}^{\text{req}} P_{\text{tot}}(t) - R_{\text{tot}}(t)\right) | \Theta(t) \right\} \end{array} $$
where \(\Gamma = \frac {1}{2} \left (3 \sum _{u \in {\mathcal {U}}} \left (A_{u}^{\text {max}}\right)^{2} + \sum _{u \in {\mathcal {U}}} \left (\mu _{u}^{\text {max}}\right)^{2} + \left (P_{\text {tot}}^{\text {max}}(t)\right)^{2} + \left (R_{\text {tot}}^{\text {max}}(t)\right)^{2} \right)\), and \(\mu _{u}^{\text {max}}\) denotes the maximum transmission rate that can be obtained on u.
Proof. Please refer to Appendix 1. □
Solving problem by decomposition
By observing the inequality in Theorem 1, we can decide to minimize the bound given in the right-hand side (R.H.S.) of (33) at every time slot for the optimization. This is more convenient than directly minimizing the drift-plus-penalty function itself because the minimization on R.H.S. could be decoupled to a series of independent subproblems that can be solved independently and simultaneously, as shown as follows.
Auxiliary variables
The first subproblem is to determine the optimal auxiliary variables \(\gamma_{u}\), which track the stability constraint shown in (27-C6). Specifically, the optimal \(\gamma_{u}\) results from minimizing \(- \mathbb {E}\left \{\sum _{u \in {\mathcal {U}}} \left (V\overline {{\phi }(\gamma _{u}(t))} - H_{u}(t) \gamma _{u}(t) \right) | \Theta (t)\right \}\), which is obtained by slightly rearranging the relevant terms in the R.H.S. of (33). Clearly, for the minimization, a concave nondecreasing system utility \(\phi(\cdot)\) for each u should be given first. Here, the well-known utility function \(\log(1+v_{u}\gamma_{u})\) is considered as an example, wherein \(v_{u}\) denotes a weight to maintain, e.g., proportional fairness among UEs. Further, since the variables are independent among UEs, the minimization over \(\gamma_{u}(t)\) can be decoupled from the joint optimization. Finally, by reversing the sign in the objective for minimization, we have an equivalent maximization problem as
$$\begin{array}{@{}rcl@{}} \begin{array}{lll} \underset{{\gamma_{u}(t)}}{{\textbf{Maximize}}} & V \overline{{\phi}(\gamma_{u}(t))} - H_{u}(t) \gamma_{u}(t) &\\ \textbf{subject\ to} & 0 \leq \gamma_{u}(t) \leq A_{u}^{\text{max}}, & \forall u \in {\mathcal{U}} \end{array} \end{array} $$
Obviously, this is a convex optimization problem. To find its optimum, we can first differentiate the objective function \(V \overline {{\phi }(\gamma _{u}(t))} - H_{u}(t) \gamma _{u}(t)\) with respect to \(\gamma_{u}(t)\) and then set the result to zero. For the log utility function just exemplified, solving the resulting equation gives the solution
$$ \gamma_{u}(t) = \left\{ \def\arraystretch{0.55} \begin{array}{l l} 0, & H_{u}(t) > v_{u} V \\ \frac{V}{H_{u}(t)} - \frac{1}{v_{u}}, & \frac{V}{A_{u}^{\text{max}}+\frac{1}{v_{u}}} \leq H_{u}(t) \leq v_{u} V \\ A_{u}^{\text{max}}, & H_{u}(t) < \frac{V}{A_{u}^{\text{max}} + \frac{1}{v_{u}}} \end{array} \right. $$
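The piecewise rule (35) can be transcribed directly; a minimal sketch, assuming the log utility \(\log(1+v_{u}\gamma_{u})\), is given below.

def optimal_gamma(H_u, V, v_u, A_max):
    """Closed-form solution (35) of subproblem (34) for phi(x) = log(1 + v_u*x)."""
    if H_u > v_u * V:
        return 0.0
    if H_u < V / (A_max + 1.0 / v_u):
        return A_max
    return V / H_u - 1.0 / v_u   # stationary point V/H_u - 1/v_u of (34)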
Admission control
Recall that, for system stability, our algorithm can admit only \(R_{u}(t)\) out of the \(A_{u}(t)\) arrivals for transmission. For the traffic admission control subproblem at hand, we can collect the second and third expectations in the R.H.S. of (33) and minimize \(\mathbb {E}\{ \sum _{u \in {\mathcal {U}}} R_{u}(t) \left (Q_{u}(t) - H_{u}(t) \right) | \Theta (t)\}\), which leads to the optimal traffic admission control at each TTI, as follows:
$$\begin{array}{@{}rcl@{}} \begin{array}{lll} \underset{{R_{u}(t)}}{{\textbf{Minimize}}} & \sum_{u \in {\mathcal{U}}} R_{u}(t) \left(Q_{u}(t) - H_{u}(t) \right)&\\ \textbf{subject\ to} &0 \leq R_{u}(t) \leq A_{u}(t), \hspace{10pt} \forall u \in {\mathcal{U}} & \end{array} \end{array} $$
This is clearly a linear problem, and a simple threshold-based admission control strategy for this problem can be derived as
$$ R_{u}(t) = \left\{ \begin{array}{ll} A_{u}(t), & H_{u}(t) > Q_{u}(t) \\ 0, & \text{otherwise} \end{array} \right. $$
As the threshold implies, only when the virtual queue \(H_{u}(t)\) has accumulated beyond the data queue \(Q_{u}(t)\) can the new arrivals \(A_{u}(t)\) be admitted; otherwise, they are denied to ensure data traffic stability. That is, with this simple threshold, the admission control reduces \(H_{u}(t)\) to push \(\gamma_{u}(t)\) toward \(R_{u}(t)\) and increases the throughput \(R_{u}(t)\) to improve the system utility simultaneously.
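A one-line sketch of the threshold rule (37) is shown below; note how it couples the virtual queue \(H_{u}\) and the data queue \(Q_{u}\) without requiring any channel knowledge.

def admit(A_u, Q_u, H_u):
    """Threshold-based admission control (37): admit the whole arrival A_u(t)
    only when the virtual queue exceeds the data queue; otherwise admit 0."""
    return A_u if H_u > Q_u else 0.0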
Resource allocation for energy efficient transmission
As the kernel issue of eJCREEP, concurrently determining the multiple kinds of resources at each TTI for EE transmission is, in general, an NP-hard combinatorial problem without special structures imposed. Here, with the aid of the drift-plus-penalty technique developed, such a high-dimensional allocation subproblem can be decomposed as minimizing \(- \mathbb {E}\left \{\sum _{u\in {\mathcal {U}}} Q_{u}(t) \mu _{u}(t) + Z(t) \left (R_{\text {tot}}(t) - W \eta _{EE}^{\text {req}} P_{\text{tot}}(t) \right) | \Theta (t) \right \}\) without knowing the channel states in advance. Similarly, by negating the objective, we have the equivalent maximization problem
$$\begin{array}{@{}rcl@{}} &&\underset{{\underline{e}(t)}}{{\textbf{Maximize}}} \hspace{10pt} \nu(t) = \sum_{u \in {\mathcal{U}}} \left(\alpha_{u}(t) \mu_{u}(t) - \beta(t) P_{u}(t) \right) \\ &&\textbf{subject\ to} \hspace{10pt} (5), (7), (12)\text{--}(21) \end{array} $$
where \(\alpha _{u}(t) = Q_{u}(t) + Z(t)\), \(\beta (t) = W \eta _{EE}^{\text {req}} Z(t)\), and \(P_{u}(t) = \sum _{s \in {\mathcal {S}}} P_{s,u}^{p} + P^{c}_{s,u}\). As shown in Sections 3.4 and 3.5, the scheduling constraints are composed of the binary integer variables involved, and the combinatorial problem is NP-hard regardless of the optimization tools used. Thus, instead of directly using an integer programming tool to solve this problem, which would still be time-consuming when the inputs are not small enough, we design in the sequel a more computationally efficient algorithm based on the iterative linear programming-based heuristic (ILPH) to obtain a suboptimal solution within a required time limit.
Iterative optimal MCS-based heuristic algorithm
As shown in [15], iterative linear programming-based heuristic (ILPH) is a useful approach to resolve 0-1 integer programs, which is done by solving a series of small subproblems obtained from linear programming relaxations. Specifically, at each iteration, ILPH will conduct an LP-relaxation of the current problem P to generate one constraint. Then, a reduced problem induced from an optimal solution of the LP-relaxation is solved to obtain a feasible solution for the initial problem. After that, if the stopping criterion is satisfied, then the solutions found are returned. Otherwise, a pseudo-cut is added to P and the process is repeated.
In our work, the binary variable \(\underline {e}\) for resource allocation is high-dimensional, so that even solving a corresponding LP-relaxation problem could be time-consuming unless the input size is trivially small. Thus, an MCS-based reallocation approach is adopted here to reduce the overhead. To do so, we define \(J^{0}(\underline {e}) = \{j \in (u,c,b,l,s,p): \underline {e}_{j} =0 \}\), \(J^{1}(\underline {e}) = \{j \in (u,c,b,l,s,p): \underline {e}_{j} =1 \}\), and \(J(\underline {e}) = J^{0}(\underline {e}) \cup J^{1}(\underline {e})\), similar to those in [15]. Then, an iterative optimal MCS-based heuristic algorithm (IOMHA) is introduced that keeps the search process from revisiting the solutions already generated from the time-limited optimization on P by adding a pseudo-cut at each iteration. As tabulated in Algorithm 1 with details, IOMHA first solves the maximization problem instance P in (38) to find a feasible solution \(\underline {e}^{*}\) with utility \(\nu^{*}\). If the solution is not optimal, it might be improved by boosting the MCS of the remaining RBs to find the largest MCS usable by all considered RBs [8]. However, instead of using this primitive method, IOMHA further attempts to enlarge the utility contributed by the UE by releasing more RBs of the considered CC, rendering its remaining RBs able to employ an even higher-rate MCS. To this end, consider the utility \(h(\underline {e}) = ((Q_{u} + Z) v(\underline {e}) - W\eta _{EE}^{\mathrm {req}}Z P_{u})\), written without the time index t for brevity. Given that, if a UE u served by a BS s has some RB(s) of CC \(c^{*}\) at PL p re-allocated to UE \(u^{*}\), we search for the MCS \(l'\) that maximizes the total UE utility contributed by all remaining RBs of \(c^{*}\) assigned to u, among all maximum MCSs employable by these RBs (lines 5–7). Then, we reassign MCS \(l'\) to UE u for the transmission of CC \(c^{*}\) from BS s and release the allocations without any utility contribution, producing \(J^{1}(\underline {e}^{*})\) and \(J^{0}(\underline {e}^{*})\) (lines 8–9). The reassignment further forms a new set of constraints \(\{\hat {f} \underline {e} = C \}\), where \(\hat {f}_{j} = 1, \forall j \in J\), while \(C_{j}=1\) if \(j\in J^{1}\) and 0 if \(j\in J^{0}\) (line 10), and we solve the corresponding problem \(Q = (P|\{\hat {f} x = C \})\) with the time limit \(T_{l}\) to obtain a feasible (or an optimal) solution \(\hat {\underline {e}}\) giving utility \(\hat {\nu }\) (line 11). If the improvement \(I = \frac {\hat {\nu }} {\nu ^{o}}\) does not exceed a given lower bound \(I_{B}\), the process stops. Otherwise, based on Propositions 1 and 2 in [15], a pseudo-cut \(\{ f \underline {e} \leq |J^{1}(\underline {e}^{*})| -1 \}\), where \(f_{j} = 2 \underline {e}^{*}_{j} - 1\) if \(\underline {e}^{*}_{j} \in J(\underline {e}^{*})\) and 0 otherwise, is added when the remaining time \(t=t-2T_{l}\) allows, and the problem is updated as \(P = (P|\{f \underline {e} \leq |J^{1}(\underline {e}^{*})| -1 \})\), which is then solved to seek further improvements (lines 12–14). Finally, the allocation result \(\underline {e}\) corresponding to the best utility found during the search process is returned (line 15).
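Since Algorithm 1 itself is not reproduced here, the following high-level sketch outlines the iterative loop just described; solve_time_limited, improve_by_mcs_reselection, and add_pseudo_cut are hypothetical callables standing in for the time-limited solve of P, the MCS re-selection step (lines 5–11), and the pseudo-cut of lines 12–14, and are passed in rather than defined here.

def iomha(problem, solve_time_limited, improve_by_mcs_reselection,
          add_pseudo_cut, T_limit, improvement_bound, total_time):
    """Sketch of the IOMHA loop: solve, try MCS re-selection, keep the best
    allocation, stop when the improvement ratio I falls to the bound I_B or
    the time budget is exhausted, otherwise add a pseudo-cut and iterate."""
    best_alloc, best_utility = None, float("-inf")
    prev_utility, remaining = None, total_time
    while remaining >= 2 * T_limit:
        alloc, utility = solve_time_limited(problem, T_limit)               # solve P
        alloc, utility = improve_by_mcs_reselection(problem, alloc, T_limit)  # solve Q
        if utility > best_utility:
            best_alloc, best_utility = alloc, utility
        if prev_utility is not None and utility / prev_utility <= improvement_bound:
            break                                   # improvement I <= I_B
        prev_utility = utility
        problem = add_pseudo_cut(problem, alloc)    # exclude this 0-1 point
        remaining -= 2 * T_limit
    return best_alloc, best_utility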
Performance bounds
As shown above, IOMHA is an approximation algorithm for the high-dimensional allocation subproblem involved. However, if the optimal solutions are available, the overall algorithm for eJCREEP operates under performance bounds on, e.g., the data queue lengths, as shown in the following theorem.
Given arbitrary traffic arrival rates and an energy efficiency requirement, the algorithm solving eJCREEP with a fixed control parameter V≥0 can guarantee the bounds on data queue lengths as
$$ Q_{u}(t) \leq Q_{u}^{\mathrm{max}} = v_{u} V + 2 A_{u}^{\mathrm{max}} $$
Apart from the above, the other performance bounds of the Lyapunov drift-plus-penalty framework can also be derived in a similar way. For example, a drift-plus-penalty approach has been shown, e.g., in [32], to achieve an O(ε) approximation with a convergence time of O(1/ε^2), where ε=1/V.
Environment setting
In this section, we numerically evaluate our optimization algorithm on the simulation topology shown in Fig. 2, wherein 1 MBS and 3 SBSs are deployed, and each of them initially serves 3 UEs located within its transmission range for downlink transmission before the resource allocation. In addition to s=4 and u=3 indicated above, the other resource dimensions are c=5, b=10, l=29, and p=3, which together yield a problem large enough to evaluate the high-dimensional allocation involved. Further, each UE in a cell dynamically changes its position according to the random waypoint (RWP) model [33], and the channel condition on each RB is assumed to vary from time to time as in [34]. Given the time-varying environment, the MBS performs the proposed algorithm with Tl=1000, W=1, vu=1, along with the other key parameters summarized in Table 2. Based on the above setting, the performance results are summarized in the sequel.
Initial topology for the experiments
Table 2 Parameters in the experiments, with Pt = 29 dBm denoting the maximum transmit power Pmax
To be specific, the performance metrics include the time-average utility, throughput, data queue length, and energy efficiency (EE), denoted by \(\overline {\phi }, \overline {\gamma }, \overline {Q}\), and \(\overline {\eta _{EE}}\), respectively; each is represented by its mean value over all UEs across 100 runs of the algorithm per experiment. Given these metrics, our algorithm is run with varying V and \(\eta _{EE}^{\text {req}}\) to focus on the performance tradeoffs among throughput, data queue length (or delay), and energy efficiency (EE) in experiments that exemplify the performance trends. To this end, \(A_{u}(t), \forall u \in {\mathcal {U}}\) at each slot t is randomly generated from a Poisson distribution whose mean is the maximum TBS=680 multiplied by a given constant C1=14, which represents a possibly varying traffic arrival at time t under the maximum allowable rate \(A_{u}^{\text {max}} = TBS \times C_{2}\), where C2=20. Following that, the time-varying Rayleigh channel conditions are simulated with random channel gains drawn from the exponential distribution with mean 1. Consequently, a wide range of V sampled at [10^1, 10^3, 10^6, 10^7, 10^8, 10^9, 10^11, 10^15], and of \(\eta _{EE}^{\text {req}}\) sampled at [1, 2, 4, 8], are combinatorially examined to assess their impacts on the algorithm in general.
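As an illustration of this setup, the short sketch below generates the arrival and channel processes just described. The TBS, C1, and C2 values are taken from the text; the array shapes, the random seed, and the capping of arrivals at A_u^max are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, purely for reproducibility of the example

TBS, C1, C2 = 680, 14, 20
A_mean = TBS * C1                # mean Poisson arrival per slot, as described above
A_max = TBS * C2                 # maximum allowable arrival rate A_u^max

n_ues, n_rbs, n_slots = 12, 10, 100          # 4 BSs x 3 UEs, b = 10 RBs (illustrative shapes)
arrivals = np.minimum(rng.poisson(A_mean, size=(n_slots, n_ues)), A_max)
gains = rng.exponential(scale=1.0, size=(n_slots, n_ues, n_rbs))   # Rayleigh power gains, mean 1

print(arrivals.mean(), gains.mean())         # roughly 9520 and 1.0
```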
The experiment results are summarized in Fig. 3. Specifically, from Fig. 3a and b, we can see that as V increases, the utility and throughput improve significantly and converge to their maximum levels for larger V. This is expected because the achieved utility approaches the optimum within a gap of O(1/V) as V increases, which implies a control that places more emphasis on throughput. However, as shown by the curves remaining nearly flat for large V, the improvement diminishes with an excessive increase of V, which may then aggravate congestion because the data queue length rises with V. In addition, as V increases, the system places more emphasis on the throughput utility, which could increase γu (with (35)) and then Hu (with (28)), leading to more arrivals being admitted (with (37)) and eventually an increased data queue length (with (22)). Specifically, Fig. 3c shows that the growing data queue length due to the increase of V raises the average delay, and thus the tradeoff between throughput and delay emerges, which confirms Theorem 2.
Impacts of varying V upon the time-average a utility \(\overline {\phi }\), b throughput \(\overline {\gamma }\), c data queue length \(\overline {Q}\), and d energy efficiency \(\overline {\eta _{EE}}\)
On the other hand, the EE values obtained under different EE requirements and different V are shown in Fig. 3d. To interpret them, we note that in the simulations the EE value obtained for different V without any EE requirement is 3.58 on average, denoted here as the EE threshold. Clearly, when \(\eta _{EE}^{\text {req}} = 1\) and 2, which are smaller than the threshold, the EE values actually obtained in this figure as well as the throughput-delay tradeoffs shown above are very similar, regardless of V. On the other hand, when \(\eta _{EE}^{\text {req}}\) increases to 4 and 10, which are larger than the threshold, the average throughput increases, especially when V is small (see Fig. 3b). This phenomenon can be explained with the aid of Fig. 4, which is obtained with V=10^1. As shown therein, to guarantee \(\eta _{EE}^{\text {req}}\), the network decreases the transmit power level and thus encourages the transmissions of small cells by allocating more RBs to SBSs, which achieve a higher spectrum reuse gain, increasing both the obtained EE and the average throughput. When V is small (such as V=10^1 as exemplified), the EE performance gain obtained by a higher EE requirement \(\left (\eta _{EE}^{\text {req}}\right)\) is more significant. On the other hand, as V increases, the system places more emphasis on the throughput utility and pays less attention to EE, and hence the EE gain decreases and becomes less significant (see Fig. 3d). These results confirm that our algorithm is a controllable method that can approach the optimal throughput while satisfying the EE requirement by simply manipulating the parameter V to achieve the performance tradeoff required by the system.
RB allocation results with V=10^1
Our IOMHA concurrently allocates multiple types of resources in multi-tier multi-cell networks, and thus it hardly corresponds to any existing method in the related works, which do not simultaneously consider all of the resources (UEs, RBs, CCs, MCSs, cells, and PLs) and the EE constraint. Nevertheless, to explore its performance benefits in eJCREEP, we extend the greedy algorithm in [8] (called Greedy) and the LL+SS algorithm introduced therein to involve multiple cells and discrete power levels, resulting in methods more comparable to our work than other algorithms whose properties, such as continuous power allocation, are hard to change for the sake of comparison. As introduced in [8], in the first step, the LL+SS algorithm based on [5] performs CC assignment following the Least Load (LL) concept, by which each UE is assigned the CCs with the least number of UEs. In the second step, it assigns RBs of each CC to UEs by its packet scheduling function while resolving the MCS constraint in the scheduling. Even so, LL+SS as well as Greedy still does not consider allocating CCs to multiple cells or utilizing discrete PLs. To address this issue, as the first level of the extension, we allocate CCs to different cells with the objective of maximizing the sum of the SNR values of the CCs perceived by the cells, while complying with our multi-cell constraint that a UE can only be served by a single BS s and that each BS s can be equipped with at most fs CCs. After allocating CCs to each cell, as the second level of the extension, Greedy and LL+SS are run in place of IOMHA in eJCREEP with discrete PLs, respectively, to solve the allocation problem in Section 4.4.3.
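One simple way to realize the first level of this extension is sketched below: when a CC may be reused by every cell, maximizing the summed perceived SNR under the per-BS cap reduces to each cell keeping its fs strongest CCs. The function name, the greedy reduction, and the data layout are our own illustration, not the authors' exact procedure.

```python
import numpy as np

def assign_ccs_to_cells(snr, f_s):
    """First-level extension (illustrative): snr[s, c] is the SNR of CC c perceived by
    cell s, and f_s[s] is the maximum number of CCs BS s may be equipped with.
    With unrestricted CC reuse across cells, maximizing the summed SNR simply means
    each cell keeps its f_s[s] strongest CCs."""
    return {s: sorted(np.argsort(snr[s])[::-1][: f_s[s]].tolist())
            for s in range(snr.shape[0])}

snr = np.random.default_rng(1).uniform(-5.0, 22.38, size=(4, 5))   # 4 cells, 5 CCs
print(assign_ccs_to_cells(snr, f_s=[2, 2, 2, 2]))                  # e.g. {0: [1, 3], ...}
```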
In addition, to cover a more general condition, we no longer restrict ourselves to the SNR values based on the distances and channel models of the previous set of experiments. Instead, we assume that the SNR of each RB perceived by a UE is a random variable uniformly distributed between −5 and 22.38 according to the SNR-CQI index mapping in [27], exemplifying an allocation that can involve all possible mapping values and their results in the simulation. In this case, we solve the allocation problem (38) with an optimization tool for the optimum without limiting its solving time, approach the optimal result by using IOMHA with a reasonable time constraint represented by TB=1000 and Tl=500, and obtain the suboptimal solutions based on Greedy and LL+SS, respectively, to see how their performance differences vary with V. Specifically, in view of the results revealed in the previous experiment set, we use V={10^1, 10^6, 10^15} to exemplify low/middle/high system parameters causing the performance tradeoff over the same spectrum of V from 10^1 to 10^15 considered in Section 7.2, while fixing \(\eta ^{\text {req}}_{EE}=10\) and keeping the other parameters unchanged.
The comparison results are summarized in Fig. 5. As shown in Fig. 5a, while following the performance trend shown in Section 7.2, the throughput of IOMHA approaches the optimal value and is significantly higher than that achieved by Greedy and LL+SS, regardless of V. This confirms the benefit of the joint optimization, which can concurrently decide the CC allocation to cells and the allocation of the involved RBs to UEs while complying with the MCS constraint and the other constraints. In contrast to the joint approach, the related works [5, 8] usually schedule RBs, with or without the MCS constraint, under the assumption of pre-allocated CCs. Here, without the joint optimization gain, Greedy is worse than IOMHA as a result, but it still outperforms LL+SS, which is consistent with the observation in [8].
Performance comparison on the time-average a throughput \(\overline {\gamma }\), b data queue length \(\overline {Q}\), and c energy efficiency \(\overline {\eta _{EE}}\) that are optimally obtained and resulted from IOMHA, Greedy, and LL+SS, respectively
In Fig. 5b, the data queue length is shown on a log10 scale to focus on the performance differences among the methods in this metric. With a linear scale, the larger queue lengths resulting from a high V (10^15) would dominate the figure, making the results for lower V (10^1 or 10^6) indistinct even though the relative differences among the methods are large regardless of V. In this representation, it is clearly shown that IOMHA yields a lower queue length than Greedy and LL+SS for all three values of V, which also indicates a lower delay obtained by our method.
Finally, in Fig. 5c, the EE performance exhibits the same decreasing trend as that observed in Fig. 3d. While all the compared methods share this trend as expected, the EE performance of IOMHA in eJCREEP is only slightly lower than the optimum, and Greedy is lower than ours but still outperforms LL+SS significantly. Taking all three metrics (throughput, queue length or delay, and EE) into account, using IOMHA with a proper time constraint to resolve the resource allocation problem and gradually improve the result is a good way to trade the optimality of eJCREEP, an NP-hard optimization problem, against a lower and controllable complexity. That is, using IOMHA in eJCREEP is better than simply adopting on-the-fly methods such as Greedy and LL+SS, which are run only once to obtain a suboptimal solution to the complex allocation problem without any chance for further improvement.
In this work, we have addressed an optimization problem on the throughput utility while satisfying the EE requirement under time-varying channel conditions and data traffic, realized by the carrier aggregation technique in 5G heterogeneous wireless networks. To obtain a practical solution, the high-dimensional NP-hard allocation problem involved was first formulated as a programming model with nonlinear integer constraints and then reformulated as an equivalent problem involving only linear integer constraints. However, finding an optimal solution for the mixed integer programming model without special structures imposed is still NP-hard and time-consuming, even in the linear form. To address this challenge, an iterative optimal MCS-based heuristic algorithm (IOMHA) was proposed for the lower layer to approach the optimum within a limited period of time demanded by the user. On top of that, a Lyapunov optimization framework was developed to resolve the problem in the higher layer, which can admit time-varying traffic without a priori knowledge of arrivals. Then, with the solutions from the two layers, we completed an approach that can make an optimal tradeoff with a system control parameter V and satisfy the long-term EE requirement simultaneously. Finally, the proposed framework was verified to reveal the performance tradeoffs among throughput, delay, and energy efficiency, showing that it can serve as an efficient way to address such a complex optimization problem and exhibiting the performance trends of the tradeoffs for future works. In particular, as resource allocation for today's stochastic networks becomes more challenging in meeting fast convergence and tolerable delay requirements, a machine learning approach involving batch training could be developed as future work, while preserving the stochastic network optimization context in which queue stability is guaranteed by our Lyapunov drift-plus-penalty framework; such an approach could take advantage of the proposed iterative optimal MCS-based heuristic algorithm to flexibly adjust the convergence time required by the system.
Proof of Theorem 1:
By leveraging the fact that for \(A \geq 0, b \geq 0, Q \geq 0\), \(\left(\max\{Q-b,0\}+A\right)^{2} \leq Q^{2}+A^{2}+b^{2}+2Q(A-b)\), we can square both sides of (22), (28), and (29), and sum the squares for (22) and (28) over all u, leading to
$$\begin{array}{@{}rcl@{}} && \sum_{u \in {\mathcal{U}}} \left(Q_{u}(t+1)^{2} - Q_{u}(t)^{2}\right) \leq \sum_{u \in {\mathcal{U}}} (A_{u})^{2} + \\ && \sum_{u \in {\mathcal{U}}} (\mu_{u})^{2} + 2 \sum_{u \in {\mathcal{U}}} Q_{u}(t) (R_{u}(t)- \mu_{u}(t)) \end{array} $$
$$\begin{array}{@{}rcl@{}} && \sum_{u \in {\mathcal{U}}} \left(H_{u}(t+1)^{2} - H_{u}(t)^{2}\right) \leq 2 \sum_{u \in {\mathcal{U}}} (A_{u})^{2} + \\ &&\hspace{40pt} 2 \sum_{u \in {\mathcal{U}}} H_{u}(t) (\gamma_{u}(t)- R_{u}(t)) \end{array} $$
$$\begin{array}{@{}rcl@{}} && Z(t+1)^{2} - Z(t)^{2} \leq (P_{\text{tot}}(t))^{2} + (R_{\text{tot}}(t))^{2} + \\ &&\hspace{50pt} 2 Z(t) \left(W \eta_{_{EE}}^{\text{req}} P_{\text{tot}}(t) - R_{\text{tot}}(t) \right) \end{array} $$
Let \(A_{u}^{\text {max}}\) and \(\mu _{u}^{\text {max}}\) be the upper bounds of Au(t) and μu(t),∀t, respectively. Further, let \(R_{tot}^{\text {max}}(t)\) be \(\sum _{\forall \underline {e} \in \Xi } v(\underline {e}(t))\), and \(P_{\text {tot}}^{\text {max}}(t)\) be \(W \eta _{_{EE}}^{\text {req}} \left (\sum _{\forall \underline {e} \in \Xi } \mathbf {I}(\underline {e}) \left (P^{p}_{s, u}(t) + P^{c}_{s,u} \right)\right)\), where I(x)=1,∀x. In addition, note that \(R_{u}(t) \leq A_{u}^{\text {max}}\) and \(\gamma _{u}(t) \leq A_{u}^{\text {max}}\). After substituting these definitions and bounds into (40), (41), and (42), we combine the resulting inequalities and take the expectation with respect to Θ(t) on both sides, which eventually leads to the one-slot conditional Lyapunov drift as follows:
$$\begin{array}{@{}rcl@{}} && \Delta(\Theta(t)) \leq \Gamma + \mathbb{E}\left\{ \sum_{u \in {\mathcal{U}}} Q_{u}(t) \left(R_{u}(t) - \mu_{u}(t) \right) | \Theta(t) \right\} + \\ && \hspace{10pt} \mathbb{E}\left\{ \sum_{u \in {\mathcal{U}}} H_{u}(t) \left(\gamma_{u}(t) - R_{u}(t) \right) | \Theta(t) \right\} + \\ && \hspace{10pt} \mathbb{E}\left\{Z(t) \left(W \eta_{_{EE}}^{\text{req}} P_{\text{tot}}(t) - R_{\text{tot}}(t) \right) | \Theta(t) \right\} \ \end{array} $$
where \(\Gamma \!\,=\, \frac {1}{2} \left (\!3 \sum _{u \in {\mathcal {U}}} \left (A_{u}^{\text {max}}\right)^{2} \!\,+\, \sum _{u \in {\mathcal {U}}} \left (\mu _{u}^{\text {max}}\right)^{2} \!\,+\, \left (P_{\text {tot}}^{\text {max}}(t)\right)^{2} + (R_{\text {tot}}^{\text {max}}(t))^{2} \right)\). Finally, (33) is obtained by adding \(- V \mathbb {E} \left \{\sum _{u \in {\mathcal {U}}} \overline {\phi (\gamma _{u}(t))} | \Theta (t) \right \}\) on both sides of (43).
Proof of Theorem 2
For the performance bound, we first show that \(H_{u}^{\text {max}}\stackrel {\triangle }{=} v_{u} V + A_{u}^{\text {max}}\) is an upper bound of Hu(t). This is done by induction, showing that if the bound holds at time slot t, it also holds at time t+1. More specifically, because γu(t) cannot exceed \(A_{u}^{\text {max}}\), the algorithm can increase Hu(t) by at most \(A_{u}^{\text {max}}\) in slot t based on (37); thus, if Hu(t)≤vuV, Hu(t+1) will not exceed \(v_{u} V + A_{u}^{\text {max}}\). Otherwise, if Hu(t)>vuV, γu(t) will be 0 according to (35). In this case, Hu(t) will not increase at t+1, and hence Hu(t+1)≤Hu(t), which is bounded above by \(H_{u}^{\text {max}}\).
Next, we proceed to prove that Qu(t) is bounded with respect to \(H_{u}^{\text {max}}\) shown above, which can also be done by induction. First, the bound is assumed to hold at t. Given the induction hypothesis and the relationship \(R_{u}(t) \leq A_{u} (t) \leq A_{u}^{\text {max}}\), Qu can increase by at most \(A_{u}^{\text {max}}\) in one slot. Recall that \(H_{u}^{\text {max}} \stackrel {\triangle }{=} v_{u} V + A_{u}^{\text {max}}\) is the upper bound of Hu(t). Then, if \(Q_{u}(t) \leq H_{u}^{\text {max}}\), Qu(t+1) will not exceed \(H_{u}^{\text {max}} + A_{u}^{\text {max}} = \left (v_{u} V + A_{u}^{\text {max}}\right) + A_{u}^{\text {max}} = v_{u} V + 2 A_{u}^{\text {max}}\), according to the data queueing dynamic (22), which increases Qu(t) by at most Ru(t), while Ru(t) is at most \(A_{u}^{\text {max}}\) based on (37). Otherwise, if \(Q_{u}(t) > H_{u}^{\text {max}}\), then Ru(t) will be 0 according to (37) as well. Both cases confirm that \(Q_{u}^{\text {max}} \stackrel {\triangle }{=} H_{u}^{\text {max}} + A_{u}^{\text {max}} = v_{u} V + 2 A_{u}^{\text {max}}\) is the bound shown in (39), and the proof is done.
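The induction above can be checked numerically. The sketch below simulates the queue dynamics (22) and (28) under the two threshold rules used in the proof (γu = 0 once Hu exceeds vuV, and Ru = 0 once Qu exceeds Humax); the parameter values, the uniform service process, and all variable names are illustrative assumptions rather than the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(2)
V, v_u, A_max = 1e3, 1.0, 50.0
H_max = v_u * V + A_max            # bound of H_u(t) shown by the first induction
Q_max = v_u * V + 2 * A_max        # bound of Q_u(t) claimed in (39)

Q = H = 0.0
worst_Q = worst_H = 0.0
for t in range(200000):
    A = rng.uniform(0.0, A_max)                 # arrivals, never exceeding A_max
    gamma = 0.0 if H > v_u * V else A           # auxiliary rate rule used in the proof (cf. (35))
    R = 0.0 if Q > H_max else A                 # admission rule used in the proof (cf. (37))
    mu = rng.uniform(0.0, A_max)                # amount served in this slot (illustrative)
    Q = max(Q - mu, 0.0) + R                    # data queue dynamic (22)
    H = max(H - R, 0.0) + gamma                 # virtual queue dynamic (28)
    worst_Q, worst_H = max(worst_Q, Q), max(worst_H, H)

print(worst_H <= H_max, worst_Q <= Q_max)       # prints: True True
```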
This choice follows the observation in [11, 12] that discrete power control offers two main benefits over continuous power control: (i) it simplifies the transmitter design and, more importantly, (ii) it significantly reduces the overhead of information exchange among network nodes.
3GPP:
3rd Generation Partnership Project
5G:
Fifth generation
CC:
Component carrier
CQI:
Channel quality indicator
EE:
Energy efficiency
eJCREEP:
Equivalent joint congestion control and resource allocation with EE-delay tradeoff problem
HetNet:
Heterogeneous network
IOMHA:
Iterative optimal-MCS-based heuristic algorithm
RWP:
Random waypoint
JCREEP:
Joint congestion control and resource allocation with EE-delay tradeoff problem
LTE:
Long-Term Evolution
LTE-A:
Long-Term Evolution-Advanced
MBS:
Macro base station
MC:
Macro cell
MCS:
Modulation and coding scheme
NP:
Non-deterministic polynomial time
OFDMA:
Orthogonal frequency division multiple access
PL:
Power level
RB:
Resource block
RRM:
Radio resource management
SBS:
Small base station
SC:
Small-cell
SNR:
Signal-to-noise ratio
TTI:
Transmission time interval
UE:
User equipment
E. Hossain, M. Hasan, 5G cellular: key enabling technologies and research challenges. IEEE Instrum. Meas. Mag. 18, 11–21 (2015).
M. Deghani, K. Arshad, Lte-advanced radio access enhancements: A survey. Wirel. Pers. Commun.80(3) (2014).
FemtoForum, Femtocells Natural Solution for Offload. Tech. Rep. (2010). https://www.slideshare.net/wandalex/femtocells-a-natural-solution-for-offload.
F. D. Ganni, A. Pratap, R. Misra, in Proceedings of the 7th ACM International Workshop on Mobility, Interference, and MiddleWare Management in HetNets (MobiMWareHN'17). Distributed algorithm for resource allocation in downlink heterogeneous small cell networks (New York, USA, 2017), pp. 5–6.
Y. Wang, K. I. Pedersen, T. B. Sorensen, P. E. Mogensen, Carrier load balancing and packet scheduling for multi-carrier systems. IEEE Trans. Wirel. Commun.9(5), 1780–1789 (2010).
Y. Wang, K. I. Pedersen, T. B. Sorensen, P. E. Mogensen, in 2011 IEEE 73rd Vehicular Technology Conference (VTC Spring). Utility Maximization in LTE-Advanced Systems with Carrier Aggregation, (2011), pp. 1–5. https://doi.org/10.1109/VETECS.2011.5956494.
L. Zhang, K. Zheng, W. Wang, L. Huang, Performance analysis on carrier scheduling schemes in the long-term evolution-advanced system with carrier aggregation. IET Commun.5(5), 612–619 (2011).
H. S. Liao, P. Y. Chen, W. T. Chen, An efficient downlink radio resource allocation with carrier aggregation in LTE-advanced networks. IEEE Trans. Mob. Comput. 13(10), 2229–2239 (2014).
H. Zhang, H. Liu, J. Cheng, V. C. M. Leung, Downlink energy efficiency of power allocation and wireless backhaul bandwidth allocation in heterogeneous small cell networks. IEEE Trans. Commun.66(4), 1705–1716 (2018).
H. Beyranvand, M. Levesque, M. Maier, J. A. Salehi, C. Verikoukis, D. Tipper, Toward 5G: FiWi enhanced LTE-A HetNets with reliable low-latency fiber backhaul sharing and WiFi offloading. IEEE/ACM Trans. Netw. 25(2), 690–707 (2017).
H. Zhang, L. Venturino, N. Prasad, P. Li, S. Rangarajan, X. Wang, Weighted sum-rate maximization in multi-cell networks via coordinated scheduling and discrete power control. IEEE J. Sel. Areas Commun.29(6), 1214–1224 (2011).
J. Zheng, Y. Cai, Y. Liu, Y. Xu, B. Duan, X. (Sherman) Shen, Optimal power allocation and user scheduling in multicell networks: base station cooperation using a game-theoretic approach. IEEE Trans. Wirel. Commun. 13(12), 6928–6942 (2014).
3GPP TS 36.213 version 8.4.0 Release 8, Evolved universal terrestrial radio access (E-UTRA): physical layer procedures. https://www.3gpp.org/ftp/Specs/archive/36_series/36.213/.
A. L. Soyster, B. Lev, W. Slivka, Zero-one programming with many variables and few constraints. Eur. J. Oper. Res.2:, 195–201 (1978).
S. Hanafi, C. Wilbaut, Improved convergent heuristics for the 0-1 multidimensional knapsack problem. Ann. Oper. Res.183(1), 125–142 (2011).
M. Dyer, L. Stougie, Computational complexity of stochastic programming problems. Math. Program.106(3), 423–432 (2006).
M. J. Neely, Stochastic Network Optimization with Application to Communication and Queueing Systems (Morgan and Claypool Publishers, San Rafael, 2010).
D. W. K. Ng, E. S. Lo, R. Schober, Energy-efficient resource allocation in multi-cell OFDMA systems with limited backhaul capacity. IEEE Trans. Wirel. Commun.11(10), 3618–3631 (2012).
H. Lee, S. Vahid, K. Moessner, A survey of radio resource management for spectrum aggregation in LTE-advanced. IEEE Commun. Surv. Tutor.16(2), 745–760 (2014).
F. Wu, Y. Mao, S. Leng, X. Huang, in 2011 IEEE Ninth International Conference on Dependable, Autonomic and Secure Computing (Sydney, NSW, 2011). A carrier aggregation based resource allocation scheme for pervasive wireless networks, pp. 196–201. https://doi.org/10.1109/DASC.2011.54.
H. Mahdavi-Doost, N. Prasad, S. Rangarajan, in 2016 8th International Conference on Communication Systems and Networks (COMSNETS). Energy efficient downlink scheduling in LTE-Advanced networks (Bangalore, 2016), pp. 1–8. https://doi.org/10.1109/COMSNETS.2016.7439928.
D. Lopez-Perez, X. Chu, A. V. Vasilakos, H. Claussen, On distributed and coordinated resource allocation for interference mitigation in self-organizing LTE networks. IEEE/ACM Trans. Netw.21(4), 1145–1158 (2013).
J. -s. Liu, Joint downlink resource allocation in LTE-advanced heterogeneous networks. Comput. Netw.146:, 85–103 (2018).
3GPP TS 36.211, Evolved Universal Terrestrial Radio Access (E-UTRA): physical channels and modulation, version 11.0.0, Release 11. 3rd Generation Partnership Project (3GPP) (2012).
3rd Generation Partnership Project, 3GPP TS 36.213 v10.6.0, Evolved Universal Terrestrial Radio Access (E-UTRA). Physical layer procedure (2012).
D. T. Ngo, T. Le-Ngoc, Architectures of Small-cell Networks and Interference Management (1st Ed.) (Springer, New York, 2014).
S. S. A. Tiwari, LONG TERM EVOLUTION (LTE) PROTOCOL Verification of MAC Scheduling Algorithms in NetSim (Tetcos White Paper, Tetcos, 2014).
M. T. Kawser, N. I. B. Hamid, M. N. Hasan, M. S. Alam, M. M. Rahman, Downlink SNR to CQI mapping for different multiple antenna techniques in LTE. Int. J. Inf. Electron. Eng. 2(5), 756–760 (2012).
W. L. Winston, M. Venkataramanan, Introduction To Mathematical Programming (Duxbury Resource Center, Belmont CA, 2002).
Y. Li, M. Sheng, Y. Shi, X. Ma, W. Jiao, Energy efficiency and delay tradeoff for time-varying and interference-free wireless networks. IEEE Trans. Wirel. Commun.13(11), 5921–5931 (2014).
M. Sheng, Y. Li, X. Wang, J. Li, Y. Shi, Energy efficiency and delay tradeoff in device-to-device communications underlaying cellular networks. IEEE J. Sel. Areas Commun.34:, 92–106 (2016).
M. J. Neely, A simple convergence time analysis of drift-plus-penalty for stochastic optimization and convex program. arXiv:1412.0791v1 [math.OC], 1–10 (2014).
D. B. Johnson, D. A. Maltz, in Mobile Computing. The Kluwer International Series in Engineering and Computer Science, 353, ed. by T. Imielinski, H. F. Korth. Dynamic Source Routing in Ad Hoc Wireless Networks (Springer, Boston, 1996).
H. Liao, P. Chen, W. Chen, An efficient downlink radio resource allocation with carrier aggregation in LTE-advanced networks. IEEE Trans. Mob. Comput.13(10), 2229–2239 (2014).
Department of Computer Science and Information Engineering, Providence University, Taichung, 43301, Taiwan
Jain-Shing Liu
Department of Computer Science and Engineering, National Sun Yat-Sen University, Kaohsiung, 804, Taiwan
Chun-Hung Lin
& Heng-Chih Huang
All authors contribute to the concept, the design, and developments of the algorithm and the simulation results in this manuscript. All authors read and approved the final manuscript.
Correspondence to Chun-Hung Lin.
Liu, J., Lin, C. & Huang, H. Joint congestion control and resource allocation for energy-efficient transmission in 5G heterogeneous networks. J Wireless Com Network 2019, 227 (2019) doi:10.1186/s13638-019-1532-z
Heterogeneous wireless networks
Joint optimization
Matching Camera to Microscope Resolution
The ultimate resolution of a charge-coupled device (CCD) or complementary metal oxide semiconductor (CMOS) image sensor is a function of the number of photodiodes and their size relative to the image projected onto the surface of the imaging array by the microscope optical system. When attempting to match microscope optical resolution to a specific digital camera and video coupler combination, use this calculator for determining the minimum pixel density necessary to adequately capture all of the optical data from the microscope.
The tutorial initializes with a randomly chosen specimen appearing in the Specimen Image window (black box) and bounded by the eyepiece aperture or projection lens field diaphragm. A colored rectangle designating the CCD dimensions (2/3-inch by default) is superimposed over the image to reveal the actual area of the specimen that is captured by the sensor. In the gray, yellow, and red boxes beneath the sliders, the microscope Optical Resolution (gray), CCD Required Pixel Size (yellow), Optimum CCD Array Size (yellow), Monitor Magnification (red) and Total Magnification (red) of the image are presented in micrometers or a product. These values are continuously updated as the sliders are translated. A new CCD Format (size) can be selected by using the radio buttons appearing to the left of the Specimen Image window. The physical CCD Dimensions of the selected sensor (in millimeters) are displayed on the right side of the image window along a rectangle having the same aspect ratio as the imaging chip.
In order to operate the tutorial, shift the Numerical Aperture and Objective Magnification sliders (values appear above the slider bars) to set the appropriate values for the microscope optical configuration to be considered. Next, choose an eyepiece or projection lens Field Number (values range between 18 and 26 millimeters) and Video Coupler magnification (between 0.5x and 1.0x). As the coupler slider is translated, the size of the rectangle superimposed over the specimen image is altered by the tutorial to match the specimen area captured by the CCD sensor. A new specimen can be selected at any point by using the Choose A Specimen pull-down menu.
The efficiency of capturing images generated by an optical microscope onto the photodiode array of a CCD or CMOS image sensor is dependent upon several factors, ranging from the objective magnification, numerical aperture, and resolution, to the electronic image sensor photodiode array size, aspect ratio, video coupler magnification, and the dimensions of individual photo-sensitive elements within the array. In addition, parameters that are specific to the specimen being imaged, such as contrast, signal-to-noise ratio, intrascene dynamic range, and integration time, must also be considered.
The ultimate optical resolution of a CCD is a function of the number of photodiodes and their size relative to the image projected onto the array surface by the microscope lens system. Currently available CCD arrays vary in size from several hundred to many thousands of pixels. Modern array sizes used in devices intended for scientific investigations range from 1000 × 1000 up to 5000 × 5000 sensor elements. The trend in consumer and scientific-grade CCD manufacture is for the sensor size to continuously decrease, and digital cameras with photodiodes as small as 4 × 4 micrometers are currently available.
Adequate resolution of a specimen imaged with the optical elements of a microscope can only be achieved if at least two samples are made for each resolvable unit, although many investigators prefer three samples per resolvable unit to ensure sufficient sampling. In diffraction limited optical instruments, such as the microscope, the Abbe limit of optical resolution at an average visible light wavelength (550 nanometers) is 0.20 micrometers when using an objective lens having a numerical aperture of 1.4. In this case, a sensor size of 10 square micrometers would be just large enough to allow the optical and electronic resolution to be matched, with a 7 × 7 micrometer sensor size preferred. Although smaller photodiodes in a CCD image sensor improve the spatial resolution, they also limit the dynamic range of the device.
Table 1 - Pixel Size Requirements for Matching Microscope Optical Resolution

Objective (Numerical Aperture) | Resolution (Micrometers) | Projected Size (Micrometers) | Required Pixel Size (Micrometers)
1x (0.04) | 6.9 | 6.9 | 3.5
4x (0.10) | 2.8 | 11.2 | 5.6
10x (0.25) | 1.1 | 11.0 | 5.5
10x (0.30) | 0.92 | 9.2 | 4.6
20x (0.40) | 0.69 | 13.8 | 6.9
60x (0.80) | 0.34 | 20.4 | 10.2
100x (0.90) | 0.31 | 31.0 | 15.5
In microscopy, the image is typically projected by the optical system onto the surface of a detector, which can be the retina of a human eye, an electronic image sensor, or the sensitive chemical emulsion on traditional film. In order to optimize the information content of the resulting image, the resolution of the detector must closely match that of the microscope. The wavelength spectrum of visible light used to create the image of a specimen is one of the determining factors in the performance of the microscope with respect to optical resolution. Shorter wavelengths (375-500 nanometers) are capable of resolving details to a greater degree than are the longer wavelengths (greater than 500 nanometers). The limits of spatial resolution are also dictated by the diffraction of light through the optical system, a term that is generally referred to as diffraction limited resolution. Investigators have derived several equations that express the relationship between numerical aperture, wavelength, and optical resolution:
Formula 1 - Numerical Aperture, Wavelength, and Optical Resolution
$$r = \frac{λ}{2 × NA}$$
$$r = 0.61 × \frac{λ}{NA}$$
$$r = 1.22 × \frac{λ}{NA_{Obj} + NA_{Cond}} $$
Where r is resolution (the smallest resolvable distance between two specimen points), NA equals the objective numerical aperture, λ equals wavelength, NA(Obj) equals the objective numerical aperture, and NA(Cond) is the condenser numerical aperture. Notice that equations (1) and (2) differ only in the multiplication factor, which is 0.5 for equation (1) and 0.61 for equation (2). These equations are based upon a number of factors, including a variety of theoretical calculations made by optical physicists to account for the behavior of objectives and condensers, and should not be considered absolute values of any one general physical law. The assumption is that two point light sources can be resolved (separately imaged) when the center of the Airy disk generated by one of the sources overlaps with the first minimum in the diffraction pattern of the second Airy disk, a condition known as the Rayleigh Criterion. In some instances, such as confocal and multiphoton fluorescence microscopy, the resolution may actually exceed the limits placed by any one of these three equations. Other factors, such as low specimen contrast and improper illumination, may serve to lower resolution and, more often than not, the theoretical limit of r (about 0.20 micrometers using a mid-spectrum wavelength of 550 nanometers and a numerical aperture of 1.35 to 1.40) is not realized in practice.
When the microscope is in perfect alignment and has the objectives appropriately matched with the substage condenser, then the objective numerical aperture value can be substituted into equations (1) and (2), with the added result that equation (3) reduces to equation (2). An important concept to note is that magnification does not appear as a factor in any of these equations, because only numerical aperture and the wavelength of the illumination determine specimen resolution. As mentioned above (and as can be observed in the equations), the wavelength of light is an important factor in the resolution of a microscope. Shorter wavelengths yield higher resolution (lower values for r) and vice versa. The greatest resolving power in optical microscopy is realized with near-ultraviolet light, the shortest effective imaging wavelength. Near-ultraviolet light is followed by blue, then green, and finally red light in the ability to resolve specimen detail. Under most circumstances, microscopists use broad-spectrum white light generated by a tungsten-halogen bulb to illuminate the specimen. The visible light spectrum is centered at about 550 nanometers, the dominant wavelength for green light (our eyes are most sensitive to green light). It is this wavelength that was used to calculate the resolution values for the tutorial and presented in Table 1. The numerical aperture value is also important in these equations; higher numerical apertures also produce higher resolution (see Table 1).
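The pixel-size matching performed by the tutorial's calculator can be reproduced with a few lines of code. The sketch below uses the Abbe form r = λ/(2·NA) from equation (1) at 550 nanometers, assumes a 1x video coupler, and applies two samples per resolvable unit; the function name is our own.

```python
def matched_pixel_size(magnification, numerical_aperture,
                       wavelength_um=0.550, samples_per_unit=2):
    """Match camera pixel size to microscope optical resolution (illustrative).

    resolution     : Abbe limit r = wavelength / (2 * NA), equation (1)
    projected size : resolution * magnification (assuming a 1x video coupler)
    pixel size     : projected size / samples_per_unit (Nyquist-style sampling)
    All lengths are in micrometers."""
    resolution = wavelength_um / (2.0 * numerical_aperture)
    projected = resolution * magnification
    return resolution, projected, projected / samples_per_unit

# Reproduces the rows of Table 1, e.g. the 10x / 0.25 NA objective:
r, proj, pixel = matched_pixel_size(10, 0.25)
print(round(r, 2), round(proj, 1), round(pixel, 1))   # 1.1 11.0 5.5
```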
Matthew J. Parry-Hill, Kimberly M. Vogt, John D. Griffin, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
Artificial noise-assisted physical layer security in D2D-enabled cellular networks
Yajun Chen1,
Xinsheng Ji1,2,3,
Kaizhi Huang1,
Jing Yang1,
Xin Hu1 &
Yunjia Xu1
This article has been updated
The Correction to this article has been published in EURASIP Journal on Wireless Communications and Networking 2017 2017:199
Device-to-device (D2D) communication has been deemed a promising technology in the next generation 5G wireless communication. Due to the openness of the transmission medium, secure transmission is a critical issue in the D2D-enabled cellular network as well as in other wireless systems. In this paper, we investigate secure communication for the cellular downlink in this hybrid network. We consider a case in which each base station has no channel state information (CSI) from D2D transmitters, which are generally deployed at the cell edge. To guarantee the secure communication of the cellular link, each base station employs the artificial-noise-assisted transmission strategy. Firstly, we derive the closed-form expression and asymptotic expression of the secrecy outage probability of the cellular link in different scenarios: (I) eavesdroppers having no multi-user decodability; (II) eavesdroppers having the multi-user decodability. Then, we comprehensively discuss the impacts of the main system parameters on the performance to provide some system design guidance. To characterize the reliable communication of the typical D2D link, the closed-form expression and asymptotic expression of the connection outage probability are respectively derived and some comprehensive analysis is presented. Finally, simulation results are provided to validate the effectiveness of the theoretical analysis.
To meet the explosive demand for proximity services, device-to-device (D2D) communication has been regarded as an ideal candidate technology for the next generation 5G wireless communication. D2D communication allows proximate user equipments to deliver their own messages over the direct link established between them without the base station relaying messages, which promises several advantages: superior spectrum efficiency, improved quality of service (QoS) for edge users, and increased network capacity. Accordingly, D2D communication underlaying a cellular network has attracted remarkable attention in both academia [1–3] and industry [4, 5] in recent years.
Due to the openness nature of the transmission medium, secure transmission is identified as a critical challenge facing the D2D-enabled cellular network as well as other wireless systems. As a remedy of the traditional security mechanism, the concept of physical layer security (PHY-security) has been proposed recently to achieve secure communication for wireless systems by exploiting the characteristics of wireless channels.
Recently, PHY-security in the D2D-enabled cellular network has sparked wide interest and produced fruitful research in many different scenarios. To the best of our knowledge, most works designed different resource-scheduling schemes to guarantee secure communication for the cellular uplink, such as [6–11]. More specifically, the literature above employed the hybrid interference to improve the secrecy performance of the cellular uplink.
For the cellular downlink, Liu et al. proposed a power transfer model and an information signal model to enable wireless energy harvesting and secure information transmission in large-scale cognitive cellular networks, and comprehensively discussed wireless power transfer policies and secrecy performance in [12]. More particularly, the authors in [13] designed two optimal D2D link-scheduling schemes under different criteria when each base station has a single antenna. When a base station is equipped with multiple antennas, the spatial redundancy can be exploited to enhance secure communication for the cellular link through designed schemes such as secure beamforming and artificial-noise-assisted schemes. Specifically, Chu et al. investigated robust secrecy rate optimization problems for a multiple-input single-output (MISO) secrecy channel with multiple D2D communications, which were equivalently converted into the power minimization problem and the secrecy rate maximization problem to design the robust secure beamforming in [14]. The authors in [15] investigated secure wireless-powered D2D communication, in which a base station charges D2D transmitters in the wireless energy transfer phase and provides a jamming service to interfere with the multiple eavesdroppers.
On the other hand, the artificial-noise-assisted scheme is the most representative among the different PHY-security schemes assuring the security of wireless communication in MISO or multiple-input multiple-output (MIMO) scenarios [16, 17]. The design idea of the artificial-noise-assisted scheme is that legitimate transmitters inject artificial noise into their transmission signals to confuse malicious eavesdroppers. Meanwhile, in order to guarantee the reliable communication of the legitimate user as much as possible, the artificial noise should be injected into the null space of the main channel (from source to destination). The authors in [18, 19] extended the artificial noise-assisted scheme to the MISO D2D-enabled cellular network. More particularly, they designed the corresponding artificial noise-assisted beamforming vector matrix under the assumption that the channel state information (CSI) from each D2D transmitter is perfectly known at each base station.
Nevertheless, they considered only one cellular user and one D2D pair within a cell, focusing on the point-to-point link and ignoring the interference from neighboring cells [18, 19]. On the other hand, owing to the original purpose of D2D communication, the D2D transmitter is generally deployed at the cell edge. Hence, in practical cases, the CSI between each base station and each D2D transmitter is difficult to know perfectly at each base station due to channel estimation and quantization errors.
Motivation and contributions
Motivated by the abovementioned observations, in this paper, we consider a case in which each base station only knows the CSI of the served cellular user, but does not know the CSI of each D2D transmitter. Firstly, the spatial locations of base stations, cellular users, D2D pairs, and eavesdroppers within a cell are modeled as independent homogeneous Poisson point processes (HPPP). In order to guarantee secure transmission for the cellular link, each multi-antenna base station employs the artificial-noise-assisted transmission strategy. Then, we derive the secrecy outage probability of the cellular link and provide some comprehensive analysis. Finally, we derive the connection outage probability of the typical D2D link to characterize its reliable communication. Specially, our main contributions can be summarized as follows:
In this hybrid network, we consider a case in which each base station has no CSI from each D2D transmitter, which is generally deployed at the cell edge. To guarantee secure communication for the cellular link, it is assumed that each base station employs the artificial-noise-assisted transmission scheme. The closed-form expression and asymptotic expression of the secrecy outage probability of the typical cellular link are derived in the scenario in which eavesdroppers do not have the multi-user decodability. Based on the derived result of the secrecy outage probability, we provide some comprehensive analysis of the secrecy performance of the cellular link.
Then, when eavesdroppers have the multi-user decodability, the closed-form expression and asymptotic expression of the secrecy outage probability of the typical cellular link are also respectively derived. Based on the derived results of the secrecy outage probability in this case, some comprehensive analysis is also provided to guide the system design.
Finally, according to the design of the artificial-noise-assisted transmission scheme, the artificial noise is only injected into the null space of the cellular channel because each base station does not know the CSI of each D2D transmitter. Hence, both the artificial noise and the information-bearing signal can degrade the reliable communication of the typical D2D link. In order to characterize the reliability performance, we analytically derive the closed-form expression and the asymptotic expression of the connection outage probability, and provide some comprehensive analysis.
Organization and notations
The remainder of this paper is organized as follows. In Section 2, we present the system model. In Section 3, the secrecy outage probability of the typical cellular link and the connection outage probability of the typical D2D link are respectively derived, and some corresponding analysis is provided. Simulation results are presented in Section 4. Finally, we conclude this paper in Section 5.
Notations: Bold letters mean matrices (column vectors). We use \({{\mathcal C}{\mathcal N}}\left ({\mu,{N_{0}}} \right)\) to denote the circularly symmetric complex Gaussian with mean μ and covariance N 0. \(\mathbb {P}\left \{\bullet \right \}\) represents the probability of an input event and the notation \({\mathbb {E}}\{\bullet \}\) denotes the statistical expectation. exp(1) denotes the exponential distribution with unit mean. Gamma(N,λ) is Gamma distribution with parameters N and λ. In addition, ∥∙∥ denotes euclidean norm and (∙)T means the transpose of the input matrix. \({\kappa _{n}} \buildrel \Delta \over = {{\Gamma \left ({n - 1 + \rho } \right)\Gamma \left ({1 - \rho } \right)} \left /\right. {\Gamma \left ({n - 1} \right)}}\) and Γ(x) is gamma function.
As illustrated in Fig. 1, we consider the cellular downlink between the base station and the cellular user, in which a set of malicious eavesdroppers attempt to intercept the confidential message of the cellular link in a passive way without modifying it. Each D2D pair DD n consists of a transmitter T n and its associated receiver D n . The spatial locations of base stations, cellular users, D2D transmitters, and eavesdroppers, denoted as Φ b , Φ c , Φ d , Φ e , are modeled as HPPPs with intensities λ b , λ c , λ d , λ e over the two-dimensional space, respectively. Each D2D receiver is located at a fixed distance from its corresponding D2D transmitter in an isotropic direction. It is assumed that each legitimate user (including cellular users and D2D pairs) and each eavesdropper is equipped with a single antenna. Each base station has M antennas, where M≥2.
System model. Figure 1 depicts the cellular downlink between the base station and the cellular user in which a set of malicious eavesdroppers attempt to intercept the confidential message of the cellular link in a passive way
Both the large-scale fading and small-scale fading of wireless channels are considered in this paper. The standard path loss model is adopted for the large-scale fading, i.e., \(l(r_{ij})=r_{ij}^{-\alpha}\), where \(r_{ij}\) is the distance between node i and node j, and α>2 represents the path loss exponent. In addition, the small-scale fading follows an independent quasi-static Rayleigh fading model, whose coefficients remain constant within each transmission block.
Artificial noise-assisted transmission scheme and wiretap code
To guarantee secure transmission of the cellular link, every multi-antenna base station employs the artificial noise-assisted transmission scheme to confuse malicious eavesdroppers. Consequently, the transmitted signal u i of the base station located at x i can be expressed as:
$$ {{{\mathbf{u}}_{i}} = \sqrt {p_{I}} {{\mathbf{w}}_{i}}{s_{i}} + \sqrt {p_{A}} {{\mathbf{W}}_{i}}{{\mathbf{v}}_{i}}}, $$
where s i is the information-bearing signal with \({\mathbb {E}\left [ {{{\left | {{s_{i}}} \right |}^{2}}} \right ] = 1}\), \({{\mathbf {v}}_{i} \in \mathbb {C}^{\left ({M-1}\right) \times {1}}}\) is an artificial noise vector with independent identically distributed (i.i.d.) entries \({v_{i,n}}\sim {{\mathcal {C}}{\mathcal {N}}}\left ({0,\frac {1}{{M - 1}}} \right)\). p I =ϕ p is the allocated transmission power to the information-bearing signal and p A =(1−ϕ)p represents the allocated transmission power to generate the artificial noise at each base station to confuse malicious eavesdroppers, where p is the total transmission power. Thus, ϕ∈[0,1] represents the ratio of the total transmission power p allocated to transmit the information-bearing signal s i . In addition, h i means the wireless channel between each base station and the served cellular user. We consider a case in which it is assumed that CSI between each base station and D2D users is unknown at each base station because D2D users are generally deployed in the cell edge in practical cases. According to the design of the artificial noise-assisted transmission scheme, the beamforming vector w i for the served cellular user should satisfy \({{\mathbf {w}}_{i}} = {{{\mathbf {h}}_{i}^ +} \left /\right. {\left \| {{{\mathbf {h}}_{i}}} \right \|}}\). \({{\mathbf {W}}_{i}} \in \mathbb {C}^{M \times \left ({M - 1} \right)}\) is a weight matrix for the artificial noise, and the columns of \({\mathbf {W}} \buildrel \Delta \over = \left [ {{{\mathbf {w}}_{i}} {{\mathbf {W}}_{i}}} \right ]\) constitute an orthogonal basis.
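As a concrete illustration of this construction, the following numpy sketch forms the transmitted signal u_i above for one base station: the beamformer w is matched to the known channel h of the served cellular user, and the artificial-noise weight matrix W spans the orthogonal complement of w, so the noise term vanishes at the served user. The QR-based construction, the power split value, and all variable names are our own assumptions, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def an_assisted_signal(h, s, p, phi):
    """Artificial-noise-assisted transmit vector u = sqrt(p_I) w s + sqrt(p_A) W v (illustrative).

    h   : length-M channel vector of the served cellular user (assumed perfectly known)
    s   : unit-power information symbol
    p   : total transmit power; phi : fraction allocated to the information-bearing signal
    """
    M = h.size
    w = h.conj() / np.linalg.norm(h)                      # beamformer w = h^+ / ||h||
    # Build an orthonormal basis whose first column is aligned with w; the remaining
    # M-1 columns form W, the artificial-noise weight matrix (null space of the user channel).
    rand = rng.standard_normal((M, M - 1)) + 1j * rng.standard_normal((M, M - 1))
    Q, _ = np.linalg.qr(np.column_stack([w, rand]))
    W = Q[:, 1:]
    # Artificial-noise entries v_n ~ CN(0, 1/(M-1)).
    v = (rng.standard_normal(M - 1) + 1j * rng.standard_normal(M - 1)) / np.sqrt(2 * (M - 1))
    return np.sqrt(phi * p) * w * s + np.sqrt((1 - phi) * p) * (W @ v)

h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)   # M = 4 antennas
u = an_assisted_signal(h, s=1.0, p=1.0, phi=0.8)
print(abs(h @ u))    # equals sqrt(phi*p)*||h||: the noise term is nulled at the served user
```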
To improve the secrecy performance of this hybrid network, all transmitters adopt the wiretap code scheme to encode the data before transmission. More specifically, it is assumed that the rate of the transmitted codeword and the rate of the confidential message are, respectively, denoted by R b and R s . The codeword rate R b is the actual transmission rate of the codewords, while the secrecy rate R s is the rate of the embedded message. The rate redundancy R e =R b −R s is intentionally added in order to provide secrecy against malicious eavesdroppers. More discussions on code construction can be found in [20].
In this paper, we mainly focus on two performance metrics: secrecy outage probability and connection outage probability to respectively characterize the performance of the cellular link and D2D link in this hybrid network. Based on the analysis above, we next will give their definitions.
When the capacity of the channel from the legitimate transmitter to the corresponding receiver falls below the predefined target codeword rate R b , the receiver will not decode the transmission message correctly. We define the probability of this event as connection outage probability [21].
In addition, when the capacity of the most detrimental one among multiple eavesdroppers (i.e., the eavesdropper having the maximal capacity of the channel from the legitimate transmitter to multiple eavesdroppers) is above the predefined target rate redundancy R e , confidential messages for legitimate receivers will be decoded correctly and obtained by malicious eavesdroppers. We define the probability of this event as secrecy outage probability [21]Footnote 1.
In practical cases, the capacity of the channel is determined by signal-to-interference-plus-noise ratio (SINR) according to Shannon Theorem. Hence, the two outage probabilities above can be redefined in terms of SINR. To be specific, connection outage will happen if the instantaneous SINR falls below the target SINR threshold for the main channel. What is more, secrecy outage will occur if the instantaneous SINR is above the target SINR threshold for the most detrimental eavesdropper. Thus, the definition of connection outage probability and secrecy outage probability can be rewritten as:
$$\begin{array}{*{20}l} {p_{cop}} = \mathbb{P}\left(\text{SINR}\le\alpha\right), \end{array} $$
$$\begin{array}{*{20}l} {p_{sop}} = \mathbb{P}\left(\text{SINR}{_{e}}\ge\beta\right), \end{array} $$
where SINR and SINR e respectively, denote the received SINR at the legitimate receiver and the most detrimental eavesdropper. α and β are the target SINR thresholds for reliable and secure communication, respectively.
Cellular user association
In this paper, we consider that each user is served by the nearest base station. In this hybrid network, the cellular user should be assigned the orthogonal resource before a D2D pair is allowed to share the cellular resource block. However, some base stations may not serve any cellular user [22]; such base stations do not transmit any signal and are called inactive base stations. Just as in [22] and [23], we denote the probability of a base station being active by p a , which can be given by:
$$\begin{array}{*{20}l} {p_{a}} = 1 - {\left({1 + \frac{\delta }{\varsigma }} \right)^{- \varsigma }}, \end{array} $$
where ς=3.5 for the nearest base station association scheme and δ=λ c /λ b represents the cell load. Note that when there is more than one cellular user to be served by a base station, the base station chooses one cellular user to serve at each time slot through the time-division multiple access (TDMA) scheme (Footnote 2).
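For reference, the expression above is easy to evaluate; the short snippet below computes the activity probability for a few cell-load values δ = λc/λb, with ς = 3.5 as stated (the function name is ours).

```python
def base_station_active_prob(delta, varsigma=3.5):
    # p_a = 1 - (1 + delta/varsigma)^(-varsigma), with cell load delta = lambda_c / lambda_b.
    return 1.0 - (1.0 + delta / varsigma) ** (-varsigma)

print([round(base_station_active_prob(d), 3) for d in (0.25, 0.5, 1.0, 2.0, 5.0)])
```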
Outage probabilities analysis
In this section, we analyze the secrecy performance of the typical cellular link and the reliability of the typical D2D link, respectively. Firstly, considering whether eavesdroppers have the multi-user decodability or not, we derive the closed-form expression and asymptotic expression of the secrecy outage probability of the cellular link in two different scenarios under the non-colluding eavesdropping model. Then, for the typical D2D link, we consider its reliable communication and derive the closed-form expression and asymptotic expression of the connection outage probability.
Secrecy outage probability of cellular links
In this subsection, we conduct the secrecy performance analysis for the cellular link and derive the secrecy outage probability of the cellular link in two different scenarios, depending on whether eavesdroppers have the multi-user decodability or not. Due to the property of the PPP that its distribution is not changed by shifting the coordinates, we first shift the coordinates so that the typical base station is located at the origin.
Since eavesdroppers work in a non-colluding way in our system model, the most detrimental eavesdropper is the one with the largest SINR. According to the definition of the secrecy outage probability, secrecy outage will occur when the instantaneous SINR of the most detrimental eavesdropper is above the given target SINR threshold. Consequently, if the given target SINR threshold is set as \({\hat \gamma _{e}}\), the secrecy outage probability of the typical cellular link with eavesdroppers located at x z ∈Φ e can be calculated as:
$$ \begin{aligned} {P_{c,sop}}&=\mathbb{P}\left({\mathop {\max }\limits_{{x_{z}} \in {\Phi_{e}}}\text{SINR}_{e}\left({{x_{z}}}\right)\ge {{\hat \gamma_{e}}}} \right)\\ &=1 - \mathbb{P}\left({\mathop{\max }\limits_{{x_{z}} \in{\Phi_{e}}} \text{SINR}_{e}\left({{x_{z}}} \right)\le{{\hat \gamma_{e}}}}\right)\\ &=1-\mathbb{P}\left({\bigcap\limits_{{x_{z}} \in {\Phi_{e}}}{\text{SINR}_{e}\left({{x_{z}}}\right) \le {{\hat\gamma_{e}}}}}\right)\\ &= 1-{\mathbb{E}_{{\Phi_{e}},{\Phi^{a}_{b}}}}\left({\mathbb{P}\left({\left.{\bigcap\limits_{{x_{z}} \in{\Phi_{e}}}{\text{SINR}_{e}\left({{x_{z}}}\right)\le {{\hat\gamma_{e}}}}}\right|{\Phi_{e}},{\Phi^{a}_{b}}} \right)}\right)\\ &\mathop=\limits^{\left(a \right)} 1-{\mathbb{E}_{{\Phi^{a}_{b}}}}\left\{ {{\mathbb{E}_{{\Phi_{e}}}}\left\{{\prod\limits_{{x_{z}}\in {\Phi_{e}}}{\mathbb{P}\left({\left.{\text{SINR}_{e}\left({{x_{z}}}\right)< {{\hat \gamma_{e}}}} \right|{\Phi_{e}},{\Phi^{a}_{b}}} \right)}}\right\}}\right\}\\ &\mathop=\limits^{\left(b\right)} 1-{\mathbb{E}_{{\Phi^{a}_{b}}}}\left\{{\exp\left({-{\lambda_{e}}\int\limits_{{R^{2}}} {\mathbb{P}\left({\left.{\text{SINR}_{e}\left({{x_{z}}}\right) \ge{{\hat\gamma_{e}}}}\right|{\Phi^{a}_{b}}}\right)d{x_{z}}}} \right)} \right\}.\\ \end{aligned} $$
Note that the rate redundancy is R e and \({\hat \gamma _{e}} = {2^{{R_{e}}}} - 1\). (a) follows from the independence of different channel gains. (b) follows from the probability generating functional (PGFL) of the PPP [24]: \(\mathbb{E}\left [ {\prod \limits _{x \in \Phi } {f\left (x \right)}} \right ] = \exp \left (-\lambda \int _{{R^{2}}} {\left ({1 - f\left (x \right)} \right)dx} \right)\). \(\Phi ^{a}_{b}\) denotes the set of active base stations.
On the other hand, since eavesdroppers generally work in a passive way, it is difficult for legitimate transmitters to know their ability to overhear the confidential messages of the cellular link. Hence, according to the different abilities of eavesdroppers to decode the transmitted message, we analyze the secrecy performance of the cellular link and derive the respective expressions of the secrecy outage probability in two different scenarios in the following subsections. First, we discuss the performance of the cellular link in the case in which eavesdroppers have no multi-user decodability.
Scenario I
In the following, we first derive the closed-form expression of the secrecy outage probability under the condition that eavesdroppers do not have multi-user decodability. In other words, the information-bearing signals as well as the artificial noise from legitimate transmitters (including base stations and D2D transmitters) can confuse eavesdroppers. Based on the analysis above, the received SINR at the eavesdropper located at x z can be expressed as:
$$ \text{SINR}_{e}\left({{x_{z}}} \right) = \frac{{{p_{I}}{{\left| {{\mathbf{g}}_{0e}^{T}{{\mathbf{w}}_{0}}} \right|}^{2}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }}}}{{\frac{p_{A}}{M - 1}{{\left\| {{\mathbf{g}}_{0e}^{T}{{\mathbf{W}}_{0}}} \right\|}^{2}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }} + I_{e{\backslash \left\{ 0 \right\}}}+ {N_{0}}}}. $$
where \(\frac {{{p_{A}}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_{0}}} \right \|}^{2}}\) represents the received interference induced by the injected artificial noise from the typical base station. \(I_{e{\backslash \left \{ 0 \right \}}}=\sum \limits _{{y_{i}} \in {\Phi _{d}}} {{p_{d}}{{\left | {{h_{i}}} \right |}^{2}}{{\left \| {{y_{i}} - {x_{z}}} \right \|}^{- \alpha }}}+\sum \limits _{{x_{i}} \in {\Phi ^{a}_{b}}\backslash \left \{ 0 \right \}} {\left ({{p_{I}}{{\left | {{\mathbf {g}}_{ie}^{T}{{\mathbf {w}}_{i}}} \right |}^{2}}{+ }\frac {{{p_{A}}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_{i}}} \right \|}^{2}}} \right){{\left \| {{x_{i}} - {x_{z}}} \right \|}^{- \alpha }}}\) represents the cumulative interference from legitimate transmitters (including both D2D transmitters located at y i and base stations located at x i , but except the typical base station located at the origin). For notational conciseness, we define \(I_{e,c-e}=\sum \limits _{{x_{i}} \in {\Phi ^{a}_{b}}\backslash \left \{ 0 \right \}} {\left ({{p_{I}}{{\left | {{\mathbf {g}}_{ie}^{T}{{\mathbf {w}}_{i}}} \right |}^{2}} + \frac {{{p_{A}}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_{i}}} \right \|}^{2}}} \right){{\left \| {{x_{i}} - {x_{z}}} \right \|}^{- \alpha }}}\), which represents the cumulative interference from other base stations located at x i (except the typical base station) induced by both the information-bearing signal and the artificial noise. \( I_{e,d - e}=\sum \limits _{{y_{i}} \in {\Phi _{d}}} {{p_{d}}{{\left | {{h_{i}}} \right |}^{2}}{{\left \| {{y_{i}} - {x_{z}}} \right \|}^{- \alpha }}}\) represents the cumulative interference from all the D2D transmitters. N 0 represents the variance of the additive Gaussian noise at eavesdroppers. Based on the analysis above, we can easily obtain I e∖{0}=I e,c−e +I e,d−e .
Theorem 1

Considering the case in which eavesdroppers have no multi-user decodability, the closed-form expression of the secrecy outage probability of the cellular link can be given by:
$$\begin{array}{*{20}l} {P^{I}_{c,sop}} = 1 - \exp \left(-\upsilon{\int_{0}^{\infty} {{e^{- s{N_{0}}}} \exp \left(-\mu\right) rdr}} \right), \end{array} $$
where \(\rho = \frac {2}{\alpha }\) and \(s = \frac {{{{\hat \gamma }_{e}}{r^{\alpha } }}}{{{p_{I}}}}\). For notational conciseness, we define \(\upsilon = 2\pi {\lambda _{e}}{{\left ({1 + {{\hat \gamma }_{e}}\xi } \right)}^{1 - M}}\) and \(\mu = { \left ({\pi {\lambda _{b}}{p_{a}}\omega p_{I}^{\rho } + \frac {{\pi {\lambda _{d}}p_{d}^{\rho } }}{{\sin c\rho }}} \right){s^{\rho } }}\), where ξ=(ϕ −1−1)/(M−1) and \(\frac {1}{{\sin c\rho }} = \frac {{\pi \rho }}{{\sin \pi \rho }}=\Gamma \left ({1 + \rho } \right)\Gamma \left ({1 - \rho } \right)\). Note that ω is given by: \(\omega = \left \{ \begin {array}{ll} {\kappa _{M + 1}}, &if\ \xi = 1,\\ \frac {{{\kappa _2}}}{{{{\left ({1 - \xi } \right)}^{M - 1}}}} - \sum \limits _{m = 0}^{M - 2} {\frac {{{\xi ^{1 + \rho }}{\kappa _{m + 2}}}}{{{{\left ({1 - \xi } \right)}^{M - m - 1}}}}},&otherwise. \end {array} \right.\)
Please refer to Appendix 1. □
Remark 1
From (7), it is easily observed that the closed-form expression of the secrecy outage probability, \({P^{I}_{c,sop}}\), is negatively correlated with the base station density λ b and the D2D transmitter density λ d . In contrast, it is positively correlated with the eavesdropper density λ e . This is because the average received aggregate interference confusing the most detrimental eavesdropper becomes stronger as λ b or λ d increases, whereas the average received SINR at the most detrimental eavesdropper becomes higher as λ e increases. Furthermore, the detailed reason why the secrecy outage probability decreases as λ b increases is given by the following Corollary 1.
In addition, from (7) we can easily know that λ b only affects μ. Hence, we can obtain the following corollary.
Corollary 1
The secrecy outage probability of the cellular link is monotonically non-increasing as λ b increases and it is independent of λ b when λ b is large enough.
This corollary implies that deploying more base stations could improve the secrecy performance of the cellular link. This is because the average received aggregate interference at each eavesdropper can be shown to scale with the base station density as (λ b p a )α/2. Since eavesdroppers follow an HPPP on a two-dimensional plane, the received signal power at the most detrimental eavesdropper scales more slowly than (λ b p a )α/2. Hence, the secrecy outage probability is negatively correlated with the base station density. However, from (4), we know that λ b p a approaches λ u when the base station density is large enough. Therefore, in this case, the secrecy outage probability becomes independent of λ b .
In the interference-limited network, we can further derive the simple expression of the secrecy outage probability, i.e., \({P^{I,int}_{c,sop}}\), as follows:
$$ {}{P^{I,int}_{c,sop}} = 1 - \exp \left( { - \frac{{{\lambda_{e}}}}{{{\lambda_{b}}{p_{a}}\omega \hat \gamma_{e}^{\rho} + \frac{{ {\lambda_{d}}\hat \gamma_{e}^{\rho} }}{{\sin c\rho }}{{\left({\frac{{{p_{d}}}}{{{p_{I}}}}} \right)}^{\rho} }}}{{\left({1 + {{\hat \gamma }_{e}}\xi} \right)}^{1 - M}}} \right). $$
Following from Theorem 1 by letting N 0→0. □
From (8), we can see that there are close relationships between the secrecy outage probability of the cellular link and the main system parameters, such as the number of antennas M and the power allocation ratio ϕ. To evaluate the effect of ϕ and M on the secrecy outage performance, we next derive the asymptotic expression of \({P^{I}_{c,sop}}\) when the number of antennas at each base station approaches infinity. We first give the following lemma for the case in which the number of antennas approaches infinity (Footnote 3).
Lemma 1
\(\mathop {\lim }\limits _{M \to \infty } {\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_0}} \right \|^{2}} = M - 1\), \(\mathop {\lim }\limits _{M \to \infty } {\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_i}} \right \|^{2}} = M - 1\).
We can easily obtain Lemma 1 due to the fact that \({\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_0}} \right \|^{2}}\sim {\text {Gamma}}\left ({M - 1,1} \right)\), \({\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_i}} \right \|^{2}} \sim \text {Gamma}\left ({M - 1,1} \right)\). □
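A quick numerical check of Lemma 1 (a sketch, not part of the original proof): a Gamma(M−1,1) random variable has mean M−1, and its normalized value concentrates around 1 as M grows, which is exactly the limiting behavior the lemma states.

```python
import numpy as np

rng = np.random.default_rng(1)

# ||g^T W||^2 ~ Gamma(M-1, 1): its mean is M-1 and the normalized value
# ||g^T W||^2 / (M-1) concentrates around 1 as M grows (Lemma 1).
for M in (4, 16, 64, 256):
    samples = rng.gamma(shape=M - 1, scale=1.0, size=100_000)
    normalized = samples / (M - 1)
    print(M, np.mean(normalized), np.std(normalized))
```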
According to Lemma 1, when the number of antennas at each base station approaches infinity, the received asymptotic SINR at the eavesdropper located at x z can be rewritten as:
$$ \text{SINR}_{e}^{\infty} \left({{x_{z}}} \right) = \frac{{{p_{I}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }}{{\left| {{\mathbf{g}}_{0e}^{T}{{\mathbf{w}}_{0}}} \right|}^{2}}}}{{{p_{A}}\left({M - 1} \right){{\left\| {{x_{z}}} \right\|}^{- \alpha }} + I_{e\backslash \left\{ 0 \right\}}^{\infty} + {N_{0}}}}, $$
where \(I_{e\backslash \left \{ 0 \right \}}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}\backslash \left \{ 0 \right \}} {\left ({{p_I}{{\left | {{\mathbf {g}}_{ie}^{T}{{\mathbf {w}}_i}} \right |}^{2}}{{+ }}{p_A}} \right){{\left \| {{x_i} - {x_z}} \right \|}^{- \alpha }}} + \sum \limits _{{y_i} \in {\Phi _d}} {{p_d}{{\left | {{h_i}} \right |}^{2}}{{\left \| {{y_i} - {x_z}} \right \|}^{- \alpha }}}\) denotes the cumulative interference from legitimate transmitters when the number of antennas at each base station approaches infinity. For notational conciseness, we define \(I_{_{e,c - e}}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}\backslash \left \{ 0 \right \}} {\left ({{p_I}{{\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {w}}_i}} \right \|}^{2}}{{+ }}{p_A}} \right)} {\left \| {{x_i} - {x_z}} \right \|^{- \alpha }}\), which similarly denotes the cumulative interference induced by both the information-bearing signal and the artificial noise from other base stations with an infinite number of antennas (except the typical base station). \(I_{_{e,d - e}}^{\infty } = \sum \limits _{{y_i} \in {\Phi _d}} {{p_d}{{\left | {{h_i}} \right |}^{2}}{{\left \| {{y_i} - {x_z}} \right \|}^{- \alpha }}}\) denotes the cumulative interference from all the D2D transmitters. Thus, we can obtain \(I_{e\backslash \left \{ 0 \right \}}^{\infty } = I_{_{e,c - e}}^{\infty } + I_{_{e,d - e}}^{\infty } \).
Then, we have the following proposition.
Proposition 1

When the number of antennas at each base station approaches infinity, the asymptotic expression of the secrecy outage probability can be further given by:
$$ {P^{I,asy}_{c,sop}} = 1 - \exp \left({ -\upsilon_{1} \int_{0}^{\infty} {{e^{- s{N_{0}}}}\exp \left({-\mu_{1}} \right)rdr}} \right), $$
where we define \(\upsilon _{1} =2\pi {\lambda _e}{e^{- {{\hat \gamma }_e}\left ({M - 1} \right)\xi }}\) and \(\mu _{1} = \left ({\pi {\lambda _b}{p_a}\Gamma \left ({1 - \rho } \right)\Psi p_{I}^{\rho } + \frac {{\pi {\lambda _d}p_{d}^{\rho } }}{{\sin c\rho }}} \right){s^{\rho }}\) for notational conciseness. Note that \(\Psi = \Gamma \left ({1 + \rho,\left ({{\phi ^{- 1}} - 1} \right)} \right){e^{\left ({{\phi ^{- 1}} - 1} \right)}}\).
Since ξ=(ϕ −1−1)/(M−1), the exponent (M−1)ξ=ϕ −1−1 does not depend on M, so from Proposition 1 we can easily see that the asymptotic secrecy outage probability is independent of the number of antennas.
By letting N 0→0, it is straightforward to obtain the following corollary for the interference-limited case when the number of antennas at each base station approaches infinity.
In the interference-limited network, the asymptotic expression of the secrecy outage probability can be further derived as:
$$ \begin{aligned} {} {P^{I,asy,int}_{c,sop}} = 1 - \exp \left({ - \frac {\upsilon_{1}}{ 2 \left({\pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right)\Psi \hat \gamma_{e}^{\rho} + \frac{{\pi {\lambda_{d}}\hat \gamma_{e}^{\rho} }}{{\sin c\rho }}{{\left({\frac{{{p_{d}}}}{{{p_{I}}}}} \right)}^{\rho} }}\right)}} \right). \end{aligned} $$
Following from Proposition 1 by letting N 0→0. □
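As a hedged numerical-evaluation sketch (not from the paper), the asymptotic interference-limited expression above can be coded directly; it assumes p_I = ϕp and p_A = (1−ϕ)p as in the system model, and the parameter values in the example call are illustrative only.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def sop_scenario1_asymptotic(lam_e, lam_b, lam_d, p_a, p, p_d, phi,
                             gamma_e, alpha):
    """Evaluate the asymptotic, interference-limited secrecy outage
    probability of scenario I (large M), following the expression above."""
    rho = 2.0 / alpha
    p_i = phi * p                                 # information-signal power (assumed p_I = phi*p)
    x = 1.0 / phi - 1.0
    psi = gammaincc(1 + rho, x) * gamma(1 + rho) * np.exp(x)   # Psi = Gamma(1+rho, x) e^x
    upsilon1 = 2 * np.pi * lam_e * np.exp(-gamma_e * x)        # upsilon_1 with (M-1)*xi = x
    inv_sinc = gamma(1 + rho) * gamma(1 - rho)                 # 1 / sinc(rho)
    denom = 2 * (np.pi * lam_b * p_a * gamma(1 - rho) * psi * gamma_e**rho
                 + np.pi * lam_d * gamma_e**rho * inv_sinc * (p_d / p_i)**rho)
    return 1.0 - np.exp(-upsilon1 / denom)

# Illustrative parameters (not the paper's): densities in 1/m^2, powers in W
print(sop_scenario1_asymptotic(3e-4, 5e-4, 1e-3, 0.6, 1.0, 0.1,
                               0.6, 2**0.5 - 1, 3.0))
```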
When eavesdroppers have no multi-user decodability, even if a base station allocates no transmission power to the artificial noise, the inter-cell interference induced by the cellular links and the intra-cell interference induced by the D2D links can still confuse eavesdroppers. Hence, it is unnecessary for the base station to inject the artificial noise under certain conditions. Based on the analysis above, we can obtain the following corollary by employing the asymptotic expression of \({P^{I,asy,int}_{c,sop}}\) in the interference-limited network when the number of antennas at each base station approaches infinity.
It is unnecessary to generate the artificial noise to confuse eavesdroppers under the following condition
$$ \begin{aligned} \frac{{{\lambda_{e}}}}{{\left({{\lambda_{b}}{p_{a}} + {\lambda_{d}}\Gamma \left({1 + \rho} \right){{\left({\frac{{{p_{d}}}}{p}} \right)}^{\rho} }} \right)\Gamma \left({1 - \rho} \right)\hat \gamma_{e}^{\rho} }} \le - \ln \left({1 - \varepsilon} \right), \end{aligned} $$
where ε represents the minimum secrecy requirement for the cellular link.
Since \({P^{I,asy,int}_{c,sop}}\) is a monotonically increasing function of the power allocation ratio ϕ, the secrecy requirement of the cellular link is satisfied as long as the secrecy outage probability is no more than ε, which is determined by the value of ϕ. By substituting ϕ=1 into the above constraint, we obtain Eq. (12) given by Corollary 4. This provides very useful insight for practical system designs. □
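A minimal sketch of checking the corollary's condition numerically, assuming p here denotes the total base-station power and p_d the D2D power as in the system model; all numerical values are illustrative, not the paper's.

```python
import numpy as np
from scipy.special import gamma

def artificial_noise_unnecessary(lam_e, lam_b, lam_d, p_a, p, p_d,
                                 gamma_e, alpha, eps):
    """Check the condition of the corollary above: if True, the secrecy
    requirement eps is already met with phi = 1 (no artificial noise)."""
    rho = 2.0 / alpha
    lhs = lam_e / ((lam_b * p_a + lam_d * gamma(1 + rho) * (p_d / p)**rho)
                   * gamma(1 - rho) * gamma_e**rho)
    return lhs <= -np.log(1.0 - eps)

# Illustrative check with gamma_e = 2^0.5 - 1 (R_e = 0.5 bps/Hz), eps = 0.1:
print(artificial_noise_unnecessary(3e-4, 5e-4, 1e-3, 0.6, 1.0, 0.1,
                                   2**0.5 - 1, 3.0, 0.1))
```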
Scenario II
In this subsection, we consider a case in which eavesdroppers work in a non-colluding way but have powerful multi-user decodability. In other words, eavesdroppers can distinguish every data stream sent to different legitimate users from legitimate transmitters. Thus, they can subtract the interference induced by the information-bearing signals from the base stations and the D2D transmitters by employing multiuser detection techniques, as done in [25, 26]. Based on the above analysis, considering the typical cellular downlink, the received SINR at the eavesdropper located at x z can be expressed as:
$$ \text{SINR}_{e}^{worse}\left({{x_{z}}} \right) = \frac{{{p_{I}}{{\left| {{\mathbf{g}}_{0e}^{T}{{\mathbf{w}}_{0}}} \right|}^{2}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }}}}{{\frac{{{p_{A}}}}{{M - 1}}{{\left\| {{\mathbf{g}}_{0e}^{T}{{\mathbf{W}}_{0}}} \right\|}^{2}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }} + I_{A{\backslash \left\{ 0 \right\}}} + {N_{0}}}}, $$
where \(\frac {{{p_A}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_0}} \right \|}^{2}}{{\left \| {{x_z}} \right \|}^{- \alpha }}\) means the received interference induced by the injected artificial noise from the typical base station. \(I_{A{\backslash \left \{ 0 \right \}}}={\sum \limits _{{x_i} \in {\Phi ^{a} _b}\backslash \left \{ 0 \right \}} {\frac {{{p_A}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_i}} \right \|}^{2}}} }{{\left \| {{x_i} - {x_z}} \right \|}^{- \alpha }}\) denotes the cumulative interference from other base stations (except the typical base station) induced by the artificial noise. Then, we will give the expression of the secrecy outage probability of the cellular link in this case in Theorem 2.
Theorem 2

The closed-form expression of the secrecy outage probability of the typical cellular link in the case where eavesdroppers have powerful multi-user decodability can be given by:
$$\begin{array}{*{20}l} {P^{II}_{c,sop}} = 1 - \exp \left({ - {\upsilon_{2}}\int_{0}^{\infty} {{e^{- s{N_{0}}}}\exp \left({ - {\mu_{2}}} \right)rdr}} \right). \end{array} $$
Note that \(\upsilon _{2} = 2\pi {\lambda _e} {\left ({1 + {{\hat \gamma }_e}\xi } \right)^{1 - M}}\) and μ 2=λ b p a C ρ,M Θ ρ r 2 are defined for notational conciseness, where \(\Theta ={\frac {{{{\hat \gamma }_e}\left ({1 - \phi } \right)}}{{\left ({M - 1} \right)\phi }}}\) and \({C_{\rho,M}} = \pi \frac {{\Gamma \left ({M - 1 + \rho } \right)\Gamma \left ({1 - \rho } \right)}}{{\Gamma \left ({M - 1} \right)}}\).
Theorem 2 implies that the secrecy outage probability is negatively correlated with the base station density λ b . In contrast, it is positively correlated with the eavesdropper density λ e . This agrees well with the remark following Theorem 1. Nevertheless, it is independent of the D2D transmitter density λ d . This is because eavesdroppers have the multi-user decodability to remove the interference induced by the D2D links, which therefore has no impact on the eavesdropping link.
In addition, we can easily see that \(p^{II}_{c,sop}\) increases as ϕ, the ratio of the total transmission power allocated to the information-bearing signal, increases. This is because only the artificial noise confuses eavesdroppers in this worst case, and a higher ϕ means a lower transmission power allocated to generating the artificial noise at the base station. Therefore, a higher ϕ results in a higher secrecy outage probability.
In the interference-limited network, the secrecy outage probability of the typical cellular link when eavesdroppers have multi-user decodability can be further derived as:
$$\begin{array}{*{20}l} {P^{II,int}_{c,sop}} = 1 - \exp \left({ - \frac{{\pi {\lambda_{e}}}}{{{\lambda_{b}}{C_{\rho,M}}{\Theta^{\rho} }}}{{\left({1 + {{\hat \gamma }_{e}}\xi} \right)}^{1 - M}}} \right). \end{array} $$
By letting N 0→0, we simplify the integral in (14) and yield the result in (15). □
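The closed-form expression in (15) involves only Gamma functions and can be evaluated directly; the following sketch is an illustration under assumed parameter values (not the paper's), with ξ, Θ, and C_{ρ,M} taken from the definitions above.

```python
import numpy as np
from scipy.special import gamma

def sop_scenario2_int(lam_e, lam_b, p_a, M, phi, gamma_e, alpha):
    """Evaluate the interference-limited secrecy outage probability of
    scenario II (eavesdroppers with multi-user decodability), i.e. (15)."""
    rho = 2.0 / alpha
    xi = (1.0 / phi - 1.0) / (M - 1)
    theta = gamma_e * (1.0 - phi) / ((M - 1) * phi)
    c_rho_m = np.pi * gamma(M - 1 + rho) * gamma(1 - rho) / gamma(M - 1)
    exponent = (np.pi * lam_e / (lam_b * p_a * c_rho_m * theta**rho)
                * (1.0 + gamma_e * xi) ** (1 - M))
    return 1.0 - np.exp(-exponent)

# Illustrative parameters: M = 10 antennas, phi = 0.6, alpha = 3
print(sop_scenario2_int(3e-4, 5e-4, 0.6, 10, 0.6, 2**0.5 - 1, 3.0))
```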
To evaluate the effect of ϕ and M on the secrecy performance, similar to Proposition 1, we next derive the asymptotic expression of the secrecy outage probability when eavesdroppers have multi-user decodability and each base station has an infinite number of antennas.
$$\begin{array}{*{20}l} {P^{II,asy}_{c,sop}} = 1 - \exp \left({ - {\upsilon_{2}}\int_{0}^{\infty} {{e^{- s{N_{0}}}}\exp \left({ - {\mu_{3}}} \right)rdr}} \right), \end{array} $$
where \({\upsilon _{2} = 2\pi {\lambda _e} {e^{- {{\hat \gamma }_e}\left ({M - 1} \right)\xi }}}\) and μ 3=π λ b p a Γ(1−ρ)(ϕ ps)ρ(ϕ −1−1)ρ are defined for notational conciseness.
When each base station has an infinite number of antennas, from Lemma 1, the received SINR can be equivalently rewritten as:
$$\begin{array}{*{20}l} {\text{SINR}}_{e}^{\infty,w}\left({{x_{z}}} \right) = \frac{{{p_{I}}{{\left| {{\mathbf{g}}_{0e}^{T}{{\mathbf{w}}_{0}}} \right|}^{2}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }}}}{{{p_{A}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }} + I_{A\backslash \left\{ 0 \right\}}^{\infty} + {N_{0}}}}, \end{array} $$
where p A ∥x z ∥−α represents the received interference induced by the injected artificial noise from the typical base station with an infinite number of antennas. \(I^{\infty }_{A{\backslash \left \{ 0 \right \}}}={\sum \limits _{{x_i} \in {\Phi ^{a} _b}\backslash \left \{ 0 \right \}} {{{{p_A}}}{{\left \| {{x_i} - {x_z}} \right \|}^{- \alpha }}} }\) represents the cumulative interference induced by the artificial noise from the other base stations (except the typical base station), each with an infinite number of antennas.
Denote \(\hat \gamma _{_{0,e}}^{\infty } = \phi p\left ({{{\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {w}}_0}} \right \|}^{2}} - {{\hat \gamma }_e}\xi \left ({M - 1} \right)} \right)\). According to the definition of the secrecy outage probability, we can obtain
$$\begin{array}{*{20}l} &\mathbb{P}\left({\text{SINR}_{e}^{\infty,w}\left({{x_{z}}} \right) \le {{\hat \gamma }_{e}}} \right)\\ &= \mathbb{P}\left({\hat \gamma_{_{0,e}}^{\infty} \le {{\hat \gamma }_{e}}{{\left\| {{x_{z}}} \right\|}^{\alpha} }\left({I_{A\backslash \left\{ 0 \right\}}^{\infty} + {N_{0}}} \right)} \right)\\ &= 1 - {e^{- {{\hat \gamma }_{e}}\left({M - 1} \right)\xi }}{e^{- s{N_{0}}}}{{{\mathcal L}}_{{I_{A\backslash \left\{ 0 \right\}}^{\infty} }}}\left(s \right). \end{array} $$
When each base station has an infinite number of antennas, employing ([27], Eq. (68)), we obtain:
$$\begin{array}{*{20}l} {} {{\mathcal L}}_{{I_{A\backslash \left\{ 0 \right\}}}}^{\infty} \left(s \right) = \exp \left({ - \pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right){{\left({\phi ps} \right)}^{\rho} }}\left({\phi^{-1}-1} \right)^{\rho} \right). \end{array} $$
Then, substituting (19) and (18) into (5) and changing to a polar coordinate system to evaluate the integral, we obtain the result in (16). □
We can also easily observe that the asymptotic expression of the secrecy outage probability of the cellular link is independent of the number of antennas, M, which agrees with the conclusion drawn from Proposition 1.
Then, it is straightforward to obtain the following corollary by letting N 0→0.
In the interference-limited network, when eavesdroppers have multi-user decodability and each base station has an infinite number of antennas, the asymptotic secrecy outage probability can be further given by:
$$\begin{array}{*{20}l} {P^{II,asy,int}_{c,sop}} = 1 - \exp \left({ - \frac{{\pi {\lambda_{e}}}}{\eta}{e^{- {{\hat \gamma }_{e}}\left({M - 1} \right)\xi }}} \right). \end{array} $$
Note that we define \(\eta ={{\pi {\lambda _b}{p_a}\Gamma \left ({1 - \rho } \right)\left ({\phi ^{-1}-1} \right)^{\rho }{{{{\hat \gamma }_e}}^{\rho } }}}\) for notational conciseness.
Connection outage probability of D2D links
Considering the typical D2D link whose receiver is located at the origin, we derive the expression of the connection outage probability and analyze some of its properties in this subsection. It is assumed that the typical D2D transmitter is located at a distance l from the typical D2D receiver. Since the artificial noise injected at each base station lies only in the null space of the channel of the desired cellular user, it will degrade the reliable communication of the typical D2D link. Based on the analysis above, the received SINR at the typical D2D receiver can be written as:
$$ \text{SINR}_{d} = \frac{{{p_{d}}{h_{0d}}{l^{- \alpha }}}}{{{{I_{d}}}}+N_{0}}, $$
where p d is the transmission power of all D2D transmitters. \({{I_d} \,=\, \sum \limits _{{x_i} \in {\Phi ^{a} _b}} { p_{I}{{\left \| {{x_i}} \right \|}^{- \alpha }}{{\left | {{\mathbf {g}}_{id}^{T}{{\mathbf {w}}_i}} \right |}^{2}}}\,+\, \sum \limits _{{x_i} \in {\Phi ^{a} _b}} {\frac {{p_A}}{{M - 1}}{{\left \| {{x_i}} \right \|}^{- \alpha }}}} {{{\left \| {{\mathbf {g}}_{id}^{T}{{\mathbf {W}}_i}} \right \|}^{2}} + \sum \limits _{{y_i} \in {\Phi _d}\backslash \left \{ {{y_0}} \right \}} {{p_d}{h_{id}}{{\left \| {{y_i}} \right \|}^{- \alpha }}}}\) represents the total cumulative interference from the base stations located at x i and the other D2D transmitters (except the typical D2D transmitter located at y 0). \({I_{c - d}} = \sum \limits _{{x_i} \in \Phi _{b}^{a}} {\left ({{p_I}{{\left | {{\mathbf {g}}_{i}^{T}{{\mathbf {w}}_i}} \right |}^{2}} + \frac {{{p_A}}}{{M - 1}}{{\left \| {{\mathbf {g}}_{i}^{T}{{\mathbf {W}}_i}} \right \|}^{2}}} \right){{\left \| {{x_i}} \right \|}^{- \alpha }}} \) represents the interference induced by the information-bearing signal and the artificial noise from all the base stations. \({I_{d - d}} = \sum \limits _{{y_i} \in {\Phi _d}\backslash \left \{ {{y_0}} \right \}} {{p_d}{h_{id}}{{\left \| {{y_i}} \right \|}^{- \alpha }}}\) represents the interference from the other D2D links sharing the same resource, excluding the typical D2D transmitter. h id represents the small-scale fading channel from the D2D transmitter located at y i ; in particular, h 0d denotes the small-scale fading channel from the typical D2D transmitter. Similarly, g id represents the small-scale fading channel from the base station located at x i to the typical D2D receiver. It is assumed that h id follows the exponential distribution with unit mean, i.e., h id ∼ exp(1). N 0 represents the variance of the additive Gaussian noise at the typical D2D receiver. Hence, we have I d =I c−d +I d−d .
Given the target transmission rate R d , the connection outage probability of the typical D2D receiver can be expressed as:
$$\begin{array}{*{20}l} {p_{d,cop}} &= \mathbb{P}\left\{ \text{SINR}_{d} < {{\hat \gamma_{d}}} \right\}\\ &= 1 - e^{-{N_{0}}\zeta}\mathbb{E}{_{{\Phi^{a}_{b}},}}_{{\Phi_{d}}}\left({{e^{-{{I_{d}}}\zeta }}} \right)\\ &= 1 - e^{-{N_{0}}\zeta}{{{\mathcal L}}_{{I_{d}}}}\left(\zeta \right), \end{array} $$
where \({\hat \gamma _d}= 2^{R_d}-1\) represents the SINR target threshold to satisfy the communication requirement of the typical D2D link and \(\zeta = \frac {{{{\hat {\gamma }_d}}{l^{\alpha } }}}{{{p_d}}}\). \({{{\mathcal L}}_{{I_d}}}\left (\zeta \right)\) denotes the Laplace transform of I d , i.e., \( {{{\mathcal {L}}}_{{I_d}}}\left (\zeta \right) = \mathbb {E}\left ({e^{- \zeta {I_d}}} \right)\). According to the property of the Laplace transform, we can easily obtain \( {{{\mathcal L}}_{{I_d}}}\left (\zeta \right) = {{{\mathcal L}}_{I_{c - d}}}\left (\zeta \right) \bullet {{{\mathcal L}}_{{I_{d-d}}}}\left (\zeta \right)\), since I d =I c−d +I d−d and the two interference terms are independent.
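The factorization of the Laplace transform follows from the independence of the base-station and D2D point processes. A tiny Monte Carlo check of the underlying identity E[e^{−ζ(X+Y)}] = E[e^{−ζX}]·E[e^{−ζY}] for independent X and Y is sketched below; the surrogate distributions are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
zeta = 0.3
# Independent surrogate interference terms (illustrative distributions only):
# the Laplace transform of their sum factorizes into the product of the two.
i_cd = rng.gamma(2.0, 1.0, size=1_000_000)
i_dd = rng.exponential(1.5, size=1_000_000)
lhs = np.mean(np.exp(-zeta * (i_cd + i_dd)))
rhs = np.mean(np.exp(-zeta * i_cd)) * np.mean(np.exp(-zeta * i_dd))
print(lhs, rhs)   # the two estimates agree up to Monte Carlo error
```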
Theorem 3

The closed-form expression of the connection outage probability of the typical D2D link in the artificial-noise-assisted D2D-enabled cellular network is given by:
$$ {p_{d,cop}} = 1-e^{-{N_{0}}\zeta}\exp \left({ - \left({\pi {\lambda_{b}}{p_{a}}k + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){\zeta^{\rho} }} \right). $$
Note that \({k \,=\, \left \{ \begin {array}{l} 2p_{I}^{\rho } {\kappa _{M + 1}}, \qquad \qquad \qquad \qquad \quad \;\,\, if\ \xi = 1, \\ 2p_{I}^{\rho } \left ( {\frac {{{\kappa _2}}}{{{{\left ( {1 - \xi } \right)}^{M - 1}}}} \,-\, \sum \limits _{m = 0}^{M - 2} {\frac {{{\xi ^{1 + \rho }}{\kappa _{m + 2}}}}{{{{\left ({1 - \xi } \right)}^{M - m - 1}}}} } } \right), otherwise. \end {array} \right.}\)
From the derived result in (23), it is obvious that the connection outage probability, p d,cop , is closely related to various system parameters, such as the densities λ b and λ e , the total transmission power p of each base station, the D2D transmission power p d , and so on. In particular, for given λ b , λ e , and ϕ, p d,cop is positively correlated with the transmission power ratio p / p d . This is because a larger transmission power at each base station introduces stronger interference from the cellular downlink to the typical D2D link, resulting in a larger connection outage probability of the typical D2D link.
The expression of the connection outage probability in (23) is derived under the assumption that the distance l between the typical D2D transmitter and its corresponding D2D receiver is constant. The derived result can easily be extended to the scenario where l is a random variable: the connection outage probability of the typical D2D link in this extended scenario is obtained by evaluating the integral \(\int _{0}^{\infty } \mathbb {P} \left ({\left. {\text {SINR}_{d} < {{\hat \gamma }_d}} \right |l} \right){f_l}\left (l \right)dl\), where f l (l) denotes the PDF of the distance l.
By letting N 0=0, we will get the expression of the connection outage probability shown in the following corollary for the interference-limited network.
Considering the interference-limited case in this hybrid network, the closed-form expression of the connection outage probability of the typical D2D link is given by:
$$ {p^{int}_{d,cop}} = 1-\exp \left({ - \left({\pi {\lambda_{b}}{p_{a}}k + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){\zeta^{\rho} }} \right), $$
where k is given in (23).
Corollary 7 can be straightforwardly obtained from Theorem 3 with N 0→0. □
Similarly, we next provide the asymptotic expression of the connection outage probability of the typical D2D link when each base station has an infinite number of antennas.
When the number of antennas at each base station approaches infinity, the asymptotic expression of the connection outage probability of the typical D2D link is given by:
$$ \begin{aligned} {P^{asy}_{d,cop}} = {1 - }{e^{- \zeta {N_{0}}}}\exp \left( { - \left( {\pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right)\Psi p_{I}^{\rho} + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){\zeta^{\rho} }} \right). \end{aligned} $$
According to Lemma 1, when the number of antennas at each base station approaches infinity, the received asymptotic SINR at the typical D2D receiver can be expressed as:
$$\begin{array}{*{20}l} \text{SINR}_{d}^{\infty} = \frac{{{p_{d}}{h_{0d}}{l^{- \alpha }}}}{{{I^{\infty}_{d}} + {N_{0}}}}, \end{array} $$
where \(I_{d}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}} {\left ({{p_I}{{\left | {{\mathbf {g}}_{id}^{T}{{\mathbf {w}}_i}} \right |}^{2}} + {p_A}} \right){{\left \| {{x_i}} \right \|}^{- \alpha }}} + \sum \limits _{{y_i} \in {\Phi _d}\backslash \left \{ {{y_0}} \right \}} {{p_d}{h_{id}}{{\left \| {{y_i}} \right \|}^{- \alpha }}}\) represents the total cumulative interference when each base station has an infinite number of antennas. \(I_{d,c - d}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}} {\left ({{p_I}{{\left | {{\mathbf {g}}_{id}^{T}{{\mathbf {w}}_i}} \right |}^{2}} + {p_A}} \right){{\left \| {{x_i}} \right \|}^{- \alpha }}}\) represents the cumulative interference from all the base stations, and \({I_3} = \sum \limits _{{y_i} \in {\Phi _d}\backslash \left \{ {{y_0}} \right \}} {{p_d}{h_{id}}{{\left \| {{y_i}} \right \|}^{- \alpha }}}\) represents the cumulative interference from the other D2D transmitters. Hence, it is intuitive that \(I_{d}^{\infty } = I_{d,c - d}^{\infty } + {I_3}\).
Denote \(I_{d,c - d\backslash \left \{ 0 \right \}}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}\backslash \left \{ 0 \right \}} {\left ({{p_I}{{\left \| {{\mathbf {g}}_{id}^{T}{{\mathbf {w}}_i}} \right \|}^{2}} + {p_A}} \right){{\left \| {{x_i}} \right \|}^{- \alpha }}}\). Owing to Slivnyak-Mecke Theorem [24], we can have \({{{\mathcal L}}_{I_{d,c - d\backslash \left \{ 0 \right \}}^{\infty } }}\left (s \right) = {{{\mathcal L}}_{I_{d,c - d}^{\infty } }}\left (s \right)\). Employing ([27], Eq. (68)), we can obtain:
$$\begin{array}{*{20}l} {{{\mathcal L}}_{I_{d,c - d}^{\infty} }}\left(s \right) = \exp \left({ - \pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right){{\left({\phi p\zeta} \right)}^{\rho} }\Psi} \right), \end{array} $$
where Ψ is given by (10).
Substituting (27) and (41) into (22) yields the result in (25). □ Then, we can easily obtain the following corollary from Proposition 3.
In the interference-limited network, when the number of antennas at each base station approaches infinity, the asymptotic expression of the connection outage probability of the typical D2D link can be given by:
$$ \begin{aligned} {P^{inf,int}_{d,cop}} = 1 -\exp \left({ - \left({\pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right)\Psi p_{I}^{\rho} + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){\zeta^{\rho} }} \right). \end{aligned} $$
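A hedged evaluation sketch of the asymptotic interference-limited expression above; it assumes p_I = ϕp and γ̂_d = 2^{R_d} − 1 as defined earlier, and the parameter values in the example call are illustrative only.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def cop_d2d_asymptotic_int(lam_b, lam_d, p_a, p, p_d, phi, r_d, l, alpha):
    """Asymptotic, interference-limited connection outage probability of the
    typical D2D link (large M), following the expression above."""
    rho = 2.0 / alpha
    p_i = phi * p                                  # assumed p_I = phi*p
    x = 1.0 / phi - 1.0
    psi = gammaincc(1 + rho, x) * gamma(1 + rho) * np.exp(x)   # Psi
    gamma_d = 2.0**r_d - 1.0                       # SINR threshold from rate R_d
    zeta = gamma_d * l**alpha / p_d
    inv_sinc = gamma(1 + rho) * gamma(1 - rho)     # 1 / sinc(rho)
    expo = (np.pi * lam_b * p_a * gamma(1 - rho) * psi * p_i**rho
            + np.pi * lam_d * p_d**rho * inv_sinc) * zeta**rho
    return 1.0 - np.exp(-expo)

# Illustrative parameters: R_d = 1 bps/Hz, D2D link length l = 1 m
print(cop_d2d_asymptotic_int(5e-4, 1e-3, 0.6, 1.0, 0.1, 0.3, 1.0, 1.0, 3.0))
```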
Numerical results and analysis
In this section, detailed simulation and numerical results are provided to evaluate the theoretical analysis. The path loss exponent is α=3 and ς=3.5 for the nearest base station association. The total transmission power of each base station is 60 dBm and the transmission power of all the D2D transmitters is 20 dBm. The density of cellular users is 0.0005/m 2, i.e., λ c =0.0005/m 2. The number of antennas equipped at each base station is set to M=10. For simplicity, it is assumed that the distance between D2D pairs is l=1 m.
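For reproducibility, the stated parameters can be collected as follows; the dBm-to-watt conversion is standard, and the dictionary layout is simply an illustrative convention.

```python
# Helper collecting the stated simulation parameters; powers given in dBm
# are converted to watts via P[W] = 10^((P_dBm - 30)/10).
def dbm_to_watt(p_dbm):
    return 10.0 ** ((p_dbm - 30.0) / 10.0)

params = {
    "alpha": 3.0,                 # path loss exponent
    "p": dbm_to_watt(60.0),       # total BS transmit power: 60 dBm = 1000 W
    "p_d": dbm_to_watt(20.0),     # D2D transmit power: 20 dBm = 0.1 W
    "lambda_c": 5e-4,             # cellular user density per m^2
    "M": 10,                      # antennas per base station
    "l": 1.0,                     # D2D pair distance in meters
}
print(params)
```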
The secrecy outage probability in scenario I and in scenario II versus the base station density, λ b , is plotted in Figs. 2 and 3, respectively, for different system parameters. We can observe that the secrecy performance of the cellular link improves as the base station density increases. This is because more base stations bring more interference to confuse eavesdroppers trying to overhear confidential messages of the cellular link. From Figs. 2 and 3, we can also observe that the theoretical results are quite close to the asymptotic results. In addition, the secrecy outage probability in scenario I is lower than in scenario II because in scenario I the information-bearing signals, in addition to the artificial noise, also confuse eavesdroppers, resulting in a lower secrecy outage probability.
The secrecy outage probability of the typical cellular link versus the density of the base station λ b under different densities of the eavesdropper λ e in scenario I where λ d =0.001/m 2, ϕ=0.6. Figure 2 illustrates the closed-form expression and asymptotic expression of the secrecy outage probability of the typical cellular link in scenario I
The secrecy outage probability of the typical cellular link versus the density of the base station λ b under different densities of the eavesdropper λ e in scenario II where λ d =0.001/m 2, ϕ=0.6. Figure 3 shows the closed-form expression and asymptotic expression of the secrecy outage probability of the typical cellular link in scenario II
Additionally, the secrecy outage probability of the typical cellular link in both scenario I and scenario II versus the cellular user density, λ c , is illustrated in Fig. 4 for different system parameters. As shown in Fig. 4, the secrecy performance of the cellular link improves as the cellular user density increases. This is because more base stations become active to serve their nearest cellular users as the number of cellular users increases, which can also be seen from Eq. (4). Hence, more interference is generated to confuse eavesdroppers trying to overhear confidential messages of the cellular link, thus improving the secrecy performance of the cellular link. In particular, although eavesdroppers have multi-user decodability in scenario II and the interference from information-bearing signals has no impact on them, more active base stations generate more artificial noise to degrade the channel capacity of the eavesdropping link, resulting in a much lower secrecy outage probability.
The secrecy outage probability of the typical cellular link in both scenario I and scenario II versus the density of the cellular user λ c under different densities of the eavesdropper λ e where λ b =0.0005/m 2. Figure 4 shows the relationship between the secrecy outage probability of the typical cellular link in both scenario I and scenario II and the density of the cellular user
On the other hand, the secrecy outage probability of the typical cellular link in scenario I versus the D2D user density, λ d , is depicted in Fig. 5 for different system parameters. Since eavesdroppers have multi-user decodability in scenario II, the number of D2D users has no impact on the secrecy outage probability of the cellular link in that scenario. Hence, in Fig. 5, we only investigate the impact of the number of D2D users on the cellular performance in scenario I. As illustrated in Fig. 5, the secrecy performance of the cellular link improves as the D2D user density increases. This is because more D2D users bring more interference to confuse eavesdroppers trying to overhear confidential messages of the cellular link, resulting in a much lower secrecy outage probability.
The secrecy outage probability of the typical cellular link in scenario I versus the density of the D2D user λ d under different densities of the eavesdropper λ e where λ b =0.0001/m 2. Figure 5 illustrates the relationship between the secrecy outage probability of the typical cellular link in scenario I and the density of the D2D user
The secrecy outage probability of the cellular link in scenario I versus the power allocation ratio ϕ is shown in Fig. 6. From Fig. 6, we can observe that the secrecy outage probability increases as ϕ increases. This is because the signal power received at eavesdroppers is proportional to ϕ, while the average received aggregate interference is independent of ϕ. In addition, Fig. 7 shows the power allocation ratio of the information-bearing signal to the total transmission power versus the base station density λ b . As Fig. 7 indicates, deploying more base stations or increasing λ c means that more base stations will be active to serve cellular users and to generate artificial noise to confuse eavesdroppers, which causes the secrecy outage probability to decrease. Hence, the secrecy requirement of the cellular link can be satisfied even if each base station allocates less transmission power to the artificial noise, thus resulting in a larger ϕ.
The secrecy outage probability of the typical cellular link in scenario I versus power allocation ratio ϕ. Figure 6 depicts the relationship between the secrecy outage probability of the typical cellular link and the power allocation ratio
The power allocation ratio in scenario I versus the density of the base station λ b where λ e =0.0003/m 2, R e =0.5 bps/Hz, ε=0.1. Figure 7 illustrates the transmission power allocated to the information-bearing signal versus the density of the base station, and shows that it is unnecessary to generate the artificial noise under a specific condition
Figure 8 shows the connection outage probability versus the base station density, λ b , for several system parameters. From Fig. 8, we can observe that deploying more base stations degrades the connection performance of the typical D2D link. Since both the information-bearing signal and the artificial noise of each base station interfere with the typical D2D link, more base stations bring more interference and degrade the reliable communication of the typical D2D link. From Fig. 8, we can also see that the theoretical results are quite close to the asymptotic results.
The connection outage probability of the typical D2D link versus the density of the base station λ b where ϕ=0.3, R d =1 bps/Hz. Figure 8 represents the close-form expression and asymptotic expression of connection outage probability of the typical D2D link
We also investigate the impact of the power allocation ratio, ϕ, on the connection outage probability of the typical D2D link in Fig. 9. As expected, the connection outage probability of the typical D2D link increases with a larger R d . In Fig. 9, we can also observe that the connection outage probability is smaller for a larger ϕ, which implies that the information-bearing signal has less impact on the reliable communication of the typical D2D link than the artificial noise. However, the difference in the connection outage probability under different power allocation ratios is very small, because both the information-bearing signal and the artificial noise degrade the reliable communication of the typical D2D link. That is to say, the average aggregate interference at the typical D2D receiver remains approximately the same under different power allocation ratios when the transmission power of the base station is constant.
The connection outage probability of the typical D2D link versus power allocation ratio ϕ. Figure 9 depicts the relationship between the connection outage probability of the typical D2D link and the power allocation ratio
Furthermore, we examine the impact of the total transmission power p of each base station on the connection outage probability of the typical D2D link, as shown in Fig. 10. As expected, the connection outage probability of the typical D2D link becomes larger as p increases. This is because a larger p brings stronger interference to the typical D2D link, resulting in a larger connection outage probability.
The connection outage probability of the typical D2D link versus R d under different total transmission powers p, where p d =10 dBm. Figure 10 compares the connection outage probability of the typical D2D link under different total transmission powers of each base station
In this paper, secure communication for the cellular downlink was investigated in this hybrid network. A case was considered in which each base station has no CSI of the D2D users because they are generally deployed at the cell edge. To guarantee secure communication of the cellular link, each base station employed an artificial-noise-assisted transmission strategy. First, we considered two different scenarios, depending on whether eavesdroppers have multi-user decodability or not, and derived the closed-form expressions of the secrecy outage probability of the cellular link. To characterize the reliable communication of the D2D link, the closed-form expression of its connection outage probability was derived, and comprehensive analyses were provided to guide the system design. Finally, simulation results were provided to validate the effectiveness of the theoretical results. Furthermore, more complex D2D scenarios remain to be studied.
Appendix 1: Proof of Theorem 1

Let us define \({\gamma _{0,{e}}} = {p_I}\left ({{{\left | {{\mathbf {g}}_{0e}^{T}{{\mathbf {w}}_0}} \right |}^{2}} - \xi {{\hat \gamma }_e}{{\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_0}} \right \|}^{2}}} \right)\).
Because \({\left | {{\mathbf {g}}_{0e}^{T}{{\mathbf {w}}_0}} \right |^{2}}\sim {\exp } \left (1 \right)\) and \({\left \| {{\mathbf {g}}_{0e}^{T}{{\mathbf {W}}_0}} \right \|^{2}}\sim {\text {Gamma}}\left ({M - 1,1} \right)\), we can obtain its cumulative distribution function (CDF) \({F_{{\gamma _{0,{e}}}}}\left (x \right)\) as follows:
$$ {F_{{\gamma_{0,{e}}}}}\left(x \right) = 1 - {\left({1 + {{\hat \gamma }_{e}}\xi} \right)^{1 - M}}{e^{- \frac{x}{{{p_{I}}}}}}. $$
According to the definition of the secrecy outage probability, then, we can easily obtain:
$$ \begin{array}{l} \mathbb{P}\left({\text{SINR}{_{e}}\left({{x_{z}}} \right) \le {{\hat \gamma }_{e}}} \right) \\ =\mathbb{P}\left({{{\hat \gamma }_{0,{e}}} \le {{\hat \gamma }_{e}}{{\left\| {{x_{z}}} \right\|}^{\alpha }}\left({{I_{e\backslash \left\{ 0 \right\}}} + {N_{0}}} \right)} \right)\\ = 1 - {\left({1 + {{\hat \gamma }_{e}}\xi} \right)^{1 - M}}{e^{- s{N_{0}}}}{{{\mathcal L}}_{{I_{e\backslash \left\{ 0 \right\}}}}}\left(s \right), \end{array} $$
where \({{{\mathcal {L}}}_{{I_{e\backslash \left \{ 0 \right \}}}}}\left (s \right)\) denotes the Laplace transform of I e∖{0}, i.e., \( {{{\mathcal L}}_{{I_{e\backslash \left \{ 0 \right \}}}}}\left (s \right) = \mathbb {E}\left ({e^{- s{{I_{e\backslash \left \{ 0 \right \}}}}}} \right)\). Since I e∖0=I e,c−e +I e,d−e and the two terms are independent, it is straightforward to obtain, according to the property of the PPP [24]:
$$ {{{\mathcal L}}_{{I_{e\backslash 0}}}}\left(s \right) = {{{\mathcal L}}_{{I_{e,c - e}}}}\left(s \right) \bullet {{{\mathcal L}}_{{I_{e,d - e}}}}\left(s \right). $$
Owing to the property of PPP [24] that the coordinates translations will not change the distribution of PPP, we shift the coordinates so that the eavesdropper at x z is located at the origin. Then, employing ([27], Eq. (64)) we can obtain:
$$ {{{\mathcal L}}_{{I_{e,c - e}}}}\left(s \right) = \exp \left({ - \pi {\lambda_{b}}{p_{a}}\omega {{\left({{p_{I}}s} \right)}^{\rho} }} \right), $$
where ω is given by (7).
Because \({I_{e, d - e}} = \sum \limits _{{y_i} \in {\Phi _d}} {{p_d}{h_{id}}{{\left \| {{y_i}} \right \|}^{- \alpha }}}\) and employing ([13], Eq. (7)) we can directly obtain:
$$ {{{\mathcal L}}_{{I_{e,d - e}}}}\left(s \right) = \exp \left({ - \frac{{\pi {\lambda_{d}}p_{d}^{\rho} {s^{\rho} }}}{{\sin c\rho }}} \right). $$
where \(\frac {1}{{\sin c\rho }}=\frac {{\pi \rho }}{{\sin \pi \rho }} =\Gamma \left ({1 + \rho } \right)\Gamma \left ({1 - \rho } \right)\).
By plugging (33), (32), (31) into (30), we can have:
$$\begin{array}{*{20}l} \mathbb{P}\left({\text{SINR}{_{e}}\left({{x_{z}}} \right) \le {{\hat \gamma }_{e}}} \right) &= 1 - {\left({1 + {{\hat \gamma }_{e}}\xi} \right)^{1 - M}}{e^{- s{N_{0}}}}\\ &\exp \left({ - \left({\pi {\lambda_{b}}{p_{a}}\omega p_{I}^{\rho} + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){s^{\rho} }} \right). \end{array} $$
Substituting (34) into (5) and changing to a polar coordinate system to evaluate the integral yields the result in (7).
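For readability, the polar-coordinate step can be made explicit (an added intermediate equation, consistent with the definitions of υ, μ, and s in Theorem 1): with ‖x z ‖ = r and \(s = \frac{{{{\hat \gamma }_{e}}{r^{\alpha }}}}{{{p_{I}}}}\),

$$ {\lambda_{e}}\int_{\mathbb{R}^{2}} \mathbb{P}\left({\text{SINR}_{e}\left({{x_{z}}}\right) \ge {{\hat\gamma}_{e}}}\right)d{x_{z}} = 2\pi{\lambda_{e}}{\left({1 + {{\hat\gamma}_{e}}\xi}\right)^{1 - M}}\int_{0}^{\infty}{e^{- s{N_{0}}}}\exp\left({-\mu}\right)r\,dr = \upsilon\int_{0}^{\infty}{e^{- s{N_{0}}}}\exp\left({-\mu}\right)r\,dr, $$

which is exactly the exponent appearing in (7).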
Appendix 2: Proof of Proposition 1
Since \(I_{_{e,c - e}}^{\infty } = \sum \limits _{{x_i} \in {\Phi ^{a}_b}\backslash \left \{ 0 \right \}} {\left ({{p_I}{{\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {w}}_i}} \right \|}^{2}} + {p_A}} \right)} {\left \| {{x_i} - {x_z}} \right \|^{- \alpha }}\), we first shift the coordinates so that the eavesdropper at x z is located at the origin. Then, by employing ([27], Eq. (68)) it is direct to yield:
$$ {{\mathcal L}}_{{I_{e,c - e}}}^{\infty} \left(s \right) = \exp \left({ - \pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right){{\left({\phi ps} \right)}^{\rho} }\Psi} \right), $$
Since \({I^{\infty }_{e\backslash 0}} = {I^{\infty }_{e, c - e}} + {I^{\infty }_{e, d - e}}\), we have \({{\mathcal L}}_{_{{I_{e\backslash 0}}}}^{\infty } \left (s \right) = {{\mathcal L}}_{_{{I_{e,c - e}}}}^{\infty } \left (s \right) \bullet {{\mathcal L}}_{_{{I_{e,d - e}}}}^{\infty } \left (s \right)\). When the number of antennas at each base station approaches infinity, the cumulative interference from the D2D transmitters at the most detrimental eavesdropper is the same as in the finite-antenna case, so its Laplace transform can be obtained from (33). Then it is straightforward to obtain:
$$ {{\mathcal L}}_{_{{I_{e\backslash 0}}}}^{\infty} \left(s \right) = \exp \left( { - \left({\pi {\lambda_{b}}{p_{a}}\Gamma \left({1 - \rho} \right)\Psi p_{I}^{\rho} + \frac{{\pi {\lambda_{d}}p_{d}^{\rho} }}{{\sin c\rho }}} \right){s^{\rho} }} \right). $$
Substituting (36) into (5) and changing to a polar coordinate system to evaluate the integral yields the results in (10).
According to the definition of the secrecy outage probability and similar to (18), we can have:
$$ \begin{array}{l} \mathbb{P}\left({\text{SINR}_{e}^{worse}\left({{x_{z}}} \right) \le {{{\hat \gamma }_{e}}}} \right) \\ =\mathbb{P}\left({{{\hat \gamma }_{0,{e}}} \le {{\hat \gamma }_{e}}{{\left\| {{x_{z}}} \right\|}^{- \alpha }}\left({{I_{A\backslash \left\{ 0 \right\}}} + {N_{0}}} \right)} \right)\\ = 1 - {\left({1 + {{\hat \gamma }_{e}}\xi} \right)^{1 - M}}{e^{- s{N_{0}}}}{{{\mathcal L}}_{{I_{A\backslash \left\{ 0 \right\}}}}}\left(s \right), \end{array} $$
where \({{{\mathcal L}}_{{I_{A\backslash \left \{ 0 \right \}}}}}\left (s \right)\) denotes the Laplace transform of I A∖{0}, i.e., \( {{{\mathcal L}}_{{I_{A\backslash \left \{ 0 \right \}}}}}\left (s \right) = \mathbb {E}\left ({ - s{{I_{A\backslash \left \{ 0 \right \}}}}} \right)\).
In this case, only the artificial noise has the impact on the secrecy performance of the typical cellular link. Due to the property of PPP [24] that the coordinates translations will not change the distribution of PPP, we shift the coordinates so that the eavesdropper at x z is located at the origin. Because \({\left \| {{\mathbf {g}}_{ie}^{T}{{\mathbf {W}}_i}} \right \|^{2}} \sim \text {Gamma}\left ({M - 1,1} \right)\) and using ([25], Eq. (56)) we can obtain:
$$\begin{array}{*{20}l} {{{\mathcal L}}_{{I_{A\backslash \left\{ 0 \right\}}}}}\left(s \right) &= \exp \left({ - {\lambda_{b}}{p_{a}}{C_{\rho,M}}{{\left({\frac{{{p_{A}}}}{{M - 1}}} \right)}^{\rho} }{s^{\rho} }} \right) \\ &= \exp \left({ - {\lambda_{b}}{p_{a}}{C_{\rho,M}}{{\left({\frac{{{{\hat \gamma }_{e}}\left({1 - \phi} \right)}}{{\left({M - 1} \right)\phi }}} \right)}^{\rho} }{r^{2}}} \right). \end{array} $$
Then, substituting (38), (37) into (5) and changing to a polar coordinate system to evaluate the integral, we can get the result in (14).
We define the received interference power at the typical D2D receiver from each base station as \({X_i} = {p_I}\left ({{{\left | {{\mathbf {g}}_{i}^{T}{{\mathbf {w}}_i}} \right |}^{2}} + \xi {{\left \| {{\mathbf {g}}_{i}^{T}{{\mathbf {W}}_i}} \right \|}^{2}}} \right)\). Since g i , w i , and W i are mutually independent, \({{\left | {{\mathbf {g}}_{i}^{T}{{\mathbf {w}}_i}} \right |}^{2}}\sim \mathrm {exp(1)}\) and \({{\left \| {{\mathbf {g}}_{i}^{T}{{\mathbf {W}}_i}} \right \|}^{2}}\sim \mathrm {Gamma\ (M-1,1)}\). Using ([25], Lemma 1), we can then derive the probability density function (pdf) of X i as:
$$\begin{array}{*{20}l} {f_{{X_{i}}}}\left(x \right) = \left\{ \begin{array}{ll} \frac{{{x^{M - 1}}}}{{\left({M - 1} \right)!p_{I}^{M}}}{e^{- \frac{x}{{{p_{I}}}}}}, &if\ \xi = 1,\\ \frac{{{{\left({1 - \xi} \right)}^{1 - M}}}}{{\left({M - 2} \right)!{p_{I}}}}{e^{- \frac{x}{{{p_{I}}}}}}\gamma \left({M - 1,\frac{{\left({1 - \xi} \right)x}}{{\xi {p_{I}}}}} \right), & otherwise. \end{array} \right. \end{array} $$
The Laplace transform of I c−d can be expressed as:
$$\begin{array}{*{20}l} \begin{array}{ll} {{{\mathcal L}}_{{I_{c - d}}}}\left(\zeta \right) &= {\mathbb{E}_{{I_{c - d}}}}\left\{ {\exp \left({ - \zeta \sum\limits_{{x_{i}} \in \Phi_{b}^{a}} {{X_{i}}{{\left\| {{x_{i}}} \right\|}^{- \alpha }}}} \right)} \right\}\\ &= {\mathbb{E}_{\Phi_{b}^{a}}}\left\{ {\prod\limits_{{x_{i}} \in \Phi_{b}^{a}} {{\mathbb{E}_{{X_{i}}}}\left\{ {\exp \left({ - \zeta {X_{i}}{{\left\| {{x_{i}}} \right\|}^{- \alpha }}} \right)} \right\}}} \right\}\\ &\mathop = \limits^{\left(a \right)} \exp \left({ - 2\pi {\lambda_{b}}{p_{a}}\int_{0}^{\infty} {\left({1 - \chi} \right)rdr}} \right), \end{array} \end{array} $$
where (a) follows from the PGFL over PPP. Let us define the integral \(\chi = \int _{0}^{\infty } {{e^{- \zeta x{r^{- \alpha }}}}{f_{{X_i}}}\left (x \right)} dx\) as the Laplace transform of X i . Employing ([28], Eq. (8.352.1)) we can have:
$$\begin{array}{*{20}l} \chi = \left\{ \begin{array}{ll} {\left({1 + \varphi {r^{- \alpha }}} \right)^{- M}}, & if \ \xi = 1, \\ \frac{{{\left({1 - \xi} \right)}^{1 - M}}}{1 + \varphi {r^{- \alpha }}} - \sum\limits_{m = 0}^{M - 2}\frac{{\xi {{\left({1 - \xi} \right)}^{m + 1 - M}}}}{{{{\left({1 + \xi \varphi {r^{- \alpha }}} \right)}^{m + 1}}}}, & otherwise, \end{array}\right. \end{array} $$
where φ=ϕ p ζ. We define \({\tau _1} = \int _{0}^{\infty } {\left ({1 - \chi } \right)rdr}\), and by using ([29], Eq. (8)) it can be derived as:
$$\begin{array}{*{20}l} {\tau_{1}} = \left\{ \begin{array}{ll} {\varphi^{\rho} }{\kappa_{M + 1}}, &if \ \xi = 1, \\ \frac{{\varphi^{\rho} }{\kappa_{2}}}{{\left({1 - \xi} \right)}^{M - 1}} - \sum\limits_{m = 0}^{M - 2} \frac{{{\varphi^{\rho} }{\xi^{1 + \rho }}{\kappa_{m + 2}}}}{{{{\left({1 - \xi} \right)}^{M - m - 1}}}}, &otherwise, \end{array} \right. \end{array} $$
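An added connecting remark (not in the original text): since φ = ϕpζ = p I ζ, the exponent 2π λ b p a τ 1 produced by step (a) can be written directly in terms of the constant k of Theorem 3,

$$ 2\pi {\lambda_{b}}{p_{a}}{\tau_{1}} = 2\pi {\lambda_{b}}{p_{a}}{\left({{p_{I}}\zeta}\right)^{\rho}}\left[\,\cdot\,\right] = \pi {\lambda_{b}}{p_{a}}\,k\,{\zeta^{\rho}}, $$

where [·] denotes the ξ-dependent bracket appearing in τ 1 and k = 2 p I ρ [·] as defined in Theorem 3.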
Hence, we can obtain:
$$\begin{array}{*{20}l} {{{\mathcal L}}_{I_{c - d}}}\left(\zeta \right) = \exp \left({ - \pi {\lambda_{b}}{p_{a}}k{\zeta^{\rho} }} \right), \end{array} $$
Employing ([13], Eq. (7-10)) we can easily obtain:
$$ {{{\mathcal L}}_{{I_{d-d}}}}\left(\zeta \right) = \exp \left({ - \frac{{\pi {\lambda_{d}}{p_{d}^{\rho} }{\zeta^{\rho} }}}{{\sin c\rho }}} \right). $$
Then, combining (43) and (44) and plugging them into (22), we obtain the result in (23).
The performance metrics used to characterize the secrecy performance from different perspectives in existing works can be classified into two types: the secrecy outage probability and the achievable secrecy rate. The ergodic achievable secrecy rate is not suitable for systems with strict real-time requirements, and 5G mobile communication has stricter real-time requirements. Hence, we focus on the secrecy outage probability to characterize the secrecy performance of the cellular link in this paper.
In this paper, we adopt the TDMA scheme to derive the results and discuss the performance of this hybrid network, but the derived results can be easily expanded to other systems, such as the frequency-division multiple access (FDMA) scheme and so on.
For the null-space based beamforming, the matrix inversion in massive MIMO systems will incur very high computation cost. However, alternatively, we may use the random artificial noise scheme which has been adopted in [30]. From the derived results and numerical results [30], we can come to a conclusion that main system parameters have the same effect on the secrecy performance for different design schemes. The conclusions drawn from the derived result in this paper could also guide the system design when the artificial noise is in random form.
M Agiwal, A Roy, N Saxena, Next Generation 5G Wireless Networks: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 18(3), 1617–1655 (2016).
A Asadi, Q Wang, V Mancuso, A survey on device-to-device communication in cellular networks. IEEE Commun. Surv. Tutor. 16(4), 1801–1819 (2014).
G Ding, J Wang, Q Wu, Q Yao, Y Song, F Tsiftsis, Cellular-base-station-assisted device-to-device communications in TV white space. IEEE J. Sel. Areas Commun. 34(3), 107–121 (2016).
Technical specification group services and system aspects; feasibility study for proximity services (ProSe). Cedex, France, 3GPP TR 22.803, 2012, Rel-12.
LS on agreements from TSG RAN on work on public safety related use cases in Release 12. Cedex, France, 3GPP TD SP-130478, Sep. 2013.
J Yue, C Ma, H Yu, W Zhou, Secrecy-based access control for device-to-device communication underlaying cellular networks. IEEE Commun. Lett. 17(11), 2068–2071 (2013).
R Zhang, X Cheng, L Yang, Cooperation via spectrum sharing for physical layer security in device-to-device communications underlaying cellular networks. IEEE Trans. Wirel. Commun. 15(8), 5651–5663 (2016).
L Sun, Q Du, P Ren, Y Wang, Two birds with one stone: Towards secure and interference-free D2D transmissions via constellation rotation. IEEE Trans. Veh. Technol. 65(10), 8767–8774 (2016).
X Kang, X Ji, K Huang, X Li, Security-oriented distributed access selection for D2D underlaying cellular networks. IET Electron. Lett. 53(1), 32–34 (2017).
Y Chen, X Ji, K Huang, X Kang, Secrecy-outage-probability-based Access Strategy for Device-to-device Communications Underlaying Cellular Networks. J. Commun. 37(8), 86–94 (2016).
Y Chen, X Ji, K Huang, B Li, X Kang, Opportunistic access control for enhancing security in D2D-enabled cellular networks. Sci China Inf Sci (2016). doi:10.1007/s11432-017-9160-y.
Y Liu, L Wang, S Zaidi, M Elkashlan, T Duong, Secure D2D Communication in Large-Scale Cognitive Cellular Networks: A Wireless Power Transfer Model. IEEE Trans. Commun. 64(1), 329–342 (2016).
C Ma, J Liu, X Tian, H Yu, Y Cui, X Wang, Interference exploitation in D2D-enabled cellular networks: A secrecy perspective. IEEE Trans. Commun. 63(1), 229–242 (2015).
Z Chu, K Cumanan, M Xu, Z Ding, Robust secrecy rate optimizations for multiuser multiple-input-single-output channel with device-to-device communications. IET Commun.9(3), 396–403 (2015).
Z Chu, X Nguyen, T Le, M Karamanoglu, et al., Game theory based secure wireless powered D2D communications with cooperative jamming, (2017 Wireless Days, Porto, 2017).
S Goel, R Negi, Guaranteeing secrecy using artificial noise. IEEE Trans. Wirel. Commun. 7(6), 2180–2189 (2008).
Z Chu, Z Zhu, M Johnston, et al., Simultaneous Wireless Information Power Transfer for MISO Secrecy Channel. IEEE Trans. Veh. Technol. 65(9), 6913–6925 (2016).
X Kang, X Ji, K Huang, Secure D2D Underlaying Cellular Communication Based on Artificial Noise Assisted. J. Commun. 36(10), 149–156 (2015).
X Kang, X Ji, K Huang, Z Zhong, Secure D2D communication Underlaying Cellular Networks: Artificial Noise Assisted. in Proceedings of the IEEE International Conference on Vehicular Technology (VTC). (Montreal, 2016).
A Thangaraj, S Dihidar, AR Calderbank, SW McLaughlin, J-M Merolla, Applications of LDPC codes to the wiretap channel. IEEE Trans. Inf. Theory. 53(8), 2933–2945 (2007).
X Xu, B He, W Yang, X Zhou, Y Cai, Secure Transmission Design for Cognitive Radio Networks With Poisson Distributed Eavesdroppers. IEEE Trans. Inf. Forensic Secur. 11(2), 373–387 (2016).
C Li, J Zhang, KB Letaief, Throughput and energy efficiency analysis of small cell networks with multi-antenna base stations. IEEE Trans. Wirel. Commun. 13(5), 2505–2517 (2014).
C Liu, L Wang, Optimal cell load and throughput in green small cell networks with generalized cell association. IEEE J. Sel. Areas Commun. 34(5), 1058–1072 (2016).
D Stoyan, W Kendall, J Mecke, Stochastic Geometry and Its Applications, 2nd ed (Wiley, Hoboken, 1996).
X Zhang, X Zhou, MR McKay, Enhancing secrecy with multiantenna transmission in wireless ad hoc networks. IEEE Trans. Inf. Forensic Secur. 8(11), 1802–1814 (2013).
H Wang, T Zheng, J Yuan, D Towsley, M Lee, Physical Layer Security in Heterogeneous Cellular Networks. IEEE Trans. Commun. 64(3), 1204–1219 (2016).
W Wang, KC Teh, KH Li, Artificial Noise Aided Physical Layer Security in Multi-Antenna Small-Cell Networks. IEEE Trans. Inf. Forensic Secur. 12(6), 1470–1483 (2017).
I Gradshteyn, I Ryzhik, A Jeffrey, D Zwillinger, S Technica, Table of Integrals, Series, and Products (Academic, New York, 2007).
M Haenggi, JG Andrews, F Baccelli, O Dousse, M Franceschetti, Stochastic geometry and random graphs for the analysis and design of wireless networks. IEEE J. Sel. Areas Commun. 27(7), 1029–1046 (2009).
J Zhu, R Schober, V Bhargava, Secure transmission in multi-cell massive MIMO systems. IEEE Trans. Wirel. Commun. 13(9), 4766–4781 (2014).
This work is supported in part by China's High-Tech R&D Program (863 Program) SS2015AA011306; the open research fund of National Mobile Communications Research Laboratory, Southeast University (No.2013D09) and National Natural Science Foundation of China under Grants No.61379006, 61521003, and 61401510.
National Digital Switching System Engineering and Technological R&D Center, No.7, Jianxue Road, Zhengzhou, 450002, China
Yajun Chen
, Xinsheng Ji
, Kaizhi Huang
, Jing Yang
, Xin Hu
& Yunjia Xu
National Mobile Communications Research Laboratory, Southeast University, No.2, Southeast University Road, Nanjing, 211189, China
Xinsheng Ji
National Engineering Lab for Mobile Networking Security, No.10, Westtucheng Road, Beijing, 100876, China
YC put forward the idea and wrote the manuscript. XJ and KH took part in the discussion and they also guided, reviewed, and checked the writing. JY, XH, and YX carried out experiments and analyzed experimental results. All authors read and approved the final manuscript.
Correspondence to Yajun Chen.
The original version of this article was revised due to an error in author name Yajun Chen.
A correction to this article is available online at https://doi.org/10.1186/s13638-017-0984-2.
Chen, Y., Ji, X., Huang, K. et al. Artificial noise-assisted physical layer security in D2D-enabled cellular networks. J Wireless Com Network 2017, 178 (2017). https://doi.org/10.1186/s13638-017-0969-1
Accepted: 24 October 2017
Device-to-device (D2D) communication
Physical layer security
Artificial noise
Secrecy outage probability
Connection outage probability
Contacts
In many cases the simulation domain consists not of a single solid body but of multiple parts. A valid simulation setup requires all relations between the parts to be fully defined.
Two bodies are said to be in contact when they share at least one common boundary and the boundaries are constrained by a relation (i.e. no relative movement).
Contacts in Solid Mechanics
In the case of solid mechanics simulations, parts in assemblies are discretized into multiple non-conforming mesh parts, i.e. the single bodies are meshed separately by the meshing algorithm and do not share the nodes lying on their contact entities. In order to ensure the connection between those bodies, they have to be tied via contact constraints that couple the affected degrees of freedom.
Automatic Contact Detection
In order to guarantee that the simulated domain is constrained, all contacts in the system will be detected automatically whenever a new CAD assembly is assigned to a simulation. This also includes simulation creation. By default, all contacts in the assembly will always be created as bonded contacts and can then be edited by the user.
Contact detection can also be triggered manually via the context menu of the contact node in the simulation tree.
Context menu for the contact setup.
While contacts are being detected, the contact node in the simulation tree is locked. The time required for contact detection depends on the size and complexity of the geometry and can take between a few seconds up to a few minutes. A loading indicator on the contact tree node signals that contact detection is ongoing.
Bulk Selection
Depending on the size and complexity of an assembly, the number of contacts created can become quite large. An easy way to edit multiple contacts at once is via bulk selection. The bulk selection panel exposes all contact options besides assignments to the user for editing.
Contacts can be selected in bulk via CTRL + Click and/or SHIFT + Click in the contact list or via the filter contacts by selection option in the viewer context menu. The 'Filter contacts by selection' option returns contacts based on the current selection. The following selection modes are possible:
One volume selected: All contacts that contain at least 1 face on the selected volume will be selected.
Two or more volumes selected: All contacts that contain at least one face on at least two of the selected volumes will be selected.
One or multiple faces on one body selected: All contacts that contain at least one of the selected faces will be selected.
Multiple faces across more than one volume selected: All contacts that contain at least one of the selected faces from at least two of the volumes will be selected.
Currently, there are four types of contact constraints available.
Bonded Contact
The bonded contact is a type of contact which allows no relative displacement between two connected solid bodies. This type of contact constraint is used to glue together different solids of an assembly.
You can assign faces or face sets that should be tied together via the assignment boxes. For numerical purposes, you have to choose one of these selections as master and the other one as slave. During the calculation, the degrees of freedom of slave nodes are constrained to the master surface.
When running contact analyses, the position tolerance can be set manually or be turned off. The position tolerance defines the distance between any slave node and the closest point on the nearest master face. When turned on, only those slave nodes that lie within the defined range from a master face will be constrained. When the tolerance is set to Off, all slave nodes will be tied to the master surface regardless of distance. Therefore, if a larger face is used as master, one master node will be tied to multiple slave nodes, leading to artificial stiffness in the slave surface.
If a larger surface (or a surface with a higher mesh density) is chosen as slave, the computation time will increase significantly and the solution may even be wrong, especially when no specific tolerance criterion is provided.
Sliding Contact
The sliding contact allows for displacement tangential to the contact surface but no relative movement along the normal direction. This type of contact constraint is used to simulate sliding movement in the assembly. The two surfaces that are in contact are classified as master and slave. Every node in the slave surface (slave node) is tied to a node in the master surface (master node) by a constraint.
You can assign faces or face sets that should be tied together via the assignment boxes. For numerical purposes you have to choose one of these selections as master and the other one as slave. During the calculation, the degrees of freedom of slave nodes are constrained to the master surface while only allowing tangential movement.
When running contact analyses, the position tolerance can be set manually or be turned off. The position tolerance defines the distance between any slave node and the closest point on the nearest master face. When turned on, only those slave nodes that lie within the defined range from a master face will be constrained. When the tolerance is set to Off, all slave nodes will be tied to the master surface regardless of distance.
This is a linear constraint type, which is intended for planar sliding interfaces. Therefore, no large displacements and rotations are allowed in the proximity of a sliding contact.
Cyclic Symmetry Contact
The cyclic symmetry constraint makes it possible to model only a section of a 360° cyclically periodic structure, which reduces computation time and memory consumption considerably. Required settings include the center and axis of the cyclic symmetry as well as the sector angle. The master and slave surfaces define the cyclic periodicity boundaries.
The axis of revolution and the sector angle have to be defined explicitly. The sector angle is given in degrees; it must lie between 0° and 180°, and only values that divide 360° into an integer number of sectors are valid. The axis is defined by its origin and direction. Axis and angle have to follow the right-hand rule, such that together they define the rotation that maps the slave surface onto the master surface. For an example, see the picture below.
The Slave surface is highlighted red, revolution axis is chosen as the negative y-Axis (0,-1,0) and the sector angle is 36°
Resulting displacement on sector (left) and transformed on the full 360° model (right)
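The angle validity rule and the slave-to-master mapping can be sanity-checked outside the platform. The following Python sketch is not part of SimScale; it enumerates the integer sector angles that divide 360° (non-integer divisors such as 7.2° also exist) and applies a right-hand-rule rotation about a user-supplied axis, using the 36° example from the figure:

```python
import numpy as np

def valid_integer_sector_angles():
    # Angles must lie between 0 and 180 degrees and divide 360 into an integer
    # number of sectors; only integer angles are enumerated here, for illustration.
    return [a for a in range(1, 181) if 360 % a == 0]

def rotation_about_axis(origin, direction, angle_deg):
    # Rodrigues' rotation formula: rotation by angle_deg about the axis through
    # 'origin' with direction 'direction', following the right-hand rule.
    k = np.asarray(direction, dtype=float)
    k /= np.linalg.norm(k)
    t = np.radians(angle_deg)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    R = np.eye(3) + np.sin(t) * K + (1.0 - np.cos(t)) * (K @ K)
    o = np.asarray(origin, dtype=float)
    return lambda p: o + R @ (np.asarray(p, dtype=float) - o)

print(valid_integer_sector_angles())  # 36 and 180 are valid; e.g. 50 is not
map_slave_to_master = rotation_about_axis((0.0, 0.0, 0.0), (0.0, -1.0, 0.0), 36.0)
print(map_slave_to_master((1.0, 0.0, 0.0)))  # a slave-boundary point mapped to the master side
```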
All DOFs of the slave nodes will be constrained; adding an additional constraint on those nodes could therefore lead to an over-constrained system.
This is a linear constraint, so no large rotations or large deformations are allowed in the proximity of cyclic symmetry boundaries.
A cyclic symmetry condition is only valid if geometry and loading conditions are symmetric.
The cyclic symmetry constraint has been discontinued for single solid simulation domains and will be re-introduced as regular boundary condition in the near future.
Nonlinear Contact
Nonlinear (or "Physical") contacts enable you to calculate realistic contact interaction between two bodies of the domain, as well as self-contact between different faces of one body. Unlike constrained contacts, the faces are not simply connected via linear relations; instead, the actual contact forces are calculated.
In order to enable a nonlinear interaction, you have to define contact pairs of faces or face sets. The distance between these faces is monitored during the simulation, and if a face pair comes into contact, the interaction forces that prevent the faces from interpenetrating are taken into account. Because these forces only occur in case of contact, the interaction is a nonlinear phenomenon and thus applies only to nonlinear analyses.
The solution method for nonlinear contact has to be chosen on a per-simulation basis. While specific settings can be adjusted for each nonlinear contact, the solution method is a global setting. It is possible to choose between the "Penalty method" and the "Lagrangian method".
Penalty Contact
In the penalty contact solution method, the contact interaction between the bodies is handled via spring elements that model the stiffness of the contact. With a penalty approach the faces in contact may therefore penetrate each other slightly, depending on the defined contact stiffness, which relates the interpenetration to the resulting reaction forces. Because the interpenetration causes forces that resist further intersection and penalize this behavior, the approach is called the penalty method.
The contact stiffness of a penalty contact is defined by the stiffness coefficient of the linear penetration model. The higher the penalty coefficient, the stiffer the contact, which is desired in most cases, as bodies usually are not supposed to penetrate at all. However, convergence becomes increasingly difficult for larger penalty coefficients, so a tradeoff between realistic behavior and good convergence needs to be found.
A good starting point for the penalty coefficient is usually between 5 and 50 times the Young's modulus.
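As a rough, purely illustrative application of that rule of thumb (the material value below is an assumption, not a SimScale default), the starting window can be computed directly from the Young's modulus:

```python
# Hypothetical material: structural steel with a Young's modulus of 210 GPa.
youngs_modulus = 210e9  # Pa

# Rule-of-thumb starting window for the penalty (stiffness) coefficient: 5x to 50x E.
penalty_low, penalty_high = 5 * youngs_modulus, 50 * youngs_modulus
print(f"Try penalty coefficients between {penalty_low:.1e} and {penalty_high:.1e}")
```

If convergence problems appear, the coefficient is typically reduced within this window; if visible interpenetration appears, it is increased.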
Lagrangian Contact
In the Lagrangian contact solution method, the contact interaction between the bodies is handled via additional Lagrange equations that account for the contact conditions. As opposed to the penalty method, the contact equations are solved exactly, so no penetration between the contact faces can occur.
Although the Lagrangian contact generally gives more accurate results than the penalty contact, it is not as robust. In addition, the extra Lagrange equations introduce new DOFs, which increases the system size and thus the solution time.
Conflict Resolution and Optimization
The two surfaces that are in contact are classified as master and slave. Every node in the slave surface (slave nodes) is tied to a node in the master surface (master node) by a constraint.
Please be aware that one face cannot be a slave face of several contact definitions simultaneously.
Generally, the more refined of the two periodic boundary surfaces should be chosen as the slave. In the case of a cyclic symmetry this will usually not matter, since both faces should be meshed with nearly the same element sizes.
There are some general rules that help you decide which of the contact faces or sets to choose as master and which as slave entities. These rules do not work in every case, but they provide a good starting point. Choose as slave entities the face(s) that are:
considerably smaller than their counterpart.
strongly curved compared to the other part of the contact pair.
not as stiff as the other part, especially if the other part is even rigid.
meshed considerably finer than their counterpart.
Automatic contact detection tries to find an optimized solution, so it is preferable to rely on automatic contact detection instead of constraining the system manually. Conflicting contacts are marked with a warning icon in the contact list. A more detailed description of the conflict type and how to resolve it can be found at the top of the contact settings panel.
Another warning is shown on run creation in case of remaining conflicts, along with an additional check that detects under-constrained parts of the system.
In case conflicts cannot be resolved manually or by automatic contact detection, consider imprinting your CAD geometry.
Interfaces in Conjugate Heat Transfer
In a CHT analysis, an interface defines the physical behavior between the common boundaries of two regions that are in contact, e.g. solid-solid, or solid-fluid.
Automatic Interface Detection
When creating a new CHT simulation, all possible interfaces will automatically be detected and populated in the simulation setup tree. Interfaces will be grouped together and defined as Coupled thermal interface with No-slip velocity condition.
How To Modify Specific Interfaces?
Individual interfaces or a group of interfaces can be filtered via entity selection. Select the entities (faces or solids) for which you want to select all interfaces that exist between them. Then choose the "Filter contacts by selection" option in the viewer context menu or in the simulation setup tree.
Specific interfaces can be selected individually or in bulk by selecting one or multiple entities in the viewer and then using the 'Filter contacts by selection' option. In the example above, all interfaces between the processor and its heat sink can be retrieved by selecting both the heat sink and the processor entity and then using the 'Filter contacts by selection' option.
All interfaces that exist between two of the selected entities will be bulk selected and exposed in the contact tree individually.
All interfaces that are returned by the filter will be selected in bulk and exposed individually in the contacts tree. By customizing their settings, individual interfaces will stay exposed in the tree.
It is also possible to select only one entity before filtering, which will return all interfaces between this entity and any other entity in the model.
Interfaces which differ in settings from the standard bulk interfaces group will stay exposed individually in the simulation setup tree.
Partial Contacts
An interface is required to always be defined between two congruent surfaces, meaning that these surfaces must have the same area and overlap completely. After contact detection, the platform will also perform a check for partial contacts. If partial contacts are detected, the platform will show a warning and recommend an imprinting operation.
Partial contact warning after automatic contact detection in Conjugate Heat Transfer analyses.
Imprinting is a single-click operation built into SimScale, which splits existing faces into smaller ones in order to guarantee perfect overlap between contacting faces. It is recommended to perform an imprint operation in order to guarantee accurate heat transition modeling for the simulation.
By default, any detected partial contact will be defined as an adiabatic interface, and not participate in heat conduction unless specified otherwise.
Contact Detection Errors
As all possible interfaces are detected automatically, it is no longer possible to manually add an interface or to change the entity assignment for a specific interface. In case no interfaces can be detected automatically, SimScale will show an error message.
It is not possible to continue with the current simulation setup in case automatic contact detection fails for the currently assigned geometry. Investigate your CAD model and ensure that contacting parts are indeed in contact.
In this case, it is not possible to create a mesh or start a simulation run for this simulation. Instead, the CAD model needs to be investigated for potential errors which prevent successful contact detection. Please reach out to support via email or chat in case you encounter this issue.
The Velocity options define the fluid velocity conditions at the interface. For each interface, the momentum (velocity) profile can be set to either slip or no-slip condition. If the interface is between two solids, this option is irrelevant.
By default, the velocity profile is set to no-slip condition, which imposes a friction wall (or real wall) condition by setting the velocity components (tangential and normal) to Zero value at the interface.
$$V_t=V_n=0$$
The 'slip' option imposes a frictionless wall condition. In this case, the tangential velocities at the interface are adjusted according to the flow conditions, while the normal component is zero.
The Thermal options define the heat exchange conditions at the interface. The five Thermal types available for the interfaces are reported below:
Coupled
The coupled thermal interface models perfect heat transfer across the interface. This is the default setting in case an interface is not defined by the user.
Adiabatic
In this case, thermal energy cannot be exchanged between the domains across the interface.
Total Resistance
The Total Resistance interface allows users to model an imperfectly matching interface (e.g. due to the surface roughness) which reduces the heat exchange across it. The total resistance is defined as:
$$R = \frac{1}{K A} = \frac{1}{\frac{\kappa}{t} A}$$
where K [W/m²K] is the specific conductance of the layer, A [m²] is the interface area, κ [W/mK] is its thermal conductivity and t [m] is its thickness.
It is worth noting that the area of the interface appears in the definition, so this option must be assigned only to the relevant face. Suppose, for example, that a heat exchanger is being simulated and the effect of solid sediment on the tube walls is known only as a total resistance. A first simulation shows that the heat exchange performance is insufficient, so the length of the tubes is increased. The new simulation will only be correct if the total resistance is changed according to the new area of the tubes.
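The area dependence can be made concrete with a small, purely illustrative calculation (the layer properties and areas below are invented values, not SimScale defaults):

```python
def specific_conductance(kappa, t):
    # K = kappa / t  [W/m2K], for a layer of conductivity kappa [W/mK] and thickness t [m]
    return kappa / t

def total_resistance(kappa, t, area):
    # R = 1 / (K * A)  [K/W]
    return 1.0 / (specific_conductance(kappa, t) * area)

kappa, t = 0.6, 1.0e-3          # hypothetical sediment layer: 0.6 W/mK, 1 mm thick
area_old, area_new = 2.0, 3.0   # m2: tube area before and after lengthening the tubes
print(total_resistance(kappa, t, area_old))  # ~8.3e-4 K/W, value for the original design
print(total_resistance(kappa, t, area_new))  # ~5.6e-4 K/W, value to enter after the redesign
```

The same physical layer therefore corresponds to a different total resistance once the tube area changes, which is why the value entered in the simulation has to be updated.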
Specific Conductance
This interface type is very similar to the Contact Interface material (below). It only requires users to set the specific conductance of the interface which is defined as:
$$K = \frac{\kappa}{t}$$
with thickness t [m] and thermal conductivity κ [W/mK] between the two interface regions.
For instance, this option may be used for an interface where the layer thickness is negligible or unknown, i.e., a radiator for which the paint coating's specific conductance may be given instead of its thickness and κ.
Contact Interface Material
The contact interface material allows modelling a layer with thickness t and thermal conductivity κ between the two interface regions.
For example, it is possible to model the thermal paste between a chip and a heat sink without needing to resolve it in the geometry. The latter operation is usually a problem, considering that the thickness of these layers is two or three orders of magnitude smaller than other components in the assembly.
CAD and Mesh Requirements
A CHT simulation always requires a multi-region mesh. As far as the mesh is concerned, it is fundamental that the cell size at the interface is similar between the two faces. As a rule of thumb, the cells on one face should be less than 1.5 times the size of the others. The figure below shows an example of this issue. In the left case, the cells at the interface on the inner region are too small with respect to those on the outer body. In the case on the right side, the cells on the interface are approximately the same size.
Left: Cell sizes at the interface do not match closely enough to ensure a robust simulation run. Right: Cell sizes are matching closely. This is the intended multi-region mesh interface for use in a CHT analysis.
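A quick way to apply this 1.5-factor rule of thumb when reviewing a mesh is a check of the following kind (the cell sizes are hypothetical):

```python
def interface_cell_sizes_match(size_a, size_b, max_ratio=1.5):
    # Representative cell sizes on the two sides of a CHT interface should
    # differ by less than a factor of max_ratio (rule of thumb: 1.5).
    big, small = max(size_a, size_b), min(size_a, size_b)
    return big / small < max_ratio

print(interface_cell_sizes_match(2.0e-3, 2.4e-3))  # True  -> acceptable
print(interface_cell_sizes_match(1.0e-3, 2.5e-3))  # False -> refine the coarser side
```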
Last updated: November 11th, 2019
Previous article: Contacts
Skip to next topic: Post-Processing | CommonCrawl |
BMC Veterinary Research
Seasonal fluctuations in body weight during growth of Thoroughbred racehorses during their athletic career
Yuji Takahashi (ORCID: orcid.org/0000-0002-2139-6142)1 &
Toshiyuki Takahashi1
BMC Veterinary Research volume 13, Article number: 257 (2017)
Domesticated horses adapt to environmental conditions through seasonal fluctuations in their metabolic rate. The seasonal change of the metabolic rate of domesticated horses kept on pasture is documented. However, there are few investigations of seasonal body weight change in domesticated horses housed in stables, which receive a constant energy intake throughout the year. Both the seasonal changes and the gain in body weight of racehorses during their athletic career are even less well known, because body weight is not measured in most countries. Here, we used a seasonal-trend decomposition method to conduct a time series analysis of the body weight of Thoroughbred racehorses participating in flat races held by the Japan Racing Association from 1 January 2002 to 31 December 2014.
We acquired 640,431 body weight measurements for race starts and included 632,540 of these in the time series analysis. Based on seasonal component analysis, the body weight of male and gelding horses peaked in autumn and winter and reached its nadir in summer. In contrast, the body weight of female horses peaked in autumn and reached its nadir in spring. Based on trend component analysis, most of the increase in body weight occurred before horses of all sexes approached 5 years of age. The slope of the body weight gain was smaller after that, and a gain of approximately 30 kg was observed over their careers.
These results indicate that the body weight of a Thoroughbred racehorse fluctuates seasonally, and that there may be sex differences in energy balance mechanisms. Moreover, the present results suggest that the physiological development of Thoroughbred racehorses is completed just before they reach 5 years of age.
Seasonal changes in food availability and quality are inevitable. Because of changes in forage in the wild, the body weight of a wild herbivore varies seasonally [1, 2]. Przewalski horses, an ancestor of domesticated horses, which live in a semi-natural environment, exhibit annual fluctuations in body weight that peak in autumn and reach the nadir in spring [3]. These fluctuations are associated with energy quality and quantity derived from dry matter intake or with dietary composition [4]. Further, the energy expenditure of Przewalski horses is low in winter to adjust to food shortages and high in spring owing to pregnancy [5]. Energy balance varies with seasonal fluctuations depending on the relationship between energy intake and expenditure [6]. Therefore, body weight increases during an energetically abundant season and decreases during an energetically deficient season [3, 6].
In contrast to wild animals, the amount of forage available to domesticated animals is not affected by the season. However, recent studies reveal that domesticated mammals can change their energy expenditure depending on seasonal environmental changes such as temperature or photoperiod [7,8,9]. For example, the metabolic rate of a Shetland pony mare is high in summer and low in winter, according to climate change [8]. This seasonal metabolic change suggests that the body weight of a domesticated horse can change with seasonal fluctuations that are high in winter and low in summer, despite constant feeding. Although some studies investigated seasonal changes of the body weight of domesticated horses, including racehorses [7, 8, 10,11,12], most of the horses were kept on pastures with amounts of forage that varied between seasons. Therefore, little is known about seasonal changes of body weight that occur when energy intake is constant throughout the year.
In the equine industry, many investigations report growth curves from birth to the yearling stage [13,14,15,16] as well as the change in the body weight of mares [10, 17]. These studies show how horses grow from birth to approximately 700 days of age and how body weight changes during reproduction. This information would be helpful for providing appropriate nutrition during these times to prevent diseases such as laminitis or metabolic syndrome [3, 18, 19].
During a horse's athletic career, controlling body weight is considered important, because body weight can be associated with the risk of injury [20] or performance level [21]. From the perspective of skeletal development, we can expect an increase in body weight until the horse approaches 5 years of age, the time of the latest epiphysis closure [22]. However, few published investigations measured body weight changes over several years during a horse's athletic career [11] because body weight is not measured in most countries, except for some Asian countries, including Japan.
In Japan, there is no off-season for racing, and about 65 flat races, from Maiden Races up to Group One (the highest class), are held every week throughout the year. All horses are given an official birthday of January 1st to keep the age groups easily defined for race conditions. Horses aged 2 years are allowed to debut in June, although most horses that have not won a Maiden Race by the end of September of their 3-year-old season are forced to retire due to lack of prize money. The Japan Racing Association (JRA) records body weight data for all horses participating in races. Therefore, the analysis of these data provides an opportunity to improve our understanding of the growth and seasonal body weight changes of racehorses.
Although Cho et al. investigated the average body weight of Thoroughbred horses of all sexes according to month and age separately [11], it is difficult to identify the component that determines body weight change during growth. This is because body weight during growth can be affected by seasonal effects, as described above, or by growth, genetic effects, nutrition, and so on. Therefore, dividing body weight data into seasonal and growth components is valuable and could have important implications for planning the nutritional management of athletic horses to prevent injury, increase performance, or both.
To investigate how the body weight of Thoroughbred racehorses changes during their careers, we conducted a time-series analysis by dividing the data into seasonal, trend and remainder components. We hypothesized that body weight changes between seasons and increases just before the age of 5 years.
We acquired body weights of racehorses at flat races held by the JRA from 1 January 2002 to 31 December 2014 from the JRA's official database. Permission to use this dataset for the present study was given by the Equine Research Institute of JRA. Each year, approximately 3400 flat races are held by the JRA, including 48,000 race starts. The JRA operates racecourses at Sapporo, Hakodate, Fukushima, Niigata, Tokyo, Nakayama, Chukyo, Kyoto, Hanshin and Kokura, which range in latitude from N 34° to N 43°.
In the present study, all horses were stabled at the Miho (N 36°, E 140°) or the Ritto (N 35°, E 136°) Training Centre in Japan for at least 10 days before races. Except during training, the horses were housed individually in stalls (2.8 × 4.0 m) under natural photo-thermoperiod conditions and at the ambient temperature. Training was typically performed for 90 to 120 min each day, 6 days a week, and workout training was performed once or twice each week; this is classified as very heavy work according to the National Research Council [23]. Stable staff controlled the care of the racehorses. However, we recommended to staff that the nutritional energy requirements of the horses should be calculated according to the following equation, and stable staff should feed the horses accordingly. The equation, which is for horses undergoing very heavy work, is as follows [23]:
$$ \mathrm{Digestible}\ \mathrm{energy}\ \left(\mathrm{Mcal}/\mathrm{day}\right)=\left(0.0363\times \mathrm{body}\ \mathrm{weight}\right)\times 1.9 $$
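For instance, applying this equation to a hypothetical 480 kg racehorse (a value close to the mature male mean reported in the Results) gives roughly 33 Mcal/day:

```python
def digestible_energy_mcal_per_day(body_weight_kg):
    # NRC (2007) requirement for horses in very heavy work: DE = (0.0363 * BW) * 1.9
    return 0.0363 * body_weight_kg * 1.9

print(round(digestible_energy_mcal_per_day(480), 1))  # 33.1 Mcal/day for a 480 kg horse
```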
The horses received their daily quantity of food as either two or three meals and water was available ad libitum in each stall. Generally, the dietary treatment consisted of mixed feed at 0.8%–1.2% body weight, oats at 0.5%–1.0% body weight, timothy or alfalfa hay at 1%–1.25% body weight as fresh matter, and small quantities of vitamin and mineral supplements; however, it is likely that the quantity of feed provided would differ somewhat between the stables. There was no pasture at either Training Centre. Because training and general animal care were performed by stable staff not associated with the research team, the feeding practice was not controlled precisely. All horses were transported to the racecourse in a horse trailer on the day before or the day of the race. All racehorses were Thoroughbreds, and their body weights were measured approximately 80 min before post time (i.e., the time the horses entered the starting gate). Body weight was measured at the racecourse using regularly calibrated electronic scales and recorded to the nearest 2 kg. Body weight, age and sex of each horse were recorded. The average body weight and standard error in each month were calculated for time series analysis. The JRA allows horses, which are 2 years of age, to debut in June. Therefore, data were collected after that time. We excluded data for horses ≥8 years of age because of the relatively small sample size of females and geldings. Further, the data for 2-year-old geldings between June and September were not used because of small sample size.
To investigate the seasonal change of body weight and growth as a function of age, we used a seasonal-trend decomposition procedure based on locally weighted regression (STL), which decomposes the time series into seasonal, trend and remainder components [24]. STL is a filtering procedure that iteratively applies locally weighted regression to the observations in moving time windows, which allows large amounts of trend and seasonal smoothing [24]. For the smoothing parameters, we used the algorithm implemented in the R language. We used STL to investigate seasonal cycles and trends for males, geldings and females, applying it first to all the data and then, in a subgroup analysis, to horses aged 2–4 years and those over 5 years for each sex. These analyses were performed using R software, version 2.13.0 [25].
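The decomposition step can be reproduced with standard tools. The authors used the stl() implementation in R; the sketch below uses the equivalent STL class from Python's statsmodels on synthetic monthly means, purely to illustrate how the seasonal, trend and remainder components are obtained. It is not the original analysis code, and the data are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Hypothetical stand-in for the monthly mean body weight (kg) of one sex:
# a slow growth trend plus an annual cycle plus noise, purely for illustration.
months = pd.date_range("2002-06-01", periods=79, freq="MS")  # June 2002 - December 2008
rng = np.random.default_rng(0)
weight = (460.0 + 0.4 * np.arange(79)
          + 4.0 * np.sin(2.0 * np.pi * np.arange(79) / 12.0)
          + rng.normal(0.0, 1.0, 79))
series = pd.Series(weight, index=months)

# Loess-based seasonal-trend decomposition with a 12-month period.
result = STL(series, period=12, robust=True).fit()
seasonal, trend, remainder = result.seasonal, result.trend, result.resid
print(seasonal.head())
print(trend.head())
print(remainder.head())
```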
We acquired 640,431 body weight measurements for race starts between 2002 and 2014. This included horses that competed in several races (range 1–84 races; median, 6 races). The age of the horses ranged from 2 to 13 years. For STL decomposition, we used 632,540 body-weight measurements of race starts for horses ranging in age from 2 to 7 years, comprising 377,301 males, 19,100 geldings (except for horses 2 years of age from June to September) and 236,139 females. The distribution of age in each month is shown in Table 1. Figs. 1, 2 and 3 show the STL decomposition analyses of body weight for all ages according to sex. The data split by age subgroups is shown in Table 2. Table 3 shows the mean average monthly temperature during the study period at the Miho and Ritto Training Centres.
Table 1 Distribution of horses by age and sex competing in Japan Racing Association races, by month
Seasonal-trend decomposition analysis of the body weights of male racehorses. The data are for male horses aged 2–7 years that competed in races held by the Japan Racing Association in the years 2002–2014 (n = 377,301). a The mean body weight of the horses by age and month. b The seasonal component. c The trend component. d The remainder component after fitting the seasonal and trend components. The dotted vertical lines indicate the month each year when body weight peaked, and the dashed vertical lines indicate the month when the body weight was at its nadir. The grey bars to the right of each panel show the relative scales of the components. Each grey bar represents the same length, but because the plots are on different scales, the bars vary in size
Seasonal-trend decomposition analysis of the body weights of gelding racehorses (n = 19,100). a The mean body weight of the horses by age and month. b The seasonal component. c The trend component. d The remainder component after fitting the seasonal and trend components. For further details, see the legend for Fig. 1
Seasonal-trend decomposition analysis of the body weights of female racehorses (n = 236,139). a The mean body weight of the horses by age and month. b The seasonal component. c The trend component. d The remainder component after fitting the seasonal and trend components. For further details, see the legend for Fig. 1
Table 2 The months corresponding to the peaks and nadirs of body weight, by age subgroup
Table 3 The average monthly temperatures (°C) at the Miho and Ritto Training Centres (mean ± SEM values for 2002–2014)
The mean body weight of the horses increased from the time of their debuts, with seasonal fluctuations, in all sexes. Over the course of their athletic careers, the mean body weights changed as follows: males, from 461 ± 0.7 kg to 493 ± 0.6 kg; geldings, from 458 ± 2.5 kg to 484 ± 1.7 kg; and females, from 442 ± 0.7 kg to 472 ± 2.4 kg (Figs. 1a, 2a and 3a, respectively). For the male horses, the seasonal component shown in Fig. 1b indicates that their body weight increased in winter, peaked in January and decreased in summer to its lowest value in August. The body weight changes in geldings were similar to those of the intact male horses, although they showed broader peaks in autumn and winter. The body weight of the geldings reached its peak in November and decreased to its lowest value in July (Fig. 2b). The seasonal pattern for the female horses differed from those of the males and geldings. The seasonal component shows that the body weight of females increased in autumn and winter, particularly in October, and decreased in spring, particularly in March (Fig. 3b). The magnitudes of the seasonal fluctuations were 7 kg, 8 kg and 6 kg for the males, geldings and females, respectively.
The plots of the STL trend components show that body weight increased approximately linearly from the horse's debut until nearly 5 years of age in all sexes (Figs. 1c, 2c, 3c). The trend component in all sexes increased by around 20 kg up to the end of 4 years of age, and by a further 5 kg (i.e., an overall 25 kg increase) at the end of 7 years of age.
In males, there was the same seasonal pattern in both younger horses and older horses, which reached its peak in January and nadir in August (Table 2). In geldings, the peak month was October for younger horses and December for older horses and the nadir month was June in both groups (Table 2). In females, the peak months for younger and older horses were October and September, respectively, and the nadir months were April and February (Table 2).
The present study demonstrates that the body weight of a Thoroughbred racehorse that consumes a relatively constant diet varied circannually. The present study shows that a horse's sex is associated with the seasonal cycle of body weight. Furthermore, although body weight increased up to approximately 7 years of age, most of the increase occurred before the age of approximately 5 years.
Body weight change depends on energy balance, which is defined as the relationship between energy intake and energy expenditure [26, 27]. This applies to horses [28]. The energy intake of Przewalski horses, which were originally wild herbivores of Central Asia, peaks in autumn and reaches its nadir in late winter [4], whereas energy expenditure is particularly low in winter compared with that in spring and summer [5]. The energy balance of Przewalski horses is positive in autumn, indicating that energy intake is higher than expenditure, and is negative in spring, indicating lower energy intake compared with expenditure. The consequence of these seasonal energy balances would lead to a seasonal change of body weight, which is high in autumn and low in spring [3].
In the present study, the body weights of the Thoroughbred male and gelding racehorses that received sufficient annual nutrition showed seasonal variations that peaked in autumn and winter and reached their nadir in summer. Unlike the Przewalski horses, the racehorses should have received a constant amount of food throughout the year. However, a limitation of the present study is its observational design. Although it is recommended that racehorses should be fed diets according to the results of the equation described in the Methods section, the racehorses studied here were managed by individual stable staff and so we lacked detailed information of the actual energy intake of each horse during the study period. Although establishing the detailed energy intake information by precise feeding control would be ideal, investigating a large sample size of racehorses throughout their athletic career would be difficult. Indeed, because this study analyzed data for a very large number of horses over a period of 13 years, any local or short-term variations in feeding would be expected to be "averaged out" and unlikely to result in noticeable effects. To produce a systematic seasonable effect as we observed would require a consistent, widespread seasonable variation in feed provision that lasted throughout the study period. However, we are unaware of any stable staff changing the feeding protocol according to the season (personal communication). In addition, the annual fluctuation in the body weight of the horses in this study was within 8 kg, which is equivalent to less than 2% of their mean body weight. This compares with 22 kg fluctuations in the annual mean amplitude of the body weight of Przewalski horses in a semi-natural condition, equivalent to 7% of their mean body weight [3]. This suggests that the horses in the present study were fed more consistently than horses housed under semi-natural conditions.
Thus, the results of this study suggest that male and gelding racehorses were energetically abundant in autumn and winter, and deficient in summer. Under the assumption that there was little change in energy intake, these results are most likely due to seasonal changes in energy expenditure (high in summer and low in winter); this is consistent with a previous report [8].
Seasonal environmental factors such as temperature or photoperiod may explain the seasonal changes in metabolic rate. The range in average monthly temperature over the year at both training centres was about 23 °C, with the highest temperatures in August and the lowest in January. The horses were stabled at the ambient temperature, so the high temperatures in summer may have increased their metabolic rate and the low temperatures in winter may have reduced it, which would be consistent with the results of previous studies [8, 29, 30]. Although a long photoperiod induces a higher metabolic rate than a shorter photoperiod in rats and cats [9, 31], there are no published studies, to our knowledge, on the direct relationship between photoperiod and energy expenditure by horses. Further studies are required to confirm this potential relationship.
In contrast to male and gelding horses, the body weight of female horses peaked in autumn and reached the nadir in spring, consistent with a previous report [11]. These findings suggest that female horses employ a mechanism that maintains energy balance that differs from that of male and gelding horses. The sexual cycle of female horses may provide an explanation. For example, the locomotor activity of female Thoroughbred horses is higher during breeding season [32]. The metabolic rate can be higher in breeding season, similar to behavioural changes, because Thoroughbred racehorses stabled in the training centres maintain their oestrous cycle during training [33]. Further studies are required to identify sex-specific mechanisms that regulate body weight.
The seasonal fluctuations in body weight may have important implications for equine clinicians, racehorse trainers, or both. Our previous study shows that racehorses that are heavy at race time are at higher risk of superficial digital flexor tendon injury compared with horses that weigh less [20]. Further, increases in body weight might affect the results of the submaximal exercise test [21]. Therefore, equine clinicians and racehorse staff should adjust the amount of feed according to seasonal body weight changes such as the reduction of feeding amount in autumn and winter in males and geldings, and in autumn in females.
We show here that the body weight of Thoroughbred racehorses increased 25 to 30 kg during their athletic career. According to the trend component, most of the increase was observed until the horses approached approximately 5 years of age. To our knowledge, this is the first report that determined the time interval associated with the increase in body weight of Thoroughbred racehorses during their athletic career. This finding is reasonable because the last epiphysis closure in a cervical vertebra occurs at the age of 4–5 years [22], indicating that horses mature by the end of their fourth year. Moreover, racing performance or average racing speed peaks at approximately 4.5–5 years of age [34, 35]. Together, the present and previous studies indicate that the physiological development of Thoroughbred racehorses is complete just before 5 years of age. It would be interesting to investigate the association between body weight increase and anatomical development, such as the development of bones and joints. However, this would be difficult to achieve for a large sample size. The mean body weights of the Thoroughbred racehorses at the completion of physiological development were approximately 490 kg, 480 kg and 465 kg for males, geldings and females, respectively. We propose the use of these values as standards for Thoroughbred racehorses in Japan.
The remainder component is an indication of the residual data after subtracting the seasonal and trend components [24] and shows variations due to random features. However, there appeared to be a small element of seasonality in the remainder component for the young male horses, especially those aged 2–3 years (Fig. 1d). Another age-related factor was that there were fewer older horses than younger ones, especially among the females (Table 1). It is possible that older and younger horses had different seasonal patterns, and that the differing numbers of horses in each age group affected the results. For these reasons, we divided the data into two groups for each sex, younger horses (2–4 years) and older horses (those over 5 years). However, the results showed almost the same seasonal pattern as the undivided datasets, with the body weight of the males and geldings reaching a peak in autumn and winter and their nadir in summer, whereas the body weight of the females peaked in autumn and was at its lowest in spring, although the divided data showed a slightly wider range than the combined data for all ages. These additional results suggest that the datasets were appropriately divided into seasonal, trend and remainder components.
We should take care to correctly interpret the data acquired from the analysis of body weight change that occurs each month or season. For example, overall average body weight in JRA may be lower in June compared with May, because 2-year-old horses with lighter body weights compared with older horses are admitted to the JRA in June. To avoid such a misleading analysis, we applied STL techniques to body weight data classified according to age, which is a useful technique in diverse disciplines [36, 37]. The STL technique is an effective tool for visualizing and clarifying time series events by dividing them into seasonal, trend and remainder components [24]. We used this technique here to identify the seasonal change associated with growth.
Due to the JRA system described in the Introduction, the age distribution of the racing population changed over the course of the season. In particular, after August the number of 2-year-old horses increased while the number of 3-year-old horses decreased. If certain seasons had been reserved for particular ages or sexes, the racing system could have played a role in the observed body weight changes. For example, if autumn and winter races had been held only for 2-year-old horses, the reduced workload for horses aged 3 years and over might have caused an overall increase in their body weight. However, JRA races from Maiden to Group One are held every week regardless of age and sex. Therefore, we speculate that the JRA racing system did not affect our interpretation.
The present study shows that Thoroughbred racehorses exhibited annual rhythms of body weight, suggesting that they maintain a seasonal energy balance. Further, there was a sex difference in the seasonal pattern of body weight changes, that is, the body weights of the male horses and geldings peaked in autumn and winter and reached their nadir in summer whereas the body weight of the female horses peaked in autumn and reached its nadir in spring. Additionally, most of the increase of body weight of Thoroughbred racehorses during their athletic careers is completed just before the age of 5 years. These results should be useful for optimizing the nutritional management of athletic horses.
Mitchell B, McCowan D, Nicholson IA. Annual cycles of body weight and condition in Scottish Red deer. J Zool. 1976;180(1):107–27.
Adamczewski JZ, Flood PF, Gunn A. Seasonal patterns in body composition and reproduction of female muskoxen (Ovibos moschatus). J Zool. 1997;241(2):245–69.
Scheibe KM, Streich WL. Annual rhythm of body weight in Przewalski horses (Equus ferus przewalskii). Biol Rhythm Res. 2003;34(4):383–95.
Kuntz R, Kubalek C, Ruf T, Tataruch F, Arnold W. Seasonal adjustment of energy budget in a large wild mammal, the Przewalski horse (Equus ferus przewalskii) I. Energy intake. J Exp Biol. 2006;209(22):4557–65.
Arnold W, Ruf T, Kuntz R. Seasonal adjustment of energy budget in a large wild mammal, the Przewalski horse (Equus ferus przewalskii) II. Energy expenditure. J Exp Biol. 2006;209(22):4566–73.
Parker KL, Gillingham MP, Hanley TA, Robbins CT. Foraging efficiency: energy expenditure versus energy gain in free-ranging black-tailed deer. Can J Zoolog. 1996;74(3):442–50.
Brinkmann L, Gerken M, Riek A. Adaptation strategies to seasonal changes in environmental conditions of a domesticated horse breed, the Shetland pony (Equus ferus caballus). J Exp Biol. 2012;215(7):1061–8.
Brinkmann L, Gerken M, Hambly C, Speakman JR, Riek A. Saving energy during hard times: energetic adaptations of Shetland pony mares. J Exp Biol. 2014;217(24):4320–7.
Kappen KL, Garner LM, Kerr KR, Swanson KS. Effects of photoperiod on food intake, activity and metabolic rate in adult neutered male cats. J Anim Physiol Anim Nutr. 2014;98(5):958–67.
Fitzgerald BP, McManus CJ. Photoperiodic versus metabolic signals as determinants of seasonal anestrus in the mare. Biol Reprod. 2000;63(1):335–40.
Cho KH, Son SK, Cho BW, Lee HK, Kong HS, Jeon GJ, et al. Effects of change of body weight on racing time in Thoroughbred racehorses. J Anim Sci Technol. 2008;50(6):741–6.
Giles SL, Rands SA, Nicol CJ, Harris PA. Obesity prevalence and associated risk factors in outdoor living domestic horses and ponies. Peer J. 2014;2:e299.
Staniar WB, Kronfeld DS, Treiber KH, Splan RK, Harris PA. Growth rate consists of baseline and systematic deviation components in Thoroughbreds. J Anim Sci. 2004;82(4):1007–15.
Brown-Douglas CG, Parkinson TJ, Firth EC, Fennessy PF. Bodyweights and growth rates of spring-and autumn-born Thoroughbred horses raised on pasture. New Zeal Vet J. 2005;53(5):326–31.
Morel PCH, Bokor A, Rogers CW, Firth EC. Growth curves from birth to weaning for Thoroughbred foals raised on pasture. New Zeal Vet J. 2007;55(6):319–25.
Onoda T, Yamamoto R, Sawamura K, Inoue Y, Matsui A, Miyake T, et al. Empirical growth curve estimation using sigmoid sub-functions that adjust seasonal compensatory growth for male body weight of Thoroughbred horses. J Equine Sci. 2011;22(2):37–42.
Lawrence LM, DiPietro J, Ewert K, Parrett D, Moser L, Powell D. Changes in body weight and condition of gestating mares. J Equine Vet Sci. 1992;12(6):355–8.
Geor RJ. Metabolic predispositions to laminitis in horses and ponies: obesity, insulin resistance and metabolic syndromes. J Equine Vet Sci. 2008;28(12):753–9.
Wylie CE, Collins SN, Verheyen KLP, Newton JR. Risk factors for equine laminitis: a case-control study conducted in veterinary-registered horses and ponies in Great Britain between 2009 and 2011. Vet J. 2013;198(1):57–69.
Takahashi T, Kasashima Y, Ueno Y. Association between race history and risk of superficial digital flexor tendon injury in Thoroughbred racehorses. J Am Vet Med Assoc. 2004;225(1):90–3.
Ellis JM, Hollands T, Allen DE. Effect of forage intake on bodyweight and performance. Equine Vet J Suppl. 2002;34:66–70.
Butler JA, Colles CM, Dyson SJ, Kold SE, Poulos PW. The spine. In: Clinical radiology of the Horse. Oxford: Blackwell Scientific; 1993. p. 355–98.
NRC. Nutrient requirement of horses 6th edition (online). Washington, DC: National Academy Press; 2007. https://www.nap.edu/read/11653/chapter/3. Accessed 3 Dec 2016.
Cleveland RB, Cleveland WS, McRae JE, Terpenning I. STL: a seasonal-trend decomposition procedure based on loess. J Off Stat. 1990;6(1):3–73.
R Development Core Team, 2011; R: a language and environment for statistical computing. Vienna; R Foundation for Statistical Computing. ISBN 3-900051-07-0. http://www.r-project.org. Accessed 7 June 2017.
Edholm OG, Adam JM, Best TW. Day-to-day weight changes in young men. Ann Hum Biol. 1974;1(1):3–12.
Ma Y, Olendzki BC, Li W, Hafner AR, Chiriboga D, Hebert JR, et al. Seasonal variation in food intake, physical activity, and body weight in a predominantly overweight population. Eur J Clin Nutr. 2006;60:519–28.
Amato C, Martin L, Dumon H, Jaillardon L, Nguyen P, Siliart B. Variations of plasma leptin in show horses during a work season. J Anim Physiol Anim Nutr. 2012;96(5):850–9.
Brinkmann L, Gerken M, Riek A. Seasonal changes of total body water and water intake in Shetland ponies measured by an isotope dilution technique. J Anim Sci. 2013;91(8):3750–8.
McBride GE, Christopherson RJ, Sauer W. Metabolic rate and plasma thyroid hormone concentrations of mature horses in response to changes in ambient temperature. Can J Anim Sci. 1985;65(2):375–82.
Boon P, Visser H, Daan S. Effect of photoperiod on body mass, and daily energy intake and energy expenditure in young rats. Physiol Behav. 1997;62(4):913–9.
Bertolucci C, Giannetto C, Fazio F, Piccione G. Seasonal variations in daily rhythms of activity in athletic horses. Animal. 2008;2(7):1055–60.
Takahashi Y, Akai M, Murase H, Nambo Y. Seasonal changes in serum progesterone levels in Thoroughbred racehorses in training. J Equine Sci. 2015;26(4):135–9.
Gramm M, Marksteiner R. The effect of age on thoroughbred racing performance. J Equine Sci. 2010;21(4):73–8.
Takahashi T. The effect of age on the racing speed of Thoroughbred racehorses. J Equine Sci. 2015;26(2):43–8.
Ohshige K. Reduction in ambulance transports during a public awareness campaign for appropriate ambulance use. Acad Emerg Med. 2008;15(3):289–93.
Lee HS, Levine M, Guptill-Yoran C, Johnson AJ, Kamecke P, Moore GE. Regional and temporal variations of Leptospira seropositivity in dogs in the United States, 2000–2010. J Vet Int Med. 2014;28(3):779–88.
The authors would like to thank Enago (www.enago.jp) for the English language review.
This study was supported by the Japan Racing Association.
The datasets analysed in this study are available from the corresponding author on request.
YT performed the analysis of data and drafted the manuscript. TT collected the data and revised the manuscript. We all read and approved the final manuscript.
Sports Science Division, Equine Research Institute, Japan Racing Association, 1400-4, Shiba, Shimotsuke, Tochigi, 329-0412, Japan
Yuji Takahashi & Toshiyuki Takahashi
Yuji Takahashi
Toshiyuki Takahashi
Correspondence to Yuji Takahashi.
Takahashi, Y., Takahashi, T. Seasonal fluctuations in body weight during growth of Thoroughbred racehorses during their athletic career. BMC Vet Res 13, 257 (2017). https://doi.org/10.1186/s12917-017-1184-3
Racehorse
Seasonal change
Sex difference
September 2017, 37(9): 4677-4696. doi: 10.3934/dcds.2017201
Stability of stationary solutions to the compressible bipolar Euler-Poisson equations
Hong Cai 1 and Zhong Tan 2,*
School of Mathematical Sciences, Xiamen University, Fujian, Xiamen 361005, China,
School of Mathematical Sciences and Fujian Provincial Key Laboratory, on Mathematical Modeling and Scientific Computing, Xiamen University, Fujian, Xiamen 361005, China
* Corresponding author: Zhong Tan, [email protected]
Received: April 2016. Revised: April 2017. Published: June 2017.
Fund Project: The authors are supported by National Natural Science Foundation of China-NSAF (No.11271305, 11531010)
In this paper, we study the compressible bipolar Euler-Poisson equations with a non-flat doping profile in three-dimensional space. The existence and uniqueness of the non-constant stationary solutions are established under the smallness assumption on the gradient of the doping profile. Then we show the global existence of smooth solutions to the Cauchy problem near the stationary state provided the $H^3$ norms of the initial density and velocity are small, but the higher derivatives can be arbitrarily large.
Keywords: Bipolar Euler-Poisson equations, stability, global solution, energy method.
Mathematics Subject Classification: Primary: 35M10, 35Q35; Secondary: 35Q60.
Citation: Hong Cai, Zhong Tan. Stability of stationary solutions to the compressible bipolar Euler-Poisson equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 4677-4696. doi: 10.3934/dcds.2017201
| CommonCrawl
A DNA methylation atlas of normal human cell types – Nature
Human tissue samples
Tissue dissociation and FACS sorting of purified cell populations
WGBS
WGBS computational processing
Genomic segmentation into multisample homogenous blocks
Segmentation and clustering analysis
Cell-type-specific markers
Enrichment for gene set annotations
Enrichment for chromatin marks
Motif analysis
Methylation marker–gene associations
A catalogue of unmethylated loci and putative enhancers for each cell type
Interindividual variation in cell type methylation
CTCF ChIP–seq analysis
Endodermal marker analysis
UXM fragment-level deconvolution algorithm
In silico simulation of WGBS deconvolution
WGBS deconvolution
Deconvolution of 450K array data
Human tissues were obtained from various sources, as detailed in Supplementary Table 1. The majority (148) of the 205 samples analysed were sorted from tissue remnants obtained at the time of routine, clinically indicated surgical procedures at the Hadassah Medical Center. In all cases, normal tissue distant from any known pathology was used. Surgeons and/or pathologists were consulted before removal of tissue to confirm that its removal would not compromise the final pathologic diagnosis in any way. For example, in patients undergoing right colectomy for carcinoma of the caecum, the most distal part of the ascending colon and most proximal part of the terminal ileum were obtained for cell isolation. Normal bone marrow was obtained at the time of joint replacement in patients with no known haematologic pathology. The patient population included 135 individuals (n = 60 males, n = 74 females) aged 3–83 years. The majority of donors were White. Approval for collection of normal tissue remnants was provided by the Institutional Review Board (IRB, Helsinki Committee), Hadassah Medical Center, Jerusalem, Israel. Written informed consent was obtained from each donor or legal guardian before surgery.
As described in Supplementary Table 1, some cells and tissues were obtained through collaborative arrangements: pancreatic exocrine and liver samples (cadaveric organ donors, n = 5) from M. Grompe, Oregon Health & Science University; adipocytes (subcutaneous adipocytes at time of cosmetic surgery following weight loss, n = 3), oligodendrocytes and neurons (brain autopsies, n = 14) from K. L. Spalding and H. Druid, Karolinska Institute, Stockholm; and research-grade cadaveric pancreatic islets from J. Shapiro, University of Alberta (n = 16). In all cases, tissues were obtained and transferred in compliance with local laws and after the approval of the local ethics committee on human experimentation. Sixteen cell types were obtained from commercial sources, including 15 from Lonza and one from Sigma-Aldrich. Three pancreatic islet preparations were obtained from the Integrated Islet Distribution Program (https://iidp.coh.org).
Fresh tissue obtained at the time of surgery was trimmed to remove extraneous tissue. Cells were dispersed using enzyme-based protocols optimized for each tissue type. The resulting single-cell suspension was incubated with the relevant antibodies and FACS sorted to obtain the desired cell type (Extended Data Fig. 2 and Supplementary Information).
Purity of live sorted cells was determined by messenger RNA analysis for key known cell-type-specific genes, whereas the purity of cells fixed before sorting was determined using previously validated cell-type-specific methylation signals (Extended Data Fig. 2c and Supplementary Information). DNA was extracted using the DNeasy Blood and Tissue kit (no. 69504, Qiagen) according to the manufacturer's instructions, and stored at −20 °C for bisulfite conversion and whole-genome sequencing.
Up to 75 ng of sheared genomic DNA was subjected to bisulfite conversion using the EZ-96 DNA Methylation Kit (Zymo Research), with liquid handling on a MicroLab STAR (Hamilton). Dual-indexed sequencing libraries were prepared using Accel-NGS Methyl-Seq DNA library preparation kits (Swift BioSciences) and custom liquid handling scripts executed on the Hamilton MicroLab STAR. Libraries were quantified using KAPA Library Quantification Kits for Illumina Platforms (Kapa Biosystems). Four uniquely dual-indexed libraries, along with the 10% PhiX v.3 library (Illumina), were pooled and clustered on an Illumina NovaSeq 6000 S2 flow cell followed by 150 bp, paired-end sequencing. Total read count and average sequencing depth (in read pairs), as well as percentage of CpGs, per sample, at 1× and 10×, are detailed in Supplementary Table 1. Also listed are average methylation levels, per sample, at CpG, nonCpG and CC dinucleotides. Intriguingly, sorted neuron samples showed higher CpA methylation (approximately 10%) compared with other samples (approximately 1%).
Paired-end FASTQ files were mapped to the human (hg19, hg38), lambda, pUC19 and viral genomes using bwa-meth (v.0.2.0) (ref. 51), then converted to BAM files using SAMtools (v.1.9) (ref. 52). Duplicated reads were marked by Sambamba (v.0.6.5) with parameters '-l 1 -t 16 --sort-buffer-size 16000 --overflow-list-size 10000000' (ref. 53). Reads with low mapping quality, or that were duplicated or not mapped in a proper pair, were excluded using SAMtools view with parameters '-F 1796 -q 10'. Reads were stripped of nonCpG nucleotides and converted to PAT files using wgbstools (v.0.1.0) (ref. 54).
We developed and implemented a multichannel dynamic programming segmentation algorithm to divide the genome into continuous genomic regions (blocks) showing homogeneous methylation levels across multiple CpGs for each sample (ref. 54). A generative probabilistic model is used, with each block inducing a Bernoulli distribution with some parameter $\theta_i^k$, where i is the block index and k the sample index (k = 1, …, K), and each observation (occurrence of one CpG on one sequenced fragment) is represented by a random variable sampled i.i.d. (independent and identically distributed) from the same Bernoulli distribution $\mathrm{Ber}(\theta_i^k)$. The log-likelihood of all sequencing data is the sum of log-likelihoods across all blocks, each decomposing as the sum of log-likelihoods across all samples. The log-likelihood of the ith block can therefore be formalized as:
$$\mathrm{score}(\mathrm{block}_i)=ll_i=\sum_{k=1}^{K}\Bigl((N_C)_i^k\,\log\bigl(\hat{\theta}_i^k\bigr)+(N_T)_i^k\,\log\bigl(1-\hat{\theta}_i^k\bigr)\Bigr)$$
where $(N_C)_i^k$ and $(N_T)_i^k$ are the numbers of methylated and unmethylated observations, respectively, in the ith block of the kth sample, and $\hat{\theta}_i^k$ is a Bayes estimator of the Bernoulli distribution parameter, calculated with pseudocounts $\alpha_C, \alpha_T$ for each block/sample:
$$\hat{\theta}_i^k=\frac{(N_C)_i^k+\alpha_C}{(N_C)_i^k+(N_T)_i^k+\alpha_C+\alpha_T}$$
These hyperparameters are used for regularization, to control the trade-off between overfitting (shorter blocks) and generalization (longer blocks). Dynamic programming is then used to find the optimal segmentation across the genome. Briefly, we maintain a 1 × N table T (N = 28,217,448 CpGs) for optimal segmentation scores across all prefixes. Specifically, T[i] holds the score of the optimal segmentation of all CpG sites from 1 through to i, and T[N] holds the final, optimal, score across the entire genome. The table itself is updated sequentially from 1 to N, where the optimal segmentation up to position i is achieved by the addition of a new block to a shorter optimal segmentation (for example, up to position i′):
$$T[i]=\max_{i'<i}\ \bigl\{\,T[i']+\mathrm{score}(\mathrm{block}[i'+1,\ldots,i])\,\bigr\}$$
For this, all previous optimal segmentations are considered and a new block is added from position (iʹ + 1) to position i (with a maximal block size of 5,000 bp). The combination that maximizes log-likelihood is selected as the optimal segmentation from 1 to i, and the start index of the last block is recorded in a traceback table. Once the score of optimal segmentation is calculated in T[N], the traceback table is used to retrieve the full segmentation. An upper bound on block length (5,000 bases) is set to improve running times and each chromosome is run separately. The linear distance between consecutive CpGs is ignored under this model. The model and segmentation algorithm are further described in Supplementary Information.
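To make the segmentation model concrete, the following Python sketch (not taken from wgbstools; function and parameter names such as max_block, alpha_c and alpha_t are illustrative) implements the block score and the dynamic-programming recurrence above for in-memory count arrays, with the block-length cap expressed in CpG sites rather than base pairs.

    import numpy as np

    def block_score(n_c, n_t, alpha_c=1.0, alpha_t=1.0):
        # Log-likelihood of one candidate block, summed over the K samples;
        # theta is the pseudocount-regularized Bayes estimator defined above.
        theta = (n_c + alpha_c) / (n_c + n_t + alpha_c + alpha_t)
        return float(np.sum(n_c * np.log(theta) + n_t * np.log(1.0 - theta)))

    def segment(meth, unmeth, max_block=50):
        # meth, unmeth: (N, K) arrays of methylated/unmethylated observation
        # counts per CpG site and sample. Returns (start, end) block index pairs.
        n_sites = meth.shape[0]
        T = np.full(n_sites + 1, -np.inf)        # T[i]: best score for sites 0..i-1
        T[0] = 0.0
        back = np.zeros(n_sites + 1, dtype=int)  # traceback: start of the last block
        for i in range(1, n_sites + 1):
            for j in range(max(0, i - max_block), i):
                s = T[j] + block_score(meth[j:i].sum(axis=0), unmeth[j:i].sum(axis=0))
                if s > T[i]:
                    T[i], back[i] = s, j
        blocks, i = [], n_sites                  # recover boundaries from the traceback
        while i > 0:
            blocks.append((back[i], i))
            i = back[i]
        return blocks[::-1]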
We segmented the genome into 7,104,162 blocks using wgbstools (with parameters 'segment --max_bp 5000') with all of the 205 samples as reference, and retained 2,099,681 blocks covering at least four CpGs. For hierarchical clustering (Fig. 2) we selected the top 1% of blocks (20,997) showing the highest variability in average methylation across all samples. Blocks with sufficient coverage of at least ten observations (calculated as sequenced CpG sites) across two-thirds of the samples were further retained. We then computed the average methylation for each block and sample using wgbstools ('--beta_to_table -c 10'), marked blocks with fewer than ten observations as missing values and imputed their methylation values using sklearn KNNImputer (v.0.24.2) (ref. 55). The 205 samples were clustered with the unsupervised agglomerative clustering algorithm (ref. 23), using scipy (v.1.6.3) (ref. 56) and the L1 norm. The fanning diagram was plotted using ggtree (v.2.2.4) (ref. 57).
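A minimal sketch of this filtering, imputation and clustering step is given below, assuming a precomputed blocks-by-samples matrix of average methylation (beta) and matching coverage counts (depth); the linkage method ('average') is an illustrative choice, since only the use of scipy with the L1 norm is stated above.

    import numpy as np
    from sklearn.impute import KNNImputer
    from scipy.cluster.hierarchy import linkage, dendrogram
    from scipy.spatial.distance import pdist

    def cluster_samples(beta, depth, min_obs=10, top_frac=0.01):
        # Keep blocks with >= min_obs observations in at least two-thirds of samples.
        covered = (depth >= min_obs).mean(axis=1) >= 2 / 3
        beta, depth = beta[covered], depth[covered]
        beta = np.where(depth >= min_obs, beta, np.nan)      # low coverage -> missing
        var = np.nanvar(beta, axis=1)
        top = np.argsort(var)[-int(len(var) * top_frac):]    # top 1% most variable blocks
        filled = KNNImputer().fit_transform(beta[top].T)     # rows = samples, impute missing
        return linkage(pdist(filled, metric="cityblock"), method="average")  # L1 distances

    # Z = cluster_samples(beta, depth); dendrogram(Z) then draws the sample tree.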
The 205 atlas samples were divided into 51 groups by cell type, yielding 39 basic groups and 12 composite supergroups (Supplementary Table 3). We then performed a one-versus-all comparison to identify differentially methylated blocks unique for each cell type. For this we used wgbstools' 'find_markers' function to first identify blocks covering at least five CpGs (length 10–1,500 bp) to calculate the average methylation per block/sample and rank the blocks according to the difference in average methylation between target samples versus all other samples. To allow some flexibility, this difference was computed (for unmethylated markers) as the difference between the 75th percentile in target samples (typically allowing one outlier) versus the 2.5th percentile in the background group (typically allowing about five outlier samples). For methylated markers, this was computed as the difference between the 25th and 97.5th percentiles (Supplementary Information). Low-coverage blocks (fewer than 25 observations), in which the estimation error of average methylation was around 10%, were replaced by a default value of 0.5 which is neither unmethylated nor methylated, thus reducing the block's methylation difference and downgrading its rank. For cell type-specific markers we selected the top 25 per cell type, for a total of 1,246 markers (Supplementary Table 4a).
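The ranking step can be sketched as follows for unmethylated markers, assuming the same blocks-by-samples matrices as above; the quantiles, the low-coverage placeholder of 0.5 and the top-25 cut-off follow the description above, while the function name and arguments are illustrative.

    import numpy as np

    def rank_unmethylated_markers(beta, depth, is_target, min_obs=25, top_n=25):
        # beta, depth: (n_blocks, n_samples); is_target: boolean mask of target samples.
        b = np.where(depth >= min_obs, beta, 0.5)            # neutral value for low coverage
        target = b[:, is_target]
        background = b[:, ~is_target]
        # 2.5th percentile of background minus 75th percentile of target: large
        # positive values mean the block is unmethylated specifically in the target.
        delta = np.percentile(background, 2.5, axis=1) - np.percentile(target, 75, axis=1)
        order = np.argsort(delta)[::-1]
        return order[:top_n], delta[order[:top_n]]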
Atlases for 450K/EPIC, RRBS and hybrid capture panels were identified similarly while examining a subset of genomic regions, overlapping various probe sets or genomic regions (-b option). Chromatin analysis was performed on the top 250 markers per cell type (total of 11,713 markers; Supplementary Table 4b). Motif analysis was performed on the top 1,000 markers per cell type (total of 50,286 markers; Supplementary Table 4b) using the difference between the 25th and 75th percentile, to allow putative enhancers unmethylated in additional cell types.
Analysis of gene set enrichment was performed using GREAT (ref. 31). For each cell type we selected the top 250 differentially unmethylated regions and ran GREAT via batch web interface using default parameters. Enrichments for 'Ensembl Genes' were ignored, and a significance threshold of binomial false discovery rate ≤0.05 was used.
For each cell type we analysed the top 250 differentially unmethylated regions versus published ChIP–seq (H3K27ac and H3K4me1) and DNase sequencing from the Roadmap Epigenomics project (downloaded from ftp.ncbi.nlm.nih.gov/pub/geo/DATA/roadmapepigenomics/by_experiment and http://egg2.wustl.edu/roadmap/data/byDataType/dnase/BED_files_enh) in bigWig and bed formats. These include E032 for B cell markers, E034 for T cell markers, E029 for monocyte/macrophage markers, E066 for liver hepatocytes, E104 for heart cardiomyocytes and fibroblasts and E109 and E110 for gastric/small intestine/colon (ref. 4). Annotations for chromHMM were downloaded (15-states version) from https://egg2.wustl.edu/roadmap/data/byFileType/chromhmmSegmentations/ChmmModels/coreMarks/jointModel/final3, and genomic regions annotated as enhancers (7_Enh) were extracted and reformatted in bigWig format. Raw single-cell ATAC–seq data were downloaded from GEO GSE165659 (ref. 32) as 'feature' and 'matrix' files for 70 samples. For each sample, cells of the same type were pooled to output a bedGraph file, which was mapped from hg38 to hg19 using UCSC liftOver (ref. 58). Overlapping regions were dropped using bedtools (v.2.26.0) (ref. 59). Finally, bigWig files were created using bedGraphToBigWig (v.4) (ref. 60). Heatmaps and average plots were prepared using deepTools (v.3.4.1) (ref. 61), with the functions 'computeMatrix', 'plotHeatmap' and 'plotProfile'. We used default parameters except for 'referencePoint=center', 15 kb margins and 'binSize=200' for ChIP–seq, DNaseI and chromHMM data, and 75 kb margins with 'binSize=1000' for ATAC–seq data.
For each cell type we analysed the top 1,000 differentially unmethylated regions for known motifs (Supplementary Table 6a) using the HOMER function 'findMotifsGenome.pl', with parameters '-bits' and '-size 250' (ref. 39). Similar analyses were performed for the unmethylated regions in each cell type (Supplementary Table 6b), as well as unmethylated regions overlapping H3K27ac, but not H3K4me3, peaks (Supplementary Table 6c).
For each cell-type-specific marker we identified all neighbouring genes up to 500 kb apart. We then examined the expression levels of these genes across the GTEx dataset covering 50 tissues and cell types (ref. 62). We then standardized the expression of each gene across all conditions, by replacing expression values with standard deviations (z-scores) above/below the average expression of that gene across samples. This was followed by column-wise standardization in which the relative enrichment of a gene under a given condition is normalized by the enrichment of other genes under that condition. This highlighted the most overexpressed genes for each tissue. We then classified each 'marker–gene–condition' combination as tier 1: distance ≤5 kb, expression ≥10 TPM and z-score ≥1.5; tier 2: same as tier 1 but with distance ≤50 kb; tier 3: up to 750 kb, expression ≥25 TPM and z-score ≥5; and tier 4: same as tier 3 but with z-score ≥3.5.
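The sketch below illustrates the double standardization and the tier assignment, assuming a genes-by-conditions TPM matrix and treating the tier thresholds as applying to the final (doubly standardized) z-score; this is an interpretation of the description above rather than the exact implementation, and all names are illustrative.

    import numpy as np

    def double_standardize(expr):
        # Row-wise (per gene), then column-wise (per condition) z-scores of a
        # genes x conditions TPM matrix, highlighting tissue-enriched genes.
        z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
        return (z - z.mean(axis=0, keepdims=True)) / z.std(axis=0, keepdims=True)

    def assign_tier(distance_bp, tpm, z):
        # Tier of one marker-gene-condition combination; None if no tier applies.
        if distance_bp <= 5_000 and tpm >= 10 and z >= 1.5:
            return 1
        if distance_bp <= 50_000 and tpm >= 10 and z >= 1.5:
            return 2
        if distance_bp <= 750_000 and tpm >= 25 and z >= 5:
            return 3
        if distance_bp <= 750_000 and tpm >= 25 and z >= 3.5:
            return 4
        return None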
For each genomic region (blocks of at least four CpGs), and for any of the 39 cell type groups, fragments with at least four CpGs from all replicates were merged and classified as either U (fragment-level methylation 15% or less), M (at least 85%) or X (over 15% but below 85%). The percentage of U fragments was then calculated using 'wgbstools homog --threshold .15,.85', and blocks with at least 85% unmethylated fragments retained. These blocks were overlapped with genomic features based on UCSC hg19 annotations, including CpG islands and transcriptional start site regions (up to 1 kb from a gene start site). We also used narrowPeak annotations downloaded from Roadmap (ref. 4) and ENCODE project (ref. 5) (accessions listed in Supplementary Table 6d). hg38 bed files were converted to hg19 using liftOver (ref. 58). For putative enhancers, nonpromoter active regulatory regions were defined as those overlapping H3K27ac, but not H3K4me3, peaks under matching conditions. TF binding sites were downloaded from JASPAR 2022 (ref. 63).
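A sketch of the fragment classification and the per-block filter (the part performed with 'wgbstools homog' above) is shown below; fragments are represented as lists of 0/1 CpG methylation calls, and all names are illustrative.

    def is_unmethylated_block(fragments, min_cpgs=4, u_thresh=0.15, min_u_frac=0.85):
        # fragments: list of per-fragment 0/1 CpG methylation calls.
        # A block qualifies if >= 85% of its (>= 4 CpG) fragments are 'U',
        # i.e. have fragment-level methylation <= 15%.
        fracs = [sum(f) / len(f) for f in fragments if len(f) >= min_cpgs]
        if not fracs:
            return False
        return sum(1 for x in fracs if x <= u_thresh) / len(fracs) >= min_u_frac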
We define a similarity score between two samples as the fraction of blocks containing at least three CpGs and at least ten binary observations (sequenced CpG sites) in which the average methylation of the two samples differs by at least 0.5. Only cell types with n ≥ 3 FACS-sorted replicates from different donors are considered (136 samples in total).
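In code, this per-pair score could be computed as in the short sketch below, given per-block average methylation, coverage and CpG counts for the two samples (all names are illustrative).

    import numpy as np

    def pairwise_difference_score(beta_a, beta_b, depth_a, depth_b, n_cpgs,
                                  min_obs=10, delta=0.5):
        # Fraction of eligible blocks (>= 3 CpGs, >= min_obs observations in both
        # samples) whose average methylation differs by at least `delta`.
        ok = (n_cpgs >= 3) & (depth_a >= min_obs) & (depth_b >= min_obs)
        return float(np.mean(np.abs(beta_a[ok] - beta_b[ok]) >= delta))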
CTCF ChIP–seq data were downloaded from the ENCODE project (ref. 5) as 168 bigWig files, covering 61 tissues/cell types (hg19). Samples of the same cell type were averaged using multiBigwigSummary (v.3.4.1) (ref. 61).
All 892 endodermal hypomethylated markers were found using wgbstools function 'find_markers' (v.0.2.0), with parameters '--delta_quants 0.4 --tg_quant 0.1 --bg_quant 0.1' (ref. 54). For endoderm-derived epithelium, 51 samples were compared with 103 nonepithelial samples from mesoderm or ectoderm. Blocks were selected as markers if the average methylation of the 90th percentile of the epithelial samples was lower than the tenth percentile of the nonepithelial samples by at least 0.4.
We developed a fragment-level deconvolution algorithm: each fragment was annotated as U (mostly unmethylated), M (mostly methylated) or X (mixed) depending on the number of methylated and unmethylated CpGs (ref. 64). We then calculated, for each genomic region (marker) and across all cell types, the proportion of U/X/M fragments with at least k CpGs. Here we used k = 4 and thresholds of less than or equal to 25% methylated CpGs for U reads, and more than or equal to 75% methylated CpGs for M reads. We then constructed reference atlas A with 1,232 regions (top 25 markers per cell type), in which the cell $A_{i,j}$ holds the U proportion of the ith marker in the jth cell type. Given an input sample, the U proportion at each marker is computed to form a 1,232 × 1 vector b. Then, NNLS is applied to infer the coefficient vector x by minimizing $\|A x - b\|_2$ subject to non-negative x, normalized to $\sum_j x_j = 1$. Alternatively, each marker can be weighted differently based on fragment coverage in the input sample. For this, b can be defined as the number of U fragments in each region and the rows of A similarly multiplied by $C_i$, the total number of fragments in each region, thus minimizing $\|\mathrm{diag}(C) A x - b\|_2$. Additional details are available in Supplementary Information.
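The NNLS step can be sketched as follows with scipy, assuming the reference matrix A and the per-marker U and total fragment counts of the input sample have already been computed as described above; this is an illustrative re-implementation, not the wgbstools/UXM code itself.

    import numpy as np
    from scipy.optimize import nnls

    def uxm_deconvolve(A, u_counts, total_counts, weighted=True):
        # A: (n_markers, n_cell_types) reference U proportions.
        # u_counts, total_counts: U and total (>= 4 CpG) fragments per marker
        # observed in the input sample.
        b = np.divide(u_counts, total_counts,
                      out=np.zeros_like(u_counts, dtype=float),
                      where=total_counts > 0)
        if weighted:                              # weigh markers by their coverage
            A = A * total_counts[:, None]
            b = u_counts.astype(float)
        x, _ = nnls(A, b)                         # non-negative least squares fit
        return x / x.sum() if x.sum() > 0 else x  # normalize to proportions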
Simulated mixtures were performed for cardiomyocytes (n = 4), bladder epithelium (n = 5), breast epithelium (n = 7), endothelial cells (n = 19) and erythrocyte progenitors (n = 3) in a leave-one-out manner. For this, one sample was held out and segmentation and marker selection (25 per cell type) were rerun using the remaining 204 samples. We then simulated mixtures by sampling and mixing reads from the held-out sample at 10, 3, 1, 0.3, 0.1, 0.03 and 0% into a background of leukocyte samples. This was repeated ten times. Finally, mixed samples were analysed using the UXM fragment-level algorithm with markers from the reduced (204) atlas, using fragments with at least three CpGs. Merging, splitting and mixing of reads were performed using wgbstools (v.0.1.0)54.
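The spike-in mixing itself can be sketched as below, with fragments represented abstractly as Python objects; in the text above the read-level merging, splitting and mixing were performed with wgbstools, so this is only an illustration of the sampling scheme, and all names are assumptions.

    import random

    def simulate_mixture(target_fragments, background_fragments, spike_frac,
                         n_fragments=1_000_000, seed=0):
        # Draw `spike_frac` of the fragments from the held-out target sample and
        # the remainder from a leukocyte background pool, then shuffle.
        rng = random.Random(seed)
        n_target = round(n_fragments * spike_frac)
        mix = (rng.choices(target_fragments, k=n_target) +
               rng.choices(background_fragments, k=n_fragments - n_target))
        rng.shuffle(mix)
        return mix

    # Ten replicates at each admixture level used above, e.g.:
    # for frac in (0.10, 0.03, 0.01, 0.003, 0.001, 0.0003, 0.0):
    #     for rep in range(10):
    #         mixture = simulate_mixture(held_out_reads, leukocyte_reads, frac, seed=rep)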
Array-based analysis was performed by computing, for each mixed set of fragments, average methylation levels across each of around 480,000 CpG sites present in the 450K array ('wgbstools beta_to_450k'). We then deconvolved these data according to the method of Moss et al. (ref. 28) (https://github.com/nloyfer/meth_atlas).
We also simulated four-way mixtures in which background plasma methylomes were modelled as a combination of 90% fragments from leukocytes, 7.5% from a vascular endothelial sample and 2.5% from a hepatocyte sample. As described above, this was done by holding out the three samples (for example, cardiomyocytes, endothelial cells and hepatocytes) and then rerunning segmentation and marker selection on the (202 = 205 – 3) remaining samples, to obtain a set of markers that was then used for fragment-level deconvolution of mixtures.
Leukocytes and matching plasma samples (n = 23) were processed as described above and analysed using the WGBS methylation atlas, including 1,246 markers plus (for plasma samples) an additional 25 megakaryocyte markers. Fifty-two plasma samples from 28 patients with SARS-CoV-2 (ref. 44), downloaded as FASTQ files, were processed as described above. Because of the low coverage (1–2×) of these samples, we extended the marker set from the top 25 to the top 250 markers per cell type (Supplementary Table 4b), and also included 250 megakaryocyte markers (ref. 65). Roadmap (ref. 4) and ENCODE (ref. 5) samples were processed as described above and analysed using the UXM algorithm.
Previously published 450K array data were downloaded from either The Cancer Genome Atlas (lung and breast biopsies) (refs. 49,50) or GEO accession no. GSE62640 (ref. 48) and deconvoluted with meth_atlas NNLS software (https://github.com/nloyfer/meth_atlas) using our array-adapted atlas (Supplementary Table 12). Breast biopsies were grouped using PAM50 classifications (ref. 66).
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. | CommonCrawl |
How and when was the $\mathsf{CO_2}$ scrubbed when Earth was still primordial?
How and when was the $\mathsf{CO_2}$ removed in primordial Earth atmosphere? What is the main mechanism of this removal of $\mathsf{CO_2}$? Is it dissolving in water? Or rock weathering? Or was the bulk of $\mathsf{CO_2}$ actually removed by early life?
Meatball Princess
Related?: earthscience.stackexchange.com/questions/5021/… – Keith McClary Jun 12 '19 at 17:18
Simple answer: it really wasn't removed until the Great Oxygenation Event: en.wikipedia.org/wiki/Great_Oxygenation_Event – jamesqf Jun 12 '19 at 19:36
As you probably realise, the Earth's early atmosphere was mainly composed of CO2, as are the atmospheres of Mars and Venus to this very day. You are right in thinking that most of this CO2 was removed by biological activity; if all the earth's fossil fuels and limestone rocks were converted back into CO2, we would have an atmosphere similar to that which existed nearly 4 billion years ago. It was this same biological activity which built up the oxygen level to its present 21 percent. The process continues right up to the present day. The CO2 content of the atmosphere is to some extent self regulating: the higher the levels, the more vigorously plants and photosynthetic micro-organisms grow, extracting more and more carbon dioxide from the atmosphere, but this natural process could be swamped if the build-up of CO2 exceeded the capacity of plants to remove it. It is not generally known that algae and phytoplankton in the sea remove more CO2 and generate more oxygen than all the rainforests.
| CommonCrawl
June 2016, 9(3): 791-813. doi: 10.3934/dcdss.2016029
Serge Nicaise 1 and Cristina Pignotti 2
Université de Valenciennes et du Hainaut Cambrésis, LAMAV and FR CNRS 2956, Le Mont Houy, Institut des Sciences et Techniques de Valenciennes, 59313 Valenciennes Cedex 9
Dipartimento di Matematica Pura e Applicata, Università di L'Aquila, Via Vetoio, Loc. Coppito, 67010 L'Aquila
Received: March 2015; Revised: July 2015; Published: April 2016.
Keywords: delay feedbacks, wave equation, stabilization.
Mathematics Subject Classification: Primary: 35L05; Secondary: 93D1.
Citation: Serge Nicaise, Cristina Pignotti. Stability of the wave equation with localized Kelvin-Voigt damping and boundary delay feedback. Discrete & Continuous Dynamical Systems - S, 2016, 9 (3) : 791-813. doi: 10.3934/dcdss.2016029
| CommonCrawl
Mat. Zametki, 1999, Volume 66, Issue 3, Pages 336–350 (Mi mz1174)
This article is cited in 20 scientific papers (total in 20 papers)
Two related extremal problems for entire functions of several variables
E. E. Berdysheva
Ural State University
DOI: https://doi.org/10.4213/mzm1174
Mathematical Notes, 1999, 66:3, 271–282
UDC: 517.5
Received: 23.03.1998
Revised: 29.03.1999
Citation: E. E. Berdysheva, "Two related extremal problems for entire functions of several variables", Mat. Zametki, 66:3 (1999), 336–350; Math. Notes, 66:3 (1999), 271–282
D. V. Gorbachev, "Extremum problems for entire functions of exponential spherical type", Math. Notes, 68:2 (2000), 159–166
A. I. Kozko, A. V. Rozhdestvenskii, "On Jackson's Inequality for Generalized Moduli of Continuity", Math. Notes, 73:5 (2003), 736–741
A. I. Kozko, A. V. Rozhdestvenskii, "On Jackson's inequality for a generalized modulus of continuity in $L_2$", Sb. Math., 195:8 (2004), 1073–1115
D. V. Gorbachev, S. A. Strankovskii, "An extremal problem for even positive definite entire functions of exponential type", Math. Notes, 80:5 (2006), 673–678
Li J., Liu Y., "The Jackson Inequality for the Best L-2-Approximation of Functions on [0,1] with the Weight x", Numerical Mathematics-Theory Methods and Applications, 1:3 (2008), 340–356
A. V. Ivanov, V. I. Ivanov, "Dunkl theory and Jackson inequality in $L_2(\mathbb R^d)$ with power weight", Proc. Steklov Inst. Math. (Suppl.), 273, suppl. 1 (2011), S86–S98
Ivanov A.V., "Zadacha logana dlya tselykh funktsii mnogikh peremennykh i konstanty dzheksona v vesovykh prostranstvakh", Izvestiya Tulskogo gosudarstvennogo universiteta. Seriya: Estestvennye nauki, 2011, no. 2, 29–58
Ivanov V.I., "Tochnye $l_2$-neravenstva dzheksona - chernykh - yudina v teorii priblizhenii", Izvestiya tulskogo gosudarstvennogo universiteta. estestvennye nauki, 2012, no. 3, 19–28
A. V. Ivanov, V. I. Ivanov, "Optimal Arguments in Jackson's Inequality in the Power-Weighted Space $L_2(\mathbb{R}^d)$", Math. Notes, 94:3 (2013), 320–329
D. V. Gorbachev, "An estimate of an optimal argument in the sharp multidimensional Jackson–Stechkin $L_2$-inequality", Proc. Steklov Inst. Math. (Suppl.), 288, suppl. 1 (2015), 70–78
V. I. Ivanov, A. V. Ivanov, "Optimal Arguments in the Jackson–Stechkin Inequality in $L_2(\mathbb{R}^d)$ with Dunkl Weight", Math. Notes, 96:5 (2014), 666–677
Liu Y.P., Song Ch.Yu., "Dunkl's Theory and Best Approximation By Entire Functions of Exponential Type in $L_2$-Metric With Power Weight", Acta Math. Sin.-English Ser., 30:10 (2014), 1748–1762
R. A. Veprintsev, "Approximation of the Multidimensional Jacobi Transform in $L_2$ by Partial Integrals", Math. Notes, 97:6 (2015), 831–845
D. V. Gorbachev, V. I. Ivanov, "Approximation in $L_2$ by Partial Integrals of the Fourier Transform over the Eigenfunctions of the Sturm–Liouville Operator", Math. Notes, 100:4 (2016), 540–549
D. V. Gorbachev, V. I. Ivanov, R. A. Veprintsev, "Approximation in $L_2$ by partial integrals of the multidimensional Fourier transform in the eigenfunctions of the Sturm–Liouville operator", Proc. Steklov Inst. Math. (Suppl.), 300, suppl. 1 (2018), 97–113
D. V. Gorbachev, V. I. Ivanov, "Nekotorye ekstremalnye zadachi dlya preobrazovaniya Fure po sobstvennym funktsiyam operatora Shturma–Liuvillya", Chebyshevskii sb., 18:2 (2017), 34–53
D. V. Gorbachev, V. I. Ivanov, E. P. Ofitserov, O. I. Smirnov, "Nekotorye ekstremalnye zadachi garmonicheskogo analiza i teorii priblizhenii", Chebyshevskii sb., 18:4 (2017), 140–167
Ivanov V., Ivanov A., "Generalized Logan's Problem for Entire Functions of Exponential Type and Optimal Argument in Jackson's Inequality in $L_2(\mathbb{R}^3)$", Acta Math. Sin.-English Ser., 34:10 (2018), 1563–1577
D. V. Gorbachev, V. I. Ivanov, E. P. Ofitserov, O. I. Smirnov, "Vtoraya ekstremalnaya zadacha Logana dlya preobrazovaniya Fure po sobstvennym funktsiyam operatora Shturma–Liuvillya", Chebyshevskii sb., 19:1 (2018), 57–78
D. V. Gorbachev, V. I. Ivanov, "Turán, Fejér and Bohman extremal problems for the multivariate Fourier transform in terms of the eigenfunctions of a Sturm-Liouville problem", Sb. Math., 210:6 (2019), 809–835 | CommonCrawl |
Preparation of High Purity α-Alumina from Aluminum Black Dross by Redox Reaction
Shin, Eui-Sup;An, Eung-Mo;Lee, Su-Jeong;Ohtsuki, Chikara;Kim, Yun-Jong;Cho, Sung-Baek 445
We investigate the effects of redox reaction on preparation of high purity ${\alpha}$-alumina from selectively ground aluminum dross. Preparation procedure of the ${\alpha}$-alumina from the aluminum dross has four steps: i) selective crushing and grinding, ii) leaching process, iii) redox reaction, and iv) precipitation reaction under controlled pH. Aluminum dross supplied from a smelter was ground to separate metallic aluminum. After the separation, the recovered particles were treated with hydrochloric acid(HCl) to leach aluminum as aluminum chloride solution. Then, the aluminum chloride solution was applied to a redox reaction with hydrogen peroxide($H_2O_2$). The pH value of the solution was controlled by addition of ammonia to obtain aluminum hydroxide and to remove other impurities. Then, the obtained aluminum hydroxide was dried at $60^{\circ}C$ and heat-treated at $1300^{\circ}C$ to form ${\alpha}$-alumina. Aluminum dross was found to contain a complex mixture of aluminum metal, aluminum oxide, aluminum nitride, and spinel compounds. Regardless of introduction of the redox reaction, both of the sintered products are composed mainly of ${\alpha}$-alumina. There were fewer impurities in the solution subject to the redox reaction than there were in the solution that was not subject to the redox reaction. The impurities were precipitated by pH control with ammonia solution, and then removed. We can obtain aluminum hydroxide with high purity through control of pH after the redox reaction. Thus, pH control brings a synthesis of ${\alpha}$-alumina with fewer impurities after the redox reaction. Consequently, high purity ${\alpha}$-alumina from aluminum dross can be fabricated through the process by redox reaction.
Simulated Optimum Substrate Thicknesses for the BC-BJ Si and GaAs Solar Cells
Choe, Kwang-Su 450
In crystalline solar cells, the substrate itself constitutes a large portion of the fabrication cost as it is derived from semiconductor ingots grown in costly high temperature processes. Thinner wafer substrates allow some cost saving as more wafers can be sliced from a given ingot, although technological limitations in slicing or sawing of wafers off an ingot, as well as the physical strength of the sliced wafers, put a lower limit on the substrate thickness. Complementary to these economical and techno-physical points of view, a device operation point of view of the substrate thickness would be useful. With this in mind, BC-BJ Si and GaAs solar cells are compared one to one by means of the Medici device simulation, with a particular emphasis on the substrate thickness. Under ideal conditions of 0.6 ${\mu}m$ photons entering the 10 ${\mu}m$-wide BC-BJ solar cells at the normal incident angle (${\theta}=90^{\circ}$), GaAs is about 2.3 times more efficient than Si in terms of peak cell power output: 42.3 $mW{\cdot}cm^{-2}$ vs. 18.2 $mW{\cdot}cm^{-2}$. This strong performance of GaAs, though only under ideal conditions, gives a strong indication that this material could stand competitively against Si, despite its known high material and process costs. Within the limitation of the minority carrier recombination lifetime value of $5{\times}10^{-5}$ sec used in the device simulation, the solar cell power is known to be only weakly dependent on the substrate thickness, particularly under about 100 ${\mu}m$, for both Si and GaAs. Though the optimum substrate thickness is about 100 ${\mu}m$ or less, the reduction in the power output is less than 10% from the peak values even when the substrate thickness is increased to 190 ${\mu}m$. Thus, for crystalline Si and GaAs with a relatively long recombination lifetime, extra efforts to be spent on thinning the substrate should be weighed against the expected actual gain in the solar cell output power.
Effects of Hardeners on the Low-Temperature Snap Cure Behaviors of Epoxy Adhesives for Flip Chip Bonding
Choi, Won-Jung;Yoo, Se-Hoon;Lee, Hyo-Soo;Kim, Mok-Soon;Kim, Jun-Ki 454
Various adhesive materials are used in flip chip packaging for electrical interconnection and structural reinforcement. In cases of COF(chip on film) packages, low temperature bonding adhesive is currently needed for the utilization of low thermal resistance substrate films, such as PEN(polyethylene naphthalate) and PET(polyethylene terephthalate). In this study, the effects of anhydride and dihydrazide hardeners on the low-temperature snap cure behavior of epoxy based non-conductive pastes(NCPs) were investigated to reduce flip chip bonding temperature. Dynamic DSC(differential scanning calorimetry) and isothermal DEA(dielectric analysis) results showed that the curing rate of MHHPA(hexahydro-4-methylphthalic anhydride) at $160^{\circ}C$ was faster than that of ADH(adipic dihydrazide) when considering the onset and peak curing temperatures. In a die shear test performed after flip chip bonding, however, ADH-containing formulations indicated faster trends in reaching saturated bond strength values due to the post curing effect. More enhanced HAST(highly accelerated stress test) reliability could be achieved in an assembly having a higher initial bond strength and, thus, MHHPA is considered to be a more effective hardener than ADH for low temperature snap cure NCPs.
Cleaning Effects by NH4OH Solution on Surface of Cu Film for Semiconductor Devices
Lee, Youn-Seoung;Noh, Sang-Soo;Rha, Sa-Kyun 459
We investigated cleaning effects using $NH_4OH$ solution on the surface of Cu film. A 20 nm Cu film was deposited on Ti / p-Si (100) by sputter deposition and was exposed to air for growth of the native Cu oxide. In order to remove the Cu native oxide, an $NH_4OH$ cleaning process with and without TS-40A pre-treatment was carried out. After the $NH_4OH$ cleaning without TS-40A pretreatment, the sheet resistance Rs of the Cu film and the surface morphology changed slightly(${\Delta}Rs:{\sim}10m{\Omega}/sq.$). On the other hand, after $NH_4OH$ cleaning with TS-40A pretreatment, the Rs of the Cu film changed abruptly (${\Delta}Rs:till{\sim}700m{\Omega}/sq.$); in addition, cracks showed on the surface of the Cu film. According to XPS results, Si ingredient was detected on the surface of all Cu films pretreated with TS-40A. This Si ingredient(a kind of silicate) may result from the TS-40A solution, because sodium metasilicate is included in TS-40A as an alkaline degreasing agent. Finally, we found that the $NH_4OH$ cleaning process without pretreatment using an alkaline cleanser containing a silicate ingredient is more useful at removing Cu oxides on Cu film. In addition, we found that in the $NH_4OH$ cleaning process, an alkaline cleanser like Metex TS-40A, containing sodium metasilicate, can cause cracks on the surface of Cu film.
Influence of Ratio of TGA(thioglycolic acid) on CdTe QDs Solution Stability for a Period of Time
Kim, Jong-Hwan;Kim, Tae-Hee;Gwoo, Dong-Gun;Kee, Kyung-Bum;Choi, Won-Gyu;Han, Kung-Seok;Ryu, Bong-Ki 465
This paper focuses on the post-synthesis behaviour of CdTe quantum dots (QDs) in aqueous solution. CdTe nanoparticles were prepared in aqueous solution using a mercaptocarboxylic acid, thioglycolic acid (TGA), as the stabilizing agent. QDs are nanometre-scale particles that emit light. The effect of the mercaptocarboxylic acid content and of the raw materials was followed over a period of time. We succeeded in synthesizing a very high quality QDs solution, and we discuss how to make the QDs better and keep them stabilized. TGA is known as one of the best stabilizing agents, and many papers have reported it as such. We directly confirmed the state of the QDs after the experiments. The QDs solution can be influenced by several factors; in particular, different TGA contents influence the stability of the CdTe solution. Most papers deal with the synthesis of CdTe, so here we discuss the post-synthesis process and the stability of the CdTe solution.
Synthesis and Properties of La1-xSrxMnO3 System as Air Electrode for Solid Oxide Fuel Cell
Lee, You-Kee;Lee, Young-Ki 470
$La_{1-x}Sr_xMnO_3$(LSM,$0{\leq}x{\leq}0.5$) powders as the air electrode for solid oxide fuel cell were synthesized by a glycine-nitrate combustion process. The powders were then examined by X-ray diffraction(XRD) and scanning electron microscopy (SEM). The as-formed powders were composed of very fine ash particles linked together in chains. X-ray maps of the LSM powders milled for 1.5 h showed that the metallic elements are homogeneously distributed inside each grain and in the different grains. The powder XRD patterns of the LSM with x < 0.3 showed a rhombohedral phase; the phase changes to the cubic phase at higher compositions($x{\geq}0.3$) calcined in air at $1200^{\circ}C$ for 4 h. Also, the SEM micrographs showed that the average grain size decreases as Sr content increases. Composite air electrodes made of 50/50 vol% of the resulting LSM powders and yttria stabilized zirconia(YSZ) powders were prepared by colloidal deposition technique. The electrodes were studied by ac impedance spectroscopy in order to improve the performance of a solid oxide fuel cell(SOFC). Reproducible impedance spectra were confirmed using the improved cell, which consisted of LSM-YSZ/YSZ. The composite electrode of LSM and YSZ was found to yield a lower cathodic resistivity than that of the non-composite one. Also, the addition of YSZ to the $La_{1-x}Sr_xMnO_3$ ($0.1{\leq}x{\leq}0.2$) electrode led to a pronounced, large decrease in the cathodic resistivity of the LSM-YSZ composite electrodes.
Effect of PVP(polyvinylpyrrolidone) on the Ag Nano Ink Property for Reverse Offset Printing
Han, Hyun-Suk;Kwak, Sun-Woo;Kim, Bong-Min;Lee, Taik-Min;Kim, Sang-Ho;Kim, In-Young 476
Among the various roll-to-roll printing technologies such as gravure, gravure-offset, and reverse offset printing, reverse offset printing has the advantage of fine patterning, with less than 5 ${\mu}m$ line width. However, it involves complex processes, consisting of 1) the coating process, 2) the off process, 3) the patterning process, and 4) the set process of the ink. Each process demands various ink properties, including viscosity, surface tension, stickiness, and adhesion with substrate or clich$\acute{e}$; these properties are critical factors for the printing quality of fine patterning. In this study, Ag nano ink was developed for reverse offset printing and the effect of polyvinylpyrrolidone(PVP), used as a capping agent of Ag nano particles, on the printing quality was investigated. Ag nano particles with a diameter of ~60 nm were synthesized using the conventional polyol synthesis process. Ethanol and ethylene glycol monopropyl ether(EGPE) were used together as the main solvent in order to control the drying and absorption of the solvents during the printing process. The rheological behavior, especially ink adhesion and stickiness, was controlled with washing processes that have an effect on the offset process and that played a critical role in the fine patterning. The electrical and thermal behaviors were analyzed according to the content of PVP in the Ag ink. Finally, an Ag mesh pattern with a line width of 10 ${\mu}m$ was printed using reverse offset printing; this printing showed an electrical resistivity of 36 ${\mu}{\Omega}{\cdot}cm$ after sintering at $200^{\circ}C$.
Finite Element Simulation and Experimental Study on the Electrochemical Etching Process for Fabrication of Micro Metal Mold
Ryu, Heon-Yul;Im, Hyeon-Seung;Cho, Si-Hyeong;Hwang, Byeong-Jun;Lee, Sung-Ho;Park, Jin-Goo 482
To fabricate a precise micro metal mold, the electrochemical etching process has been researched. We investigated the electrochemical etching process numerically and experimentally to determine the etching tendency of the process, focusing on the current density, which is a major parameter of the process. The finite element method, a kind of numerical analysis, was used to determine the current density distribution on the workpiece. Stainless steel(SS304) substrate with various sized square and circular array patterns as an anode and copper(Cu) plate as a cathode were used for the electrochemical experiments. A mixture of $H_2SO_4$, $H_3PO_4$, and DIW was used as an electrolyte. In this paper, comparison of the results from the experiment and the numerical simulation is presented, including the current density distribution and line profile from the simulation, and the etching profile and surface morphology from the experiment. Etching profile and surface morphology were characterized using a 3D-profiler and FE-SEM measurement. From a comparison of the data, it was confirmed that the current density distribution and the line profile of the simulation were similar to the surface morphology and the etching profile of the experiment, respectively. The current density is more concentrated at the vertex of the square pattern and circumference of the circular pattern. And, the depth of the etched area is proportional to the current density.
Synthesis and Luminescence Properties of Tb3+-Doped K2BaW2O8 Phosphors
Jang, Kyoung-Hyuk;Koo, Jae-Heung;Seo, Hyo-Jin 489
Green phosphors $K_2BaW_2O_8:Tb^{3+}$(1.0 mol%) were synthesized by the solid-state reaction method. Differential thermal analysis was applied to trace the reaction processes. Three endothermic peaks, at 95, 706, and $1055^{\circ}C$, correspond to the loss of absorbed water, the release of carbon dioxide, and the onset of melting, respectively. The phase purity of the powders was examined using powder X-ray diffraction(XRD). Two strong excitation bands in the wavelength region of 200-310 nm were found to be due to the ${WO_4}^{2-}$ exciton transition and the 4f-5d transition of $Tb^{3+}$ in $K_2BaW_2O_8$. The excitation spectrum presents several lines in the range of 310-380 nm; these are assigned to the 4f-4f transitions of the $Tb^{3+}$ ion. The strong emission line at around 550 nm, due to the $^5D_4{\rightarrow}^7F_5$ transition, is observed together with weak lines of the $^5D_4{\rightarrow}^7F_J$(J = 3, 4, and 6) transitions. A broad emission band peaking at 530 nm is observed at 10 K, while it disappears at room temperature. The decay times of the $Tb^{3+}$ $^5D_4{\rightarrow}^7F_5$ emission are estimated to be 4.8 and 1.4 ms at 10 and 295 K, respectively; those of the ${WO_4}^{2-}$ exciton emissions are 22 and 0.92 ${\mu}s$ at 10 and 200 K, respectively.
Effect of Additives on the Compressive Strength of Geopolymerized Fly Ash
Hwang, Yeon 494
Geopolymer cements and geopolymer resins are newly developed mineral binders used to reduce the carbon dioxide generation that accompanies cement production. The effect of additives on the compressive strength of geopolymerized class-F fly ash was investigated. Blast furnace slag, calcium hydroxide($Ca(OH)_2$), and silica fume powders were added to fly ash. A geopolymeric reaction was initiated by adding a solution of water glass and sodium hydroxide(NaOH) to the powder mixtures. The compressive strength of pure fly ash cured at room temperature for 28 days was found to be as low as 291 $kgf/cm^2$, which is not suitable for engineering materials. In contrast, addition of 20 wt% and 40 wt% of blast furnace slag powders to fly ash increased the compressive strength to 458 $kgf/cm^2$ and 750 $kgf/cm^2$, respectively. Addition of 5 wt% $Ca(OH)_2$ increased the compressive strength to 640 $kgf/cm^2$, and larger $Ca(OH)_2$ additions increased it further. When 2 wt% of silica fume was added, the compressive strength increased to 577 $kgf/cm^2$; the maximum strength was obtained at 6 wt% addition of silica fume. It was confirmed that the addition of CaO and $SiO_2$ to the fly ash powders was effective at increasing the compressive strength of geopolymerized fly ash.
Laplace transform using calculator
If we take the Laplace transform of both sides of such an equation, the right-hand side in the example considered becomes 2/(s^2 + 4). The transform takes in a function f(t) and produces a new function F(s). Using the Laplace transform technique we can solve for the homogeneous and particular solutions at the same time. The statement of the formula is as follows: let f(t) be a continuous function on the interval [0, ∞) of exponential order. Integration of transforms. Taking the Laplace transform of both sides of a partial differential equation with respect to t, rearranging, and substituting in the boundary condition U(x, 0) = 6e^{-3x}, we find that the transform has turned the partial differential equation into an ordinary differential equation. Not only is the transform an excellent tool for solving differential equations, it also helps in obtaining a qualitative understanding of how a system will behave and how changing certain parameters will affect the dynamic response. Such systems occur frequently in control theory, circuit design, and other engineering applications. For example, take f(t) = 1 + 2t. As we will show below, we can then invert Y(s). The calculator above performs a regular (one-sided) Laplace transform.

Final value theorem: if f(t) and f'(t) are both Laplace transformable and sF(s) has no pole on the jω axis or in the right half-plane, then lim_{t→∞} f(t) = lim_{s→0} sF(s). Mathcad can help us find both the Laplace transform and the inverse Laplace transform: we perform the Laplace transform on both sides of the given equation, which makes the calculation very simple. The Laplace transform of f exists only for complex values of s in a half-plane. For instance, L(sin(6t)) = 6/(s^2 + 36). Method of Laplace transform: take the Laplace transform of each differential equation using a few standard transforms. A function f is piecewise continuous on an interval t in [a, b] if it is continuous there except at finitely many points, at each of which the one-sided limits are finite. The usefulness of Laplace transforms is by no means restricted to this class of problems.

Table of Laplace transforms, f(t) and F(s) = L[f(t)]:
(1) 1 and 1/s
(2) e^{at} f(t) and F(s - a)
(3) U(t - a) and e^{-as}/s
(4) f(t - a) U(t - a) and e^{-as} F(s)
(5) δ(t) and 1
(6) δ(t - t0) and e^{-s t0}
(7) t^n f(t) and (-1)^n d^n F(s)/ds^n
(8) f'(t) and s F(s) - f(0)
(9) f^{(n)}(t) and s^n F(s) - s^{n-1} f(0) - ... - f^{(n-1)}(0)
(10) the convolution integral from 0 to t of f(x) g(t - x) dx and F(s) G(s)
(11) t^n (n = 0, 1, 2, ...) and n!/s^{n+1}
(12) t^x (x > -1, x real) and Γ(x + 1)/s^{x+1}
(13) sin(kt) and k/(s^2 + k^2)

Basic Laplace and inverse Laplace transforms on the TI-92: before using the functions, set MODE Complex Format to RECTANGULAR, Angle to RADIAN, and Exact/Approx to AUTO. You have to make these settings yourself because the programs cannot change the mode settings on the calculator.
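The table entries above can also be checked with a computer algebra system instead of a hand calculator. The following is a minimal Python/SymPy sketch (SymPy is chosen here only as an illustration; the function names sin(6t) and 1 + 2t are taken from the examples above):

import sympy as sp

t, s = sp.symbols('t s', positive=True)

F_sin = sp.laplace_transform(sp.sin(6*t), t, s, noconds=True)
F_lin = sp.laplace_transform(1 + 2*t, t, s, noconds=True)

print(F_sin)   # 6/(s**2 + 36), matching entry (13) with k = 6
print(F_lin)   # 1/s + 2/s**2 (the printed form may vary), from entries (1) and (11)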
May 12, 2019 · To solve this problem using Laplace transforms, we will need to transform every term in our given differential equation. Key Words: Laplace Transform, Differential Equation, Inverse Laplace Transform, Linearity, Convolution Theorem. 2. Aug 01, 2020 · Computer algebra systems have now replaced tables of Laplace transforms just as the calculator has replaced the slide rule. When a higher order differential equation is given, Laplace transform is applied to it which converts the equation into an algebraic equation, thus making it easier to handle. Redraw the circuit (nothing about the Laplace transform changes the types of elements or their interconnections). Following are the Laplace transform and inverse Laplace transform equations. Background: The Laplace transform is a primary way to study the stability and evolution of linearized dynamical systems, because it turns them into algebraic systems. Post's inversion formula for Laplace transforms, named after Emil Post, is a simple-looking but usually impractical formula for evaluating an inverse Laplace transform. com using variation of parameters or the method of undetermined coefficients. Laplace transform converts many time-domain operations such as differentiation, integration, convolution, time shifting into algebraic operations in s-domain. Your solution Answer Right from inverse laplace transform calculator to matrices, we have got all the pieces covered. Transform of Unit Step Functions; 5. 4 1. 2: Using the Heaviside function write down the piecewise function that is \(0 Inverse Laplace transforms for second-order underdamped responses are provided in the Table in terms of ω n and δ and in terms of general coefficients (Transforms #13–17). The inverse transform can also be computed using MATLAB. Some Important Properties of Laplace Transforms The Laplace transforms of difierent functions can be found in most of the mathematics and engineering books and hence, is not According to ISO 80000-2*), clauses 2-18. The multidimensional Laplace transform is given by . ], in the place holder type Laplace transform makes the equations simpler to handle. Below, the differential formula of a time-domain kind first changed to the algebraic equation of frequency… Roy — December 18, 2020 The equation above yields what the Laplace Transform is for any function of the form tneat, t n e a t, where n n and a a are arbitrary scalars. Here differential equation of time domain form is first transformed to algebraic equation of frequency domain form. The Nature of the s-Domain; Strategy of the Laplace Transform; Analysis of Electric Circuits; The Importance of Poles and Zeros; Filter Design in the s-Domain Laplace Transform Theory - 1 Existence of Laplace Transforms Before continuing our use of Laplace transforms for solving DEs, it is worth digressing through a quick investigation of which functions actually have a Laplace transform. , Fourier) domain! 0 A( ) A( ) vo vo σ ω s = = And therefore, for the inverting configuration: 2 1 () A( ) () oc . Using the Laplace Transform to Solve Initial Value Problems. Calculate differential equation using matlab, laplace transformation ti-89 Lars Frederiksen, square worksheets, where would i go to get help online with my Alebra homework. Laplace transform is yet another operational tool for solving constant coe- cients linear dierential equations. You can think of the Laplace transform as some kind of abstract \machine. 
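As a concrete illustration of transforming a differential equation term by term into an algebraic equation, here is a small SymPy sketch. The initial-value problem y'' + 3y' + 2y = 0 with y(0) = a, y'(0) = b is an assumed example, not one taken from the notes above:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
a, b = sp.symbols('a b')
Y = sp.Symbol('Y')               # stands for L{y}(s)

# Derivative rule applied to each term:
L_ypp = s**2*Y - s*a - b         # L{y''}
L_yp  = s*Y - a                  # L{y'}
L_y   = Y                        # L{y}

alg = sp.Eq(L_ypp + 3*L_yp + 2*L_y, 0)   # the ODE has become an algebraic equation in Y
Y_s = sp.solve(alg, Y)[0]
print(sp.simplify(Y_s))          # (a*s + 3*a + b)/(s**2 + 3*s + 2)

The resulting Y(s) can then be inverted (by a table, partial fractions, or a CAS) to recover y(t).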
Consider an LTI system exited by a complex exponential signal of the form x (t) = Ge st. The inverse Laplace transform We can also define the inverse Laplace transform: given a function X(s) in the s-domain, its inverse Laplace transform L−1[X(s)] is a function x(t) such that X(s) = L[x(t)]. I am trying to find out the inverse Laplace transform of the state transition matrix obtained using inv(S*I-A). Then using linearity of Laplace transformation and then the table, we have Essentially the trick is to reduce the given function to one of the elementary functions whose Laplace transform may be found in the table. using a calculator free worksheet online algebra games simplify and explain for dummies matrices in algebra laplace texas ti89 ti-83+ factoring program java Latin is a free inverse Laplace calculator for Windows. We can also do transformations to equations involving derivatives and integrals. Example 1: Find the inverse Laplace transform of each of the given function: F(s) = (2/s) + 3/(s - 4) The Laplace transform of a function, f(t), is defined as 0 Fs()f(t)ftestdt(3-1) ==L∫∞ − where F(s) is the symbol for the Laplace transform, Lis the Laplace transform operator, and f(t) is some function of time, t. A Useful Analogy. Let's see how we can use (14) as the starting point to determine a solution to Laplace's equation with specific boundary conditions. Usually, to find the Laplace Transform of a function, one uses partial fraction decomposition (if needed) and then consults the table of Laplace Transforms. Solve for the output variable. Put initial conditions into the resulting equation. 13. iLaplace transforms from Laplace to time domain. Laplace Transform Calculator Laplace Transform Calculator is a free online tool that displays the transformation of the real variable function to the complex variable. When we come to solve differential equations using Laplace transforms we shall use the following alternative notation: [ ] = L x x & [ ] = − L x s x x (0) &&[ ] 2 = − − L x s x s x x & (0) (0) . 6 The Transfer Function and the Convolution Integral. Since the Laplace Transform is a linear transform, we need only find three inverse transforms. Find the inverse transform of Y(s). The function is known as determining function, depends on. By using this website, you agree to our Cookie Policy. 7. Math stories or problems using elimination and substitution, how to find the lcd of fractions, GRAPH OF NEGATIVE X-CUBED, hard algebraic problem, cheating on your maths in Aug 12, 2017 · Laplace Transform is the method to find the solutions of ordinary differential equations. C. S. Conseqently, Laplace transforms may be used to solve linear differential equations with constant coefficients as follows: Take Laplace transforms of both sides of equation using property above to express derivatives Solve for F(s), Y(s), etc. Inverse Laplace transform inprinciplewecanrecoverffromF via f(t) = 1 2…j Z¾+j1 ¾¡j1 F(s)estds where¾islargeenoughthatF(s) isdeflnedfor<s'¾ surprisingly,thisformulaisn'treallyuseful! The Laplace transform 3{13 Aug 31, 2015 · Introduction Transformation in mathematics deals with the conversion of one function to another function that may not be in the same domain. All right, in this first example we will use this nice characteristics of the derivative of the Laplace transform to find transform for the function . 2-3 Circuit Analysis in the s Domain. Take the Laplace transforms of both sides of an equation. 
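The opening remark above considers an LTI system driven by x(t) = G e^{st}. The sketch below, with an assumed impulse response h(t) = e^{-2t} that is not from the original notes, illustrates why such exponentials are convenient: the output is just H(s) times the input, where H(s) is the Laplace transform of h:

import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
G = sp.Symbol('G')

h = sp.exp(-2*tau)                      # assumed example impulse response h(tau)
x_shift = G*sp.exp(s*(t - tau))         # input G*exp(s*t) evaluated at t - tau

response = sp.integrate(h*x_shift, (tau, 0, sp.oo))          # convolution integral
H = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)   # H(s) = 1/(s + 2)

print(sp.simplify(response))                    # G*exp(s*t)/(s + 2)
print(sp.simplify(response - G*H*sp.exp(s*t)))  # 0, i.e. output = H(s) * input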
The Laplace transform extends this approach by incorporating damped as well as steady-state sinusoids. Transform of Periodic Functions; 6. If the money really an issue, you can get both of these used. For Laplace transforms, the second and third arguments will typically be t and s, respectively. Then, by definition, f is the inverse transform of F. An important property of the Laplace transform is: This property is widely used in solving differential equations because it allows to reduce the latter to algebraic ones. The Laplace Transform The Laplace transform of a function of time f (t) is given by the following integral − Laplace transform is also denoted as transform of f (t) to F (s). This transform is also extremely useful in physics and engineering. While tables of Laplace transforms are widely available, it is important to understand the properties of the Laplace transform so that you can construct your own table. Laplace transforms find uses in solving initial value problems that involve linear, ordinary differential equations with constant coefficients. In frequency-domainanalysis, we break the input x(t) into exponentials componentsof the form est, where s is the complex frequency: The Laplace transform is a technique for analyzing these special systems when the signals are continuous. We define the Laplace transform of a function f in the following way. Using Laplace transforms is a common method of solving linear systems of differential equations with initial conditions. 1 Transforms of derivatives. Solution. The integral R R f(t)e¡stdt converges if jf(t)e¡stjdt < 1;s = ¾ +j! A. To multiple two numbers we convert each number into their respective logarithm and add. (e) the Laplace Transform does not exist (singular at t = 0). Inverse of a Product L f g t f s ĝ s where Solution- Using the formula for taking the Laplace transform of a derivative, we get that the Laplace transform of the left side of the differential equation is: (s2X(s)−sx(0)− x′(0))−6(sX(s)−x(0))+8X(s). INTRODUCTION The Laplace Transform is a widely used integral transform Oct 22, 2020 · Laplace transformation is a technique for solving differential equations. To find the Laplace Transform of a piecewise defined function , select Laplace Transform in the Main Menu, next select option3 "Piecewise defined function" in the dropdown menu as shown below: Next, enter the two pieces/functions as shown below. It handles initial conditions up front, not at the end of the process. Solving initial value problems using the method of Laplace transforms To solve a linear differential equation using Laplace transforms, there are only 3 basic steps: 1. Inverse Laplace transform calculator is the quick online tool which can instantly give solution to the integrals. Let us see how the Laplace transform is used for differential equations. On page 370 it shows the following for Laplace Transform: Laplace transform converts a time domain function to s-domain function by integration from zero to infinity of the time domain function, multiplied by e-st. Apr 13, 2018 · 2. Inverse Laplace transform of: Variable of function: Time variable: Submit: Computing Get this widget. This makes the problem much easier to solve. 1 The Fourier transform and series of basic signals (Contd. I do not have any user variables defined as 's' or 't'. So, let's Or, we can use the Fourier transform Now, recall that the variable s is a complex frequency: sj=σ+ ω. 
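Since the notes above mention damped as well as steady-state sinusoids, a quick symbolic check of the standard damped-sinusoid entry may help; a and omega are treated as positive parameters purely for illustration:

import sympy as sp

t, s, a, w = sp.symbols('t s a omega', positive=True)
F = sp.laplace_transform(sp.exp(-a*t)*sp.sin(w*t), t, s, noconds=True)
print(sp.simplify(F))   # omega/((s + a)**2 + omega**2)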
Laplace Transforms with Examples and Solutions Solve Differential Equations Using Laplace Transform Find the Laplace transform of the matrix M. Solve by inverse Laplace transform: (tables) Solution is obtained by a getting the inverse Laplace transform from a table Alternatively we can use partial fraction expansion to compute the solution using simple inverse transforms Laplace Transform The Laplace transform can be used to solve di erential equations. To convert Laplace transform to Fourier tranform, replace s with j*w, where w is the radial frequency. See more ideas about math formulas, physics and mathematics, mathematics. Dec 16, 2019 · Laplace Transform of the Dirac Delta Function using the TiNspire Calculator To find the Laplace Transform of the Dirac Delta Function just select the menu option in Differential Equations Made Easy from www. The Laplace transform is a particularly elegant way to solve linear differential equations with constant coefficients. f (t) = e-t + 2e-2t + te-3t C. The transform allows equations in the "time domain" to be transformed into an equivalent equation in the Complex S Domain. 1 and 2-18. Example 6. Algebraic, Exponential, Logarithmic, Trigonometric, Inverse Trigonometric, Hyperbolic, and Inverse Hyperbolic Laplace transforms find important applications in solving ordinary differential equations with discontinuities. Be-sides being a di erent and e cient alternative to variation of parame-ters and undetermined coe cients, the Laplace method is particularly advantageous for input terms that are piecewise-de ned, periodic or im-pulsive. In time-domain analysis, we break input x(t) into impulsive component, and sum the system response to all these components. This simple equation is solved by purely algebraic manipulations. The Laplace transform of a constant multiplied by a function equals the constant multiplied by the transform: L { a f (t) } = a L { f (t) } An integer polynomial is a polynomial where each term has an integer coefficient, and a non-negative order Mar 05, 2016 · laplace transform 1. Use the Laplace transform version of the sources and the other components become impedances. 1. Find the inverse Laplace Transform of . Any voltages or currents with values given are Laplace-transformed using the functional and operational tables. LAPLACE TRANSFORM METHODS we get bx(s) = s2 (s¡1)(s2+2s¡3) a 2. To use Mathcad to find Laplace transform, we first enter the expres-sion of the function, then press [Shift][Ctrl][. Laplace transform is the dual(or complement) of the time-domain analysis. However, before we can solve differential equations, we need to look at the reverse process of finding functions of t from given Laplace transforms. Integro-Differential Equations and Systems of DEs; 10 Apr 29, 2012 · Mar 9, 2019 - Explore Mohammad Amir's board "Laplace transform" on Pinterest. laplace y′′−10y′ + 9y = 5t,y (0) = −1,y′ (0) = 2 laplace y′ + 2y = 12sin (2t),y (0) = 5 laplace y′′−6y′ + 15y = 2sin (3t),y (0) = −1,y′ (0) = −4 laplace dy dt + 2y = 12sin (2t),y (0) = 5 May 21, 2020 · A Special Video Presentation🎦Reverse Engineering Method for Finding the Laplace Transform of a Function Using Calculator! Shout out to @EngineeringWinsPH😍📑 Y Laplace Transform Calculator. And overlook to use the inverted Laplace transform side. in units of radians per second (rad/s). Property B For rational Laplace transforms the ROC does not contain any poles. Following table mentions Laplace transform of various functions. This is used to solve differential equations. 25. 
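The exercise quoted above asks for the Laplace transform of a matrix M; in SymPy this can be done entry by entry with applyfunc. The matrix used below is an assumed example, not the M from the quoted exercise:

import sympy as sp

t, s = sp.symbols('t s', positive=True)

M = sp.Matrix([[sp.exp(2*t), t],
               [sp.sin(3*t), sp.Integer(1)]])

L = lambda f: sp.laplace_transform(f, t, s, noconds=True)   # element-wise transform
print(M.applyfunc(L))
# Matrix([[1/(s - 2), s**(-2)], [3/(s**2 + 9), 1/s]])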
When such a differential equation is transformed into Laplace space, the result is an algebraic equation, which is much easier to solve. Let Y(s) be the Laplace transform of y(t). Properties of Laplace Transform; 4. Like all transforms, the Laplace transform changes one signal into another according to some fixed set of rules or equations. topic name: laplace transform electrical department student's name enrollment number anuj verma 141240109003 karnveer chauhan 141240109011 machhi nirav 141240109012 malek muajhidhusen 141240109013 dhariya parmar 141240109014 jayen parmar 141240109015 parth yadav 141240109016 harsh patel Otherwise, join us now to start using these powerful webMathematica calculators. The process of solution consists of three main steps: The given \hard" problem is transformed into a \simple" equation. We can apply the Laplace Transform integral to more than just functions. Transform the circuit. These slides are not a resource provided by your lecturers in this unit. It is easy to calculate Laplace transforms with Sage. When the arguments are nonscalars, laplace acts on them element-wise. Apr 05, 2019 · IVP's with Step Functions – This is the section where the reason for using Laplace transforms really becomes apparent. Using Inverse Laplace to Solve DEs; 9. This Laplace transform turns differential equations in time, into algebraic equations in the Laplace domain thereby making them easier to solve. How about the translation? That was taken care of by the exponential factor. T. Oct 25, 2020 · This can be done by using the property of Laplace Transform known as Final Value Theorem. where c is chosen so that all singular points of f ( s ) lie to the left of the line Re { s } = c in the complex plane s . Laplace Transform Calculator is online tool to find laplace transform of a given function f(t). See the Laplace Transforms workshop if you need to revise this topic rst. f (t) = t? + e-2t sin (3t) f. 1 Transforms of Derivatives The Main Identity To see how the Laplace transform can convert a differential equation to a simple algebraic equation, let us examine how the transform of a function's derivative, L f ′(t) s = L df dt s = Z ∞ 0 df e−st dt = Z ∞ e−st df dt , is related to the corresponding transform of the original Laplace Transform (inttrans Package) Introduction The laplace Let us first define the laplace transform: The invlaplace is a transform such that . f (t) 1 3t 5e2 2e 10t. Here time-domain is t and S-domain is s. Come to Algebra-equation. Transcribed Image Text 2. Here, s can be either a real variable or a complex quantity. Laplace transformation is a powerful method of solving linear differential equations. The Laplace Transform is applied to each terms at first and then the Inverse Laplace Transform is applied at the end after solving them to get the answer in our actual given domain. Definition 6. Lff(t)g= Z 1 0 e stf(t)dt= F(s); L 1fF(s)g= f(t) Apply the Laplace transform to u(x;t) and to the PDE. P. The same thing with the b. The Laplace transform is linear, and is the sum of the transforms for the two terms: If , i. 1 Introduction The Laplace transform provides an effective method of solving initial-value problems for linear differential equations with constant coefficients. The Laplace transform is used to quickly find solutions for differential equations and integrals. For most pharmacokinetic problems we only need the Laplace transform for a constant, a variable and a differential. Jan 16, 2005 · The Laplace Transform. 
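The full workflow, transform, solve the resulting algebraic equation, then invert, can be sketched end to end in SymPy. The example uses one of the initial-value problems quoted in these notes, y' + 2y = 12 sin(2t) with y(0) = 5; the use of SymPy itself is an illustrative choice:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.Symbol('Y')

rhs = sp.laplace_transform(12*sp.sin(2*t), t, s, noconds=True)   # 24/(s**2 + 4)
alg = sp.Eq((s*Y - 5) + 2*Y, rhs)          # transform each term, using y(0) = 5
Y_s = sp.solve(alg, Y)[0]

y = sp.inverse_laplace_transform(sp.apart(Y_s, s), s, t)
print(sp.simplify(y))   # 8*exp(-2*t) - 3*cos(2*t) + 3*sin(2*t), times Heaviside(t)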
Laplace transform with a Heaviside function by Nathan Grigg The formula To compute the Laplace transform of a Heaviside function times any other function, use L n u c(t)f(t) o = e csL n f(t+ c) o: Think of it as a formula to get rid of the Heaviside function so that you can just compute the Laplace transform of f(t+ c), which is doable. The Fourier transform of a multiplication of 2 functions is equal to the convolution of the Fourier transforms of each function: Convolution calculator; Laplace How to solve: Use the Laplace transform to solve the following initial value problem y'' - 4y' - 32y = 0 y(0) = 4, y'(0) = 3 (a) First, using Y for Teachers for Schools for Working Scholars Salzer's Method for Numerical Evaluation of Inverse Laplace Transform Involving a Bessel Function Housam Binous; Integral Evaluation Using the Monte Carlo Method Housam Binous and Brian G. Boyd EE102 Lecture 7 Circuit analysis via Laplace transform † analysisofgeneralLRCcircuits † impedanceandadmittancedescriptions † naturalandforcedresponse I am trying to find out the inverse Laplace transform of the state transition matrix obtained using inv(S*I-A). So far, the Laplace transform simply gives us another method with which we can solve initial value problems for linear di erential equa-tions with constant coe cients. Example: Compute the inverse Laplace transform q(t) of Q(s) = 3s (s2 +1)2 You could compute q(t) by partial fractions, but there's a less tedious way. If your instructor (or your personal inclination) allows/wants you to load applications/programs, the basic ones (which with all due respect, describe your courses) will work on both models. † Deflnition of Laplace transform, † Compute Laplace transform by deflnition, including piecewise continuous functions. From the application point of view, the Inverse Laplace Transform is very usefrl. Then, we nd y(t) using the formula y(t) = v(t t 0). Like the Fourier transform, it is used for solving the integral equations. Dirichlet's conditions are used to define the existence of Laplace transform. Take the Laplace Transform of the differential equation using the derivative property (and, perhaps, others) as necessary. From a table of Laplace transforms, we can redefine each term in the differential equation. Note that there is not a good symbol in the equation editor for the Laplace transform. Where s = any complex number = σ + j ω, Laplace transforms are a convenient method of converting differential equations into integrated equations, that is, integrating the differential equation. When solving initial-value problems using the Laplace transform, we perform the following steps in sequence: 1) Apply the Laplace Transform to both sides of the equation. Among these is the design and analysis of control systems featuring feedback from the output to the input. This operation is the inverse of the direct Laplace transform, where the function is found for a given function . Then we calculate the roots by simplification of this algebraic equation. Take inverse Laplace transform to attain ultimate solution of equation The Laplace transform The Laplace transform is a mathematical tool that is commonly used to solve differential equations. Where I is the identity matrix, and A is a state-space matrix(24x24 matrix). 2: Transforms of Derivatives and ODEs. Laplace Transform of the sine of at is equal to a over s squared plus a squared. Why is doing something like this important – there are tables of Laplace transforms all over the place, aren't they? 
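The shift formula quoted above, L{u_c(t) f(t)} = e^{-cs} L{f(t + c)}, can be verified symbolically. Below is a small SymPy check with the illustrative choices f(t) = t^2 and c = 3 (these values are not from the original notes); the left side is computed directly from the defining integral:

import sympy as sp

t, s = sp.symbols('t s', positive=True)

c = 3
f = t**2

lhs = sp.integrate(f*sp.exp(-s*t), (t, c, sp.oo))                     # L{u_c(t) f(t)}
rhs = sp.exp(-c*s)*sp.laplace_transform(f.subs(t, t + c), t, s, noconds=True)

print(sp.simplify(lhs - rhs))   # 0, confirming the formula for this example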
The answer is to this is a firm "maybe". Complex frequency is defined as follows: To compute the direct Laplace transform, use laplace. TI-84 Plus SE. 647-649. For example, suppose that we wish to compute the Laplace transform of \(f(x) = t^3 e^t - \cos t\text{. Exercise 6. These slides cover the application of Laplace Transforms to Heaviside functions. This calculator performs the Inverse Laplace Transform of the input function. 1 Figure 1. f(t) = U2(t)*e^(-t) I know how to use the Laplace transform by using the calcolator but I don't know how to add the Unit Step Function (U2). Any time you actually need advice with math and in particular with laplace transform calculator or variable come visit us at Algebra1help. studysmarter. Here, a is 2. 3. Example: Again we can use this to find a new transforms: Use "Integration of transform" to find an inverse of a transform: Find : The Inverse Transform Lea f be a function and be its Laplace transform. Simplify algebraically the result to solve for L{y} = Y(s) in terms of s. L[tneat] = n! (s −a)n+1 L [ t n e a t] = n! (s - a) n + 1 In general, Laplace Transforms "operate on a function to yield another function" (Poking, Boggess, Arnold, 190). uwa. many many many thanks for any help. y''=s^2Y (s)-sy (0)-y' (0) y Dec 17, 2018 · The Laplace transform is an integral transform used in solving differential equations of constant coefficients. Inverse Laplace Transform is used in solving differential equations without finding the gen- eral solution and arbitrary constants. 2) Solve the resulting algebra problem from step 1. Higgins ; Comparing Four Methods of Numerical Inversion of Laplace Transforms (NILT) Claude Montella and Jean-Paul Diard It is obtained by taking the Laplace transform of impulse response h(t). If we set σ=0, then sj= ω, and the functions Zs() and A( ) vo s in the Laplace domain can be written in the frequency (i. Laplace transform calculator is the online tool which can easily reduce any given differential equation into an algebraic expression as the answer. Workshop resources:These slides are available online: www. 4-5 The Transfer Function and Natural Response. i. Definition 4. This is going to be 2 over s squared plus 4. These types of problems usually arise in modelling of phenomena. Transforms of Integrals; 7. Sep 23, 2015 · Around 1785, Pierre-Simon marquis de Laplace, a French mathematician and physicist, pioneered a method for solving differential equations using an integral transform. The possible advantages are that we Apr 13, 2018 · 2. For this purpose, let's use the example in Boas pp. com I'm looking for a "polite" way to calculate this integral using Laplace transform: $$ \int_0^{+\infty} \frac{e^{-ax} - e^{-bx} }{x} dx. Build your own widget But what about the second one? If I use the inverse Laplace Transform of the product $\cfrac{F(s)}{s^2+4}$, I have to compute the convolution between $\cos{2t}$ and $\cfrac{1}{4+\cos{2t}}$, which is $$\int_0^t \frac{\sin(2t-2u)}{4+\cos(2u)}\,du$$ Now, I could use the fact that $\sin(a-b)=\sin a\cos b-\sin b\cos a$. For more information about the application of Laplace transform in engineering, see this Wikipedia article and this Wolfram article . 
Algebraic, Exponential, Logarithmic, Trigonometric, Inverse Trigonometric, Hyperbolic, and Inverse Hyperbolic Laplace Transform The First Shift Theorem The first shift theorem states that if L {f (t)} = F (s) then L {e at f (t)} = F (s - a) Therefore, the transform L {e at f (t)} is thus the same as L {f (t)} with s everywhere in the result replaced by (s - a) The purpose of the Laplace Transform is to transform ordinary differential equations into algebraic equations. Jul 10, 2020 · Syntax : laplace_transform(f, t, s) Return : Return the laplace transformation and convergence condition. We will use Laplace transforms to solve IVP's that contain Heaviside (or step) functions. Now that we know how to find a Laplace transform, it is time to use it to solve differential equations. Using this terminology, the equation given above for the determinant of the 3 x 3 matrix A is equal to the sum of the products of the entries in the first row and their cofactors: This is called the Laplace expansion by the first row. 0237 and B = -5. If we let be 0 and rearrange the equation, The above is the transfer function that will be used in the Bode plot and can provide valuable information about the system. Conditions for Existence of Laplace Transform. A common situation is when f˜(s) is a polynomial in s, or more generally, a ratio of polynomials; then we use partial fractions to simplify the expressions. Here time-domain variable is t and S-domain variable is s. When transformed into the Laplace domain, differential equations become polynomials of s. I have read the manual. TiNspireApps. This will allow us to solve differential equations using Laplace Transforms. com. It converts a function of time, f(t), into a function of complex frequency. 1) L(f) = Z ∞ 0 e−stf(t)dt. Unilateral Laplace Transform Up: Laplace_Transform Previous: Higher Order Systems System Algebra and Block Diagram. }\) We can use the Sage command laplace. Laplace transform is a central feature of many courses and methodologies that build on the foundation provided by Engs 22. We denote Y(s) = L(y)(t) the Laplace transform Y(s) of y(t). 1: The Laplace transform as a metaphorical \machine. Inverse Laplace Transform. State Equations Complex Fourier transform is also called as Bilateral Laplace Transform. Using the definition of Laplace Transform in each case, the integration is reasonably straightforward: Solving PDEs using Laplace Transforms, Chapter 15 Given a function u(x;t) de ned for all t>0 and assumed to be bounded we can apply the Laplace transform in tconsidering xas a parameter. Laplace Transform Definition; 2a. We transform the equation from the t domain into the s domain. It can be shown that the Laplace transform of a causal signal is unique; hence, the inverse Laplace transform is uniquely defined as well. Laplace transform of: Variable of function: Transform variable: Calculate: Computing Get this widget. To understand the Laplace transform, use of the Laplace to solve differential equations, and The inverse Laplace transform of the function is calculated by using Mellin inverse formula: Where and . 1 Circuit Elements in the s Domain. The main advantage of using Laplace transforms is that the solution of the differential equations is reduced to algebraic Conseqently, Laplace transforms may be used to solve linear differential equations with constant coefficients as follows: Take Laplace transforms of both sides of equation using property above to express derivatives; Solve for F (s), Y (s), etc. 
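The first shift theorem stated above can be checked the same way; the choice f(t) = cos(3t) below is purely illustrative. Note that, by default, SymPy's laplace_transform also returns a convergence condition, which noconds=True suppresses:

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

f = sp.cos(3*t)
F = sp.laplace_transform(f, t, s, noconds=True)                 # F(s) = s/(s**2 + 9)
G = sp.laplace_transform(sp.exp(a*t)*f, t, s, noconds=True)     # L{e^{at} f(t)}

print(sp.simplify(G - F.subs(s, s - a)))    # 0, i.e. G(s) = F(s - a)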
They take three arguments - the item to be transformed, the original variable, and the transformed variable. The inverse Laplace transform of F(s), denoted L−1[F(s)], is the function f Use the convolution theorem to find inverse Laplace transform of F (s)= 1 s(s−4)2 F (s) = 1 s (s − 4) 2. (f) does not exist (infinite number of (finite) jumps), also not defined unless t is an integer. We begin with the definition: Laplace Transform 6. 1. The kinds of problems where the Laplace Transform is invaluable occur in electronics. The z-transform is a similar technique used in the discrete case. Dec 05, 2014 · Alternative notations for the Laplace transform of f(t) are L[f], F(), and fL(). Laplace Transform (inttrans Package) Introduction The laplace Let us first define the laplace transform: The invlaplace is a transform such that . Example 2: Find Laplace transform of Solution: Observe that 5t = e t log 5. Materials include course notes, practice problems with solutions, a problem solving video, and problem sets with solutions. humanities and science department. This topic identifies the key learning points of how to carry out circuit analysis using the Laplace transform, as well as the concept of the transfer function. LTI system means Linear and Time invariant system, according to the linear property as the input is zero then output also becomes zero. The Laplace transform describes signals and systems not as functions of time, but as functions of a complex variable s. The function f(t) has finite number of maxima and minima. f (t) = te-t + 2t cost 3. 7 The Transfer Function and the Steady-State Sinusoidal Response. 1: Verify Table 6. Usually, to find the Inverse Laplace Transform of a function, we use the property of linearity of the Laplace Transform. im/NcVLH. Without Laplace transforms solving these would involve quite a bit of work. Laplace transform is a powerful transformation tool, which literally transforms the original differential equation into an elementary algebraic expression. Laplace Transform of Array Inputs Find the Laplace transform of the matrix M. Recall the Laplace transform for f(t). To understand the Laplace transform, use of the Laplace to solve differential equations, and tions but it is also of considerable use in finding inverse Laplace transforms since, using the inverse formulation of the theorem of Key Point 8 we get: Key Point 9 Inverse Second Shift Theorem If L−1{F(s)} = f(t) then L−1{e−saF(s)} = f(t−a)u(t−a) Task Find the inverse Laplace transform of e−3s s2. It looks a little hairy. It is the easiest method to solve the differential equations. Property C If the Laplace transform of x(t) is rational then the ROC is the The Laplace transform is a well established mathematical technique for solving differential equations. The symbols ℱ and ℒ are identified in the standard as U+2131 SCRIPT CAPITAL F and U+2112 SCRIPT CAPITAL L, and in LaTeX, they can be produced using \mathcal{F} and \mathcal{L}. The Laplace transform is (1) X L (s) = 1 s + a Since a > 0, the ROC of X L (s) contains the imaginary axis, and the Fourier transform of x (t) is simply obtained by evaluating X L (s) on the imaginary axis s = j ω: (2) X F (ω) = X L (j ω) = 1 j ω + a Learn the Laplace Transform Table in Differential Equations and use these formulas to solve differential equation. 
We keep a huge amount of good quality reference information on matters varying from factoring trinomials to adding and subtracting rational expressions Laplace transform is the most commonly used transform in calculus to solve Differential equations. This property simply recognizes that the Laplace transform goes to infinity at a pole so the Laplace transform integral will not converge at that point and hence it cannot be in the ROC. Delay of a Transform L ebt f t f s b Results 5 and 6 assert that a delay in the function induces an exponential multiplier in the transform and, conversely, a delay in the transform is associated with an exponential multiplier for the function. The crucial idea is that operations of calculus on functions are replaced by operations of algebra on transforms. Find the Laplace transform of the matrix M. I found A = 5. See full list on dummies. If you want to compute the inverse Laplace transform of (8) 24 () + = ss F s, you can use the following command lines. Just determining the regular transform is a procedure, likewise, known as a unilateral Laplace transform. Proof of Laplace Transform of Derivatives $\displaystyle \mathcal{L} \left\{ f'(t) \right\} = \int_0^\infty e^{-st} f'(t) \, dt$ Using integration by parts, This section provides materials for a session on operations on the simple relation between the Laplace transform of a function and the Laplace transform of its derivative. It is "algorithmic" in that it follows a set process. Our online calculator, build on Wolfram Alpha system allows one to find the Laplace transform of almost any, even very complicated function. Solution: We can express this as four terms, including two complex terms (with A 3 =A 4 *) Cross-multiplying we get (using the fact that (s+1-2j)(s+1+2j)=(s 2 +2s+5)) Then equating like powers of s Inverse Laplace Transform Calculator. By applying Laplace's transform we switch from a function of time to a function of a complex variable s (frequency) and the differential equation becomes an algebraic equation. How to find the Laplace transform of a periodic function ? First, find the Laplace transform of the window function . Table of Laplace Transformations; 3. Taking the Laplace transform of the differential equation we have: The Laplace transform of the LHS L[y''+4y'+5y] is LaPlace Transform in Circuit Analysis Recipe for Laplace transform circuit analysis: 1. au !Numeracy and Maths !Online Resources The convolution property of the Laplace transform �1(�)∗�2(�)↔�1(�)�2(�) is very important in the analysis of LTI systems, since it allows us to deal with the transformed zero-state response as the product of two rational functions �𝑧�(�)=�(�)�(�)as opposed to the convolution integral, �𝑧�(�)=�(�)∗ℎ(�). mechanical system, How to use Laplace Transform in nuclear physics as well as Automation engineering, Control engineering and Signal processing. The key feature of the Laplace transform that makes it a tool for solving differential equations is that the Laplace transform of the derivative of a function is an algebraic expression rather than a differential expression. , grows without a bound when , the intersection of the two ROCs is a empty set, the Laplace transform does not exist. We use the letter s to denote complex frequency, and thus f(t) becomes F(s) after we apply the Laplace transform. Take inverse Laplace transform to attain ultimate solution of equation Nov 16, 2009 · For this we need the inverse Laplace transform of our H(s). 
2, the Fourier transform of function f is denoted by ℱ f and the Laplace transform by ℒ f. using variation of parameters. 1 Introduction and Definition In this section we introduce the notion of the Laplace transform. For a signal f(t), computing the Laplace transform (laplace) and then the inverse Laplace transform (ilaplace) of the result may not return the original signal for t < 0. Advantages of using Laplace Transforms to Solve IVPs It converts an IVP into an algebraic process in which the solution of the equation is the solution of the IVP. f (t) = 3 cos (6t) e. First use partial fraction expansion, or your fancy calculator, to expand the transfer function. Inverse Laplace Transform Calculator is online tool to find inverse Laplace Transform of a given function F(s). com and uncover algebra course, logarithmic functions and a number of other math topics transformation of a function f(t) from the time domain into the complex frequency domain, F(s). The Laplace transform is a method of solving ODEs and initial value problems. The steps to using the Laplace and inverse Laplace transform with an initial value are as follows: 1) We need to know the transformations we have to apply, which are: Aug 01, 2020 · Computer algebra systems have now replaced tables of Laplace transforms just as the calculator has replaced the slide rule. Use some algebra to solve for the Laplace of the system component of interest. Example 1: Find the Laplace transform of the given function: f(t) = t 3 – 7e 4t Given function:t 3 – 7e 4t In order to find the Laplace transform for this function, we use the Standard Laplace Laplace Transform Calculator The above calculator is an online tool which shows output for the given input. The Laplace Transform can be used to solve differential equations using a four step process. Given The Laplace Transform in Circuit Analysis. Jun 17, 2017 · The Laplace transform is an integral transform that is widely used to solve linear differential equations with constant coefficients. Laplace transforms can be computed using a table and the linearity property, "Given f(t) and g(t) then, L\left\{af(t)+bg(t)\right\}=aF(s)+bG(s). Final value theorem and initial value theorem are together called the Limiting Theorems. transfer function and impulse response are only used in LTI systems. The Laplace transform provides us with a complex function of a complex variable. Hello, Is there a way to put the below equasion on the calculator to get the Laplace transfor. no hint Solution. math. This is the Laplace transform of the unit box function. c is the breakpoint. Inverse of the Laplace Transform; 8. Without any loss of meaning, we can use talk about finding the potential inside a sphere rather than the temperature inside a sphere. " Consider the Laplace transform: Some manipulations must be done before Y(s) can be inverted since it does not appear directly in our table of Laplace transforms. Solve Differential Equations Using Laplace Transform. We would like the script L, which is unicode character 0x2112 and can be found under the Lucida Sans Unicode font, but it can't be accessed from the equation editor. The best way to convert differential equations into algebraic equations is the use of Laplace transformation Inverse Laplace Transform Online Calculator. Deflnition: Given a function f(t), t ' 0, its Laplace transform F(s) = Lff(t)g is deflned as F(s) = Lff(t)g: = Z 1 0 e¡stf(t)dt = lim: A!1 Z A 0 e¡stf(t)dt We say the transform converges if the limit exists, and 6. f (t) = t cos (3) g. 
Inverse Laplace transforms work very much the same as the forward transform. Specify the independent and transformation variables for each matrix entry by using matrices of the same size. Pan 6 12. Laplace transforms are fairly simple and straightforward. Equations 1 and 4 represent Laplace and Inverse Laplace Transform of a signal x(t). Again, we are using the bare bone definition of the Laplace transform in order to find the question to our answer: Then, is nothing but or, short: and. This is denoted by L(f)=F L−1(F)=f. The Laplace transform of a function is defined to be . As you launch this software, it provides you two options: New quick conversion and Create New Conversion. The integral is computed using numerical methods if the third argument, s, is given a numerical value. Overview of the Transform of a Derivative and Steps for using Laplace Transforms to Solve ODEs; Example #1 – use Laplace transform calculator show solved: use transforms methods the complex analysis made easy step. 24 illustrates that inverse Laplace transforms are not unique. Use Laplace transform table to find the Laplace transform of the following time functions: a. Then, use the formula: F(s) f (t) FT (s) fT What does the Laplace transform do, really? At a high level, Laplace transform is an integral transform mostly encountered in differential equations — in electrical engineering for instance — where electric circuits are represented as differential equations. All of the these have complex roots Using the Laplace transform nd the solution for the following equation @ @t y(t) = 3 2t with initial conditions y(0) = 0 Dy(0) = 0 Hint. The only difference is that the order of variables is reversed. This is because we utilize one side of the Laplace transform (the typical side). sardar patel college of engineering,bakrol 2. By using the above Laplace transform calculator, we convert a function f (t) from the time domain, to a function F (s) of the complex variable s. However, it can be shown that, if several functions have the same Laplace transform, then at most one of them is continuous. A final property of the Laplace transform asserts that 7. The forward and inverse Laplace transform commands are simply laplace and invlaplace. Convolution Theorem of Laplace transform: The convolution theorem is helpful in determining May 02, 2017 · Laplace Transform Information using TI89's Differential Equations Made Easy. Example #1 : In this example, we can see that by using laplace_transform() method, we are able to compute the laplace transformation and return the transformation and convergence condition. Laplace Transform It's time to stop guessing solutions andfind a systematic way offinding solutions to non homogeneous linear ODEs. It is named in honor of the great French mathematician, Pierre Simon De Laplace (1749-1827). Laplace transforms applied to the tvariable (change to s) and the PDE simpli es to an ODE in the xvariable. " The statement means that after you've taken the transform of the individual functions, then you can add back any constants and add or subtract the results. 8. From the table, we see that the inverse of 1/(s-2) is exp(2t) and that inverse of 1/(s-3) is exp(3t). B Tables of Fourier Series and Transform of Basis Signals 325 Table B. Recall that the Laplace transform of a function is F (s) = L (f (t)) = ∫ 0 ∞ e − s t f (t) d t. Laplace Transforms - vCalc Processing Laplace Transforms to Solve BVPs for PDEs Laplace transforms can be used solve linear PDEs. 
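In SymPy the inverse call mirrors the forward one, with the transform variable and the time variable swapped in the argument list; a minimal sketch, reusing the sin(6t) example from earlier in these notes:

import sympy as sp

t, s = sp.symbols('t s', positive=True)

F = sp.laplace_transform(sp.sin(6*t), t, s, noconds=True)   # forward: (f, t, s)
f = sp.inverse_laplace_transform(F, s, t)                   # inverse: (F, s, t)

print(F)   # 6/(s**2 + 36)
print(f)   # sin(6*t)*Heaviside(t)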
It is similar to the use of logarithms to multiple or divide numbers. And it's minus because this is minus. The syntax is as follows: LaplaceTransform [ expression , original variable , transformed variable ] Inverse Laplace Transforms. 8 The Impulse Function in Circuit Analysis EE 230 Laplace circuits – 5 Now, with the approach of transforming the circuit into the frequency domain using impedances, the Laplace procedure becomes: 1. Integro-Differential Equations and Systems of DEs; 10 In mathematics, the Laplace transform, named after its inventor Pierre-Simon Laplace (/ ləˈplɑːs /), is an integral transform that converts a function of a real variable {\displaystyle t} (often time) to a function of a complex variable {\displaystyle s} (complex frequency). Some understanding of the LAPLACE TRANSFORMS 5. edu Laplace transform is named in honour of the great French mathematician, Pierre Simon De Laplace (1749-1827). I am not typing in "laplace" I am using the toolbox -> Calculus-> Transform->Laplace. laplace transformation of f(t). The Laplace May 20, 2015 · The inverse Laplace transform does exactly the opposite, it takes a function whose domain is in complex frequency and gives a function defined in the time domain. , decays when , the intersection of the two ROCs is , and we have: However, if , i. 3) Apply the Inverse Laplace Transform to the solution of 2. )tn−1 (n−1)!e −αtu(t), Reα>0 1 (α+jω)n Apr 17, 2007 · For the best answers, search on this site https://shorturl. e. Laplace transforms from time to Laplace domain. Build your own widget Aug 04, 2017 · Laplace Transform of the Dirac Delta Function using the TiNspire Calculator To find the Laplace Transform of the Dirac Delta Function just select the menu option in Differential Equations Made Easy from www. Derivation in the time domain is transformed to multiplication by s in the s-domain. Using the If L [f = F(s), then L-l [F = f (t), wherc L-l is called the Inverse Laplace Transform operator. 1 Definition of the Laplace Transform [ ] 1 1 1 ()()1 2 Look-up table ,an easier way for circuit application ()() j st j LFsftFseds j ftFs − + − == ⇔ ∫sw psw One-sided (unilateral) Laplace transform Two-sided (bilateral) Laplace 3 Finding inverse transforms using partial frac-tions Given a function f, of t, we denote its Laplace Transform by L[f] = f˜; the inverse process is written: L−1[f˜] = f. To easily calculate inverse Laplace transform, choose New Quick conversion option and enter the expression in the specified inversion filed. \(\) The one precaution is that the Fourier Transform is often given as a bilateral function (t extending from $-\infty$ to $\infty$) so to be truly equivelent unless the function is declared to be causal, we must be using the bilateral Laplace Transform for the two to be exactly identical (which is also seldom used). $$ Now the impolite way is to invoke a famous theorem Aug 26, 2017 · The main idea behind the Laplace Transformation is that we can solve an equation (or system of equations) containing differential and integral terms by transforming the equation in " t -space" to one in " s -space". I am using this formula, e to the minus as times the Laplace transform of the unit step function, which is one over s. com Inverse Laplace Transform Calculator The calculator will find the Inverse Laplace Transform of the given function. It can also be shown that the determinant is equal to the Laplace expansion by the second row, The Laplace Transform 4. 
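In the same way that logarithms turn multiplication into addition, the Laplace transform turns convolution in the time domain into multiplication in the s-domain. A small SymPy check of this convolution property, with the illustrative pair f(t) = t and g(t) = e^{-t} (chosen here, not taken from the notes):

import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)

f = tau
g = sp.exp(-(t - tau))
conv = sp.integrate(f*g, (tau, 0, t))            # (f*g)(t) = t - 1 + exp(-t)

lhs = sp.laplace_transform(conv, t, s, noconds=True)
rhs = (sp.laplace_transform(t, t, s, noconds=True)
       * sp.laplace_transform(sp.exp(-t), t, s, noconds=True))

print(sp.simplify(lhs - rhs))   # 0 : L{f * g} = F(s) G(s)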
Overview of the Transform of a Derivative and Steps for using Laplace Transforms to Solve ODEs; Example #1 – use Solving PDEs using Laplace Transforms, Chapter 15 Given a function u(x;t) de ned for all t>0 and assumed to be bounded we can apply the Laplace transform in tconsidering xas a parameter. Solve the circuit using any (or all) of the standard circuit analysis Feb 08, 2012 · Laplace Transforms. Get result from Laplace Transform tables. The Laplace transform of f(t), written F(s), is given by (4. BYJU'S online Laplace transform calculator tool makes the calculations faster and the integral change is displayed in a fraction of seconds. First let us try to find the Laplace transform of a function that is a derivative. ], in the place holder type Sep 27, 2010 · Laplace Transform. Plugging in x(0) = x′(0) = 0we get s2X(s)− 6sX(s)+8X(s) = (s2−6s+8)X(s) = (s−4)(s− 2)X(s). for the other direction. Proof of Laplace Transform of Derivatives $\displaystyle \mathcal{L} \left\{ f'(t) \right\} = \int_0^\infty e^{-st} f'(t) \, dt$ Using integration by parts, Use the Laplace Transform Use standard tables to transform to Laplace form and also use the inverse Laplace Transform Solve differential equations using Laplace 4 1. Definition of Laplace Transformation: Let be a given function defined for all, then the Laplace Transformation of is defined as Here, is called Laplace Transform Operator. 8 May 22, 2019 · The Laplace Transform is a powerful tool that is very useful in Electrical Engineering. Question: Use the Laplace transform to solve the following damped vibrating system that experiences a constant force: Dec 18, 2020 · Laplace Transform is a strategy for resolving differential equations. (d) the Laplace Transform does not exist (singular at t = 0). We can use the Laplace transform to nd v(t). 0237 Now we can take the inverse transform. The calculator will find the Laplace Transform of the given function. Recall, that L − 1 (F (s)) is such a function f (t) that L (f (t)) = F (s). Usually, the only difficulty in finding the inverse Laplace transform to these systems is in matching coefficients and scaling the transfer function to match the Using Laplace Transforms to Solve Linear Differential Equations Partial Differential Equations The Laplace transform, which are very useful for solving differential equations is defined as: where f and t are the symbolic variables, f the function, t the time variable. Try the free Mathway calculator and problem solver below to practice various math topics. One of the main advantages in using Laplace transform to solve differential equations is that the Laplace transform converts a differential equation into an algebraic equation. . Laplace transforms including computations,tables are presented with examples and solutions. Example #9 – find the given inverse Laplace Transform using Completing the Square; Example #10 – find the given inverse Laplace Transform using Partial Fractions; Initial Value Problems with Laplace Transforms. If I type purge(s,t) the calculator returns ["no such variable s" "no such variable t"]. 1 hr 3 Examples. H. The transfer function defines the relation between the output and the input of a dynamic system, written in complex form (s variable). 25. 2. So the Laplace Transform of sine of 2t. Examples of how to use Laplace transform to solve ordinary differential equations (ODE) are presented. The transfer function defines the relation between the output and the input of a dynamic system, written in complex form ( s variable). 
Laplace transforms can also be computed from a table together with the linearity property: given f(t) and g(t), L\{af(t) + bg(t)\} = aF(s) + bG(s). The practical payoff is that the transform reduces the problem of solving differential equations to algebra in s, followed by an inverse transform.
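The worked example above, with characteristic factor (s-2)(s-4) and zero initial conditions, can be pushed through symbolically. The sketch below assumes SymPy and picks a unit-step forcing f(t) = 1 purely for illustration, since the original text does not specify the right-hand side.

import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s')

F = 1 / s                                  # L{1} for t > 0 (assumed forcing)
X = F / (s**2 - 6*s + 8)                   # algebraic solution in the s-domain
X = sp.apart(X, s)                         # partial fractions
x = sp.inverse_laplace_transform(X, s, t)  # back to the time domain
print(sp.simplify(x))                      # x(t) = 1/8 - exp(2*t)/4 + exp(4*t)/8 for t > 0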
Stiffness matrix
A stiffness matrix relates the forces acting at the nodes of a structure to the displacements of those nodes. In the notation used here the member relations k and the connectivity matrix B combine into p = B k B^t v, so the assembled (global) stiffness is K = B k B^t, and more generally the matrix satisfies F - F_0 = S(U - U_0) about a reference configuration. The components of joint displacement that are free to move are the degrees of freedom, and the global stiffness matrix relates all nodal degrees of freedom to the external forces and moments applied at the nodes. The matrix is symmetric, its diagonal terms are positive (a force applied at a degree of freedom cannot produce a displacement of opposite sign at that same degree of freedom), and it describes the stiffness of the structure in every direction, for arbitrary loading and boundary conditions. The simplest element shows where the entries come from: an elastic bar of length L, cross-sectional area A and modulus of elasticity E deflects by \delta = PL/(AE) under an axial load P, so its axial stiffness is AE/L.

The same idea appears in many settings. For a laminated composite, the laminate stiffness matrix expresses the resultant forces per unit width {N} and resultant moments per unit width {M} in terms of the mid-plane strains {e0} and mid-plane curvatures {k}; it depends on the thickness, orientation and stacking position of the layers, and the reduced stiffness constants of each ply must be rotated from the principal material directions to the global axes with a transformation matrix T. Note that the plane-stress stiffness matrix is not obtained simply by removing rows and columns from the general isotropic stiffness matrix. In a bolted joint the fastener consists of a threaded and an unthreaded section, each contributing a stiffness to the overall joint stiffness together with the clamped members. A single pile-soil foundation can be represented by a 6x6 coupled stiffness matrix, and the hydrostatic stiffness matrices produced by diffraction packages such as WAMIT or AQWA are linearisations of a nonlinear problem, valid only for small changes in the vessel's position and orientation. Finite-element libraries usually allow a custom element stiffness, for instance by deriving from a standard element class and overriding its stiffness-computation routine, and corotational formulations compute a rotation field so that elastic forces can be evaluated in a non-rotated reference frame while reusing a precomputed stiffness matrix. Finally, in cell biology the phrase "matrix stiffness" refers to the rigidity of the extracellular matrix rather than to a system of equations; increased matrix stiffness has been reported to delay and reduce measurable force generation by cells and, more broadly, to influence proliferation and differentiation.
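A minimal sketch of how element stiffnesses are assembled into a global matrix is given below. It assumes NumPy, uses the axial bar stiffness AE/L derived above, and the node numbering, section properties and element list are made up for illustration.

import numpy as np

def bar_element_stiffness(E, A, L):
    k = E * A / L
    return k * np.array([[ 1.0, -1.0],
                         [-1.0,  1.0]])

n_nodes = 3
K = np.zeros((n_nodes, n_nodes))
elements = [(0, 1, 210e9, 1e-4, 2.0),     # (node_i, node_j, E, A, L), illustrative values
            (1, 2, 210e9, 2e-4, 1.5)]

for i, j, E, A, L in elements:
    ke = bar_element_stiffness(E, A, L)
    for a, p in enumerate((i, j)):
        for b, q in enumerate((i, j)):
            K[p, q] += ke[a, b]           # accumulate element contributions into the global matrix

print(np.allclose(K, K.T))                # the assembled matrix is symmetric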
Assembly and derivation follow a standard pattern. Because the matrix is symmetric it suffices to fill the upper triangle (A_ij = A_ji, so all eigenvalues are real), and the assembled matrix can be written to a plain-text file and inspected with any text editor. Each member contributes a small stiffness matrix whose coefficients k_ij (the force developed at degree of freedom i due to a unit displacement at degree of freedom j) can be obtained from classical methods such as consistent deformations or the slope-deflection equations, or from the potential energy of the element. For a beam element in local coordinates, the stiffness matrix relates the end shear forces and bending moments {V1, M1, V2, M2} to the end deflections and rotations {d1, theta1, d2, theta2}; refined versions include shear deformation and warping torsion (Schramm and Pilkey), nonlinear versions are built from Green-Lagrange strains, and dynamic stiffness formulations make the coefficients frequency dependent, for example in the flexural vibration analysis of delaminated multilayer beams. For orthotropic materials the stiffness constants obey the reciprocity relations nu23 E3 = nu32 E2, nu13 E3 = nu31 E1 and nu12 E2 = nu21 E1. Two further matrices recur in practice: the geometric stiffness matrix, which accounts for the change in potential energy associated with rotation of continuum elements under load, and the modal stiffness matrix, obtained by pre- and post-multiplying the stiffness matrix by the mode-shape matrix, which is diagonal. In structural dynamics the structure stiffness coefficients S_ij, together with the mass matrix and the effective modal masses, determine the natural periods of the first modes; a static analysis alone is enough to obtain the stiffness matrix, whereas a mass matrix is needed as soon as dynamics are of interest.
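The classical 4x4 Euler-Bernoulli bending stiffness referred to above can be written down directly. The sketch below assumes NumPy; the numerical values of E, I and L are illustrative, and shear deformation is neglected (a Timoshenko formulation would modify the coefficients).

import numpy as np

def beam_element_stiffness(E, I, L):
    # relates end forces {V1, M1, V2, M2} to end displacements {d1, theta1, d2, theta2}
    c = E * I / L**3
    return c * np.array([[ 12,    6*L,   -12,    6*L   ],
                         [  6*L,  4*L**2, -6*L,  2*L**2],
                         [-12,   -6*L,    12,   -6*L   ],
                         [  6*L,  2*L**2, -6*L,  4*L**2]])

k = beam_element_stiffness(E=210e9, I=5.0e-6, L=3.0)   # illustrative values
print(np.allclose(k, k.T))          # symmetric
print(np.linalg.matrix_rank(k))     # rank 2: two rigid-body modes remain in the free element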
In matrix form the element equations read [k]{u} = {P}, where [k] is the element stiffness (or characteristic) matrix, {u} the vector of nodal displacements and {P} the vector of nodal forces; the coefficient k_ij is again the force at degree of freedom i due to a unit displacement at degree of freedom j. For a three-dimensional frame member the standard element has 12 degrees of freedom (six per node), written with the usual sign conventions and, in its simplest form, without the moment-amplification effect of axial load; a plane truss member AB of length L, modulus E and area A carries only axial terms between the end forces (X_A, Y_A) and (X_B, Y_B). During assembly each local matrix is scattered into the global matrix at the addresses of its degrees of freedom: a 4x4 local matrix whose degrees of freedom map to global numbers 1, 2, 7 and 8, for example, adds its entries into those rows and columns, with overlapping contributions summed. A block-diagonal matrix containing all the element matrices is called the unassembled stiffness matrix, and the assembled global stiffness matrix and global force vector then follow from nodal force equilibrium together with the force-deformation and compatibility equations. The assembled matrix is singular until rigid-body movements are constrained by supports or other displacement constraints. The same equations also arise in nonlinear static analysis, where the solver repeatedly solves [K_t]{Delta U} = {Delta F} with the tangent stiffness matrix K_t, the incremental displacement vector Delta U and the incremental load vector Delta F; note that the stiffness matrix is not always symmetric, and for some problem classes it is decidedly unsymmetric. The direct stiffness (displacement) method, which starts from Hooke's law and the potential energy of each member and expresses the member force-displacement relations in terms of unknown displacements, is the most common matrix structural analysis technique because it is so well suited to computer programming; simply supported beams, fixed beams, portal frames and similar skeletal structures are all handled in the same way, while the flexibility method can remain attractive for systems with a low degree of statical indeterminacy. For laminates, an approximately quasi-isotropic stiffness can be obtained by integrating each term of a single ply's stiffness matrix over orientations from 0 to 180 degrees and dividing by pi.
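The truss member mentioned above is usually set up in its local (axial) system and then rotated into global coordinates with [k] = [T]^T [k'] [T]. The sketch below assumes NumPy and uses made-up member properties.

import numpy as np

def truss_element_global_stiffness(E, A, L, theta):
    c, s = np.cos(theta), np.sin(theta)
    k_local = (E * A / L) * np.array([[ 1.0, -1.0],
                                      [-1.0,  1.0]])
    T = np.array([[c, s, 0, 0],
                  [0, 0, c, s]])      # maps global end displacements to axial displacements
    return T.T @ k_local @ T          # 4x4 matrix for the DOFs (uA, vA, uB, vB)

k = truss_element_global_stiffness(E=200e9, A=5e-4, L=2.0, theta=np.radians(30))
print(np.allclose(k, k.T))            # symmetric
print(np.linalg.matrix_rank(k))       # rank 1: a single axial deformation mode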
Conceptually, [K] maps a displacement vector {d} to a force vector {p}, and the formulation includes every joint of the structure, whether it is free to displace or restrained by a support. Before essential boundary conditions are imposed the matrix is singular, because the structure can still perform rigid-body movements (three independent ones in the plane, six in space); distributed loads are first converted to equivalent nodal loads, and once supports are introduced the reduced system can be solved. For the one-dimensional model problem the entries K_ab = \int_0^1 \phi_a' \phi_b'\,dx make the symmetry obvious, all eigenvalues of the symmetric matrix are real, and the eigenvectors have a direct geometric interpretation as deformation patterns; in practice the matrix is stored in sparse form, so that matrix products and factorisations remain fast even for large models. It is also common to distinguish the elastic stiffness K_e from the geometric (initial-stress) stiffness K_g, following Bathe, and to extract stiffness and mass matrices from commercial codes such as Ansys for use in external computations. Generalisations of the stiffness matrix method include all the major compliances of a system - for a walking or grasping machine, those of the legs or fingers, the actuators and the terrain or grasped object - which permits, for example, a foot-force analysis of a quadruped; in rigid-multibody form the matrix is N x N with N equal to six times the number of bodies. The same machinery is used for composite laminates, for instance to compute the stiffness matrix of an angle-ply lamina containing 60 vol % of carbon fibre in an epoxy matrix for a given set of fibre orientation angles.
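A sketch of that laminate calculation is given below. It assumes NumPy and uses the standard plane-stress reduced-stiffness and rotation formulas (in which sines and cosines appear up to the fourth power, as noted earlier); the material constants, ply angles and ply thickness are illustrative, not taken from the text.

import numpy as np

def lamina_Q(E1, E2, nu12, G12):
    # reduced stiffness of an orthotropic ply under plane stress, principal material axes
    nu21 = nu12 * E2 / E1
    d = 1.0 - nu12 * nu21
    return np.array([[E1 / d,        nu12 * E2 / d, 0.0],
                     [nu12 * E2 / d, E2 / d,        0.0],
                     [0.0,           0.0,           G12]])

def rotated_Qbar(Q, theta_deg):
    # transform the reduced stiffness from the fibre axes to the laminate axes
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    Q11, Q12, Q22, Q66 = Q[0, 0], Q[0, 1], Q[1, 1], Q[2, 2]
    Qb11 = Q11*c**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*s**4
    Qb22 = Q11*s**4 + 2*(Q12 + 2*Q66)*s**2*c**2 + Q22*c**4
    Qb12 = (Q11 + Q22 - 4*Q66)*s**2*c**2 + Q12*(s**4 + c**4)
    Qb66 = (Q11 + Q22 - 2*Q12 - 2*Q66)*s**2*c**2 + Q66*(s**4 + c**4)
    Qb16 = (Q11 - Q12 - 2*Q66)*s*c**3 + (Q12 - Q22 + 2*Q66)*s**3*c
    Qb26 = (Q11 - Q12 - 2*Q66)*s**3*c + (Q12 - Q22 + 2*Q66)*s*c**3
    return np.array([[Qb11, Qb12, Qb16],
                     [Qb12, Qb22, Qb26],
                     [Qb16, Qb26, Qb66]])

Q = lamina_Q(E1=140e9, E2=10e9, nu12=0.3, G12=5e9)                 # illustrative ply data
A = sum(rotated_Qbar(Q, th) * 0.125e-3 for th in (0, 45, -45, 90))  # in-plane [A] matrix, 0.125 mm plies
print(A[0, 0])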
In commercial codes these matrices can be generated and exported directly: in Abaqus, for example, *MATRIX GENERATE, STIFFNESS produces the global stiffness matrix and *MATRIX OUTPUT, STIFFNESS, FORMAT=MATRIX INPUT (or FORMAT=COORDINATE) writes it to a .mtx file, while a linear user element can read a stiffness or mass matrix back in; a frequently asked question is how to obtain this matrix as a function of the model's deformation, which for nonlinear problems is the tangent stiffness K = df/dx needed for implicit time integration. Whatever the code, the basic relation is R = K r between the column matrix of nodal forces R and the column matrix of nodal displacements r (the scalar prototype being F = k Delta x, e.g. a 2D cantilever described by its end displacement delta and end slope theta), the matrix is symmetric so only an upper or lower triangle needs to be supplied, and mass matrices are formed element by element in local coordinates just as stiffness matrices are. The strain-energy principle yields both the element stiffness matrix and the fixed-end force vectors for concentrated or uniformly distributed loads; geometric stiffness matrices of truss and beam elements can be derived from a more accurate strain measure, and the stress-stiffness contribution (including a pressure-load stiffness) is added to the regular stiffness when stress stiffening matters. Specialised stiffness models are also built for soil-pile springs as functions of the soil and pile characteristics, for predefined plate geometries such as ribs, box floors and grillages on the basis of their geometrical slab parameters, for optimal discrete models of nuclear fuel assemblies, and a complex-valued stiffness matrix is sometimes used to represent structural (hysteretic) damping.
Element technology and software practice add a few final points. The analytical stiffness matrix of the 4-node quadrilateral membrane element AGQ6-I, constructed with the quadrilateral area coordinate method and generalized conforming conditions, is available in closed form, and the element avoids the locking problems of many other 4-node quadrilaterals. In programs such as SAP2000, CSiBridge and ETABS a two-joint link object can be assigned a user-defined 12x12 stiffness matrix; sprung supports are handled by adding the support stiffnesses to the leading diagonal of the global matrix (with compatible units), and a quick check that an assembled matrix is symmetric to within a tolerance often catches assembly errors, as do "matrix is close to singular" warnings when a model is inadequately restrained. At the continuum level the entries ultimately come from the elastic stiffness constants C_ij of the material, with tractions given by Cauchy's law t_i = sigma_ij n_j, and in fibre-reinforced composites the fibres supply the strength and stiffness that the polymer matrix lacks, the matrix serving to bond the fibres and transfer load between them. The differential (geometric) stiffness matrix belongs to the same family: stress stiffening may be included in static or transient analyses, and different published forms of the geometric stiffness matrix can be shown to coincide with the standard formula. The resulting systems of equations can be solved either by hand with matrix algebra or with a computer-algebra or numerical solve command. (In the biological literature, by contrast, an "optimum matrix stiffness" of about 25 kPa has been reported for colon-cancer tumour stem-cell growth, with the highest expression of the YAP/TAZ transcription factor at that stiffness - again a statement about extracellular-matrix rigidity, not about finite-element matrices.)
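To make the geometric (differential) stiffness concrete, the sketch below writes out the standard consistent geometric stiffness matrix of a 2-node beam element carrying a constant axial force, added to the elastic bending stiffness when stress stiffening matters (K_total = K_e + K_g). NumPy is assumed and the numbers are illustrative; this is the textbook form, not a formula quoted from the text above.

import numpy as np

def beam_geometric_stiffness(N, L):
    # consistent geometric stiffness for DOFs {d1, theta1, d2, theta2};
    # a tensile N stiffens the element, a compressive N (N < 0) softens it
    return (N / (30.0 * L)) * np.array([[ 36,    3*L,   -36,    3*L   ],
                                        [  3*L,  4*L**2, -3*L,  -L**2 ],
                                        [-36,   -3*L,    36,   -3*L   ],
                                        [  3*L, -L**2,  -3*L,   4*L**2]])

Kg = beam_geometric_stiffness(N=1.0e4, L=2.5)   # illustrative axial load and length
print(np.allclose(Kg, Kg.T))                    # symmetric, like the elastic part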
| CommonCrawl |
Radiative transfer effects on hydrogen (and helium) in the solar atmosphere
Title: Radiative transfer effects on hydrogen (and helium) in the solar atmosphere
Publication type: Conference Paper
Authors: Labrosse, N., Li, X., Habbal, S. R., Gouttebroze, P., and Mountford, C. J.
Editor: Wilson, A.
Conference name: Solar Variability: From Core to Outer Frontiers
Date published: Dec
In this work we present Non-Local Thermodynamic Equilibrium (non-LTE) computations for hydrogen for a VAL-C model of the Sun's atmosphere. The solar atmosphere is represented by a one-dimensional plane-parallel horizontal slab. The purpose of this study is to investigate the effects of the transfer of radiation in the chromosphere and the transition region. In particular, we aim at understanding how the radiative losses in the energy balance for electrons are affected by the non-LTE radiative transfer, which has to be considered in the regions where the temperature is less than 25 000 K. The numerical code used here allows us to study the properties of, and the spectrum emitted by, the hydrogen particles. The non-LTE radiative transfer (RT) equations are solved for all optically thick resonance lines. The solutions of the RT in the optically thick lines affect all population densities of atoms and ions through the statistical equilibrium (SE) equations. For the VAL-C atmosphere model there is a peak around $6\times 10^{3}$ K in the net radiative cooling rates due to several lines and continua from hydrogen. To our knowledge this peak has never been considered when evaluating the radiative losses in the chromosphere in the framework of solar wind modelling. We mention some consequences for solar wind models in the description of the chromosphere and the transition region, which is often made under the assumption of full ionization and an optically thin plasma. | CommonCrawl |
Taylor–Couette turbulence at radius ratio ${\it\eta}=0.5$: scaling, flow structures and plumes
Roeland C. A. van der Veen, Sander G. Huisman, Sebastian Merbold, Uwe Harlander, Christoph Egbers, Detlef Lohse, Chao Sun
Journal: Journal of Fluid Mechanics / Volume 799 / 25 July 2016
Print publication: 25 July 2016
Using high-resolution particle image velocimetry, we measure velocity profiles, the wind Reynolds number and characteristics of turbulent plumes in Taylor–Couette flow for a radius ratio of 0.5 and Taylor number of up to $6.2\times 10^{9}$ . The extracted angular velocity profiles follow a log law more closely than the azimuthal velocity profiles due to the strong curvature of this ${\it\eta}=0.5$ set-up. The scaling of the wind Reynolds number with the Taylor number agrees with the theoretically predicted $3/7$ scaling for the classical turbulent regime, which is much more pronounced than for the well-explored ${\it\eta}=0.71$ case, for which the ultimate regime sets in at much lower Taylor number. By measuring at varying axial positions, roll structures are found for counter-rotation while no clear coherent structures are seen for pure inner cylinder rotation. In addition, turbulent plumes coming from the inner and outer cylinders are investigated. For pure inner cylinder rotation, the plumes in the radial velocity move away from the inner cylinder, while the plumes in the azimuthal velocity mainly move away from the outer cylinder. For counter-rotation, the mean radial flow in the roll structures strongly affects the direction and intensity of the turbulent plumes. Furthermore, it is experimentally confirmed that, in regions where plumes are emitted, boundary layer profiles with a logarithmic signature are created.
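For readers who want the definitions behind these statements, a commonly used form of the Taylor number for independently rotating cylinders, and the scaling quoted above, are as follows (this is the convention generally used in this literature as recalled here, not an excerpt from the paper itself):
$$\mathrm{Ta}=\frac{(1+\eta)^{4}}{64\,\eta^{2}}\,\frac{(r_{o}-r_{i})^{2}(r_{i}+r_{o})^{2}(\omega_{i}-\omega_{o})^{2}}{\nu^{2}},\qquad \mathrm{Re}_{w}\propto\mathrm{Ta}^{3/7},$$
where $r_{i,o}$ and $\omega_{i,o}$ are the radii and angular velocities of the inner and outer cylinders, $\eta=r_{i}/r_{o}$ is the radius ratio, $\nu$ is the kinematic viscosity, and the $3/7$ exponent is the theoretical prediction for the classical turbulent regime mentioned above.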
Azimuthal velocity profiles in Rayleigh-stable Taylor–Couette flow and implied axial angular momentum transport
Freja Nordsiek, Sander G. Huisman, Roeland C. A. van der Veen, Chao Sun, Detlef Lohse, Daniel P. Lathrop
We present azimuthal velocity profiles measured in a Taylor–Couette apparatus, which has been used as a model of stellar and planetary accretion disks. The apparatus has a cylinder radius ratio of ${\it\eta}=0.716$ , an aspect ratio of ${\it\Gamma}=11.74$ , and the plates closing the cylinders in the axial direction are attached to the outer cylinder. We investigate angular momentum transport and Ekman pumping in the Rayleigh-stable regime. This regime is linearly stable and is characterized by radially increasing specific angular momentum. We present several Rayleigh-stable profiles for shear Reynolds numbers $\mathit{Re}_{S}\sim O(10^{5})$ , for both ${\it\Omega}_{i}>{\it\Omega}_{o}>0$ (quasi-Keplerian regime) and ${\it\Omega}_{o}>{\it\Omega}_{i}>0$ (sub-rotating regime), where ${\it\Omega}_{i,o}$ is the inner/outer cylinder rotation rate. None of the velocity profiles match the non-vortical laminar Taylor–Couette profile. The deviation from that profile increases as solid-body rotation is approached at fixed $\mathit{Re}_{S}$ . Flow super-rotation, an angular velocity greater than those of both cylinders, is observed in the sub-rotating regime. The velocity profiles give lower bounds for the torques required to rotate the inner cylinder that are larger than the torques for the case of laminar Taylor–Couette flow. The quasi-Keplerian profiles are composed of a well-mixed inner region, having approximately constant angular momentum, connected to an outer region in solid-body rotation with the outer cylinder and attached axial boundaries. These regions suggest that the angular momentum is transported axially to the axial boundaries. Therefore, Taylor–Couette flow with closing plates attached to the outer cylinder is an imperfect model for accretion disk flows, especially with regard to their stability. | CommonCrawl |
Results for 'Marcin Łyczak'
The Logic of Modal Changes LMC.Marcin Łyczak - 2020 - Journal of Applied Non-Classical Logics 30 (1):50-67.details
ABSTRACT: The logic of change formulated by K. Świętorzecka has its motivation in the Aristotelian theory of substantial change, which is understood as a transformation consisting in the disappearing and becoming of individual substances. The transition becoming/disappearing is expressed by the primitive operator C, to be read: it changes that …, and it is mapped by the progressively expanding language. We are interested in attributive changes of individual substances. We consider a formalism with two non-normal and not mutually definable operators of possible and necessary change, inspired by the Aristotelian distinction between accidental and essential attributes. From that logic we adopt the idea that temporal concepts are defined via change operators, and the idea of an expanding language. In what follows, we axiomatise our new logic and describe its semantics, giving the proof of its completeness. We compare our formalism with selected modal logics.
Nonclassical Logics in Logic and Philosophy of Logic
Belief Changes and Cognitive Development: Doxastic Logic $${\mathsf {LCB}}$$.Marcin Łyczak - 2021 - Axiomathes 31 (2):157-171.details
We present the logic $\mathsf{LCB}$, which is expressed in a propositional language constantly enriched by new atomic expressions. Our formal framework is the propositional doxastic logic $\mathsf{KD45}$ with the belief operator $\mathcal{B}$, extended by the $\mathcal{C}$ operator, to be read it changes that.... We describe the changing beliefs of an agent who uses a progressively expanding language. The approach presented here allows us to weaken pragmatic objections to the so-called principle of negative retrospection accepted in $\mathsf{KD45}$ and the problem of logical omniscience. In what follows, we present the expanding propositional language used in our formalism, interpreted using an epistemic version of Kripke semantics. Next, we give a syntactic characterization of the logic $\mathsf{LCB}$ and prove the soundness and completeness of $\mathsf{LCB}$ with respect to our semantics. Finally, we compare the idea of expanding language with the notion of agent awareness and we relate our formalism to two epistemic temporal logics.
On the Definability of Leśniewski's Copula 'is' in Some Ontology-Like Theories.Marcin Łyczak & Andrzej Pietruszczak - 2018 - Bulletin of the Section of Logic 47 (4):233-263.details
We formulate a certain subtheory of Ishimoto's [1] quantifier-free fragment of Leśniewski's ontology, and show that Ishimoto's theory can be reconstructed in it. Using an epimorphism theorem we prove that our theory is complete with respect to a suitable set-theoretic interpretation. Furthermore, we introduce the name constant 1 and we prove its adequacy with respect to the set-theoretic interpretation. Ishimoto's theory enriched by the constant 1 is also reconstructed in our formalism into which 1 has been introduced. Finally, we examine for both our theories their quantifier extensions and their connections with Leśniewski's classical quantified ontology.
The Modal Logic LEC for Changing Knowledge, Expressed in the Growing Language.Marcin Łyczak - forthcoming - Logic and Logical Philosophy:1.details
We present the propositional logic LEC for the two epistemic modalities of current and stable knowledge used by an agent who systematically enriches his language. A change in the linguistic resources of an agent as a result of certain cognitive processes is something that commonly happens. Our system is based on the logic LC intended to formalize the idea that the occurrence of changes induces the passage of time. Here, the primitive operator C, read as: it changes that, defines the temporal succession of states of the world. The notion of current knowledge concerns variable components of the world and it may change over time. We represent it by the primitive operator k, read as: the agent currently knows that, and assume that it has S5 properties. The second type of knowledge, symbolized by the primitive operator K, read as: the agent stably knows that, relates to constant components of the world and it does not change. As a result of the axiomatic entanglement of C, K and k we show that stable knowledge satisfies the axioms of S4.3. The K and k modalities are not mutually definable, stable knowledge implies the current one, and if the latter never changes, then it comes to be stable. The combination of K and k with the idea of an expanding language allows questioning of the so-called perfect recall principle. It cannot be maintained for both types of knowledge just because of changes in the vocabulary of the agent and possibly the growing spectrum of possible states of the world. We interpret LEC in the semantics of histories of epistemic changes and show that it is complete. Finally, we compare our logic with selected epistemic logics based on the concept of linear discrete time.
Epistemic Logic in Logic and Philosophy of Logic
Modal and Intensional Logic in Logic and Philosophy of Logic
Mereology with Super-Supplemention Axioms. A Reconstruction of the Unpublished Manuscript of Jan F. Drewnowski.Kordula Świętorzecka & Marcin Łyczak - forthcoming - Logic and Logical Philosophy:1.details
We present a study of unpublished fragments of Jan F. Drewnowski's manuscript from the years 1922–1928, which contains his own axiomatics for mereology. The sources are transcribed and two versions of mereology are reconstructed from them. The first one is given by Drewnowski. The second comes from Leśniewski and was known to Drewnowski from Leśniewski's lectures. Drewnowski's version is expressed in the language of ontology enriched with the primitive concept of a (proper) part, and its key axiom expresses the so-called weak super-supplementation principle, which was named by Drewnowski "the postulate of the existence of subtractions". Leśniewski's axiomatics with the primitive concept of an ingrediens contains the axiom expressing the strong super-supplementation principle. In both systems the collective class of objects from the range of a given non-empty concept is defined as the upper bound of that range. From a historical point of view it is interesting to notice that the presented version of Leśniewski's axiomatics has not been published yet. The same applies to Drewnowski's approach. We reconstruct the proof of the equivalence of these two systems. Finally, we discuss questions stemming from their equivalence in the frame of elementary mereology formulated in a modern way.
History of Logic in Logic and Philosophy of Logic
Logic and Philosophy of Logic, Miscellaneous in Logic and Philosophy of Logic
Formal Approaches to the Ontological Argument.Ricardo Silvestre & Jean-Yves Beziau - 2018 - Journal of Applied Logics 5 (7):1433-1440.details
This paper introduces the special issue on Formal Approaches to the Ontological Argument of the Journal of Applied Logics (College Publications). The issue contains the following articles: Formal Approaches to the Ontological Argument, by Ricardo Sousa Silvestre and Jean-Yves Béziau; A Brief Critical Introduction to the Ontological Argument and its Formalization: Anselm, Gaunilo, Descartes, Leibniz and Kant, by Ricardo Sousa Silvestre; A Mechanically Assisted Examination of Begging the Question in Anselm's Ontological Argument, by John Rushby; A Tractarian Resolution to the (...) Ontological Argument, by Erik Thomsen; On Kant's Hidden Ambivalence Toward Existential Generalization in his Critique of the Ontological Argument, by Giovanni Mion; The Totality of Predicates and the Possibility of the Most Real Being, by Srećko Kovač; An Even More Leibnizian Version of Gödel's Ontological Argument, by Kordula Świętorzecka and Marcin Łyczak; A Case Study On Computational Hermeneutics: E. J. Lowe's Modal Ontological Argument, by David Fuenmayor. (shrink)
The Hard Problem Of Content: Solved (Long Ago).Marcin Miłkowski - 2015 - Studies in Logic, Grammar and Rhetoric 41 (1):73-88.details
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
Skepticism about Representations in Philosophy of Mind
The Concept of Representation in Philosophy of Mind
Explaining the Computational Mind.Marcin Miłkowski - 2013 - MIT Press.details
In the book, I argue that the mind can be explained computationally because it is itself computational—whether it engages in mental arithmetic, parses natural language, or processes the auditory signals that allow us to experience music. All these capacities arise from complex information-processing operations of the mind. By analyzing the state of the art in cognitive science, I develop an account of computational explanation used to explain the capacities in question.
AI without Representation? in Philosophy of Cognitive Science
Cognitivism in Psychology in Philosophy of Cognitive Science
Computationalism in Cognitive Science in Philosophy of Cognitive Science
Explanation in Neuroscience in Philosophy of Cognitive Science
Representation in Neuroscience in Philosophy of Cognitive Science
Satisfaction Conditions in Anticipatory Mechanisms.Marcin Miłkowski - 2015 - Biology and Philosophy 30 (5):709-728.details
The purpose of this paper is to present a general mechanistic framework for analyzing causal representational claims, and offer a way to distinguish genuinely representational explanations from those that invoke representations for honorific purposes. It is usually agreed that rats are capable of navigation because they maintain a cognitive map of their environment. Exactly how and why their neural states give rise to mental representations is a matter of an ongoing debate. I will show that anticipatory mechanisms involved in rats' (...) evaluation of possible routes give rise to satisfaction conditions of contents, and this is why they are representationally relevant for explaining and predicting rats' behavior. I argue that a naturalistic account of satisfaction conditions of contents answers the most important objections of antirepresentationalists. (shrink)
Causal Accounts of Mental Content, Misc in Philosophy of Mind
Mechanistic Explanation in General Philosophy of Science
Representation in Cognitive Science in Philosophy of Cognitive Science
Teleology and Function in Philosophy of Biology
Unification Strategies in Cognitive Science.Marcin Miłkowski - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):13–33.details
Cognitive science is an interdisciplinary conglomerate of various research fields and disciplines, which increases the risk of fragmentation of cognitive theories. However, while most previous work has focused on theoretical integration, some kinds of integration may turn out to be monstrous, or result in superficially lumped and unrelated bodies of knowledge. In this paper, I distinguish theoretical integration from theoretical unification, and propose some analyses of theoretical unification dimensions. Moreover, two research strategies that are supposed to lead to unification are (...) analyzed in terms of the mechanistic account of explanation. Finally, I argue that theoretical unification is not an absolute requirement from the mechanistic perspective, and that strategies aiming at unification may be premature in fields where there are multiple conflicting explanatory models. (shrink)
Explanation in Cognitive Science in Philosophy of Cognitive Science
Child's Assent in Research: Age Threshold or Personalisation?Marcin Waligora, Vilius Dranseika & Jan Piasecki - 2014 - BMC Medical Ethics 15 (1):44.details
Assent is an important ethical and legal requirement of paediatric research. Unfortunately, there are significant differences between the guidelines on the details of assent.
Medical Ethics in Applied Ethics
The False Dichotomy Between Causal Realization and Semantic Computation.Marcin Miłkowski - 2017 - Hybris. Internetowy Magazyn Filozoficzny 38:1-21.details
In this paper, I show how semantic factors constrain the understanding of the computational phenomena to be explained so that they help build better mechanistic models. In particular, understanding what cognitive systems may refer to is important in building better models of cognitive processes. For that purpose, a recent study of some phenomena in rats that are capable of 'entertaining' future paths (Pfeiffer and Foster 2013) is analyzed. The case shows that the mechanistic account of physical computation may be complemented (...) with semantic considerations, and in many cases, it actually should. (shrink)
Computation and Representation, Misc in Philosophy of Cognitive Science
Functional Realization in Metaphysics
Implementing Computations in Philosophy of Computing and Information
Enhancing Health and Wellbeing Through Immersion in Nature: A Conceptual Perspective Combining the Stoic and Buddhist Traditions.Marcin Fabjański & Eric Brymer - 2017 - Frontiers in Psychology 8.details
Failures in Clinical Trials in the European Union: Lessons From the Polish Experience.Marcin Waligora - 2013 - Science and Engineering Ethics 19 (3):1087-1098.details
When discussing the safety of research subjects, including their exploitation and vulnerability as well as failures in clinical research, recent commentators have focused mostly on countries with low or middle-income economies. High-income countries are seen as relatively safe and well-regulated. This article presents irregularities in clinical trials in an EU member state, Poland, which were revealed by the Supreme Audit Office of Poland (the NIK). Despite adopting many European Union regulations, including European Commission directives concerning Good Clinical Practice, these irregularities (...) occurred. Causes as well as potential solutions to make clinical trials more ethical and safer are discussed. (shrink)
Technology Ethics in Applied Ethics
Objections to Computationalism: A Survey.Marcin Miłkowski - 2018 - Roczniki Filozoficzne 66 (3):57-75.details
In this paper, the Author reviewed the typical objections against the claim that brains are computers, or, to be more precise, information-processing mechanisms. By showing that practically all the popular objections are based on uncharitable interpretations of the claim, he argues that the claim is likely to be true, relevant to contemporary cognitive science, and non-trivial.
Metalinguistic Comparison in an Alternative Semantics for Imprecision.Marcin Morzycki - 2011 - Natural Language Semantics 19 (1):39-86.details
This paper offers an analysis of metalinguistic comparatives such as more dumb than crazy in which they differ from ordinary comparatives in the scale on which they compare: ordinary comparatives use scales lexically determined by particular adjectives, but metalinguistic ones use a generally-available scale of imprecision or 'pragmatic slack'. To implement this idea, I propose a novel compositional implementation of the Lasersohnian pragmatic-halos account of imprecision—one that represents clusters of similar meanings as Hamblin alternatives. In the theory that results, existential (...) closure over alternatives mediates between alternative-sets and meanings in which imprecision has been resolved. I then articulate a version of this theory in which the alternatives are not related meanings but rather related utterances, departing significantly from Lasersohn's original conception. Although such a theory of imprecision is more clearly 'metalinguistic', the evidence for it from metalinguistic comparatives in English is surprisingly limited. The overall picture that emerges is one in which the grammatical distinction between ordinary and metalinguistic comparatives tracks the independently motivated distinction between vagueness and imprecision. (shrink)
Conversational Implicature in Philosophy of Language
" Traktat o rzeczach następujących po sobie", przypisywany Williamowi Ockhamowi, część pierwsza:" Traktat o ruchu"/z jęz łac. tł. Marcin Karaś. [REVIEW]Marcin Karaś & William Ockham - 2007 - Archiwum Historii Filozofii I Myśli Społecznej 52.details
" Traktat o rzeczach następujących po sobie", przypisywany Williamowi Ockhamowi, część trzecia:" Traktat o czasie"/z jęz. łac. tł. Marcin Karaś. [REVIEW]Marcin Karaś & William Ockham - 2006 - Archiwum Historii Filozofii I Myśli Społecznej 50.details
Traktat o rzeczach następujących po sobie, przypisywany Williamowi Ockhamowi: część druga: Traktat o miejscu/z jęz. łac. tł. Marcin Karas. [REVIEW]Marcin Karas & William Ockham - 2008 - Archiwum Historii Filozofii I Myśli Społecznej 53.details
Towards Formal Representation and Evaluation of Arguments.Marcin Selinger - 2014 - Argumentation 28 (3):379-393.details
The aim of this paper is to propose foundations for a formal model of representation and numerical evaluation of a possibly broad class of arguments, including those that occur in natural discourse. Since one of the most characteristic features of everyday argumentation is the occurrence of convergent reasoning, special attention should be paid to the operation ⊕, which allows us to calculate the logical force of convergent arguments with an accuracy not offered by other approaches.
Informal Logic in Logic and Philosophy of Logic
Argumentative Polylogues: Beyond Dialectical Understanding of Fallacies.Marcin Lewiński - 2014 - Studies in Logic, Grammar and Rhetoric 36 (1):193-218.details
Dialectical fallacies are typically defined as breaches of the rules of a regulated discussion between two participants. What if discussions become more complex and involve multiple parties with distinct positions to argue for? Are there distinct argumentation norms of polylogues? If so, can their violations be conceptualized as polylogical fallacies? I will argue for such an approach and analyze two candidates for argumentative breaches of multi-party rationality: false dilemma and collateral straw man.
Argumentative Polylogues in a Dialectical Framework: A Methodological Inquiry.Marcin Lewiński & Mark Aakhus - 2014 - Argumentation 28 (2):161-185.details
In this paper, we closely examine the various ways in which a multi-party argumentative discussion—argumentative polylogue—can be analyzed in a dialectical framework. Our chief concern is that while multi-party and multi-position discussions are characteristic of a large class of argumentative activities, dialectical approaches would analyze and evaluate them in terms of dyadic exchanges between two parties: pro and con. Using as an example an academic committee arguing about the researcher of the year as well as other cases from argumentation literature, (...) we scrutinize the advantages and pitfalls of applying a dialectical framework to polylogue analysis and evaluation. We recognize two basic dialectical methods: interpreting polylogues as exchanges between two main camps and splitting polylogues into a multitude of dual encounters. On the basis of this critical inquiry, we lay out an argument expressing the need for an improved polylogical model and propose its basic elements. (shrink)
The Philosophy of Philosophies: Synthesis Through Diversity.Marcin Schroeder - 2016 - Philosophies 1 (1):68--72.details
Our new journal Philosophies is devoted to the search for a synthesis of philosophical and scientific inquiry. It promotes philosophical work derived from the experience of diverse scientific disciplines. [...].
Clinical Ethics Consultation in the Transition Countries of Central and Eastern Europe.Marcin Orzechowski, Maximilian Schochow & Florian Steger - 2020 - Science and Engineering Ethics 26 (2):833-850.details
Since 1989, clinical ethics consultation in form of hospital ethics committees was established in most of the transition countries of Central and Eastern Europe. Up to now, the similarities and differences between HECs in Central and Eastern Europe and their counterparts in the U.S. and Western Europe have not been determined. Through search in literature databases, we have identified studies that document the implementation of clinical ethics consultation in Central and Eastern Europe. These studies have been analyzed under the following (...) aspects: mode of establishment of HECs, character of consultation they provide, and their composition. The results show that HECs in the transition countries of Central and Eastern Europe differ from their western-European or U.S. counterparts with regard to these three aspects. HECs were established because of centrally imposed legal regulations. Little initiatives in this area were taken by medical professionals interested in resolving emerging ethical issues. HECs in the transition countries concentrate mostly on review of research protocols or resolution of administrative conflicts in healthcare institutions. Moreover, integration of non-professional third parties in the workings of HECs is often neglected. We argue that these differences can be attributed to the historical background and the role of medicine in these countries under the communist regime. Political and organizational structures of healthcare as well as education of healthcare staff during this period influenced current functioning of clinical ethics consultation in the transition countries. (shrink)
Neither the Harm Principle nor the Best Interest Standard Should Be Applied to Pediatric Research.Marcin Waligora, Karolina Strzebonska & Mateusz T. Wasylewski - 2018 - American Journal of Bioethics 18 (8):72-74.details
Biomedical Ethics in Applied Ethics
Morphological Computation: Nothing but Physical Computation.Marcin Miłkowski - 2018 - Entropy 10 (20):942.details
The purpose of this paper is to argue against the claim that morphological computation is substantially different from other kinds of physical computation. I show that some (but not all) purported cases of morphological computation do not count as specifically computational, and that those that do are solely physical computational systems. These latter cases are not, however, specific enough: all computational systems, not only morphological ones, may (and sometimes should) be studied in various ways, including their energy efficiency, cost, reliability, (...) and durability. Second, I critically analyze the notion of "offloading" computation to the morphology of an agent or robot, by showing that, literally, computation is sometimes not offloaded but simply avoided. Third, I point out that while the morphology of any agent is indicative of the environment that it is adapted to, or informative about that environment, it does not follow that every agent has access to its morphology as the model of its environment. (shrink)
Computation and Physical Systems, Misc in Philosophy of Computing and Information
Robotics in Philosophy of Cognitive Science
Advancing Polylogical Analysis of Large-Scale Argumentation: Disagreement Management in the Fracking Controversy.Mark Aakhus & Marcin Lewiński - 2017 - Argumentation 31 (1):179-207.details
This paper offers a new way to make sense of disagreement expansion from a polylogical perspective by incorporating various places in addition to players and positions into the analysis. The concepts build on prior implicit ideas about disagreement space by suggesting how to more fully account for argumentative context, and its construction, in large-scale complex controversies. As a basis for our polylogical analysis, we use a New York Times news story reporting on an oil train explosion—a significant point in the (...) broader controversy over producing oil and gas via hydraulic fracturing. (shrink)
Structural Representations: Causally Relevant and Different From Detectors.Paweł Gładziejewski & Marcin Miłkowski - 2017 - Biology and Philosophy 32 (3):337-355.details
This paper centers around the notion that internal, mental representations are grounded in structural similarity, i.e., that they are so-called S-representations. We show how S-representations may be causally relevant and argue that they are distinct from mere detectors. First, using the neomechanist theory of explanation and the interventionist account of causal relevance, we provide a precise interpretation of the claim that in S-representations, structural similarity serves as a "fuel of success", i.e., a relation that is exploitable for the representation using (...) system. Then, we discuss crucial differences between S-representations and indicators or detectors, showing that—contrary to claims made in the literature—there is an important theoretical distinction to be drawn between the two. (shrink)
Explanatory Role of Content in Philosophy of Mind
Philosophy of Biology, Miscellaneous in Philosophy of Biology
The Nature of Contents, Misc in Philosophy of Mind
The 2015 Paris Climate Conference.Marcin Lewiński & Dima Mohammed - 2019 - Journal of Argumentation in Context 8 (1):65-90.details
The paper applies argumentative discourse analysis to a corpus of official statements made by key players at the opening of the 2015 Paris Climate Conference. The chief goal is to reveal the underlying structure of practical arguments and values legitimising the global climate change policy-making. The paper investigates which of the elements of practical arguments were common and which were contested by various players. One important conclusion is that a complex, multilateral deal such as the 2015 Paris Agreement is based (...) on a fragile consensus. This consensus can be precisely described in terms of the key premises of practical arguments that various players share and the premises they still discuss but prefer not to prioritise. It thus provides an insight into how a fragile consensus over goals may lead to a multilateral agreement through argumentative processes. (shrink)
Contemporary Natural Philosophy and Contemporary Idola Mentis.Marcin J. Schroeder - 2020 - Philosophies 5 (19):19-0.details
Contemporary Natural Philosophy is understood here as a project of the pursuit of the integrated description of reality distinguished by the precisely formulated criteria of objectivity, and by the assumption that the statements of this description can be assessed only as true or false according to clearly specified verification procedures established with the exclusive goal of the discrimination between these two logical values, but not with respect to any other norms or values established by the preferences of human collectives or (...) by the individual choices. This distinction assumes only logical consistency, but not completeness. Completeness is desirable, but may be impossible. This paper is not intended as a comprehensive program for the development of the Contemporary Natural Philosophy but rather as a preparation for such program advocating some necessary revisions and extensions of the methodology currently considered as the scientific method. This is the actual focus of the paper and the reason for the reference to Baconian _idola mentis_. Francis Bacon wrote in _Novum Organum_ about the fallacies obstructing progress of science. The present paper is an attempt to remove obstacles for the Contemporary Natural Philosophy project to which we have assigned the names of the Idols of the Number, the Idols of the Common Sense, and the Idols of the Elephant. (shrink)
Child's Objection to Non-Beneficial Research: Capacity and Distress Based Models.Marcin Waligora, Joanna Różyńska & Jan Piasecki - 2016 - Medicine, Health Care and Philosophy 19 (1):65-70.details
A child's objection, refusal and dissent regarding participation in non-beneficial biomedical research must be respected, even when the parents or legal representatives have given their permission. There is, however, no consensus on the definition and criteria of a meaningful and valid child's objection. The aim of this article is to clarify this issue. In the first part we describe the problems of a child's assent in research. In the second part we distinguish and analyze two models of a child's objection (...) to research: the capacity-based model and the distress-based model. In the last part we present arguments for a broader and unified understanding of a child's objection within regulations and practices. This will strengthen children's rights and facilitate the entire process of assessment of research protocols. (shrink)
From Wide Cognition to Mechanisms: A Silent Revolution.Marcin Miłkowski, Robert Clowes, Zuzanna Rucińska, Aleksandra Przegalińska, Tadeusz Zawidzki, Joel Krueger, Adam Gies, Marek McGann, Łukasz Afeltowicz, Witold Wachowski, Fredrik Stjernberg, Victor Loughlin & Mateusz Hohol - 2018 - Frontiers in Psychology 9.details
In this paper, we argue that several recent 'wide' perspectives on cognition (embodied, embedded, extended, enactive, and distributed) are only partially relevant to the study of cognition. While these wide accounts override traditional methodological individualism, the study of cognition has already progressed beyond these proposed perspectives towards building integrated explanations of the mechanisms involved, including not only internal submechanisms but also interactions with others, groups, cognitive artifacts, and their environment. The claim is substantiated with reference to recent developments in the (...) study of "mindreading" and debates on emotions. We claim that the current practice in cognitive (neuro)science has undergone, in effect, a silent mechanistic revolution, and has turned from initial binary oppositions and abstract proposals towards the integration of wide perspectives with the rest of the cognitive (neuro)sciences. (shrink)
Embodiment and Situated Cognition in Philosophy of Cognitive Science
Extended Cognitive Science in Philosophy of Mind
Externalism and Cognitive Science, Misc in Philosophy of Mind
Externalism and Psychological Explanation in Philosophy of Mind
Contemporary Natural Philosophy and Philosophies.Gordana Dodig-Crnkovic & Marcin Schroeder - 2018 - Philosophies 3 (4):42--0.details
In this Editorial note, Guest Editors introduce the theme of the Special Issue of the journal Philosophies, titled Contemporary Natural Philosophy and Philosophies.
Metaphysical Naturalism in Metaphysics
Argumentative Discussion: The Rationality of What?Marcin Lewiński - 2019 - Topoi 38 (4):645-658.details
Most dialectical models view argumentation as a process of critically testing a standpoint. Further, they assume that what we critically test can be analytically reduced to individual and bi-polar standpoints. I argue that these two assumptions lead to the dominant view of dialectics as a bi-partisan argumentative discussion in which the yes-side argues against the doubter or the no-side. I scrutinise this binary orientation in understanding argumentation by drawing on the main tenets of normative pragmatic and pragma-dialectical theories of argumentation. (...) I develop my argument by showing how argumentative practice challenges these assumptions. I then lay out theoretical reasons for this challenge. This paves the way for an enhanced conceptualisation of dialectical models and their standards of rationality in terms of multi-party discussions, or argumentative polylogues. (shrink)
Albert Mieczysław Krąpiec's Theory of the Person for Professional Nursing Practice.Marcin Paweł Ferdynus - 2020 - Nursing Philosophy 21 (2).details
Cognitive Artifacts for Geometric Reasoning.Mateusz Hohol & Marcin Miłkowski - 2019 - Foundations of Science 24 (4):657-680.details
In this paper, we focus on the development of geometric cognition. We argue that to understand how geometric cognition has been constituted, one must appreciate not only individual cognitive factors, such as phylogenetically ancient and ontogenetically early core cognitive systems, but also the social history of the spread and use of cognitive artifacts. In particular, we show that the development of Greek mathematics, enshrined in Euclid's Elements, was driven by the use of two tightly intertwined cognitive artifacts: the use of (...) lettered diagrams; and the creation of linguistic formulae. Together, these artifacts formed the professional language of geometry. In this respect, the case of Greek geometry clearly shows that explanations of geometric reasoning have to go beyond the confines of methodological individualism to account for how the distributed practice of artifact use has stabilized over time. This practice, as we suggest, has also contributed heavily to the understanding of what mathematical proof is; classically, it has been assumed that proofs are not merely deductively correct but also remain invariant over various individuals sharing the same cognitive practice. Cognitive artifacts in Greek geometry constrained the repertoire of admissible inferential operations, which made these proofs inter-subjectively testable and compelling. By focusing on the cognitive operations on artifacts, we also stress that mental mechanisms that contribute to these operations are still poorly understood, in contrast to those mechanisms which drive symbolic logical inference. (shrink)
From Computer Metaphor to Computational Modeling: The Evolution of Computationalism.Marcin Miłkowski - 2018 - Minds and Machines 28 (3):515-541.details
In this paper, I argue that computationalism is a progressive research tradition. Its metaphysical assumptions are that nervous systems are computational, and that information processing is necessary for cognition to occur. First, the primary reasons why information processing should explain cognition are reviewed. Then I argue that early formulations of these reasons are outdated. However, by relying on the mechanistic account of physical computation, they can be recast in a compelling way. Next, I contrast two computational models of working memory (...) to show how modeling has progressed over the years. The methodological assumptions of new modeling work are best understood in the mechanistic framework, which is evidenced by the way in which models are empirically validated. Moreover, the methodological and theoretical progress in computational neuroscience vindicates the new mechanistic approach to explanation, which, at the same time, justifies the best practices of computational modeling. Overall, computational modeling is deservedly successful in cognitive science. Its successes are related to deep conceptual connections between cognition and computation. Computationalism is not only here to stay, it becomes stronger every year. (shrink)
Philosophy of Artificial Intelligence in Philosophy of Cognitive Science
Explanatory Completeness and Idealization in Large Brain Simulations: A Mechanistic Perspective.Marcin Miłkowski - 2016 - Synthese 193 (5):1457-1478.details
The claim defended in the paper is that the mechanistic account of explanation can easily embrace idealization in big-scale brain simulations, and that only causally relevant detail should be present in explanatory models. The claim is illustrated with two methodologically different models: Blue Brain, used for particular simulations of the cortical column in hybrid models, and Eliasmith's SPAUN model that is both biologically realistic and able to explain eight different tasks. By drawing on the mechanistic theory of computational explanation, I (...) argue that large-scale simulations require that the explanandum phenomenon is identified; otherwise, the explanatory value of such explanations is difficult to establish, and testing the model empirically by comparing its behavior with the explanandum remains practically impossible. The completeness of the explanation, and hence of the explanatory value of the explanatory model, is to be assessed vis-à-vis the explanandum phenomenon, which is not to be conflated with raw observational data and may be idealized. I argue that idealizations, which include building models of a single phenomenon displayed by multi-functional mechanisms, lumping together multiple factors in a single causal variable, simplifying the causal structure of the mechanisms, and multi-model integration, are indispensable for complex systems such as brains; otherwise, the model may be as complex as the explanandum phenomenon, which would make it prone to so-called Bonini paradox. I conclude by enumerating dimensions of empirical validation of explanatory models according to new mechanism, which are given in a form of a "checklist" for a modeler. (shrink)
Idealization in General Philosophy of Science
Dualism of Selective and Structural Manifestations of Information in Modelling of Information Dynamics.Marcin J. Schroeder - 2013 - In Gordana Dodig-Crnkovic Raffaela Giovagnoli (ed.), Computing Nature. pp. 125--137.details
Philosophy of Information in Philosophy of Computing and Information
Cabdrivers and Their Fares: Temporal Structures of a Linking Ecology.Marcin Serafin - 2019 - Sociological Theory 37 (2):117-141.details
The author argues that behind the apparent randomness of interactions between cabdrivers and their fares in Warsaw is a temporal structure. To capture this temporal structure, the author introduces the notion of a linking ecology. He argues that the Warsaw taxi market is a linking ecology, which is structured by religious time, state time, and family time. The author then focuses on waiting time, arguing that it too structures the interactions between cabdrivers and their fares. The author makes a processual (...) argument that waiting time has been restructured by the postsocialist transformation, but only because this transformation has been continually encoded through the defensive and adaptive strategies of cabdrivers responding to the repetitive and unique events located across the social space. The author concludes with the claim that linking ecologies are a recurring structure of the social process and that they form the backbone of globalization, financialization, and mediatization. (shrink)
Sociology in Social Sciences
Negation in Weak Positional Calculi.Marcin Tkaczyk - 2013 - Logic and Logical Philosophy 22 (1):3-19.details
Four weak positional calculi are constructed and examined. They refer to the use of the connective of negation within the scope of the positional connective "R" of realization. The connective of negation may be fully classical, partially analogical or independent from the classical, truth-functional negation. It has been also proved that the strongest system, containing fully classical connective of negation, is deductively equivalent to the system MR from Jarmużek and Pietruszczak.
Logical Expressions in Logic and Philosophy of Logic
Environmental Argumentation.Marcin Lewiński & Mehmet Ali Üzelgün - 2019 - Journal of Argumentation in Context 8 (1):1-11.details
In this paper, we analyze the argumentative strategies deployed in the Ecomodernist Manifesto, published in 2015 by a group of leading environmental thinkers. We draw on pragma-dialectics and Perelman's rhetoric to characterize manifesto as a genre of practical argumentation. Our goal is to explore the relation of manifesto as a discursive genre to the argumentative structures and techniques used in the Ecomodernist Manifesto. We therefore take into scrutiny the elements of practical argumentation employed in the manifesto and describe the polylogical (...) strategies of dissociation in negotiating the ecological value of nature and the modernist value of progress. (shrink)
Models of Environment.Marcin Miłkowski - 2016 - In Roger Frantz & Leslie Marsh (eds.), Minds, Models and Milieux. Commemorating the Centennial of the Birth of Herbert Simon. Palgrave-Macmillan. pp. 227-238.details
Herbert A. Simon is well known for his account of bounded rationality. Whereas classical economics idealized economic agency and framed rational choice in terms of decision theory, Simon insisted that agents need not be optimal in their choices. They might be mere satisficers, i.e., attain good enough goals rather than optimal ones. At the same time, behaviorally as well as computationally, bounded rationality is much more realistic.
Rationality and Cognitive Science in Philosophy of Cognitive Science
Debating Multiple Positions in Multi-Party Online Deliberation: Sides, Positions, and Cases.Marcin Lewiński - 2013 - Journal of Argumentation in Context 2 (1):151-177.details
Dialectical approaches traditionally conceptualize argumentation as a discussion in which two parties debate on "two sides of an issue". However, many political issues engender multiple positions. This is clear in multi-party online deliberations in which often an array of competing positions is debated in one and the same discussion. A proponent of a given position thus addresses a number of possible opponents, who in turn may hold incompatible opinions. The goal of this paper is to shed extra light on such (...) "polylogical" clash of opinions in online deliberation, by examining the multi-layered participation in actual online debates. The examples are drawn from the readers' discussions on Osama bin Laden's killing in online versions of two British newspapers: The Guardian and The Telegraph. As a result of the analysis, a distinction between sides, positions, and cases in argumentative deliberation is proposed. (shrink)
Towards a Critique-Friendly Approach to the Straw Man Fallacy Evaluation.Marcin Lewiński - 2011 - Argumentation 25 (4):469-497.details
In this article I address the following question: When are reformulations in argumentative criticisms reasonable and when do they become fallacious straw men? Following ideas developed in the integrated version of pragma-dialectics, I approach argumentation as an element of agonistic exchanges permeated by arguers' strategic manoeuvring aimed at effectively defeating the opponent with reasonable means. I propose two basic context-sensitive criteria for deciding on the reasonableness of reformulations: precision of the rules for interpretation (precise vs. loose) and general expectation of (...) cooperativeness (critical vs. constructive). On the basis of analysis of examples taken from online political discussions, I argue that in some contexts, especially those that are critical and loose, what might easily be classified as a straw man following conventional treatment should be taken as a harsh, yet reasonable, strategic argumentative criticism. (shrink)
A Computational Approach to Quantifiers as an Explanation for Some Language Impairments in Schizophrenia.Marcin Zajenkowski, Rafał Styła & Jakub Szymanik - 2011 - Journal of Communication Disorder 44:2011.details
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, like more than half. This can be explained by noting that, according to the complexity perspective, only proportional quantifiers require working memory engagement.
Computational Complexity in Philosophy of Computing and Information
Formal Semantics in Philosophy of Language
Generalized Quantifiers in Philosophy of Language
Linguistics in Cognitive Sciences
Psycholinguistics in Philosophy of Language
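To make the computational-complexity claim in the quantifier study above more concrete, the sketch below (an illustration of the general idea, not code from the paper) contrasts a fixed-threshold quantifier, which can be verified with a small bounded counter, with a proportional quantifier such as "more than half", which needs two unbounded tallies; this difference in memory demand is the kind of working-memory load the authors appeal to.

```python
def more_than_three(items, pred):
    """Bounded memory: the counter never needs to go past the fixed threshold."""
    count = 0
    for x in items:
        if pred(x):
            count += 1
            if count > 3:
                return True
    return False

def more_than_half(items, pred):
    """Proportional: both tallies must be kept, and they can grow without bound."""
    yes = no = 0
    for x in items:
        if pred(x):
            yes += 1
        else:
            no += 1
    return yes > no

sample = [1, 0, 1, 1, 0, 1, 0, 0, 1]
print(more_than_three(sample, lambda x: x == 1))   # True
print(more_than_half(sample, lambda x: x == 1))    # True
```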
Unification by Fiat: Arrested Development of Predictive Processing.Piotr Litwin & Marcin Miłkowski - 2020 - Cognitive Science 44 (7).details
Predictive processing (PP) has been repeatedly presented as a unificatory account of perception, action, and cognition. In this paper, we argue that this is premature: As a unifying theory, PP fails to deliver general, simple, homogeneous, and systematic explanations. By examining its current trajectory of development, we conclude that PP remains only loosely connected both to its computational framework and to its hypothetical biological underpinnings, which makes its fundamentals unclear. Instead of offering explanations that refer to the same set of (...) principles, we observe systematic equivocations in PP‐based models, or outright contradictions with its avowed principles. To make matters worse, PP‐based models are seldom empirically validated, and they are frequently offered as mere just‐so stories. The large number of PP‐based models is thus not evidence of theoretical progress in unifying perception, action, and cognition. On the contrary, we maintain that the gap between theory and its biological and computational bases contributes to the arrested development of PP as a unificatory theory. Thus, we urge the defenders of PP to focus on its critical problems instead of offering mere re‐descriptions of known phenomena, and to validate their models against possible alternative explanations that stem from different theoretical assumptions. Otherwise, PP will ultimately fail as a unified theory of cognition. (shrink)
The Paradox of Charity.Marcin Lewiński - 2012 - Informal Logic 32 (4):403-439.details
The principle of charity is used in philosophy of language and argumentation theory as an important principle of interpretation which credits speakers with "the best" plausible interpretation of their discourse. I contend that the argumentation account, while broadly advocated, misses the basic point of a dialectical conception which approaches argumentation as discussion between two parties who disagree over the issue discussed. Therefore, paradoxically, an analyst who is charitable to one discussion party easily becomes uncharitable to the other. To overcome this (...) paradox, I suggest to significantly limit the application of the principle of charity depending on contextual factors. (shrink)
Illocutionary Pluralism.Marcin Lewiński - 2021 - Synthese 199 (3-4):6687-6714.details
This paper addresses the following question: Can one and the same utterance token, in one unique speech situation, intentionally and conventionally perform a plurality of illocutionary acts? While some of the recent literature has considered such a possibility Perspectives on pragmatics and philosophy. Springer, Cham, pp 227–244, 2013; Johnson in Synthese 196:1151–1165, 2019), I build a case for it by drawing attention to common conversational complexities unrecognized in speech acts analysis. Traditional speech act theory treats communication as: a dyadic exchange (...) between a Speaker and a Hearer who trade illocutionary acts endowed with one and only one primary force. I first challenge assumption by discussing two contexts where plural illocutionary forces are performed in dyadic discussions: dilemmatic deliberations and strategic ambiguity. Further, I challenge assumption by analyzing poly-adic discussions, where a speaker can target various participants with different illocutionary acts performed via the same utterance. Together, these analyses defend illocutionary pluralism as a significant but overlooked fact about communication. I conclude by showing how some phenomena recently analyzed in speech act theory—back-door speech acts New work on speech acts. Oxford University Press, Oxford, pp 144–164, 2018) and dog-whistles New work on speech acts. Oxford University Press, Oxford, pp 360–383, 2018)—implicitly presuppose illocutionary pluralism without recognizing it. (shrink) | CommonCrawl |
Resources for learning Chemistry
This question's answers are a community effort. Edit existing answers to improve this post. It is not currently accepting new answers or interactions.
Based on various other Stack Exchange sites (Mandarin Chinese, Russian, and German), we adapt this project here for chemistry, since it's a great idea to have all kinds of resources in one place.
This is a specifically created Community Wiki which gathers resources for learning Chemistry. The lists are a community maintained project, hence everybody with more than 100 reputation points can contribute edits to the appropriate sections. If you feel something is missing, just fill it in. To avoid multiple answers for similar branches, we decided on a general outline, which is locked, i.e. no new answers can be added.
If you have concerns or questions, you can discuss this list on its parent meta post. If they are of a more general concern, you can also post a new question on chemistry.meta.se. This procedure is used, so the comment section here does not become too overcrowded. If you do choose the second option, please leave the link to it in the comments to this post.
Answers have a type of resource each.
If possible, state whether the material is directed towards a beginner, intermediate or an advanced audience.
Do not include links that lead to illegal content or sites that host such content. If you see any, please flag for moderator attention and choose "other" so you can point us to the content. We'll delete it as soon as we see the flag. (You can of course also delete it yourself. If you do, please flag it anyway, so that we are aware of it. In this case it is crucial you fill in the edit summary with something like: Removed link to illegal content.)
Both free and commercial resources are allowed, but make sure to include a note if they are the latter. Remember the rules about self-promotion. Include also if registration is required.
Include links to the sites only, don't post images, they would take too much space.
Add the resources in alphabetical order so they're easier to find.
For the resources, a short summary is very much appreciated.
If you have questions about these guidelines, please head to meta to make yourself heard. The above points are not set in stone and might change in the future.
(Text)Books: All books that teach you chemistry with theory and exercises. The subcategories are:
Inorganic, Organic and Physical Chemistry
Analytical Chemistry, Biochemistry and Chemical Biology, Chemical Engineering, Computational and Quantum Chemistry, Theoretical Chemistry
You can add any subcategory to this post if it is missing.
Online courses and Websites: Free or paid services online that teach you chemistry through lessons as well as sites that give help for learning chemistry. They give material, tips, hints, and various help for self-learners or regular students.
Software: This can be any software, ranging from browser plugins and mobile apps to standalone desktop applications. Pure 2D or 3D visualization programs as well as quantum chemistry programs might not fit in this category, as they are not primarily focused on teaching chemistry.
Video Resources & TV: Video resources which help learning chemistry.
References about Nomenclature: Successful communication requires an agreed set of definitions compiled as nomenclature. An example for such compilations are IUPAC's Color Books, named by the color of their book cover.
Currently, those are all the categories. If you think that a new one should be added, please submit an answer in the corresponding meta thread. (A comment is probably not sufficient, as it does not "bump" the question on the active tab.)
software reference-request books
pH13 - Yet another Philipp
$\begingroup$ Official chat room for this thread: chat.stackexchange.com/rooms/29685/… (Ping a moderator in Chemistry Chat if frozen.) $\endgroup$
Books about Inorganic Chemistry
General texts
Housecroft, C. E.; Sharpe, A. G. Inorganic Chemistry, 4th ed.; Prentice Hall: Upper Saddle River, NJ, 2012.
Weller, M.; Overton, T.; Rourke, J.; Armstrong, F. Inorganic Chemistry, 6th ed; Oxford UP: Oxford, U.K., 2014.
This is the latest version of the textbook commonly known as Shriver & Atkins.
Miessler, G. L.; Fischer, P. J.; Tarr, D. A. Inorganic Chemistry, 5th ed.; Prentice Hall: Upper Saddle River, NJ, 2014.
A relatively shorter text, but explains chemical bonding extremely well.
Ghosh, A.; Berg, S. Arrow Pushing in Inorganic Chemistry: A Logical Approach to the Chemistry of the Main-Group Elements, 1st ed.; Wiley: Hoboken, NJ, 2014.
An approach to reactions in inorganic chemistry with emphasis on their mechanisms, covering the main-group elements.
Lee, J. D. Concise Inorganic Chemistry, 5th ed; Wiley: Hoboken, NJ, 1999.
A classic introductory inorganic chemistry textbook.
Brauer, G. Handbook of Preparative Inorganic Chemistry, 2nd ed.; Academic Press: New York, 1963.
A two-volume compendium about small-scale syntheses of elements, main-group and transition element inorganic compounds.
Books about Organic Chemistry
Carey, F. A.; Sundberg, R. J. Advanced Organic Chemistry, Part A: Structure and Mechanisms, 5th ed.; Springer: New York, 2007.
Carey, F. A.; Sundberg, R. J. Advanced Organic Chemistry, Part B: Reactions and Synthesis, 5th ed.; Springer: New York, 2007.
Carey & Sundberg is a classic two-volume text with an extremely in-depth discussion. Mechanisms are elucidated in great detail (primarily via MO theory, with results from both ab initio and semi-empirical methods), and the exposition of pericyclic reactions is notably excellent. It also contains a great deal of illuminating content on conformational analysis. Definitely not appropriate for a first textbook, but essential reading for advanced undergraduates and above.
Carey, F. A.; Giuliano, R. M. Organic Chemistry, 9th ed.; McGraw-Hill: New York, NY, 2014.
Clayden, J.; Greeves, N.; Warren, S. Organic Chemistry, 2nd ed.; Oxford UP: Oxford, U.K., 2012.
Excellent first textbook for organic with lucid explanations. It's worth noting that the first edition, while slightly more verbose, contains more information than the second, where some chapters and sections were cut.
Smith, M. B. March's Advanced Organic Chemistry: Reactions, Mechanisms, and Structure, 7th ed.; Wiley: Hoboken, NJ, 2013.
A comprehensive book that covers nearly every reaction under the sun, with appropriate references to primary literature. Arguably best used as a reference and not as study material, but it is also surprisingly readable.
Solomons, T. W. Graham; Fryhle, C. B.; Organic Chemistry, 10th ed.; Wiley: Hoboken, NJ, 2011.
Introductory textbook for organic chemistry.
Wade, L. G. Organic Chemistry, 8th ed.; Pearson Education: Glenview, IL, 2013.
A systematic introductory textbook for organic chemistry. Follows the traditional functional group approach.
Warren, S.; Wyatt, P. Organic Synthesis: The Disconnection Approach, 2nd ed.; Wiley: Chichester, U.K., 2008.
A step-by-step introduction to organic retrosynthetic analysis and the construction of different relations between functional groups. Do also get the accompanying workbook (the solutions are discussed immediately after problems).
Kürti, L.; Czakó, B. Strategic Applications of Named Reactions in Organic Synthesis; Elsevier: Amsterdam, 2005.
An incredible compilation of 250 named reactions, with discussion of mechanisms and examples of application to total synthesis. Nearly 10,000 references to primary literature.
Mundy, B. P.; Ellerd, M. G.; Favaloro, F. G., Jr. Name Reactions and Reagents in Organic Synthesis, 2nd ed.; Wiley: Hoboken, NJ, 2005.
A reference book for numerous reaction mechanisms and common reagents.
Nicolaou, K. C.; Sorensen, E. J. Classics in Total Synthesis; Wiley: Weinheim, Germany, 1996.
Thorough discussion of the retrosynthesis and forward synthesis of 36 molecules, by one of the most well-known synthetic chemists of recent years and (at the time) his Ph.D. student. Examples are taken from almost the entire history of organic synthesis: from Woodward's 1954 synthesis of strychnine to the author's own 1995 synthesis of brevetoxin B. Also check out the sequels, Classics in Total Synthesis II and Classics in Total Synthesis III.
Wuts, P. G. M. Greene's Protective Groups in Organic Synthesis, 5th ed.; Wiley: Hoboken, NJ, 2014.
Comprehensive listing of protecting groups, protection and deprotection conditions with references to primary literature, and handy reactivity charts which assess the stability of protecting groups towards various reagents and conditions.
Zubrick, James W. The Organic Chem Lab Survival Manual: A Student's Guide to Techniques; Wiley: Hoboken, NJ, 2019. This primer introduces students to basic equipment and techniques in the organic lab. Topics considered include literature search, general safety, microscale operation, product isolation / purification / characterization (e.g., melting point, IR and NMR), and record keeping.
Pavia, D. L.; Lampman, G. M.; Kriz, G. S.; Vyvyan, J. A. Introduction to Spectroscopy, 5th ed.; Cengage Learning: Stamford, CT, 2015.
Excellent introductory text with good coverage of all the typical structure determination techniques: elemental analysis, NMR, IR, MS, and UV-Vis.
Silverstein, R. M.; Webster, F. X.; Kiemle, D. J.; Bryce, D.L. Spectrometric Identification of Organic Compounds, 8th ed.; Wiley: Hoboken, NJ, 2014.
Perhaps a slightly more in-depth discussion than Pavia, but without sacrificing any clarity. Includes a large number of tables and charts of spectroscopic data, making it also very valuable as a reference.
Claridge, T. D. W. High-Resolution NMR Techniques in Organic Chemistry, 3rd ed.; Elsevier: Amsterdam, 2016.
A much more involved (graduate level) discussion of how NMR experiments are designed and how we extract information from them. Along with the more physchem-oriented NMR books, this is recommended for those who want to know what is actually going on in their NMR machine.
Field, L. D.; Li, H. L.; Magill, A. M. Organic Structures from Spectra, 6th ed.; Wiley, 2020.
Brief theory of UV-Vis, MS, IR, and NMR followed by training sets of experimentally recorded data, with a focus on the combination of these techniques in the process of structure elucidation. Instructors may obtain an answer key.
Field, L. D.; Li, H. L.; Magill, A. M. Organic Structures from 2D NMR Spectra; Wiley, 2016.
While (newer editions of) their other book about structure elucidation contain some 2D NMR spectra, this book emphasises training in how to interpret correlation NMR spectra. An instructor's guide including the answers is equally available.
Books about Physical Chemistry
Atkins, P.; de Paula, J. Physical Chemistry, 10th ed; Oxford UP: Oxford, U.K., 2014.
Classic physical chemistry textbook, but can sometimes be difficult to follow, especially for first-time students.
Atkins, P.; de Paula, J. Elements of Physical Chemistry, 7th ed.; Oxford UP: Oxford, U.K., 2016.
A less detailed book than Physical Chemistry. Can be used for introductory physical chemistry courses.
Engel, T.; Reid, P. Physical Chemistry, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, 2012.
Levine, I. N. Physical Chemistry, 6th ed.; McGraw-Hill: New York, 2009.
Statistical mechanics
Chandler, D. Introduction to Modern Statistical Mechanics; Oxford UP: Oxford, U.K., 1987.
Useful for undergraduate and early graduate study of both classical and quantum statistical mechanics, as well as for reference.
McQuarrie, D. A. Statistical Mechanics; University Science Books: Mill Valley, CA, 2000.
Keeler, J. Understanding NMR Spectroscopy, 2nd ed.; Wiley: Chichester, U.K., 2010.
An extremely accessible and readable introduction to the theory behind NMR experiments. Describes nearly every important aspect of NMR, from basic quantum mechanics to the vector model, product operators, and a range of NMR experiments. It seems to be a conscious choice on the part of the author to not delve too deep into the quantum mechanics, in the interests of clarity and understanding.
Levitt, M.H. Spin Dynamics, 2nd ed.; Wiley: Chichester, U.K., 2008.
An extremely in-depth treatment of the quantum mechanics of NMR, which goes well beyond the level of Keeler's book.
Gunther, H. NMR Spectroscopy, 3rd ed.; Wiley: Weinheim, Germany, 2013.
Computational and Quantum Chemistry
Levine, I. N. Quantum Chemistry, 7th ed.; Pearson: Upper Saddle River, NJ, 2012.
This book acts as a first introduction to quantum chemistry and computational methods for students who do not have prior experience with the field.
Atkins, P. W.; Friedman, R. S. Molecular Quantum Mechanics, 5th ed.; Oxford UP: Oxford, U.K., 2010.
Provides a more extensive coverage than Levine.
Cohen-Tannoudji, C.; Diu, B.; Laloe, F. Quantum Mechanics; Wiley: New York, 1977.
A very good book on quantum mechanics that is often used by physics majors to learn the subject. Reading it is necessary if a better understanding than that provided by the previous books is desired.
McWeeny, R. Symmetry; Dover, 2002.
Parr, R. G.; Yang, W. Density-Functional Theory of Atoms and Molecules; Oxford UP: Oxford, U.K., 1996.
The first book on DFT and by leaders in the field, still unbeatable.
Jensen, F. Introduction to Computational Chemistry; Wiley, 2007.
This book treats the computational methods at a beginner level.
Szabo, A.; Ostlund, N. S. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory; Dover, 1996
Helgaker, T.; Jorgensen, P; Olsen, J. Molecular Electronic-Structure Theory.
This is the reference book on wavefunction methods.
Cancès, E; Defranceschi, M; Kutzelnigg, W.; Le Bris, C.; Maday, Y. Computational Quantum Chemistry: A primer.
A sophisticated mathematical analysis of wavefunction methods and density functional theory.
Solid State Chemistry and Crystallography
West, A. R. Solid State Chemistry and Its Applications, 2nd ed.; John Wiley & Sons, Inc: Chichester, West Sussex, 2014.
Simon, S. H. The Oxford Solid State Basics, 1st ed.; Oxford University Press: Oxford, 2013.
Smart, L.; Moore, E. Solid State Chemistry: An Introduction, 4th ed.; CRC Press: Boca Raton, FL, 2012.
Tilley, R. J. D. Understanding Solids: The Science of Materials; J. Wiley: Chichester, West Sussex, England; Hoboken, NJ, USA, 2004.
Hoffmann, R. How Chemistry and Physics Meet in the Solid State; DOI: 10.1002/anie.198708461.
Learning solid state theory is difficult for a chemist, in part because the field is primarily driven by a different community (physicists) that has a different language and way of understanding bonding. This review helps in bridging the gap.
Ashcroft, N. W.; Mermin, N. D. Solid State Physics; Cengage Learning, 1976.
This textbook is a primary reference for learning solid state physics.
Evarestov, R. A. Quantum Chemistry of Solids: LCAO Treatment of Crystals and Nanostructures, Springer, 2012.
Fundamentals of Crystallography, 3rd ed., Oxford University Press, 2011
Müller, Ulrich Symmetry Relationships between Crystal Structures. Applications of Crystallographic Group Theory in Crystal Chemistry, 2nd ed., Oxford University Press, 2017
A rigorous yet accessible explanation of the foundations of crystallography and symmetry relationships with a large collection of exercises to practice. Good for self-study.
Müller, U. Anorganische Strukturchemie, 6., aktualisierte Aufl., unveränd. Nachdr.; Studium; Vieweg + Teubner: Wiesbaden, 2009. (in German)
Dronskowski, R. Computational Chemistry of Solid State Materials: A Guide for Materials Scientists, Chemists, Physicists and Others; Wiley-VCH: Weinheim, 2005.
Molecular Orbital Theory
Fleming, I. Molecular Orbitals and Organic Chemical Reactions: Student Edition; Wiley: Chichester, 2009.
A readable introduction to molecular orbital theory with an emphasis on its application to simple organic chemical reactions (SN1/SN2 etc). Suitable for undergraduates wanting a more detailed understanding of organic reactivity. Also widely considered to be the book for the molecular orbital treatment of pericyclic reactions.
Fleming, I. Molecular Orbitals and Organic Chemical Reactions: Reference edition; Wiley: Chichester, 2009.
An expanded version of Molecular Orbitals and Organic Chemical Reactions: Student Edition with a full list of references and additional coverage of some material. More suited to graduate students / instructors who might want to read deeper into a topic.
Albright, T. A.; Burdett, J. K.; Whangbo, M-H. Orbital Interactions in Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2013.
Arguably the best book on the subject, both in terms of depth (some mathematical treatment of the topics) and coverage (organic and inorganic molecules are discussed). Given the size (over 800 pages), Albright is more of a reference text than Fleming which can easily be read cover-to-cover, and the level of discussion is more geared towards graduate students / undergraduates in later years who already have some understanding of molecular orbitals.
Jean, Y. Molecular Orbitals of Transition Metal Complexes; Oxford, 2005.
An undergraduate level textbook covering the fundamentals of molecular orbital theory applied to transition metal complexes. The derivations of the MO diagrams for various coordination geometries are provided, along with a chapter on 'applications' of MO theory such as how complexes react.
Elgrishi, N. et al. A Practical Beginner's Guide to Cyclic Voltammetry; J. Chem. Educ. 2018, 95, 197–206. (open access)
A short introduction teaching the very basics of CV theory, supplemented with tips and warnings about pitfalls in recording and interpreting the experimental data. The supplementary information contains five training modules that walk through the more typical minutiae one encounters.
orthocresol
$\begingroup$ For Organic chemistry I really like "Organic Chemistry" by Brown, Iverson, Anslyn and Foote $\endgroup$
$\begingroup$ Umm...Paula Bruice would be a good book for beginners..to get the basics clear..and for further reading Peter Sykes is also good. $\endgroup$
– Muskaan
$\begingroup$ I found McQuarrie's Quantum Chemistry to be very helpful for quantum mechanics. Especially the "math chapters", which reviewed pertinent math techniques before they are required for each quantum topic. $\endgroup$
– electronpusher
Video Resources (online)
The University of Nottingham's Periodic Videos
The periodic table of videos includes introductions to all elements. The molecular videos section focuses on interesting chemical reactions. Apart from having quite some entertaining value, they visualise a lot of reactions that can only safely be carried out in a laboratory environment. The videos are hosted on their YouTube channel.
Khan Academy is a site which explains many different subjects to all who are eager to learn. The subjects are taught through video lessons on YouTube, in which the speaker uses a virtual blackboard to draw and clarify the explanations for a deeper understanding. As the videos cover basic theory, they are probably more helpful for the beginner than for the advanced student. There are two channels for chemistry:
Khan Academy - Chemistry
Did you know that everything is made out of chemicals? Chemistry is the study of matter: its composition, properties, and reactivity. This material roughly covers a first-year high school or college course, and a good understanding of algebra is helpful.
Khan Academy - Organic Chemistry
Topics covered in college organic chemistry course are explained. Basic understanding of basic high school or college chemistry assumed (although there is some review).
Organic Chemistry 1 by University of New Orleans
This is the first semester of sophomore Organic Chemistry. This course completes most chemistry requirements for pre-professional degree programs and science degrees. This course will cover the introduction of the basic fundamental topics of organic chemistry. Specifically, the structure-activity relationship and spectroscopy of organic functional groups will be investigated. Starting with simple organic models, we will cover structures of organic chemicals from basic connectivity to three-dimensional spatial alignments. Nomenclature and spectroscopy of the different groups will be covered along with the reactivity of those groups.
Organic Chemistry 2 by University of New Orleans
This is the second semester of sophomore Organic Chemistry. This course completes most chemistry requirements for pre-professional degree programs and science degrees. This course will cover the reactions, mechanisms and properties of various functional groups including dienes, arenes, carbonyls, carboxylic acids and their derivatives, phenols, amines, as well as biochemicals such as carbohydrates, lipids, amino acids and proteins.
General Organic Chemistry 2 Course by Arizona State University
This is the complete set materials used in the second semester of Organic Chemistry. It includes home works, video lectures, notes, exams, etc.
Chemistry Courses by University of Massachusetts - Boston
Offers both Organic Chemistry 1 and 2.
"Organic Chemistry – Structure and Reactivity" by UC Berkeley professor Peter C. Vollhardt
An excellent resource for organic chemistry. It differs from other courses in its manner of approach to the topic.
Organic Reactions and Pharmaceuticals by Professor Hardinger, UCLA
The lectures are a lot more exciting than others due to the professor's method of teaching. Organic Reactions and Pharmaceuticals is a class that provides an in-depth analysis of organic reactions; nucleophilic and electrophilic substitutions and additions; electrophilic aromatic substitutions; carbonyl reactions; catalysis; the molecular basis of drug action; and the organic chemistry of pharmaceuticals.
UC Irvine OpenCourseWare
This is the mother of all resources. OpenChem by UCI offers a course on each and every aspect of chemistry. It has six different courses for organic chemistry alone, whose levels vary from undergraduate to graduate.
The youtube channel of the Australian and New Zealand Society for Magnetic Resonance
Presented briefly by Kwan et al. in the Journal of Chemical Education, this channel lectures on basic and advanced principles of NMR/MRI and EPR.
MIT 5.60 Thermodynamics and Kinetics, Spring 2008
The above course is available as part of MIT OCW. Links in the description of the video lectures include lecture notes, readings, exams and course materials available for download. Covers a basic to intermediate introduction to thermodynamics and kinetics at the undergraduate level.
TMP Chem
Trent M. Parker's youtube channel about quantum chemistry, spectroscopy, chemical thermodynamics, kinetics, theoretical/computational chemistry, reviewing mathematics for physical chemistry. Computations centre on Python.
Uranium- Twisting the Dragon's Tail
Host and physicist Dr. Derek Muller unlocks the mysteries of uranium, one of the Earth's most controversial elements. Born from the collapse of a star, uranium has brought hope, progress and destruction. It has revolutionized society, from medicine to warfare. It is an element that has profoundly shaped the past, will change the future and will exist long after humans have left the Earth.
Chemistry LibreTexts
This collection was formerly known as UC Davis' ChemWiki: The Dynamic Chemistry Hypertext. It is a collaborative approach toward chemistry education where an Open Access textbook environment is constantly being written and re-written by students and faculty members resulting in a free Chemistry textbook to supplant conventional paper-based books.
Jim Clark's Chemguide
An in-depth overview of many areas of basic chemistry, including organic, inorganic, physical, and instrumental. Forgoes mathematical treatments of chemistry in favor of teaching an intuitive understanding for how chemical systems behave. Targeted at UK A-level students, but useful for anyone who wants to shore up their fundamentals.
Virtual Textbook of Organic Chemistry
A very useful introduction to principles in organic chemistry and the reactions of common functional groups.
IUPAC Compendium of Chemical Terminology - the Gold Book
A concise compendium of the most common terminology that is used in chemical and related sciences.
NIST Chemistry WebBook
A searchable database for standard reference data of chemical compounds by the National Institute of Standards and Technologies.
Symmetry@Otterbein
An interactive tutorial (by Otterbein University) about molecular, as well as crystallographic symmetry.
A Hypertext Book of Crystallographic Space Group Diagrams and Tables
The space group diagrams you find in the International Tables for Crystallography. It equally considers standard settings (e.g., $P2_1/c$) and alternatives like $P2_1/a$, $P2_1/n$, $B2_1/a$, $B2_1/d$ -- all of them filed under No. 14.
Online Dictionary of Crystallography
The dictionary is provided by the International Union of Crystallography (IUCr). It is a curated glossary about the more frequently met crystallographic concepts.
Organic Chemistry Lab Techniques
An illustrated collection of basic techniques met in the organic chemistry lab. The content naturally overlaps with, and complements, print primers like the one by Zubrick.
Organic Chemistry Data & Info
A site of general interest about chemical and reaction data, cross-link to literature references (like the Hans Reich collection) useful for the chemist in the lab as well as educators/students. Moderated by the Division of Organic Chemistry of the American Chemical Society.
$\begingroup$ Is there a clean difference between an online course a website, or is it somewhat fuzzy? $\endgroup$
– chipbuster
$\begingroup$ For me, the online courses are a special subset of websites that are explicitly for teaching conceptual stuff and not only for giving some information. E.g., the mentioned Goldbook and the Webbook both give information from which you can learn, but they are not teaching like what you'd expect from a school lesson. And then there are sites like Khan Academy on youtube, which besides other subjects is about teaching chemistry and should be classified as an online course. But I guess you are right, when saying it can be quite fuzzy. (Please ask about such things in the meta thread next time.) $\endgroup$
– pH13 - Yet another Philipp
$\begingroup$ Another great website is the eBook Principles of General Chemistry $\endgroup$
Books about Analytical Chemistry
Schwarzenbach, G.; Flaschka, H. A. Complexometric Titrations; Methuen: London, 1957 (translated by H. Irving in 1969).
Skoog, D. A.; West, D. M.; Holler, F. J.; Crouch, S. R. Fundamentals of Analytical Chemistry, 9th ed.; Brooks/Cole: Pacific Grove, CA, 2013.
Elgrishi et al., A Practical Beginner's Guide to Cyclic Voltammetry, J. Chem. Educ. 2018, 95, 2, 197-206, doi 10.1021/acs.jchemed.7b00361. A short introduction, indicating tips and pitfalls, published as open access.
Books about Biochemistry and Chemical Biology
Voet, D.; Voet, J. G. Biochemistry, 4th ed.; Wiley: Hoboken, NJ, 2011.
Nelson, D.; Cox, M. Lehninger Principles of Biochemistry, 7th ed.; Macmillan Higher Education: Houndmills, UK, 2017.
Berg, J.; Tymoczko, J.; Gatto Jr., G; Stryer L. Biochemistry, 9th ed.; Macmillan Higher Education: Houndmills, UK, 2019.
Books about Chemical Engineering
Probably the most comprehensive reference for chemical engineering is the McGraw-Hill Chemical Engineering Series, which contains more than you need to know as a student. After all, a strong knowledge of all areas of chemistry is necessary. As an engineer, and especially as a chemist, if you are looking for sizing operations you will need to think about details which, unfortunately, are not in the books.
For the related question, see: What are introductory level books on chemical engineering?
The Visual Encyclopedia of Chemical Engineering is a project hosted by the Department of Chemical Engineering, University of Michigan. By text, illustration, and video, typical applications of devices and processes, their advantages and disadvantages are presented. Equally, means to monitor processes are shown and literature references (pointing e.g. to Perry's Chemical Engineering Handbook) provided.
Books about Computational Chemistry and Quantum Chemistry
Atkins, P.; Friedman, R. Molecular Quantum Mechanics, 5th ed.; Oxford UP: Oxford, U.K., 2010. Oxford University Press, Amazon.
Cramer, C. J. Essentials of Computational Chemistry: Theories and Models, 2nd ed.; Wiley: Chichester, U.K., 2004. Wiley, Amazon.
Jensen, J.H. Molecular Modeling Basics; CRC Press: Boca Raton, FL, 2010. CRC Press, Amazon.
Levine, I. N. Quantum Chemistry, 7th ed.; Prentice Hall: Upper Saddle River, NJ, 2014. Prentice Hall, Amazon.
McQuarrie, D. A. Quantum Chemistry, 2nd ed.; University Science Books: Mill Valley, CA, 2007. University Science Books (https not available), Amazon.
Koch, W.; Holthausen , M. C. A Chemist's Guide to Density Functional Theory, 2nd ed.; Wiley-VCH: Weinheim, 2001. ISBNs: 3-527-30372-3 (Softcover); 3-527-60004-3 (Electronic). DOI: 10.1002/3527600043.
Introductory text for chemists familiar with conventional quantum mechanics. The book introduces density functional theory: its basis, concepts, terms, implementation, and performance in diverse applications. This includes the usage of DFT for structure, energy, and molecular property computations, as well as reaction mechanism studies, etc.
Allen, M. P.; Tildesley, D. J. Computer Simulation of Liquids; Clarendon Press: Oxford, U.K., 1987.
Bachrach, S. M. Computational Organic Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2014. Wiley (https not available), Amazon.com.
Jensen, F. Introduction to Computational Chemistry, 3rd ed.; Wiley: Chichester, U.K., 2017. Wiley (https not available), Amazon.com.
Provides a good overview/introduction to many aspects of QC. The focus is on concepts, not on mathematical rigour.
Szabo, A.; Ostlund, N.S. Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory; Dover: Mineola, NY, 1989 (revised in 1996). Dover Publications (https not available), Amazon.com.
A classic introduction to the ab initio wave-function-based methods of electronic structure theory. Contains detailed discussions of the Hartree-Fock and post-Hartree-Fock methods such as Møller–Plesset perturbation theory, configuration interaction, and coupled cluster. The latter discussions are correct, but sometimes a bit dated.
Tuckerman M.E; Statistical Mechanics: Theory and Molecular Simulation, 1st ed.; Oxford University Press: Oxford, U.K., 2010. Oxford UP, Amazon.com.
Helgaker T.; Jørgensen P.; Olsen J. Molecular Electronic-Structure Theory; Wiley: Chichester, U.K., 2000. Wiley (https not available), Amazon. An in-depth description of the inner workings of all the common modern wavefunction-based methods - Hartree-Fock and multi-reference self-consistent field, perturbation theory, configuration interaction and coupled cluster.
Books about Theoretical Chemistry
Cotton, F. A. Chemical Applications of Group Theory, 3rd ed.; Wiley: New York, 1990.
Vincent, A. Molecular Symmetry and Group Theory: A Programmed Introduction to Chemical Applications, 2nd ed.; Wiley: Chichester, U.K., 2000.
$\begingroup$ related: List of important publications in chemistry on Wikipedia $\endgroup$
$\begingroup$ There is an excellent course available for free at compmatphys.org . I am not sure if the videos will be available afterwards. $\endgroup$
– CoffeeIsLife
$\begingroup$ Can someone mention what are the standard undergraduate level (up to 4th year) analytical chemistry textbooks used in Germany? $\endgroup$
– M. Farooq
ChemOffice Professional
Commercial software for drawing molecular structures, 3D models and many more. High price, but most higher-education institutes will provide students/staff with free institutional licenses. The molecular sketcher now has a free online version (built using HTML5 and JS): ChemDraw online
MarvinSketch (desktop) & MarvinJS (web)
Freeware (closed-source) for drawing molecular structures with a wide feature set. In addition to just being able to draw Lewis structures, the software includes plugins to name what you have drawn (systematic/traditional), predict properties, change atom and bond properties, generate stereoisomers, and much more. Desktop version is Java based so works on Mac, Windows, Linux.
ChemDoodle
Commercial software for 2D and 3D molecular structures and diagrams and much more. Also has versions for tablet/mobile. An online free version of ChemDoodle is here: ChemDoodle Web Components
ACD/ChemSketch
Commercial software for Windows only. Includes a free version with fewer features.
chemfig
Free $\mathrm\LaTeX$ package distributed under the LATEX Project Public License 1.3c developed by Christian Tellechea for creating 2D chemical structures with seamless integration into any type of a document, from a standalone illustration to a textbook or a poster. Since TikZ is used for graphics generation, the functionality of chemfig can be greatly extended. PDF output can also be rasterized by using external tools such as ImageMagick. Also, see chemfig questions on TeX.SE.
Avogadro
Free, open source software for generating 3D models and computational chemistry.
GAMESS
Free ab initio molecular quantum chemistry program.
Free for academic use ab initio/DFT quantum chemistry program. Has a nice tutorial on basic molecular modelling here.
Open Babel: The Open Source Chemistry Toolbox
Free "Open Babel is a chemical toolbox designed to speak the many languages of chemical data. It's an open, collaborative project allowing anyone to search, convert, analyze, or store data from molecular modeling, chemistry, solid-state materials, biochemistry, or related areas."
RDKit: Open-Source Cheminformatics Software
Odyssey by Wavefunction
Commercial software designed to be used for teaching and learning chemistry. Includes high quality molecular simulations of many materials, in-built virtual labs. Excellent animations of many physical and chemical processes.
$\begingroup$ Related questions: - What free software exists to view molecule pictures within a file explorer? - How do I take chemistry notes on Mac? - How do I make 3d molecular graphics similar to those shown on Wikipedia? $\endgroup$
$\begingroup$ Also related on our meta: Software to name compounds $\endgroup$
– Gaurang Tandon
Books about General Chemistry
Atkins, P. W.; Jones, L. L.; Laverman, L. E. Chemical Principles: The Quest for Insight, 6th ed.; W. H. Freeman: New York, 2012.
Silberberg, M.; Amateis, P. Chemistry: The Molecular Nature of Matter and Change, 7th ed.; McGraw-Hill: New York, 2014.
Oxtoby, D. W.; Gillis, H. P.; Campion, A. Principles of Modern Chemistry, 8th ed.; Cengage Learning: Boston, MA, 2015.
Zumdahl, S. S.; Zumdahl, S. A. Chemistry, 9th ed.; Brooks/Cole: Pacific Grove, CA, 2013.
Whitten, K. W.; Davis, R. E.; Peck, L.; Stanley, G. G. Chemistry, 10th ed.; Brooks/Cole: Pacific Grove, CA, 2013.
Munowitz, M. Principles of Chemistry, W. W. Norton and Co.: New York, NY, 2000.
References about Nomenclature
Successful communication requires an agreed set of definitions compiled as nomenclature. An example for such compilations are IUPAC's Color Books, named by the color of their book cover. Below, they are listed in alphabetic order of their color.
Favre, H. A. Powell W. H. (eds.) Nomenclature of Organic Chemistry: IUPAC Recommendations and Preferred Names 2013, IUPAC Blue book, RSC Publishing
It is noteworthy that there is a searchable compilation of these rules on-line prepared by G. P. Moss here.
The sub-set about substitutive nomenclature is nicely illustrated by Hellwich et al. in the Brief guide to the nomenclature of organic chemistry, an open-access publication, together with a four-page SI (pdf) summarizing the article.
McNaught, A. D. and Wilkinson, A. (eds.) Compendium of Chemical Terminology, The Gold Book, Blackwell Science, Oxford 1997. The perhaps most elemental compilation about terms, synonyms, acronyms, and abbreviations by IUPAC equally may be consulted as online reference.
Cohen, E. R. et al. (eds.) Quantities, Units and Symbols in Physical Chemistry, IUPAC Green Book. A joint effort by IUPAC, IUPAP, and ISO about quantities, constants, units and their recommended symbolization. The conversion of units, scientific typography and uncertainty are addressed, too. The content of the second reprint is available online (pdf). Equally, Stohner and Quack prepared a four-page summary (pdf).
Inczedy, J.; Lengyel, T. and Ure, A. M. IUPAC Compendium on Analytical Nomenclature, Definitive Rules 1997 3rd edition, Blackwell Science, 1998. Equally known as Orange Book, it addresses topics like the (statistical) representation of results, terms and definitions of classical wet chemistry like titrimetry, thermo- and electrochemical analysis, separations (chromatography and extraction), spectroscopic and spectrometric techniques; kinetic, radiochemical or surface analytical characterizations; as well as quality assurance. A fourth edition was foreseen for 2019 and currently is scheduled for 2021.
Between 2000 and 2003, IUPAC's Analytical Chemistry Division set up a public searchable online compendium. To ease the compilation of .pdf files, there is a searchable index. There are concepts considered important (PCA / principal component analysis, PLS / partial least squares, or chemometrics) for which the current edition does not (yet) contain a proper dedicated keyword.
Jones, R. G. et al. Compendium of Polymer Terminology and Nomenclature, IUPAC Recommendations 2008, RSC Publishing, 2009. The Purple Book addresses both general as well as specific terms about the nomenclature of polymers, their graphical representation. In addition, copolymers and blends of polymers; liquid crystals; sols, gels, networks, inorganic organic hybrid materials; reactions and aging of polymers are among the topics addressed.
Since June 2014, there is searchable online pdf compendium by IUPAC, too.
Connelly, N. G.; Damhus, T.; Hartshorn, R. M.; Hutton, A. T. (eds.) Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005, RSC publishing, 2005. The Red Book compiles the rules about elements, isotopes, simple inorganic compounds, coordination compounds, metalorganic compounds, and solids.
There is a freely available online release (pdf) by IUPAC, as well as a public site about errors and updates. Similar to the Blue Book, there equally is a Brief guide to the nomenclature of inorganic chemistry with examples of application illustrated in colour, as well as a 4-page summary (pdf).
Férard, G.; Dybkaer, R.; Fuentes-Arderiu, X. Compendium of Terminology and Nomenclature of Properties in Clinical Laboratory Sciences, Recommendations 2016, RSC Publishing, 2016. The Silver Book is a joint effort by IUPAC and IFCC, and is to be understood as an interface between laboratory scientists and medical professionals.
A 4-page summary (pdf) is available.
Liébecq, C. Biochemical Nomenclature and Related Documents, 2nd edition, Portland Press, 1992. For this reference known as White Book, G.P. Moss set up an on-line companion site.
If used critically, the non-exhaustive listing about software to name [organic] compounds mentioned in a comment by @Gaurang Tandon may be of interest, too.
On achieving network throughput demand in cognitive radio-based home area networks
Mohd Adib Sarijari1,2,
Mohd Sharil Abdullah2,
Gerard JM Janssen1 &
Alle-Jan van der Veen1
The growing number of wireless devices for in-house use is causing a more intense use of the spectrum to satisfy the required quality of service, such as throughput. This has contributed to spectrum scarcity and interference problems, particularly in home area networks (HAN). Cognitive radio (CR) has been recognized as one of the most important technologies that could solve these problems and sustainably meet the required communication demands by intelligently exploiting temporarily unused spectrum, including licensed spectrum. In this paper, we propose a throughput demand-based cognitive radio solution for home area networks (TD-CRHAN) which aims to effectively and efficiently meet the ever-increasing throughput demand in HAN communication. It is shown numerically and by simulations that a TD-CRHAN can satisfy the throughput requested by the network devices while achieving high utilization of the available throughput. The analysis further shows that, by setting the achievable throughput to be as close as possible to the total demanded throughput (instead of maximizing it), a TD-CRHAN is able to relax the tight cooperative spectrum sensing requirements, which significantly improves cooperative spectrum sensing parameters such as the local spectrum sensing time and the number of cooperating spectrum sensing devices. Finally, it is shown that these cooperative spectrum sensing parameters can be further improved when additional channels are available.
A future home area network (HAN) is envisaged to consist of a large number of devices that support various applications including smart grid, security and safety systems, voice calls, and video streaming. Most of these home devices communicate using various wireless networking technologies such as WiFi, ZigBee, and Bluetooth, which typically operate in the already congested license-free ISM frequency band [1]. As these devices are located in a small physical space (i.e., limited by the size of the house), creating a dense HAN, they may interfere with one another and cause severe limitations to the quality of service (QoS), such as throughput. These issues are further aggravated in dense cities, where the HAN also receives interference from neighboring HANs. Cognitive radio (CR) is seen as one of the most promising technologies to solve these problems and at the same time fulfill the HAN's communication needs. CR technology enables the HAN devices to intelligently exploit idle spectrum, including licensed spectrum, for their communications, avoiding both being interfered with and causing interference to others (in particular, the incumbent user).
A key component of CR-based networks is spectrum sensing, i.e., to reliably identify temporarily unused spectrum which is then exploited. Many existing works on throughput-based spectrum sensing focus on maximizing the achievable throughput. In [2], the maximum achievable throughput is obtained by optimizing the local spectrum sensing time, subject to a certain level of spectrum owner protection. The work in [3] incorporates the parameters from spectrum sensing (i.e., the sensing time and the number of cooperating devices that decide the channel is occupied) and spectrum access (i.e., the transmission probability) and optimizes those parameters to yield the maximum throughput for a given spectrum set. Further, in [4], the optimal sensing order for the channels is determined based on their occupancy history, i.e., by correlating the channel availability statistics across time and frequency, in order to maximize the total achievable throughput. In addition, in our previous work [5] and in [6], throughput maximization is achieved by determining the optimal local spectrum sensing time, number of cooperating nodes and fusion strategy. However, aiming at maximizing the achievable network throughput leads to tight requirements on cooperative spectrum sensing parameters (e.g., spectrum sensing time and number of cooperating devices). On the other hand, in practice, every communication network has a certain demanded throughput; hence, a maximization of the achievable network throughput without taking into consideration the actual network's needs is inefficient. Throughout this paper, we refer to this throughput maximization-based solution in spectrum sensing as the conventional case.
In this work, we propose a throughput demand-based cognitive radio communication for home area networks (TD-CRHAN), where, instead of maximizing the achievable throughput, the TD-CRHAN seeks to tightly satisfy the network throughput demand. To the best of our knowledge, this is the first work proposing such an objective for CR-based HAN communication. In the TD-CRHAN, the optimal local spectrum sensing time and number of cooperating devices required for spectrum sensing are determined, and it is shown that these are significantly lower as compared to the values from the conventional scheme. In addition, by taking into consideration the total throughput demand in designing the CR-based HAN communication, the TD-CRHAN scheme is also able to determine the optimal number of channels needed for the HAN.
We mathematically model the proposed TD-CRHAN scheme and formulate a suitable optimization problem with corresponding constraints. In the derivations, we consider general expressions for the cooperative spectrum sensing performance parameters (i.e., cooperative probability of false alarm, and detection). This supports scenarios in which the signal-to-noise ratio (SNR) of the incumbent user is not the same at different sensing devices and supports more general fusion rules, not limited to OR and AND rules only. Note that most of the previous works consider the same incumbent signal strength at all sensing devices and/or only consider OR and AND rules [2, 5–7] in order to simplify the analytical models and derivations. Assuming the same SNR is not realistic, in particular for indoor environments, because the sensing devices will be located at various locations where for example, devices that are located near the window may receive a relatively strong incumbent user's signal while devices which are located further inside the house will experience a very low signal strength.
Finally, we thoroughly analyze the performance of the TD-CRHAN, numerically and through simulations, where we compare the performance with the conventional scheme, illustrate the impact of different parameter settings, and demonstrate the significant gains obtained from TD-CRHAN.
The remaining of this paper is organized as follows. Section 2 explains the proposed TD-CRHAN; Section 3 presents the derivation of the considered system model and the cooperative spectrum sensing, as well as the formulation of the problem and the proposed solution; the numerical analysis and the simulation results are presented in Sections 4 and 5, respectively; and the conclusions are in Section 6. A list of key symbols used in this paper is given in Table 1.
Table 1 List of key symbols
Throughput demand-based cognitive radio home area network (TD-CRHAN)
TD-CRHAN topology
The proposed TD-CRHAN topology is based on a network of clustered CR devices as shown in Fig. 1. It consists of a HAN gateway (G), a cognitive HAN controller (C), a number of cognitive cluster heads (CHs) and many CR-based HAN devices. In such a network, the cognitive HAN controller is connected to the HAN gateway with a fixed connection while the CHs are linked to the cognitive HAN controller through wireless multi-hop links. The CHs are deployed such that each area of the house is covered. The communication among CHs is in a meshed manner. Each CH will form a network cluster. The CR-based HAN devices will need to connect to one of the clusters in order to communicate with or through the HAN network.
The proposed network topology of TD-CRHAN
The functionalities of each network component are further described as follows.
The HAN gateway is the communication gateway for the HAN network to the outside world (i.e., the internet). Normally, the HAN gateway is connected to the internet service provider (ISP) for internet access through an Ethernet or Optical Fiber cable. The other possible connection is via a wireless link, e.g., the WiMAX or LTE network.
The cognitive HAN controller is the device that is responsible to manage and coordinate the spectrum usage of the HAN. For this, the cognitive HAN controller needs to construct a spectrum map database for the particular HAN environment. This database consists of a list of channels that each CH can use in its cluster, and the condition of each channel, i.e., the statistics of the channel activities including channel utilization (a minimal data-structure sketch of such a database is given after this component list). It is constructed from the information fed by the CHs using, for example, the concept of MAC-layer sensing [8]. From this database, the cognitive HAN controller will provide the CHs with the channels that they can scan and utilize for their cluster. Therefore, the channels that the CHs are going to exploit are optimal and not random. In addition, in this way, the cognitive HAN controller also knows which channels are being utilized by which CHs and which are still unallocated. In this work, the channels that are allocated to the CHs are called in-band channels while the channels that are not allocated are called candidate channels. This concept is illustrated as in Fig. 2.
The proposed spectrum management in TD-CRHAN
The cognitive cluster head (CH) is responsible to manage the usage of the cluster's in-band channels, including sensing and access. A CH can request more channels from the cognitive HAN controller if the current in-band channels are not enough to support its network cluster demand. Each CH will utilize different channels from the other CHs, creating a distributed multi-channel network in the HAN. In addition, a CH is also responsible for selecting and grouping the CR-based HAN devices that are connected to it to perform the cooperative spectrum sensing (CSS) task. Besides, it also needs to schedule when and where the selected sensing groups should sense. For CSS, a CH also acts as the fusion center to which the local sensing results from the sensing devices are reported and where the decision on spectrum availability is made. Last but not least, from the CSS results, a CH is required to report the channel utilization and occupancy to the cognitive HAN controller periodically in order for the controller to construct and keep the spectrum map database up to date.
CR-based HAN devices are the devices that carry out various HAN applications including smart grid, security and safety, and home automation. These devices will connect to one of the clusters to get access and communicate with or through the HAN network. Besides performing the communication for its application, CR-based HAN devices also need to execute the spectrum sensing task. We consider two types of CR-based HAN devices: home and guest devices. Home devices are devices which belong to the HAN-owner, while guest devices do not belong to the HAN-owner. An example of a guest device is a neighbor's device which needs to off-load its traffic, e.g., due to congestion in its own HAN network. Another example is a device that passes through the house and wants to connect to the internet through the HAN network. For the home CR-based HAN devices, the communication topology within the cluster is in a mesh. However, the guest devices are only allowed to connect to the CH.
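As a purely illustrative aid (not part of the original system description), the spectrum map database kept by the cognitive HAN controller can be thought of as a simple keyed record store. The Python sketch below uses hypothetical field names and only shows how in-band and candidate channels could be tracked, and how the best candidate could be handed out:

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ChannelRecord:
    channel_id: int
    utilization: float                  # fraction of time the incumbent was observed busy
    quality: float                      # e.g., expected throughput or average SNR (assumed metric)
    allocated_to: Optional[int] = None  # CH id if in-band, None if candidate

@dataclass
class SpectrumMap:
    records: Dict[int, ChannelRecord] = field(default_factory=dict)

    def candidates(self):
        # Channels not allocated to any cluster head.
        return [r for r in self.records.values() if r.allocated_to is None]

    def best_candidate(self):
        # The controller hands out its best (highest-quality) candidate channel.
        free = self.candidates()
        return max(free, key=lambda r: r.quality) if free else None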
TD-CRHAN operation
In TD-CRHAN, CR-based HAN devices need to be connected to one of the clusters in order to get access and communicate with or through the HAN network. For this, any cluster joining mechanism such as those listed in [9] can be applied. One of the simplest mechanisms is the one employed in the IEEE 802.22 standard [10]. In this standard, the CH transmits a beacon at the beginning of each frame in each of the in-band channels. Alternatively, this beacon can be sent in one of the highest quality in-band channels. A CR-based HAN device will search for one of these beacons at its start-up and connect to the corresponding CH's cluster once a beacon is found. If the CR-based HAN device can hear beacons from multiple CHs, it may choose which cluster to join based on, for example, the signal strength and/or the signal quality of the received beacons [9].
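For illustration only (the helper scan_channel and the beacon fields are hypothetical and not prescribed by [10]), this joining procedure could be sketched as follows: the device scans the channels it can hear, collects beacons, and joins the CH with the strongest received beacon.

def join_cluster(channels, scan_channel):
    """Scan the given channels for CH beacons and join the best cluster found."""
    beacons = []
    for ch in channels:
        beacon = scan_channel(ch)   # returns None or a dict such as {'cluster_id': ..., 'rssi': ...}
        if beacon is not None:
            beacons.append(beacon)
    if not beacons:
        return None                 # no cluster found; retry later
    best = max(beacons, key=lambda b: b['rssi'])   # strongest (or best quality) beacon
    return best['cluster_id']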
Figure 3 illustrates the TD-CRHAN operation for one network cluster. In a TD-CRHAN, the bandwidth of the cluster is adaptable; it can be expanded or shrunk depending on the total throughput demand of the network cluster. In the example in the figure, at time \(t_0\), the cluster only uses one in-band channel, i.e., channel B. When the cluster needs more bandwidth, i.e., at times \(t_1\) and \(t_4\), for example due to a newly connected device, the number of in-band channels is increased to two channels with the addition of channel A, and to three channels with the addition of channel C, respectively. The additional in-band channels are obtained from the pool of candidate channels at the cognitive HAN controller. This process is illustrated by the arrows labeled "2" in Fig. 2. The cognitive HAN controller will provide the cluster with the best candidate channels it has. These channels will be passed on to the CH.
An example of TD-CRHAN operation
In addition, at time \(t_9\), the cluster shrinks its bandwidth by releasing one of its in-band channels, namely channel D, due to a decrease in the network demand, e.g., due to a device leaving the cluster. The released channel is selected as the lowest-quality channel among the in-band channels. This channel will be returned to the cognitive HAN controller and becomes a candidate channel that can be used by other clusters. This process is illustrated by the arrows labeled "1" in Fig. 2.
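A minimal sketch of this expand/shrink behaviour at the CH is given below. It is only illustrative (all interfaces such as total_throughput_demand, expected_throughput and the controller methods are hypothetical) and assumes the CH compares the cluster's demanded throughput against what its current in-band channels can deliver.

def adapt_bandwidth(cluster, controller):
    """One bandwidth adaptation step executed by the cluster head."""
    demand = cluster.total_throughput_demand()
    achievable = sum(ch.expected_throughput() for ch in cluster.in_band)
    if demand > achievable:
        # Expand: request the controller's best candidate channel as a new in-band channel.
        new_ch = controller.allocate_best_candidate()
        if new_ch is not None:
            cluster.in_band.append(new_ch)
    elif len(cluster.in_band) > 1:
        # Shrink: release the lowest-quality in-band channel if the demand
        # can still be met without it.
        worst = min(cluster.in_band, key=lambda ch: ch.quality())
        if achievable - worst.expected_throughput() >= demand:
            cluster.in_band.remove(worst)
            controller.return_channel(worst)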
During typical CR operation, spectrum sensing will be executed first before any channel can be used for data transmission. In this work, the CSS method is considered. Therefore, the sensing operations will consist of spectrum sensing and reporting segments. For this, the CR-based devices will be grouped together forming multiple spectrum sensing groups in the cluster. For instance, in Fig. 3, three sensing groups are formed: groups 1, 2, and 3. The CH will schedule and distribute the spectrum sensing tasks among these groups. In doing so, the CH has to ensure as much as possible that the group which is scheduled for sensing does not have any group member involved in active communication during this sensing period.
The CH also acts as the CSS fusion center. Unlike in conventional CSS, where the sensing results are transmitted either on the sensed channels themselves, as in [5, 6], or by using a dedicated common control channel, as in [3, 11], in TD-CRHAN the sensing results are transmitted in one of the active transmission slots of the in-band channels, as shown in Fig. 3. For this, the CH will inform the sensing groups on which channel the sensing reports should be transmitted and when. This information can be broadcasted by the CH through the beacons. In this way, the sensing report transmission will not interfere with the incumbent user of the channel, and the quality of the reporting channels is also ensured. Note that the sensing report information is very crucial, hence it needs to be highly reliable [12]. If a dedicated common control channel were used, a dedicated channel would be required and the reporting transmissions could cause this channel to become congested, so that it may become the bottleneck of the network [11].
If the CSS results show that a channel is highly occupied (often busy), the CH will withdraw this channel from its list of in-band channels and return it to the cognitive HAN controller. In the meantime, the CH can request an additional in-band channel from the HAN controller to overcome the throughput degradation due to this highly occupied in-band channel. This scenario is illustrated at times \(t_8\) and \(t_9\) in Fig. 3, where the returned channel is channel B and the new channel is channel D, respectively. In this example, the channel is returned to the HAN controller after it has been sensed to be occupied once.
In the next sections, we consider schemes to satisfy the TD-CRHAN network throughput demand with high resource (available throughput) utilization, and we determine the optimal local spectrum sensing time, the number of cooperating sensing devices and the number of active in-band channels needed.
A simple network model (one cluster) of the proposed TD-CRHAN network is shown in Fig. 4. It consists of a HAN gateway (G), a cognitive HAN controller (C), a cluster head (CH), and J CR-based HAN devices \(n_j\), \(j=1,2,\ldots,J\). Every CR-based HAN device is equipped with a half-duplex radio that can be tuned to any combination of I channels for data transmission and reception. This can be done by using, for example, non-contiguous OFDM (NC-OFDM) technology [13]. Besides data communication, each CR-based HAN device is also able to perform narrow-band spectrum sensing, in which the sensing bandwidth is equal to the bandwidth of a single channel.
Network model diagram for a single TD-CRHAN cluster
Cooperative spectrum sensing
In CSS, each cooperating CR-based HAN device will periodically sample the spectrum and send its local spectrum sensing result to a fusion center (in our case, this is the CH). The CH will combine these local spectrum sensing results using a certain fusion strategy to make the final decision on whether the sensed spectrum is idle or not. In this work, a hard-fusion strategy is considered in which each cooperating CR-based HAN device makes a local decision and sends only this decision to the CH. The local decision is a binary hypothesis test: decide whether the sensed channel is idle, given by hypothesis \(\mathcal {H}_{0}\), or occupied, given by hypothesis \(\mathcal {H}_{1}\). Each of the spectrum-samples observed by a CR-based HAN device can be modeled as
$$ x[\!l] = \left\{ \begin{array}{ll} w[\!l] & ~~~~~: \mathcal{H}_{0} \\ u[\!l] + w[\!l] & ~~~~~: \mathcal{H}_{1} \end{array} \right. $$
(1)
where l=1,2,…,L. Here, L is the total number of observation samples made by a CR-based HAN device within the local spectrum sensing period \(T_s\), such that \(L = T_s/\tau\), where τ is the sampling period. We assume that the Nyquist sampling condition holds, i.e., τ is at least one over twice the channel bandwidth. Further, u[l] is the received incumbent signal and w[l] is the additive noise signal during the l-th sample. u[l] is given by u[l] = s[l] ∗ h[l], where s[l] is the transmitted incumbent signal and h[l] is the impulse response of the Rayleigh fading channel. Note that u[l] does not contain the impact of additive noise, but the additive noise component is taken into account in w[l]. Both w[l] and u[l] are assumed to be independent and identically distributed (i.i.d.) random processes with zero mean and variance \({\sigma _{w}^{2}}\) and \({\sigma _{u}^{2}}\), respectively. We consider additive white Gaussian noise (AWGN) for w[l] and a random signal with a Gaussian distribution for u[l].
In this paper, energy detection is considered for spectrum sensing. The received power is estimated as
$$ \hat{E} = \frac{1}{L} \sum_{l=1}^{L} x^{2}[l]. $$
\(\hat {E}\) is the output of the energy detector which is used as input for a binary hypothesis test of the CR-based HAN device. In the test, \(\hat {E}\) is compared to a predefined threshold γ to decide on hypothesis \(\mathcal {H}_{0}\) or \(\mathcal {H}_{1}\). The performance of this test is characterized by two metrics: the probability of detection (P d ) and the probability of false alarm (P f ). The probability that a CR-based HAN device decides that the channel is occupied (i.e., \(\hat {E}>\gamma \)) under \(\mathcal {H}_{1}\) is given by
$$ P_{d} = P\left(\hat{E} > \gamma \mid \mathcal{H}_{1}\right) $$
while the probability that a CR-based HAN device decides that the channel is occupied under \(\mathcal {H}_{0}\) is
$$ P_{f} = P\left(\hat{E} > \gamma \mid \mathcal{H}_{0}\right) $$
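As a minimal illustration of the signal model in (1) and the energy statistic above, the following Python sketch estimates P d and P f by Monte Carlo simulation. The threshold γ, the noise and signal variances, the number of samples L and the number of trials are arbitrary illustrative choices, not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_energy_detector(L, sigma_w2, sigma_u2, gamma, trials=10_000):
    """Monte Carlo estimate of (P_d, P_f) for the energy detector.

    Under H0 only AWGN w[l] is received; under H1 a zero-mean Gaussian
    incumbent signal u[l] is added (both i.i.d., as assumed in the text).
    The decision is E_hat = (1/L) * sum x^2[l] > gamma.
    """
    noise0 = rng.normal(0.0, np.sqrt(sigma_w2), size=(trials, L))   # H0 samples
    noise1 = rng.normal(0.0, np.sqrt(sigma_w2), size=(trials, L))
    signal = rng.normal(0.0, np.sqrt(sigma_u2), size=(trials, L))
    x1 = noise1 + signal                                            # H1 samples
    e0 = np.mean(noise0 ** 2, axis=1)                               # energy statistic under H0
    e1 = np.mean(x1 ** 2, axis=1)                                   # energy statistic under H1
    return np.mean(e1 > gamma), np.mean(e0 > gamma)

sigma_w2 = 1.0
sigma_u2 = 10 ** (-7 / 10) * sigma_w2   # incumbent SNR_p = -7 dB
p_d, p_f = simulate_energy_detector(L=300, sigma_w2=sigma_w2,
                                    sigma_u2=sigma_u2, gamma=1.1)   # gamma slightly above noise power
print(f"P_d ~ {p_d:.3f}, P_f ~ {p_f:.3f}")
```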
From [2, 5], for a targeted probability of detection \(\bar {P}_{d}\), the corresponding probability of false alarm P f can be expressed as
$$ P_{f}\left(\bar{P}_{d},\text{SNR}_{p},T_{s}\right) = \mathcal{Q} \left(\text{SNR}_{p} \sqrt{\frac{T_{s}}{2 \tau}} + \mathcal{Q}^{-1}\left(\bar{P}_{d}\right) \sqrt{1 + 2\,\text{SNR}_{p}}\right) $$
where \(\mathcal {Q}(\cdot)\) denotes the usual Q-function (the tail probability of the standard normal distribution), and \( {\mathrm{SNR}}_p:={\sigma}_u^2/{\sigma}_w^2 \) is the signal-to-noise ratio of the incumbent user at the sensing device. Alternatively, if a target \(\bar {P}_{f}\) needs to be achieved, the achievable P d can be formulated as [2, 5]
$$ P_{d}\left(\bar{P}_{f},\text{SNR}_{p},T_{s}\right) = \mathcal{Q} \left(\frac{1}{\sqrt{1 + 2\,\text{SNR}_{p}}}\left(\mathcal{Q}^{-1}\left(\bar{P}_{f}\right) - \text{SNR}_{p} \sqrt{\frac{T_{s}}{2 \tau}}\right) \right). $$
Notice that any pair of \(\bar {P}_{d}\) and \(\bar {P}_{f}\) can be satisfied if the local spectrum sensing time T s is not restricted. From (5) or (6) it follows that [2, 5]
$$ \begin{aligned} L = \frac{T_{s}\left(\text{SNR}_{p},\bar{P}_{d},\bar{P}_{f}\right)}{\tau} &= \frac{2}{\text{SNR}_{p}^{2}}~\left(\mathcal{Q}^{-1}(\bar{P}_{f}) \right.\\ &\left.\quad\,\,- \mathcal{Q}^{-1}(\bar{P}_{d}) \sqrt{1+2\text{SNR}_{p}} \right)^{2}. \end{aligned} $$
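The three closed-form relations above are straightforward to evaluate numerically. The following sketch is a non-authoritative illustration that uses scipy.stats.norm for Q and its inverse; the −7 dB SNR, 5 MHz channel bandwidth, sensing times and target probabilities are example inputs, not prescriptions from the paper.

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf          # Q-function: tail probability of the standard normal
Qinv = norm.isf      # inverse Q-function

def p_f_for_target_pd(pd_bar, snr_p, T_s, tau):
    """P_f achieved for a target P_d (first closed-form expression above)."""
    return Q(snr_p * np.sqrt(T_s / (2 * tau)) + Qinv(pd_bar) * np.sqrt(1 + 2 * snr_p))

def p_d_for_target_pf(pf_bar, snr_p, T_s, tau):
    """P_d achieved for a target P_f (second closed-form expression above)."""
    return Q((Qinv(pf_bar) - snr_p * np.sqrt(T_s / (2 * tau))) / np.sqrt(1 + 2 * snr_p))

def required_samples(pd_bar, pf_bar, snr_p):
    """Number of samples L needed to meet both targets simultaneously."""
    return (2 / snr_p ** 2) * (Qinv(pf_bar) - Qinv(pd_bar) * np.sqrt(1 + 2 * snr_p)) ** 2

snr_p = 10 ** (-7 / 10)          # -7 dB incumbent SNR, in linear scale
tau = 1 / (2 * 5e6)              # Nyquist sampling period of a 5 MHz channel
print(p_f_for_target_pd(0.9, snr_p, T_s=10e-6, tau=tau))   # ~0.54: 10 us alone is not enough
print(p_d_for_target_pf(0.1, snr_p, T_s=80e-6, tau=tau))   # ~0.99 with 80 us of sensing
print(required_samples(0.9, 0.1, snr_p))                    # ~400 samples for the pair (0.9, 0.1)
```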
In this paper, we consider CSS with a hard-fusion strategy, wherein each cooperating CR-based HAN device sends its local decision to the CH. The CH makes the final decision and decides \(\mathcal {H}_{1}\) if at least K out of N cooperating CR-based HAN devices have decided that the channel is occupied; otherwise \(\mathcal {H}_{0}\) will be decided. This strategy is known as the K-out-of-N fusion rule. The cooperative probability of detection Q d and false alarm Q f under this fusion rule can be derived using the Poisson-Binomial distribution theorem as [3, 14, 15]
$$ Q_{d} = \sum_{k=K}^{N} \sum_{\mathcal{A}_{k}^{(a)} \in \mathcal{A}_{k}} \prod_{g \in \mathcal{A}_{k}^{(a)}} P_{d_{g}} \prod_{h \in \left\{\mathcal{N}\setminus \mathcal{A}_{k}^{(a)}\right\}} (1-P_{d_{h}}) $$
$$ Q_{f} = \sum_{k=K}^{N} \sum_{\mathcal{A}_{k}^{(a)} \in \mathcal{A}_{k}} \prod_{g \in \mathcal{A}_{k}^{(a)}} P_{f_{g}} \prod_{h \in \left\{\mathcal{N}\setminus \mathcal{A}_{k}^{(a)}\right\}} (1-P_{f_{h}}) $$
\(\mathcal {N} = \{1,\cdots,N\}\) is a set consisting of all sensor indices,
\(\mathcal {A}_{k}\) is a set consisting of all possible subsets of k elements of \(\mathcal {N}\), representing the k out of N sensing devices that locally decide that the channel is occupied,
\(\mathcal {A}_{k}^{(a)} \in \mathcal {A}_{k}\), where a is an index, is one of the sets in \(\mathcal {A}_{k}\),
\(g, h \in \mathcal {N}\) are sensor indices.
There are three special cases in this fusion rule: 1) if K=1, the cooperative detection will become the OR combining rule, 2) if K=N, the fusion scheme follows the AND rule, and 3) if \(K = \left \lceil \frac {N}{2} \right \rceil \), the decision is known as the majority rule. In addition, if \(P_{d_{j}}\) (and \(P_{f_{j}}\)) are identical for all devices j (i.e., \(P_{d_{j}}=P_{d}\) and \(P_{f_{j}}=P_{f}, \forall j\)) which can be achieved for example by adapting the sensing time of each sensing device differently, then (8) and (9) can be simplified and formulated by using the normal Binomial distribution (instead of Poisson-Binomial), and become
$$ Q_{d} = \sum_{k=K}^{N} \left(N \atop k\right) {P_{d}^{k}} (1-P_{d})^{N-k} $$
$$ Q_{f} = \sum_{k=K}^{N} \left(N \atop k\right) {P_{f}^{k}} (1-P_{f})^{N-k} $$
respectively, where \(\left (N \atop k\right)\) is called the Binomial coefficient.
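To make the K-out-of-N fusion rule concrete, the sketch below evaluates the general (Poisson-Binomial) expressions for Q d and Q f as well as their Binomial simplification for identical devices; the per-device probabilities at the bottom are made-up examples and the brute-force enumeration is only practical for small N.

```python
import numpy as np
from itertools import combinations
from math import comb

def q_cooperative(p_list, K):
    """General K-out-of-N fusion probability (Poisson-Binomial form).

    p_list holds the per-device probabilities (P_d or P_f); the same formula
    yields Q_d or Q_f depending on which probabilities are supplied.
    """
    N = len(p_list)
    devices = range(N)
    total = 0.0
    for k in range(K, N + 1):
        for subset in combinations(devices, k):
            inside = np.prod([p_list[g] for g in subset])                       # devices deciding "occupied"
            outside = np.prod([1 - p_list[h] for h in devices if h not in subset])
            total += inside * outside
    return total

def q_cooperative_identical(p, N, K):
    """Binomial simplification when all devices share the same probability p."""
    return sum(comb(N, k) * p ** k * (1 - p) ** (N - k) for k in range(K, N + 1))

# Majority rule with N = 6 devices
N, K = 6, int(np.ceil(6 / 2))
print(q_cooperative([0.9] * N, K))            # identical devices, general formula
print(q_cooperative_identical(0.9, N, K))     # same result from the Binomial form
print(q_cooperative([0.95, 0.9, 0.88, 0.92, 0.85, 0.9], K))   # heterogeneous devices
```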
In CR, Q d reflects the quality of protection of the band owner and is determined by the regulator or a standardization body such as the IEEE (for example, in IEEE 802.22, Q d is required to be greater than or equal to 0.9 [10]). On the other hand, Q f is important for the CR devices (in our case, the CR-based HAN devices): a lower Q f provides a higher opportunity for the CR-based HAN devices to access the spectrum and hence a higher network throughput. Note that IEEE 802.22, which is actually intended for rural areas and large distances, is used as an example because it defines the spectrum sensing specifications (e.g., the probability of detection constraint) that are needed in this paper. Newer standards such as IEEE 802.11af and IEEE 802.15.4m are more relevant to the home scenario, but they do not specify spectrum sensing requirements because they rely on the database method instead. In fact, the spectrum sensing parameters (e.g., the Q d ≥0.9 constraint) used in the numerical and simulation analyses in this paper are examples and can be changed to the desired values.
Problem formulation
Figure 5 shows the timing diagram of a single channel operation where sensing and transmission tasks alternate in time. In this figure, T f is the (constant) duration of a frame, which comprises two sub-slots: a sub-slot for cooperative spectrum sensing, T css , and a sub-slot for data transmission, T t . The former is further divided into two parts, namely the time for local spectrum sensing, T s , and the time required to send the sensing result to the CH, T sr . For reporting the local spectrum sensing result, a TDMA-based channel access scheme is employed, that is, the first CR-based device sends its decision in the first time slot, the second device in the second time slot, and so on (the same scheme is considered in [6]); thus, the total reporting time required for N cooperating devices is N·T sr .
TD-CRHAN time frame for a single channel
Note that we have
$$ T_{f} = T_{css} + T_{t} $$
$$ T_{css} = T_{s} + N T_{sr}. $$
In addition, if the transmission uses rectangular signal pulses, then the maximum data rate for a single channel can be calculated as
$$ C = \frac{mW}{2} ~~~(\text{bit/second}). $$
where W is the null-to-null bandwidth of the channel, and m= log2(M) (bit/symbol) is the modulation order of the transmission when M modulation levels are used.
In cognitive radio, each channel in the spectrum is periodically sensed and may only be utilized for data transmission if it is sensed idle, i.e., \(\hat {E} < \gamma \). This may happen under both \(\mathcal {H}_{0}\) and \(\mathcal {H}_{1}\). Let the achievable throughput under scenario \(\mathcal {H}_{0}\) be R 0. This throughput is smaller than C by a factor (1−Q f ), the probability that the channel is correctly detected as idle. Likewise, under \(\mathcal {H}_{1}\), the achievable throughput R 1 is smaller than C by a factor (1−Q d ), which is the probability that the occupied channel is wrongly detected as idle. This probability is significant in case the incumbent signal is weak (e.g., due to the distance from the incumbent node to the CR network).
We also need to consider that for both scenarios the throughput is scaled by a factor α=T t /T f , the fraction of time within a frame that data is transmitted. Using (12) and (13), we can write α as a function of the sensing time T s and number of sensing devices N as
$$ \alpha(T_{s},N) = 1-\frac{(T_{s} + N T_{sr})}{T_{f}} \,. $$
Overall, this gives
$$ R_{0} = \alpha(T_{s},N)(1-Q_{f}) C $$
$$ R_{1} = \alpha(T_{s},N) (1-Q_{d}) C, $$
The achievable throughput of a single channel can then be formulated as
$$ R = P(\mathcal{H}_{0}) R_{0} + P(\mathcal{H}_{1}) R_{1} $$
where \(P(\mathcal {H}_{0})\) and \(P(\mathcal {H}_{1})\) are the a priori probabilities that the channel is idle and occupied, respectively. These probabilities can be estimated before the CR network is deployed based on long-term measurements, or they can be measured online based on, for example, the concept of MAC-layer sensing [8]. Substituting (16) and (17) into (18) gives
$$ R = \alpha(T_{s},N) C~\left(P(\mathcal{H}_{0})(1-Q_{f}) + P(\mathcal{H}_{1})(1-Q_{d}) \right). $$
Let R (i) be the achievable throughput for channel i, then the total achievable throughput for a cluster with I simultaneously active channels can be calculated as
$$ \begin{aligned} R_{t} &= \sum_{i=1}^{I} R^{(i)}= \sum_{i=1}^{I} \left[\alpha\left(T_{s}^{(i)},N^{(i)}\right) C~\left(P\left(\mathcal{H}_{0}^{(i)}\right)\left(1-Q_{f}^{(i)}\right) \right.\right.\\ &\quad\left.\left.+\, P\left(\mathcal{H}_{1}^{(i)}\right)\left(1-Q_{d}^{(i)}\right)\right) \right] \end{aligned} $$
Suppose that CR-based HAN device j has a throughput demand of d j . Then the total throughput demand in a cluster, coming from the J CR-based HAN devices, becomes
$$ D_{t} = \sum_{j=1}^{J} d_{j} $$
This information can be acquired by the CH from each connected CR-based HAN device, for example at the time that the device is requesting to join the cluster, or updated by the CR-based HAN device to the CH whenever there is a change in its throughput demand.
Let ε=R t −D t be the difference between R t and D t . Using (19)–(21), we can write ε as
$$ \begin{aligned} \varepsilon(I,\alpha)& = \sum_{i=1}^{I} \left [\alpha\left(T_{s}^{(i)},N^{(i)}\right) C~\left(P\left(\mathcal{H}_{0}^{(i)}\right)\left(1-Q_{f}^{(i)}\right)\right.\right. \\ &\left.\left.\quad+\, P\left(\mathcal{H}_{1}^{(i)}\right)\left(1-Q_{d}^{(i)}\right)\right) \right]- D_{t} \end{aligned} $$
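Putting the throughput expressions together, the following sketch computes the per-channel throughput R, the total throughput R t and ε for a small hypothetical cluster. The frame timing (T f = 105 μs, T sr = 4 μs) matches the values used in the numerical analysis later in the paper, while the Q f , Q d , demand and sensing-time figures are placeholders chosen only for illustration.

```python
T_f, T_sr = 105e-6, 4e-6       # frame length and per-device reporting time (s)

def alpha(T_s, N):
    """Fraction of the frame left for data transmission."""
    return 1 - (T_s + N * T_sr) / T_f

def channel_throughput(T_s, N, Q_f, Q_d, P_H0, C):
    """Achievable throughput of one channel (bit/s)."""
    return alpha(T_s, N) * C * (P_H0 * (1 - Q_f) + (1 - P_H0) * (1 - Q_d))

def epsilon(channels, D_t):
    """Difference between the total achievable throughput R_t and the demand D_t."""
    R_t = sum(channel_throughput(**ch) for ch in channels)
    return R_t - D_t

# Hypothetical cluster: three identical 5 MHz channels with QPSK (m = 2 bit/symbol)
C = 2 * 5e6 / 2                # single-channel capacity in bit/s
chans = [dict(T_s=10e-6, N=6, Q_f=0.1, Q_d=0.9, P_H0=0.7, C=C)] * 3
print(f"epsilon = {epsilon(chans, D_t=3.5e6) / 1e6:.2f} Mb/s")
```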
Throughput demand-based CR communication
It is important to ensure that the difference between R t and D t is as small as possible. A positive value of ε means that the available throughput of the active channels in the cluster is underutilized, while a negative value means that the QoS of the throughput demand is not fulfilled. Notice that in a TD-CRHAN, in case a cluster's demand exceeds what its current in-band channels can deliver, i.e., D t > R t , the CH of that cluster should request additional channels from the cognitive HAN controller until the demand is met.
Theoretically, if the number of channels I is unlimited, then the TD-CRHAN scheme can support any amount of throughput demand. With a higher number of channels, we can reduce T s and N (cf. (20)). However, activating more channels consumes more bandwidth. Hence, the optimal values of I, T s and N that give the minimum ε should be determined. This optimization problem can be written as
$$ \begin{aligned} \min_{I,T_{s}^{(i)},N^{(i)}} \varepsilon & = R_{t}\left(I,T_{s}^{(i)},N^{(i)}\right)-D_{t} \\ \textrm{s.t.}~~~~~ &~~~~0 \leq T_{s}^{(i)} \leq T_{f}\,,\quad \forall i \\ &\left(T_{s}^{(i)} + N^{(i)} T_{sr}\right) \leq T_{f} \,,\quad \forall i \\ &~~~~~Q_{d}^{(i)} \geq \beta^{(i)} \,,\quad \forall i \\ &~~~~~~~~~\varepsilon \geq 0 \\ &~~~~~~1 \leq I \leq I_{\text{max}} \end{aligned} $$
where \(T_{s}^{(i)}\) and N (i) are respectively the spectrum sensing duration and the number of cooperating nodes involved in CSS for channel i; I max is the maximum number of channels available to be exploited; \(Q_{d}^{(i)}\) is the cooperative probability of detection for channel i and β (i) is a lower bound on this. The constraint ε≥0 is included to ensure that the throughput demand is met.
It is shown in [16] that the optimal solution for (23) can be achieved when constraint \(Q_{d}^{(i)} \geq \beta ^{(i)}~,\forall i\) is satisfied with equality. When this constraint is at equality and for a chosen fusion threshold K (in this paper, we consider \(K = \left \lceil \frac {N}{2} \right \rceil \)), the corresponding device's probability of detection P d can be found from \(Q_{d}^{(i)}\) using Eq. (10). Notice that to use this equation, it is required that the probability of detection P d is the same for all sensing devices, while the effect of different SNR p is absorbed by the device's probability of false alarm P f (c.f. Eq. (5)). Although the simplified Eq. (10) is used to find the probability of detection P d , the general Eq. (9) is used to calculate the cooperative false alarm Q f . In addition, notice that finding the optimal T s and N is equivalent to finding the optimal α (i.e., maximizing α will minimize T s and N); hence, we also can write (23) as
$$ \begin{aligned} &\min~ \varepsilon\left(I,\alpha \left(T_{s}^{(i)},N^{(i)}\right)\right)\\ \textrm{s.t.,}~~ &~~~~~~0 \leq T_{s}^{(i)} \leq T_{f} ~~~,\forall i \\ &~~\left(T_{s}^{(i)} + N^{(i)} T_{sr}\right) \leq T_{f} ~~~,\forall i \\ &~~~~~~~~~~~~\varepsilon > 0 \\ &~~~~~~~~~1 \leq I \leq I_{\text{max}} \\ &~~~0 \leq \alpha \left(T_{s}^{(i)},N^{(i)}\right) \leq 1 ~~~,\forall i \end{aligned} $$
For this optimization problem, we propose to find the solution by using a two-dimensional search method.
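A brute-force version of this search is sketched below for identical channels under the majority rule: for each candidate I, a two-dimensional grid over T s and N is evaluated and the feasible point with the smallest non-negative ε is kept. The constraint Q d = β is enforced by numerically inverting the Binomial form of Q d, and P f follows from the closed-form expression given earlier. The fixed parameter values mirror those used in the numerical analysis that follows; the grid resolutions and the inversion procedure are implementation choices of this sketch, not the authors' actual algorithm.

```python
import numpy as np
from scipy.stats import norm, binom

T_f, T_sr, tau = 105e-6, 4e-6, 1 / (2 * 5e6)   # frame length, report slot, sampling period (s)
C = 2 * 5e6 / 2                                 # single-channel capacity in bit/s (QPSK over 5 MHz)
P_H0, snr_p, beta, D_t = 0.7, 10 ** (-7 / 10), 0.9, 3.5e6

def device_pd_for_beta(N, K):
    """Per-device P_d such that the cooperative Q_d (Binomial form) equals beta."""
    grid = np.linspace(1e-4, 1 - 1e-4, 9999)
    return grid[np.argmin(np.abs(binom.sf(K - 1, N, grid) - beta))]

# cache the per-device detection targets for the majority rule, N = 1..6
K_of = {N: int(np.ceil(N / 2)) for N in range(1, 7)}
pd_of = {N: device_pd_for_beta(N, K_of[N]) for N in range(1, 7)}

def cluster_epsilon(I, T_s, N):
    """epsilon = R_t - D_t for I identical channels with the Q_d = beta constraint active."""
    p_f = norm.sf(snr_p * np.sqrt(T_s / (2 * tau))
                  + norm.isf(pd_of[N]) * np.sqrt(1 + 2 * snr_p))
    q_f = binom.sf(K_of[N] - 1, N, p_f)
    a = 1 - (T_s + N * T_sr) / T_f
    R_t = I * a * C * (P_H0 * (1 - q_f) + (1 - P_H0) * (1 - beta))
    return R_t - D_t

best = None
for I in range(1, 11):                              # candidate number of in-band channels
    for T_s in np.linspace(1e-6, 80e-6, 80):        # 2-D search: sensing time ...
        for N in range(1, 7):                       # ... and number of cooperating devices
            if T_s + N * T_sr > T_f:
                continue
            eps = cluster_epsilon(I, T_s, N)
            if eps >= 0 and (best is None or eps < best[0]):
                best = (eps, I, T_s, N)
print("min epsilon, I, T_s, N:", best)
```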
In this section, we numerically analyze the performance of the TD-CRHAN and compare it with the conventional solution. For this section, let us assume that \(\text {SNR}_{p_{j}}^{(i)}=\text {SNR}_{p}\) and \(P\left (\mathcal {H}_{0}^{(i)}\right)=P(\mathcal {H}_{0})\) are the same for all i and j; these parameters will be randomized based on a uniform distribution during the simulation analysis (Section 5). The following values are considered and fixed throughout this section, most of which are also used in [6]: T f =105 μs, T sr =4 μs, β (i)=0.9 and W (i)=5 MHz, for all i. Moreover, in this work, the majority fusion rule is considered for the CSS as it has been found to be optimal or nearly optimal [5, 6, 17]. For the solution of the optimization problem, we consider \(T_{s}^{(i)}=T_{s}\) and N (i)=N for all i.
Graphs of ε versus the total number of in-band channels I and (a) the data transmission time coefficient α, and (b) the duration required for local spectrum sensing T s , are shown in Fig. 6 a, b, respectively, for SNR p =−7 dB, \(P(\mathcal {H}_{0})=0.7\), N=6, and D t =3.5 Mb/s. Note that all points on the graphs satisfy every constraint given in (24). It is seen that for each value of I, ε(T s ) and ε(α) are both concave functions whose peaks correspond to the maximum achievable throughput of the cluster. In conventional CR, these points are considered optimal. However, the graphs show that there is then an excess throughput (i.e., ε>0) which is not going to be used by the network. This throughput underutilization grows with an increasing number of in-band channels I. In contrast, TD-CRHAN searches for the lowest point of the graph, i.e., the minimum possible ε that still satisfies all the constraints listed in (24). By doing this, TD-CRHAN can relax the required local spectrum sensing time T s and the number of cooperating nodes N of the CR system.
The difference between the total achievable and demanded throughput (i.e. ε) versus a I and α, and b I and T s , with Q d =β,D t =3.5 Mb/s, N=6
From Fig. 6, the optimal points of the conventional and the proposed TD-CRHAN schemes are extracted; the normalized ε is plotted in Fig. 7 a, and the corresponding normalized sensing time T s (i.e., the fraction of time used for spectrum sensing in a frame) is plotted in Fig. 7 b. It is seen that ε grows linearly with the number of in-band channels I for the conventional case. This is because the total achievable throughput R t for this case equals the maximum achievable throughput of each channel multiplied by the total number of in-band channels, i.e., R t =R·I (due to the above assumptions, we have R (i)=R, ∀i); hence, the larger I, the higher ε, irrespective of D t . On the other hand, in TD-CRHAN, R t is adjusted as closely as possible to D t , which is precisely the constrained minimization of ε. As a result, with the TD-CRHAN scheme ε is kept as low as possible, and the spectrum sensing time T s is significantly relaxed compared to the conventional scheme (as shown in Fig. 7 b). These gains become larger as I increases. Figure 7 a also shows that the minimum I required to satisfy D t is 4. Projecting this point to Fig. 7 b (as depicted by the red arrows) shows that, even at this point, TD-CRHAN obtains around 51 % gain in the required T s in comparison with the conventional case.
Effects of different total number of in-band channels I on a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), and b fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)). This is a comparison between TD-CRHAN and the conventional scheme. For this, D t is fixed at 3.5 Mb/s and N=6
Further, the impact of the number of cooperative sensing devices N on the proposed TD-CRHAN is analyzed, as depicted in Fig. 8. In general, the higher N, the lower the required T s , which means that a higher N reduces the sensing burden on the individual CR-based HAN device. However, T s saturates and becomes constant after a certain I (in this case I>6); beyond this point, an increase of N or I does not reduce T s any further and only increases the value of ε, as witnessed in Fig. 7 a.
Impact of varying number of cooperating sensing devices N on a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), and b fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)). For this, D t =3.5 Mb/s
Next, with the same settings, we analyze the performance of the proposed TD-CRHAN in comparison with the conventional one for different D t . Three conventional scenarios are considered: 1) maximization of R t with I=7, 2) maximization of R t with I=10, and 3) maximization of R t with I set based on the network throughput demand D t such that R t ≥D t . Figure 9 a shows that the TD-CRHAN scheme satisfies the throughput demand at all times and has the least throughput underutilization compared to the other schemes, in particular compared to the cases where R is maximized without considering D t . Worse, the conventional schemes without D t consideration (i.e., scenarios 1 and 2) are unable to satisfy the demanded throughput after a certain point (for instance, scenario 1 cannot satisfy the demand for D t >9.3 Mb/s as the number of in-band channels is fixed to 7). In contrast, in principle the proposed TD-CRHAN can support an unlimited D t if I is unlimited.
Effects of different cluster's throughput demand, D t on a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), b number of in-band channels I used, and c fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)). This is a comparison between TD-CRHAN and the conventional schemes. For this, we take SNR p =−7 dB and N=6
We then numerically analyze the impact of the channel conditions, i.e., SNR p and \(P(\mathcal {H}_{0})\), on the performance of TD-CRHAN as well as the three conventional schemes; the results are shown in Figs. 10 and 11, respectively. For the proposed TD-CRHAN scheme, D t is satisfied at minimum ε for almost any SNR p or \(P(\mathcal {H}_{0})\) value (as shown in Figs. 10 a and 11 a). This is because TD-CRHAN allows for an adaptive number of active in-band channels I (refer to Figs. 10 b and 11 b) and an adaptive local spectrum sensing duration T s (refer to Figs. 10 c and 11 c), where these values are optimized such that the resulting achievable throughput R t is very close to the corresponding demand D t . Specifically, I is increased at a very low SNR p or \(P(\mathcal {H}_{0})\) and reduced to the minimum at a high SNR p or \(P(\mathcal {H}_{0})\). Notice that D t is still satisfied even for \(P(\mathcal {H}_{0}) = 0\), in which case the network throughput is acquired from the \(P(\mathcal {H}_{1})\) part (i.e., at the expense of a high I). For T s , Fig. 10 c shows that it is adjusted to a lower value at a very low SNR p . This is because at this point ε (and R t ) is influenced more by T s and less by P f (and Q f ), since at a very low SNR p a high T s does not provide a significant reduction of P f (this can be seen from (5), as plotted in Fig. 12). In such a case, a lower T s is more favorable in order to satisfy the demanded throughput D t and meet the ε≥0 constraint. P f (and Q f ) become more dominant as SNR p increases up to a certain point, after which T s dominates again once P f saturates; this can be observed in Fig. 10 c. Similarly, in Fig. 11 c, at a very low \(P(\mathcal {H}_{0})\), T s is set to a lower value since a high T s is not beneficial: at this point most of R t comes from \(P(\mathcal {H}_{1})\) (cf. (19)). Note that in (19) a higher T s leads to a lower Q f and therefore a higher R t ; at a very low \(P(\mathcal {H}_{0})\), a lower Q f does not help because this part of R t is suppressed by the value of \(P(\mathcal {H}_{0})\) itself, and vice versa.
Performance of TD-CRHAN in comparison with the conventional schemes for different SNR p conditions. The performance is measured in term of a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), b number of in-band channels I used, and c fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)), with \(P(\mathcal {H}_{0})\) fixed at 0.7
Performance of TD-CRHAN in comparison with the conventional schemes for different \(P(\mathcal {H}_{0})\) conditions. The performance is measured in term of a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), b number of active in-band channels I used, and c fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)), with SNR p fixed at −7 dB
P f versus T s for different SNR p with P d fixed at 0.9
Simulation results and analysis
In this section, we run Monte Carlo simulations of the proposed TD-CRHAN scheme and the three conventional cases and compare the results with the numerical results. The settings for this simulation are the same as in Section 4, except that the SNR of the incumbent user (i.e., \(\text {SNR}_{p_{j}}\)) is randomly set for each CR-based device j based on a uniform distribution in the range of −11 to 3 dB (i.e., \(\text {SNR}_{p_{j}} \sim \mathcal {U}(-11, 3)\) dB, for all j). We consider this range in order to capture the dynamic behavior of the sensing qualities (i.e., P d and P f ) and observe the impact of different sensing time T s values. For SNR p higher than 3 dB, only a few samples from a single sensing device (without cooperation) are required to obtain an already very high probability of detection P d and a very low probability of false alarm P f . For SNR p of −11 dB and less, an increase of the sensing time does not give a significant improvement of the sensing qualities. We repeat the simulation 1000 times and average the results. It can be seen from Fig. 13 that, in general, the patterns of the simulation results are similar to the graphs from the numerical analysis (refer to Fig. 7). However, the optimal sensing time in the simulation is smaller than in the numerical analysis, which is caused by the possibly high incumbent signal strength in the simulation (i.e., between −11 and 3 dB as compared to a fixed −7 dB). In addition, Fig. 14 shows that the corresponding cooperative and individual false alarm probabilities, i.e., Q f and \(P_{f_{j}},\forall j\), of the proposed TD-CRHAN vary according to the number of available in-band channels I. In TD-CRHAN, for the same total throughput demand D t , an increase of the number of in-band channels I decreases the required achievable throughput R (i) of each channel i, which reduces the required Q f and the corresponding \(P_{f_{j}}, \forall j\). This in turn further reduces the required sensing time T s , as can be seen in Fig. 13. Besides, it can be observed from Fig. 14 a that the probability of false alarm \(P_{f_{j}}\) of sensing device j depends on its \(\text {SNR}_{p_{j}}\): a device with a lower \(\text {SNR}_{p_{j}}\) has a higher \(P_{f_{j}}\).
Simulation results on the effects of using different total number of in-band channels I on a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), and b fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)). This is a comparison between TD-CRHAN and the conventional schemes. For this, \(\text {SNR}_{p_{j}}\) is chosen randomly between −11 and 3 dB for all j
The corresponding a false alarm probability \(P_{f_{j}}\) of each device j, and b cooperative false alarm probability Q f , from the simulation in which the incumbent \(\text {SNR}_{p_{j}}\) differ per sensing device j. In the simulation, the randomly generated \(\text {SNR}_{p_{j}}, \forall j\) are as follows: \(\text {SNR}_{p_{j}} = \{-6.5\,\text {dB}, -5.3~\text {dB}, -5.9~\text {dB}, -3.9~\text {dB}, -3.5~\text {dB}, -3.2~\text {dB}\}\) for j={1,…,6}, respectively
Finally, a Monte Carlo simulation is executed in which all network parameters are uniformly randomized (i.e., \(D_{t} \sim \mathcal {U}(3.5, 10)\) Mb/s, \(\text {SNR}_{p_{j}}^{(i)} \sim \mathcal {U}(-11, 3)\) dB and \(P\left (\mathcal {H}_{0}^{(i)}\right) \sim \mathcal {U}(0,1),~\forall i,j\)) to evaluate the performance of the TD-CRHAN in a more practical scenario. The graphs of the normalized ε and sensing time T s , versus the number of in-band channel I for N=1,2,…,6 are plotted as shown in Fig. 15. Similarly, it is observed that with the proposed TD-CRHAN scheme, the network throughput demand D t is satisfied at all times for all N. However, it is seen that a lower number of cooperating sensing devices N will require a higher sensing time T s , and moreover at a certain point, a higher number of channels I is even required (i.e., in this case, I≥4 for N=1 and 2 compared to I≥3 for N=3,4,5, and 6).
Simulation results of the total number of active channels I versus: a normalized \(\varepsilon = \frac {\varepsilon }{D_{t}}\), and b fraction of time used for spectrum sensing (i.e., normalized sensing time \(=\frac {T_{s}}{T_{f}}\)). This is for different numbers of cooperating devices, i.e., N=1,2,⋯,6, in a more practical scenario in which all network parameters are randomly chosen, i.e., \(D_{t} \sim \mathcal {U}(3.5, 10)\) Mb/s, \(\text {SNR}_{p_{j}}^{(i)} \sim \mathcal {U}(-11, 3)\) dB and \(P(\mathcal {H}_{0}) \sim \mathcal {U}(0,1)\), for all i and j
To support the ever-rising throughput demand of home area networks (HAN), we proposed in this paper a cognitive radio (CR)-based communication scheme called TD-CRHAN. The TD-CRHAN aims at satisfying the demanded network throughput with equality by determining the optimal local spectrum sensing time, the number of cooperating sensing devices, and the number of active in-band channels needed. This leads to an efficient scheme which provides a higher utilization of the occupied channels. It was shown by extensive numerical analysis and through simulations that TD-CRHAN is able to relax the tight cooperative spectrum sensing requirements and provides significant gains on the cooperative spectrum sensing parameters (i.e., spectrum sensing time and number of cooperating devices), compared to the conventional solution. More specifically, TD-CRHAN reduces the required local spectrum sensing time by more than 51 %. Furthermore, it was shown that these cooperative spectrum sensing parameters can be further improved with the availability of additional cooperating devices or channels (bandwidth).
M Nekovee, A survey of cognitive radio access to tv white spaces. Int. J. Digit. Multimed. Broadcast. 2010(236568) (2010). doi:10.1155/2010/236568.
Y-C Liang, Y Zeng, ECY Peh, AT Hoang, Sensing-throughput tradeoff for cognitive radio networks. IEEE Trans. Wirel. Commun. 7(4), 1326–1337 (2008). doi:10.1109/TWC.2008.060869.
L Tan, L Le, Joint cooperative spectrum sensing and MAC protocol design for multi-channel cognitive radio networks. EURASIP J. Wirel. Commun. Netw. 2014(1), 101 (2014). doi:10.1186/1687-1499-2014-101.
G Umashankar, AP Kannu, Throughput optimal multi-slot sensing procedure for a cognitive radio. IEEE Commun. Lett. 17(12), 2292–2295 (2013). doi:10.1109/LCOMM.2013.102613.131825.
RA Rashid, A. H. F. A. Hamid, N Fisal, MA Sarijari, RA Rahim, A Mohd, in 2012 IEEE Symposium on Wireless Technology and Applications (ISWTA). Optimal user selection for decision making in cooperative sensing, (2012), pp. 165–170. doi:10.1109/ISWTA.2012.6373834.
S Maleki, SP Chepuri, G Leus, Optimization of hard fusion based spectrum sensing for energy-constrained cognitive radio networks. Phys. Commun. 9(0), 193–198 (2013). doi:10.1016/j.phycom.2012.07.003.
M Najimi, A Ebrahimzadeh, SMH Andargoli, A Fallahi, A novel sensing nodes and decision node selection method for energy efficiency of cooperative spectrum sensing in cognitive sensor networks. IEEE Sensors J. 13(5), 1610–1621 (2013). doi:10.1109/JSEN.2013.2240900.
H Kim, KG Shin, Efficient discovery of spectrum opportunities with MAC-layer sensing in cognitive radio networks. IEEE Trans Mob Comput. 7(5), 533–545 (2008). doi:10.1109/TMC.2007.70751.
K-LA Yau, N Ramli, W Hashim, H Mohamad, Clustering algorithms for cognitive radio networks: A survey. J. Netw. Comput. Appl. 45(0), 79–95 (2014). doi:10.1016/j.jnca.2014.07.020.
IEEE Standard for information technology– local and metropolitan area networks– specific requirements– part 22: Cognitive wireless RAN medium access control (MAC) and physical layer (PHY) specifications: Policies and procedures for operation in the TV bands, 1–680 (2011). IEEE Std 802.22-2011, doi:10.1109/IEEESTD.2011.5951707.
M Timmers, S Pollin, A Dejonghe, L Van der Perre, F Catthoor, A distributed multichannel MAC protocol for multihop cognitive radio networks. IEEE Trans. Veh. Technol. 59(1), 446–459 (2010). doi:10.1109/TVT.2009.2029552.
S Chaudhari, J Lunden, V Koivunen, HV Poor, Cooperative sensing with imperfect reporting channels: Hard decisions or soft decisions? IEEE Trans. Sig. Process. 60(1), 18–28 (2012). doi:10.1109/TSP.2011.2170978.
R Rajbanshi, AM Wyglinski, GJ Minden, in 1st International Conference on Cognitive Radio Oriented Wireless Networks and Communications. An efficient implementation of NC-OFDM transceivers for cognitive radios, (2006), pp. 1–5. doi:10.1109/CROWNCOM.2006.363452.
VRS Banjade, N Rajatheva, in 8th International Symposium on Wireless Communication Systems(ISWCS). Primary user capacity maximization in cooperative detection network using m out of n fusion rule, (2011), pp. 482–486. doi:10.1109/ISWCS.2011.6125406.
YH Wang, On the number of successes in independent trials. Stat. Sin. 3(2), 295–312 (1993).
ECY Peh, Y-C Liang, YL Guan, Y Zeng, Optimization of cooperative sensing in cognitive radio networks: A sensing-throughput tradeoff view. IEEE Trans. Veh. Technol. 58(9), 5294–5299 (2009). doi:10.1109/TVT.2009.2028030.
W Zhang, RK Mallik, K Letaief, Optimization of cooperative spectrum sensing with energy detection in cognitive radio networks. IEEE Trans. Wirel. Commun. 8(12), 5761–5766 (2009). doi:10.1109/TWC.2009.12.081710.
The work has been supported partially by the Ministry of Education, Malaysia.
Faculty of Electrical Engineering, Mathematics and Computer Sciences, Delft University of Technology, Delft, 2628CD, the Netherlands
Mohd Adib Sarijari
, Gerard JM Janssen
& Alle-Jan van der Veen
Faculty of Electrical Engineering, Universiti Teknologi Malaysia, UTM Johor Bahru, Johor, 81310, Malaysia
& Mohd Sharil Abdullah
Correspondence to Mohd Adib Sarijari.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Sarijari, M.A., Abdullah, M.S., Janssen, G.J. et al. On achieving network throughput demand in cognitive radio-based home area networks. J Wireless Com Network 2015, 221 (2015) doi:10.1186/s13638-015-0448-5
Accepted: 11 September 2015
Home area network communication
Circulatory efficiency in patients with severe aortic valve stenosis before and after aortic valve replacement
S. Nordmeyer1,2 na1,
C. B. Lee2,3 na1,
L. Goubergrits2,
C. Knosalla3,4,
F. Berger1,3,
V. Falk3,4,
N. Ghorbani2,3,
H. Hireche-Chikaoui5,
M. Zhu2,
S. Kelle5,6,
T. Kuehne1,2,3 &
M. Kelm1,2,3
Circulatory efficiency reflects the ratio between total left ventricular work and the work required for maintaining cardiovascular circulation. The effect of severe aortic valve stenosis (AS) and aortic valve replacement (AVR) on left ventricular/circulatory mechanical power and efficiency is not yet fully understood. We aimed to quantify left ventricular (LV) efficiency in patients with severe AS before and after surgical AVR.
Circulatory efficiency was computed from cardiovascular magnetic resonance (CMR) imaging derived volumetric data, echocardiographic and clinical data in patients with severe AS (n = 41) before and 4 months after AVR and in age and sex-matched healthy subjects (n = 10).
In patients with AS circulatory efficiency was significantly decreased compared to healthy subjects (9 ± 3% vs 12 ± 2%; p = 0.004). There were significant negative correlations between circulatory efficiency and LV myocardial mass (r = − 0.591, p < 0.001), myocardial fibrosis volume (r = − 0.427, p = 0.015), end systolic volume (r = − 0.609, p < 0.001) and NT-proBNP (r = − 0.444, p = 0.009) and significant positive correlation between circulatory efficiency and LV ejection fraction (r = 0.704, p < 0.001). After AVR, circulatory efficiency increased significantly in the total cohort (9 ± 3 vs 13 ± 5%; p < 0.001). However, in 10/41 (24%) patients, circulatory efficiency remained below 10% after AVR and, thus, did not restore to normal values. These patients also showed less reduction in myocardial fibrosis volume compared to patients with restored circulatory efficiency after AVR.
In our cohort, circulatory efficiency is reduced in patients with severe AS. In 76% of cases, AVR leads to normalization of circulatory efficiency. However, in 24% of patients, circulatory efficiency remained below normal values even after successful AVR. In these patients also less regression of myocardial fibrosis volume was seen.
Trial Registration clinicaltrials.gov NCT03172338, June 1, 2017, retrospectively registered.
Aortic valve stenosis (AS) is a frequent heart valve disease worldwide that exposes the left ventricle (LV) to chronic pressure overload [1,2,3]. This triggers a complex cascade of LV remodeling processes leading to hypertrophy and fibrosis [1, 2]; if treatment is performed too late, regression of these LV remodeling processes is reduced and morbidity as well as mortality increase [4, 5].
Circulatory efficiency reflects the ratio between total LV work and the work required for maintaining cardiovascular circulation [6,7,8,9]. The approach might contribute to the understanding of potential regenerative processes in the pressure overloaded heart [10, 11]. In fact, only part of the LV work is directly used to maintain blood flow in the cardiovascular circulation in AS patients, the rest is needed to build up the pressure to overcome the resistance across the aortic valve and parts dissipate as heat [8].
Increases in LV pressure and myocardial hypertrophy can contribute to a reduction of cardiac efficiency, whereas small ventricles with normal LV ejection fraction (LVEF) show higher cardiac efficiency. Accordingly, the concept of cardiac efficiency has been analyzed in some initial studies of arterial hypertension, heart failure and valve disease [7, 11,12,13,14]. Güclu et al. have reported in a positron emission tomography (PET) cardiovascular magnetic resonance (CMR) study that efficiency is a determinant of functional improvement after aortic valve replacement (AVR) in patients with AS [13, 14]. However, the study was performed only on a small group of 10 patients and therefore more clinical data is warranted. The acquisition of clinical data, however, can be technically challenging and methods that were used in the past were often invasive or associated with ionizing radiation – thus limiting their clinical use.
A reduced surrogate marker of circulatory efficiency in patients with different stages of AS has been found using a recent noninvasive and radiation-free CMR method [15]. In the present study, we aimed to apply this novel noninvasive method to assess surrogate markers of circulatory efficiency and power in a cohort of 41 patients with severe AS before and after surgical AVR and in 10 age and sex-matched controls.
Study design and data acquisition
A total of 41 patients with severe AS (according to current diagnostic guidelines [16]) were included into the study (Table 1). Exclusion criteria were the presence of moderate to severe aortic regurgitation (AR), mitral, pulmonary or tricuspid valve disease [17], the presence of coronary artery disease and general contraindications to CMR.
Table 1 General demographic and clinical data; mean ± SD and n (%)
All patients underwent cuff-based blood pressure measurements, blood collection, clinical, echocardiographic and CMR examination before and 4 (± 38 days) months after surgical AVR. The mean and maximum pressure gradients across the aortic valve were measured using Doppler echocardiography (5-chamber view). Mitral regurgitation was quantified using standard echocardiography. Ten age- and sex-matched controls (Table 1) underwent the same pre-operative study protocol and were compared to the AS patients. The study protocol was in agreement with the principles outlined in the Declaration of Helsinki and was approved by the Medical Ethics Review Committee. All patients gave written informed consent prior to inclusion.
Cardiovascular magnetic resonance imaging and post-processing
All CMR examinations were performed using a whole-body 1.5 T CMR system (Achieva R 3.2.2.0, Philips Healthcare, Best, The Netherlands) with a five-element cardiac phased-array coil. Gapless balanced turbo field echo (bTFE) cine 2-dimensional short-axis sequences were obtained using a previously applied CMR protocol [8] for LV volumetric and anatomical measurements. Analysis was performed using View Forum (R6.3V1L7 SP1; Philips Healthcare). LV epicardial and endocardial borders were manually drawn in every segment in diastole and systole to automatically derive LV volumetric and anatomical data (LV mass (LVM), myocardial volume, end-systolic volume (ESV) and end-diastolic volume (EDV)). End-systolic mean myocardial wall thickness \({S}_{wall}\) and mean radius of the blood pool \({R}_{BP}\) were calculated considering the LV as a cylinder:
$${R}_{BP} ={\left(\frac{{V}_{BP}}{{n}_{cine}*{h}_{cine}*\pi }\right)}^{1/2}$$
$${S}_{wall} ={\left(\frac{{{V}_{wall}+ V}_{BP}}{{n}_{cine}*{h}_{cine}*\pi }\right)}^{1/2}-{R}_{BP}$$
where \({n}_{cine}\) = number of 2D Cine CMR slices used for the LV volumetric measurements, \({h}_{cine}\) = Cine slice thickness (usually 7 mm), \({V}_{BP}\) = blood pool volume and \({V}_{wall}\) = myocardial wall volume.
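A minimal sketch of this cylindrical geometry calculation is given below; the volumes, slice count and slice thickness are invented illustrative inputs, not patient data from the study.

```python
import numpy as np

def lv_cylinder_geometry(V_BP, V_wall, n_cine, h_cine=7.0):
    """Mean blood-pool radius R_BP and wall thickness S_wall from the cylindrical LV model.

    Volumes in mm^3, slice thickness h_cine in mm; returns (R_BP, S_wall) in mm.
    """
    height = n_cine * h_cine                          # length of the modeled cylinder
    R_BP = np.sqrt(V_BP / (height * np.pi))
    S_wall = np.sqrt((V_wall + V_BP) / (height * np.pi)) - R_BP
    return R_BP, S_wall

# Illustrative end-systolic values (assumed, not taken from the study cohort)
R_BP, S_wall = lv_cylinder_geometry(V_BP=100e3, V_wall=160e3, n_cine=12)
print(f"R_BP = {R_BP:.1f} mm, S_wall = {S_wall:.1f} mm")
```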
4-dimensional and 2-dimensional velocity-encoded (VENC) CMR was obtained using a previously described CMR protocol [18]. 4D VENC CMR sequences were used to quantify blood flow across the aortic and mitral valves and the ascending aorta in order to measure the auxobaric contraction time tABC, the isovolumetric contraction time tIVC and the aortic pressure gradient. 4D data were analyzed using the GT Flow program (version 2.0.10, Gyrotools, Zurich, Switzerland). The total systolic contraction time tCS is the sum of tABC and tIVC. The temporal resolution was 25 timesteps for 4D flow measurements and 30 timesteps for 2D flow measurements. In a prior study we showed that 4D flow measurements with 25 timesteps are a feasible alternative to flow measurements with higher temporal resolution [15].
Global Longitudinal Strain (GLS) Feature-tracking (CMR-FT)
CMR-FT based strain analyses were performed using commercially available software provided by Medis (QStrain, Version 2.1.12.2, Medis Medical Imaging Systems, Leiden, The Netherlands). FT was performed in the end-diastolic and end-systolic cardiac phases, at the endo- and epicardial borders. Global longitudinal strain (GLS) was assessed by averaging the peak systolic strain values of 17 segments extracted from three long-axis images (2-, 3- and 4-chamber views), while global circumferential strain (GCS) was acquired from three short-axis images (basal, midventricular and apical level) using a 16-segment model.
Circulatory power and efficiency (Fig. 1)
Illustration of the processing pipeline and the calculation of power and efficiency. After the initial acquisition of routine cine CMR and 2D/4D velocity encoded (VENC) CMR as well as heart rate and blood pressure measurements, image segmentation and the assessment of functional parameters is performed. These informations are required to calculate the mechanical circulatory power (CP), the left ventricular (LV) myocardial power (LVMP) and the resulting efficiency
Power is the rate of transferring or converting energy per unit time. The ratio of the power needed to pump a given blood volume against a given afterload (circulatory power, CP) to the power used by the heart to perform one heartbeat (Left ventricular myocardial power, LVMP) is described as circulatory efficiency (CircE). LVMP was defined as the surrogate power of the LV to perform one heartbeat since the applied method is an estimation [15, 19]:
$$LVMP =\frac{{V}_{wall}*{\upsigma }_{wall}}{{t}_{CS}}$$
Vwall = myocardial wall volume, σwall = wall stress, tCS = LV systolic contraction time.
Wall stress was calculated using a simplified approach of the law of Laplace:
$${\upsigma }_{wall} ={P}_{sys}* \frac{{R}_{BP}}{2*{S}_{wall}}$$
PSYS = LV peak systolic pressure, RBP = mean radius of the blood pool, Swall = mean myocardial wall thickness. Swall and RBP during systole were averaged from LV segmentations considering the LV as a cylindrical geometry for correction of potential regional differences. Psys = sum of the systolic blood pressure measured at the right arm and the maximum pressure gradient across the aortic valve. LVMP was indexed to body surface area (BSA).
Circulatory power (CP) is defined as the hydrodynamic power distally to the valve representing the power needed to maintain effective blood flow against systemic vascular resistance (afterload).
$$CP =MAP*\mathrm{COeff}$$
MAP = mean arterial pressure, COeff = effective cardiac output. The dimension of COeff is L/min. COeff is the product of heart rate and SVeff. SVeff = (EDV-ESV) * (1-regurgitation fraction).
Circulatory efficiency (CircE) is the ratio between CP and LVMP.
$$CircE =\frac{CP}{LVMP}$$
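The chain of definitions above can be condensed into a few lines of Python. The sketch below computes the surrogate LVMP, CP and CircE with explicit unit conversions; all input values are invented for illustration (e.g., a 130 mmHg arm pressure plus a 60 mmHg valve gradient), and R_BP and S_wall could, for instance, come from the cylindrical geometry sketch above.

```python
def wall_stress(P_sys, R_BP, S_wall):
    """Laplace wall stress; only the ratio R_BP/S_wall matters, so any common length unit works."""
    return P_sys * R_BP / (2 * S_wall)

def circulatory_efficiency(V_wall_ml, P_sys_mmHg, R_BP, S_wall, t_CS_s,
                           MAP_mmHg, EDV_ml, ESV_ml, HR_bpm, regurg_fraction=0.0):
    """Surrogate LVMP (W), CP (W) and CircE following the equations above.

    Unit conversions: 1 mmHg = 133.322 Pa and 1 ml = 1e-6 m^3, so both powers come out in watts.
    """
    mmHg, ml = 133.322, 1e-6
    sigma = wall_stress(P_sys_mmHg * mmHg, R_BP, S_wall)        # Pa
    LVMP = (V_wall_ml * ml) * sigma / t_CS_s                    # W
    SV_eff = (EDV_ml - ESV_ml) * (1 - regurg_fraction)          # effective stroke volume, ml
    CO_eff = SV_eff * ml * HR_bpm / 60                          # m^3/s
    CP = MAP_mmHg * mmHg * CO_eff                               # W
    return LVMP, CP, CP / LVMP

# Invented AS-like example: systolic arm pressure 130 mmHg plus a 60 mmHg valve gradient
LVMP, CP, CircE = circulatory_efficiency(
    V_wall_ml=160, P_sys_mmHg=190, R_BP=20.0, S_wall=11.0, t_CS_s=0.32,
    MAP_mmHg=95, EDV_ml=140, ESV_ml=60, HR_bpm=70)
print(f"LVMP = {LVMP:.1f} W, CP = {CP:.2f} W, CircE = {100 * CircE:.1f} %")
```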
Calculation of diffuse myocardial fibrosis
Calculation of the extracellular volume (ECV) was done using a previously described method [20, 21]:
$$ ECV = \left(1 - \text{hematocrit}\right) \cdot \frac{\left(1/T1_{\text{myo,post}}\right) - \left(1/T1_{\text{myo,pre}}\right)}{\left(1/T1_{\text{blood,post}}\right) - \left(1/T1_{\text{blood,pre}}\right)} $$
where T1myo denotes the LV midwall myocardial T1 value, T1blood the LV blood pool T1 value, and pre and post refer to the measurements before and after contrast administration. Absolute ECV was calculated using the following equation: aECV = LV myocardial volume*ECV, with LV myocardial volume = LV mass/1.05, where 1.05 g/ml is the myocardial density.
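For illustration, the ECV and aECV calculations can be written as follows; the T1 times, hematocrit and LV mass are made-up example values, not measurements from this cohort.

```python
def ecv_fraction(hematocrit, t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    """Extracellular volume fraction from native and post-contrast T1 times (ms)."""
    delta_r1_myo = 1 / t1_myo_post - 1 / t1_myo_pre
    delta_r1_blood = 1 / t1_blood_post - 1 / t1_blood_pre
    return (1 - hematocrit) * delta_r1_myo / delta_r1_blood

def absolute_ecv(ecv, lv_mass_g, myocardial_density=1.05):
    """Absolute ECV (ml): ECV fraction times LV myocardial volume (mass / density)."""
    return ecv * lv_mass_g / myocardial_density

# Illustrative values (assumed): native/post-contrast T1 in ms, hematocrit 0.40, LV mass 140 g
ecv = ecv_fraction(0.40, t1_myo_pre=1000, t1_myo_post=450,
                   t1_blood_pre=1550, t1_blood_post=300)
print(f"ECV = {100 * ecv:.1f} %, aECV = {absolute_ecv(ecv, lv_mass_g=140):.1f} ml")
```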
Reproducibility of power and efficiency measurements
As stated in a prior study [15], parameters of circulatory power and efficiency are combined parameters calculated from CMR acquired LV volumetric and 2D/4D blood flow measurements. Several previous studies have shown good reproducibility and accuracy of CMR acquired LV volumetric and 2D/4D blood flow measurements [18, 22,23,24].
Data are presented as mean ± SD unless stated otherwise. Shapiro–Wilk test was used for normality testing. A paired two-tailed Student's t-test or Wilcoxon test was performed where appropriate to investigate differences between pre- and post-operative measurements. Unpaired two-tailed Student's t-test or Mann–Whitney U test was performed to investigate differences between disease groups and controls as appropriate. Pearson's chi-square test was performed to investigate differences in unpaired categorical variables. McNemar test was used to test for differences in paired categorical data before and after an intervention. A linear regression model was used to identify the relationship between variables. The significance level was set at 0.05. Data were analyzed using SPSS (version 25, Statistical Package for the Social Sciences, International Business Machines, Inc., Armonk, New York, USA) and Stata (Version 15.1, StataCorp, College Station, Texas, USA).
Clinical effects of AVR
Table 1 summarizes demographic and clinical baseline characteristics. Table 2 shows CMR, laboratory and clinical parameters in patients and controls. In patients with AS mean aortic pressure gradient decreased and New York Heart Association (NYHA) classification improved after AVR. Furthermore, there was a significant reduction in hypertrophy (myocardial muscle mass), fibrosis (aECV) and NT-proBNP and a significant increase in GLS (Fig. 2).
Table 2 CMR, metabolic and clinical parameters before and after aortic valve replacement (AVR) and in healthy controls
Table 3 General demographic and clinical data in patients with and without restored circulatory efficiency (CircE) after AVR
Table 4 Pre- and Post-AVR parameters in patients with and without restored CircE post-AVR
Measurements in patients with aortic stenosis (AS) before and after aortic valve replacement (AVR) and in controls. First row (left, middle, right): Mean aortic pressure gradient, left ventricular (LV) mass, global longitudinal strain (GLS), Second row (left, middle, right): absolute extra cellular volume, NTpro-BNP and LV ejection fraction
CircE in Aortic Stenosis
In patients with severe AS, CircE was lower (9 ± 3 vs 12 ± 2%, p = 0.004) compared to healthy controls (Fig. 3). Furthermore, there were significant inverse correlations between pre-operative CircE and LV mass (r = − 0.591, p < 0.001), aECV (r = − 0.427, p = 0.015), ESV (r = − 0.609, p < 0.001), NT-proBNP (r = − 0.444, p = 0.009) (Fig. 4) and GLS (r = − 0.539, p < 0.001) (Fig. 5), and a significant positive correlation with LVEF (r = 0.704, p < 0.001).
Left ventricular myocardial power (LVMP) and circulatory efficiency (CircE) in aortic stenosis (AS) before and after aortic valve replacement (AVR) and in controls
Correlation of circulatory efficiency (CircE) with absolute extra cellular volume (aECV), NTpro-BNP, LV ejection fraction and LV mass
Correlation of LV myocardial power (LVMP) and circulatory efficiency (CircE) with global longitudinal strain (GLS)
LVMP was higher (8 ± 3 vs 5 ± 1 W/m2, p < 0.001) (Fig. 3) and CP was not different (1.3 ± 0.4 vs 1.1 ± 0.1 W, p = 0.097) compared to healthy controls. Pre-operative LVMP correlated significantly with GLS (r = 0.577, p < 0.001) (Fig. 5).
CircE after aortic valve replacement
After AVR, CircE significantly increased in the total cohort (9 ± 3 vs 13 ± 5%, p < 0.001) and showed no difference to healthy controls (13 ± 5 vs 12 ± 2%, p = 0.112) (Fig. 3).
Furthermore, there were significant correlations between post-operative CircE and post-operative LV mass (r = − 0.409, p = 0.008), ESV (r = − 0.454, p = 0.003) and LVEF (r = 0.555, p < 0.001).
LVMP decreased after AVR (8 ± 3 vs 5 ± 2 W/m2, p < 0.001) and showed no differences to healthy controls (5 ± 2 vs 5 ± 1 W/m2, p = 0.924) (Fig. 3). CP was not changed after AVR (1.3 ± 0.4 vs 1.3 ± 0.4 W, p = 0.176) and showed no differences to healthy controls (p = 0.462).
There was no significant correlation between improvement in efficiency and symptom improvement (p = 0.721). Improvement of CircE significantly correlated to decrease of LVMP (R2 = 0.249, p = 0.001). Decrease of LVMP significantly correlated to changes of aortic pressure gradient (R2 = 0.321, p < 0.001), LVM (R2 = 0.451, p < 0.001), ESV (R2 = 0.243, p = 0.001), EDV (R2 = 0.110, p = 0.034) and LVEF (R2 = 0.180, p = 0.006). Furthermore, improvement of CircE did not correlate with prosthesis size (p = 0.409).
CircE does not normalize in 24% of patients
The lowest value for CircE in controls was 10%. 10/41 (24%) patients displayed CircE of < 10% after AVR. Between the two groups without (n = 10) and with restored CircE (n = 31) after AVR, we found the following effects:
Pre-operative findings: In patients without restored CircE, 70% were male, LVEF was lower (50 ± 13 vs 60 ± 8%, p = 0.031) and diastolic blood pressure was higher (80 ± 6 vs 73 ± 11 mmHg, p = 0.015). There was no difference in CircE, mean gradient across the aortic valve, NYHA status or markers of hypertrophy or fibrosis (Tables 3 and 4). No parameter could be identified that predicts whether a patient would show restored or non-restored CircE after AVR.
Post-operative findings: Reduction in fibrosis (aECV) (32 ± 11 vs 26 ± 8 ml, p < 0.001), improvement in CircE (0.6 ± 2.8 vs 5.2 ± 6.0%, p = 0.009) and improvement in NYHA class (NYHA III-IV 42% vs 6%, p < 0.05) were only significant in patients with restored CircE. Improvement in LVEF (50% vs 57%, p < 0.05) was only significant in patients without restored CircE after AVR. The mean gradient across the aortic valve (14 ± 5 vs 10 ± 5 mmHg, p = 0.026) and LVMP (7 ± 2 vs 5 ± 2, p = 0.001) were higher in patients without restored CircE; however, there was no significant difference in post-operative LVP, NT-proBNP, cardiac function, NYHA status or markers of hypertrophy between patients with and without restored CircE. Furthermore, there were no differences in pre- to post-AVR changes of LV mass, aortic valve gradient and NT-proBNP between patients with and without restored CircE (Tables 3 and 4).
Patients with lower pre-operative CircE show a higher absolute amount of fibrosis after AVR
Pre-operative CircE correlates inversely with aECV post-operative (r = − 0.542, p = 0.001) (Fig. 6).
Correlation of preoperative circulatory efficiency (CircE) and postoperative absolute extra cellular volume (aECV)
We quantified a surrogate marker of circulatory efficiency (CircE) longitudinally in patients with severe AS before and after surgical AVR using a non-invasive CMR technique. We found CircE to be reduced in patients with AS and lower CircE was associated with pronounced LV hypertrophy and fibrosis and reduced LV function. After surgical AVR, CircE did not increase and normalize in an important fraction of patients (24%). These patients also showed less reduction in LV myocardial fibrosis volume compared to patients with restored CircE after AVR. Improvement of CircE was significantly influenced by the decrease of LVMP. Furthermore, decrease of LVMP was significantly affected by changes of aortic pressure gradient, LVM, ESV, EDV and LVEF. Therefore, decrease of LVMP and improvement of CircE is affected by the decrease of afterload but is also influenced by cardiac reverse remodeling.
Myocardial adaptation processes such as hypertrophy and fibrosis in patients with AS lead to higher LV energy demand and reduced efficiency [1,2,3, 9]. If left untreated, the transition from adaptive to maladaptive remodeling can lead to heart failure [1, 2]. The concept of LVMP and efficiency as an evaluation of myocardial performance in pressure-loaded hearts and in heart failure has increasingly become of interest [7, 11, 13, 14, 25, 26], since prior studies have demonstrated LV efficiency to be reduced in pressure-overloaded hypertrophied hearts.
In hypertrophied and failing hearts, a switch from aerobic mitochondrial fatty acid oxidation to anaerobic glycolysis has been described, which decreases myocardial efficiency due to inefficient ATP generation and increased adenosine triphosphate (ATP) consumption for other non-contractile purposes [27]. Even in normal hearts 20% of O2 are consumed by biochemical processes not directly associated with contraction (e.g. electrolyte homeostasis) [28].
Myocardial efficiency is defined as the ratio between external work and myocardial energy consumption [9, 29]. The area of the pressure–volume loop reflects external work (stroke work) and can be measured by using invasive catheter. Myocardial oxygen consumption reflecting myocardial energy consumption has also been measured using invasive tools. This approach has become gold standard to measure myocardial energetics. However, this approach is limited by its invasive nature and therefore, has been limited to specific indications in clinical routine. Our approach quantifies surrogate markers of myocardial power and efficiency by using only non-invasive CMR-based volumetric and blood flow measurements. Hence, our approach can easily be applied in clinical routine and research. The advantage and motivation of the proposed surrogate markers were well discussed earlier [15]. As also shown in a prior study our approach reflects disease specific alterations of myocardial power and efficiency in hearts with chronic pressure- and volume overload [15].
In our study, we calculated circulatory efficiency by measuring mechanically generated power of the LV necessary to perform contraction against a given afterload following the law of Laplace and considering only geometrical parameters assessed by CMR. We recently demonstrated a reduced circulatory efficiency in patients with AS and different grades of severity [15]. The presented approach is merely noninvasive, however, not yet validated against invasive standards. In the present study, the focus was on patients with severe AS, who received AVR and calculated circulatory efficiency before and after AVR and we found similar results compared to study results using invasive methods.
Hansson and colleagues previously quantified efficiency in mainly asymptomatic AS patients with and without heart failure and demonstrated reduced efficiency in patients with impaired LVEF compared to controls [14]. Our findings are in line with such measurements, showing correlations between CircE and LVEF in patients with AS. Güclu and colleagues demonstrated reduced efficiency in AS patients compared to controls and described efficiency as an important determinant of functional improvement after AVR [13]. However, their study was limited by a small patient number (n = 10) and non-age-matched controls.
AVR has beneficial effects on prognosis mainly due to reverse remodeling [30, 31]. In this study, AVR reduced pressure load, LV hypertrophy and fibrosis as expected and improved NYHA status and LV function looking at the whole cohort. Furthermore, CircE increased after AVR and normalized in the majority of patients. Güclu and colleagues described increased efficiency after AVR without normalization [13]. However, their controls were not age-matched [32] and normal efficiency was described as 49%, which is inconsistent with prior studies quantifying efficiency (14–35%) [10, 14].
In our study, CircE did not normalize in 10 (24%) patients. The mean pre-operative LVEF was lower and the diastolic blood pressure was higher in the non-restored group. However, looking at the 10 individual patients with non-restored CircE after AVR, 4 patients displayed an LVEF lower than 45%, but in 6 patients LVEF was higher than 56%, showing the heterogeneity of the non-restored group. NT-proBNP did not reach statistical significance between the restored and non-restored groups. In general, however, the patients who did not restore after AVR appear to be slightly sicker, although few significant differences could be found. Post-operatively, NYHA class and aECV only improved in the restored group and LVEF only improved in the non-restored group. The mean aortic pressure gradient was higher in the non-restored group; however, a post-operative mean aortic pressure gradient of 15 mmHg does not seem to be clinically relevant. Furthermore, the combined parameter LV pressure, which is part of the formula for CircE, was not significantly different between groups.
Statistically, we did not find any preoperative parameter that predicted non-restored postoperative CircE, nor could we identify a main component causing non-restored CircE after AVR. Further studies are needed, and the two groups, especially the non-restored group, are too small for firm conclusions. Nevertheless, circulatory efficiency, which takes into account several risk factors (LV pressure, LV mass, LV geometry), might be useful to categorize patients with pressure overload who have not yet surpassed the cut-off values of single parameters.
Improvement of CircE and of myocardial fibrosis after AVR was seen only in patients with restored CircE. Similar results were demonstrated by Güclu and colleagues, where 4 out of 10 AS patients without efficiency improvement after AVR also did not improve in exercise capacity [13]. Hence, CircE may identify patients at risk of insufficient reverse remodeling and could thus help to optimize the timing of intervention. In further studies, circulatory efficiency could be calculated longitudinally over time in patients with AS to investigate the relationship between circulatory efficiency and myocardial adaptation, the onset of symptoms and the optimal timing of intervention. On the basis of the present data we can only speculate.
We found high CircE to be associated with high GLS, a measure of subclinical LV dysfunction and a predictor of reverse remodeling and outcome after AVR [33,34,35,36]. GLS is promising for risk stratification in patients with AS and for finding the optimal time for treatment [33,34,35,36]. The correlation between GLS and CircE might suggest a similar clinical relevance of CircE for patients with AS. Current AS guidelines mainly consider the aortic pressure gradient for clinical decision making and staging [16], although the external load is not associated with the onset of symptoms and LV hypertrophy [30, 37].
Pressure overload can trigger cellular pathways that lead to myocardial adaptation processes such as hypertrophy and fibrosis and is associated with heart failure in the long term [1,2,3]. Interestingly, CircE is correlated with the absolute fibrosis load before and after AVR and might be an important contributor to the pathophysiological understanding of early adaptation processes.
In our cohort of patients with severe AS we observed a reduction of LV mass and absolute fibrosis volume after AVR; however, the fibrosis fraction (ECV) increased in the short term after AVR. This is in line with longitudinal biopsy studies from 1989 and recent CMR studies by Treibel et al., who described, in different cohorts of patients with severe AS undergoing AVR, a postoperatively faster regression of myocardial mass than of fibrosis, which leads to an initial increase of the fibrosis fraction shortly after AVR but a constant decrease of the absolute amount of fibrosis [38, 39].
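The mass/fibrosis arithmetic described here (absolute fibrosis falling while the fibrosis fraction rises) can be illustrated with a small sketch; the 1.05 g/mL myocardial density, the example numbers and the variable names are assumptions for illustration, not values from this study.

```python
def absolute_fibrosis_volume(lv_mass_g, ecv_fraction, density_g_per_ml=1.05):
    """Absolute extracellular (fibrosis) volume = ECV fraction * myocardial volume,
    with myocardial volume approximated as LV mass / tissue density."""
    return ecv_fraction * (lv_mass_g / density_g_per_ml)


# Faster regression of mass than of fibrosis: the fraction rises, the absolute amount falls
pre = absolute_fibrosis_volume(lv_mass_g=180.0, ecv_fraction=0.27)
post = absolute_fibrosis_volume(lv_mass_g=140.0, ecv_fraction=0.29)
print(f"absolute fibrosis volume: {pre:.1f} mL before vs {post:.1f} mL after AVR")
```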
With regard to efficiency, we found a correlation of low preoperative CircE with high postoperative fibrosis load. Moreover, a significant reduction in absolute fibrosis volume was seen only in patients with restored CircE after AVR and not in patients with non-restored CircE. This suggests that reduced CircE in patients with severe AS is accompanied by a delay in reverse remodeling after AVR, at least concerning diffuse fibrosis, since regression of myocardial mass and normalization of EDV and ESV were seen in all patients. In line with this suggestion, recent literature has studied the impact of myocardial fibrosis on LV reverse remodeling after aortic valve therapy in patients with AS. A higher amount of myocardial fibrosis before treatment was associated with a delay in normalization of LV geometry and function, but not per se with an absence of reverse remodeling and clinical improvement after treatment [40].
There was a high prevalence of bicuspid aortic valve (BAV) patients in our cohort. Prior studies comparing severe AS in patients with BAV and trileaflet aortic valve have shown that patients with trileaflet AS have a greater prevalence of cardiovascular risk factors and worse survival after AVR [41]. However, in their study patients with BAV were less likely to have multiple comorbidities.
In the present study, we did not find differences in LV power and circulatory efficiency, nor in markers of hypertrophy or fibrosis, between AS patients with BAV and trileaflet AS before and after AVR. Looking at the patient characteristics, there were no differences in age, aortic pressure gradient and cardiovascular risk factors such as diabetes, arterial hypertension and dyslipidemia. BAV patients showed a lower systolic blood pressure (134 ± 3 vs 147 ± 7 mmHg; p = 0.042) and a lower preoperative pulse pressure (59 ± 2 vs 75 ± 6 mmHg; p = 0.012); however, this did not have a relevant impact on the other parameters. It may be that the BAV and trileaflet AS patients in our cohort were more comparable in their characteristics than in other studies describing relevant differences between these groups.
In a former publication we described abnormal flow profiles in the ascending aorta that were present before and after AVR in the majority of patients [42]. In other studies, abnormal flow profiles have been associated with increased viscous energy loss, which can be used as a measure of LV load [43]. Thus, abnormal flow profiles might additionally influence LV workload and circulatory efficiency. However, this was not part of the present study.
The computation of myocardial energetics focused on systole, since it accounts for the majority of the heart's energy expenditure, and did not further consider diastole, although diastolic relaxation is an active, ATP-consuming process. However, little is known about myocardial energetics in diastole, and more research is needed to unveil the underlying mechanisms.
The parameter circulatory efficiency does not represent a direct measurement but a mathematical formula that integrates the numerical information of a total of eight variables (e.g. myocardial wall volume). Because circulatory efficiency cannot be measured directly, neither as a single nor as a repeat measurement, intra- and/or inter-observer variabilities and scan-rescan variability cannot be computed. However, the parameters of cardiac power and efficiency were calculated from clinically established CMR LV volumetric and flow measurements, and good reproducibility of CMR LV volumetric, 2D and 4D flow measurements has been shown in several studies [22,23,24, 44, 45].
Moreover, this study used a purely mechanical approach, without metabolic measurements of myocardial oxygen consumption derived by PET or invasive hemodynamic measurements, and assumed LVMP to be a surrogate of the potential power generated by LV contraction following the simplified law of Laplace. Furthermore, the pressure-recovery phenomenon was not considered, since aortic pressure gradients were assessed using Doppler echocardiography as currently recommended by guidelines [16]. Future studies may help improve the method by using the continuity equation or model-based approaches. In addition, myocardial wall stress was calculated using a simplified form of the law of Laplace. The geometrical shape of the LV as well as regional strain both determine myocardial wall stress and, subsequently, affect myocardial power. Therefore, more accurate models should be applied to calculate myocardial power in future projects.
In summary, the quantification of a surrogate marker of CircE in patients with severe AS before and after AVR has been demonstrated using a non-invasive CMR-based approach.
CircE was reduced in patients with AS and lower CircE was associated with pronounced hypertrophy and fibrosis and reduced LV function. After AVR, CircE increased and normalized in the majority of patients. In 24% of patients, CircE did not normalize and these patients showed no improvement of myocardial fibrosis compared to patients with restored CircE after AVR.
CircE, reflecting a combined parameter of LV adaptation to increased workload, could be valuable in the search for the optimal timing of intervention in patients with AS to improve long-term outcomes.
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
2D: Two-dimensional
4D: Four-dimensional
Σwall: Wall stress
AS: Aortic stenosis
ATP: Adenosine triphosphate
AVR: Aortic valve replacement
BAV: Bicuspid aortic valve
BSA: Body surface area
CCS: Canadian Cardiovascular Society
CircE: Circulatory efficiency
CMR: Cardiovascular magnetic resonance
CO: Cardiac output
Effective cardiac output
Circulatory power
ECG: Electrocardiogram
ECV: Extracellular volume
EDV: End diastolic volume
EF: Ejection fraction
ESV: End systolic volume
FA: Flip angle
FT: Feature tracking
GCS: Global circumferential strain
GLS: Global longitudinal strain
LV: Left ventricle/left ventricular
LVEDV: Left ventricular end diastolic volume
LVESV: Left ventricular end systolic volume
LVM: Left ventricular mass
LVMP: Left ventricular myocardial power
MAP: Mean arterial pressure
NT-proBNP: N-terminal pro b-type natriuretic peptide
NYHA: New York Heart Association
PSYS: Peak systolic pressure
RBP: Mean radius of the blood pool
RF: Regurgitation fraction
SV: Stroke volume
SWall: Mean myocardial wall thickness
tABC: Auxobaric contraction time
TAV: Trileaflet aortic valve
tCS: Total systolic contraction time
tIVC: Isovolumetric contraction time
Vwall: Myocardial wall volume
VENC: Velocity encoding
Cioffi G, Faggiano P, Vizzardi E, et al. Prognostic effect of inappropriately high left ventricular mass in asymptomatic severe aortic stenosis. Heart (British Cardiac Society). 2011;97(4):301–7.
Kupari M, Turto H, Lommi J. Left ventricular hypertrophy in aortic valve stenosis: preventive or promotive of systolic dysfunction and heart failure? Eur Heart J. 2005;26(17):1790–6.
Carabello BA, Paulus WJ. Aortic stenosis. Lancet (London, England). 2009;373(9667):956–66.
Osnabrugge RL, Mylotte D, Head SJ, et al. Aortic stenosis in the elderly: disease prevalence and number of candidates for transcatheter aortic valve replacement: a meta-analysis and modeling study. J Am Coll Cardiol. 2013;62(11):1002–12.
Miura S, Arita T, Kumamaru H, et al. Causes of death and mortality and evaluation of prognostic factors in patients with severe aortic stenosis in an aging society. J Cardiol. 2015;65(5):353–9.
Burkhoff D, Sagawa K. Ventricular efficiency predicted by an analytical model. Am J Physiol. 1986;250(6 Pt 2):R1021-1027.
Akins CW, Travis B, Yoganathan AP. Energy loss for evaluating heart valve performance. J Thorac Cardiovasc Surg. 2008;136(4):820–33.
Fernandes JF, Goubergrits L, Bruning J, et al. Beyond pressure gradients: the effects of intervention on heart power in aortic coarctation. PLoS ONE. 2017;12(1):e0168487.
Bing RJ, Hammond MM, Handelsman JC, et al. The measurement of coronary blood flow, oxygen consumption, and efficiency of the left ventricle in man. Am Heart J. 1949;38(1):1–24.
Knaapen P, Germans T, Knuuti J, et al. Myocardial energetics and efficiency: current status of the noninvasive approach. Circulation. 2007;115(7):918–27.
Knaapen P, Germans T. Myocardial efficiency in heart failure: non-invasive imaging. Heart and Metabolism. 2008.
Laine H, Katoh C, Luotolahti M, et al. Myocardial oxygen consumption is unchanged but efficiency is reduced in patients with essential hypertension and left ventricular hypertrophy. Circulation. 1999;100(24):2425–30.
Guclu A, Knaapen P, Harms HJ, et al. Myocardial efficiency is an important determinant of functional improvement after aortic valve replacement in aortic valve stenosis patients: a combined PET and CMR study. Eur Heart J Cardiovasc Imaging. 2015;16(8):882–9.
Hansson NH, Sorensen J, Harms HJ, et al. Myocardial oxygen consumption and efficiency in aortic valve stenosis patients with and without heart failure. J Am Heart Assoc. 2017;6:2.
Lee CB, Goubergrits L, Fernandes JF, et al. Surrogates for myocardial power and power efficiency in patients with aortic valve disease. Sci Rep. 2019;9(1):16407.
Baumgartner H, Falk V, Bax JJ, et al. 2017 ESC/EACTS Guidelines for the management of valvular heart disease: The Task Force for the Management of Valvular Heart Disease of the European Society of Cardiology (ESC) and the European Association for Cardio-Thoracic Surgery (EACTS). Eur Heart J 2017.
Lancellotti P, Tribouilloy C, Hagendorff A, et al. Recommendations for the echocardiographic assessment of native valvular regurgitation: an executive summary from the European Association of Cardiovascular Imaging. Eur Heart J Cardiovasc Imaging. 2013;14(7):611–44.
Nordmeyer S, Riesenkampff E, Crelier G, et al. Flow-sensitive four-dimensional cine magnetic resonance imaging for offline blood flow quantification in multiple vessels: a validation study. J Magn Reson Imaging. 2010;32(3):677–83.
Preston RR, Wilson TE. Physiology (Lippincott Illustrated Reviews Series). 2012. p. 211–212.
Doltra A, Messroghli D, Stawowy P, et al. Potential reduction of interstitial myocardial fibrosis with renal denervation. J Am Heart Assoc. 2014;3(6):e001353.
Jerosch-Herold M, Sheridan DC, Kushner JD, et al. Cardiac magnetic resonance imaging of myocardial contrast uptake and blood flow in patients affected with idiopathic or familial dilated cardiomyopathy. Am J Physiol Heart Circ Physiol. 2008;295(3):H1234-h1242.
van Ooij P, Powell AL, Potters WV, Carr JC, Markl M, Barker AJ. Reproducibility and interobserver variability of systolic blood flow velocity and 3D wall shear stress derived from 4D flow MRI in the healthy aorta. J Magn Reson Imaging. 2016;43(1):236–48.
Noda C, Ambale Venkatesh B, Ohyama Y, et al. Reproducibility of functional aortic analysis using magnetic resonance imaging: the MESA. Eur Heart J Cardiovasc Imaging. 2016;17(8):909–17.
Grothues F, Smith GC, Moon JC, et al. Comparison of interstudy reproducibility of cardiovascular magnetic resonance with two-dimensional echocardiography in normal subjects and in patients with heart failure or left ventricular hypertrophy. Am J Cardiol. 2002;90(1):29–34.
Katz AM. Cardiomyopathy of overload. A major determinant of prognosis in congestive heart failure. New Engl J Med. 1990;322(2):100–10.
Cetin MS, Ozcan Cetin EH, Canpolat U, Sasmaz H, Temizhan A, Aydogdu S. Prognostic significance of myocardial energy expenditure and myocardial efficiency in patients with heart failure with reduced ejection fraction. Int J Cardiovasc Imaging. 2018;34(2):211–22.
Fillmore N, Mori J, Lopaschuk GD. Mitochondrial fatty acid oxidation alterations in heart failure, ischaemic heart disease and diabetic cardiomyopathy. Br J Pharmacol. 2014;171(8):2080–90.
Zheng J. Assessment of myocardial oxygenation with MRI. Quant Imaging Med Surg. 2013;3(2):67–72.
Suga H. Ventricular energetics. Physiol Rev. 1990;70(2):247–77.
Biederman RW, Magovern JA, Grant SB, et al. LV reverse remodeling imparted by aortic valve replacement for severe aortic stenosis; is it durable? A cardiovascular MRI study sponsored by the American Heart Association. J Cardiothor Surg. 2011;6:53.
Brennan JM, Edwards FH, Zhao Y, O'Brien SM, Douglas PS, Peterson ED. Long-term survival after aortic valve replacement among high-risk elderly patients in the United States: insights from the Society of Thoracic Surgeons Adult Cardiac Surgery Database, 1991 to 2007. Circulation. 2012;126(13):1621–9.
Cuspidi C, Meani S, Sala C, Valerio C, Negri F, Mancia G. Age related prevalence of severe left ventricular hypertrophy in essential hypertension: echocardiographic findings from the ETODH study. Blood Press. 2012;21(3):139–45.
Al Musa T, Uddin A, Swoboda PP, et al. Myocardial strain and symptom severity in severe aortic stenosis: insights from cardiovascular magnetic resonance. Quant Imaging Med Surg. 2017;7(1):38–47.
Dahl JS, Videbaek L, Poulsen MK, Rudbaek TR, Pellikka PA, Moller JE. Global strain in severe aortic valve stenosis: relation to clinical outcome after aortic valve replacement. Circ Cardiovasc Imaging. 2012;5(5):613–20.
Hwang JW, Kim SM, Park SJ, et al. Assessment of reverse remodeling predicted by myocardial deformation on tissue tracking in patients with severe aortic stenosis: a cardiovascular magnetic resonance imaging study. J Cardiovasc Magn Reson. 2017;19(1):80.
Ng ACT, Prihadi EA, Antoni ML, et al. Left ventricular global longitudinal strain is predictive of all-cause mortality independent of aortic stenosis severity and ejection fraction. Eur Heart J Cardiovasc Imag. 2018;19(8):859–67.
Dweck MR, Joshi S, Murigu T, et al. Left ventricular remodeling and hypertrophy in patients with aortic stenosis: insights from cardiovascular magnetic resonance. J Cardiovasc Magn Reson. 2012;14:50.
Treibel TA, Kozor R, Schofield R, et al. Reverse myocardial remodeling following valve replacement in patients with aortic stenosis. J Am Coll Cardiol. 2018;71(8):860–71.
Krayenbuehl HP, Hess OM, Monrad ES, Schneider J, Mall G, Turina M. Left ventricular myocardial structure in aortic valve disease before, intermediate, and late after aortic valve replacement. Circulation. 1989;79(4):744–55.
Puls M, Beuthner BE, Topci R, et al. Impact of myocardial fibrosis on left ventricular remodelling, recovery, and outcome after transcatheter aortic valve implantation in different haemodynamic subtypes of severe aortic stenosis. Eur Heart J. 2020;41(20):1903–14.
Huntley GD, Thaden JJ, Alsidawi S, et al. Comparative study of bicuspid vs. tricuspid aortic valve stenosis. Eur Heart J Cardiovasc Imaging. 2018;19(1):3–8.
Nordmeyer S, Hellmeier F, Yevtushenko P, et al. Abnormal aortic flow profiles persist after aortic valve replacement in the majority of patients with aortic valve disease: how model-based personalized therapy planning could improve results. A pilot study approach. Eur J Cardio-thor Surg 2019.
Barker AJ, van Ooij P, Bandi K, et al. Viscous energy loss in the presence of abnormal aortic flow. Magn Reson Med. 2014;72(3):620–8.
Olivotto I, Maron MS, Autore C, et al. Assessment and significance of left ventricular mass by cardiovascular magnetic resonance in hypertrophic cardiomyopathy. J Am Coll Cardiol. 2008;52(7):559–66.
Vogel-Claussen J, Finn JP, Gomes AS, et al. Left ventricular papillary muscle mass: relationship to left ventricular mass and volumes by magnetic resonance imaging. J Comput Assist Tomogr. 2006;30(3):426–32.
de Arenaza DP, Pepper J, Lees B, et al. Preoperative 6-minute walk test adds prognostic information to Euroscore in patients undergoing aortic valve replacement. Heart (British Cardiac Society). 2010;96(2):113–7.
Sado DM, Flett AS, Banypersad SM, et al. Cardiovascular magnetic resonance measurement of myocardial extracellular volume in health and disease. Heart (British Cardiac Society). 2012;98(19):1436–41.
Andre F, Steen H, Matheis P, et al. Age- and gender-related normal left ventricular deformation assessed by cardiovascular magnetic resonance feature tracking. J Cardiovasc Magn Reson. 2015;17:25.
We would like to thank Alireza Khasheei for his technical assistance and Manuela Bauer for her support as a study nurse.
Open Access funding enabled and organized by Projekt DEAL. SN and TK have received funding by the German Federal Ministry of Education and Research (BMBF) through the following grant: 031A427A. LG has received funding in a project supported by the German Research Foundation (DFG, Grant GO1067/6–1-KU1329/10–1, Berlin, Germany). Marcus Kelm is participant in the Charité Digital Clinician Scientist Program funded by DFG.
S. Nordmeyer and C. B. Lee contributed equally to this work
Department of Congenital Heart Disease, German Heart Centre Berlin, Berlin, Germany
S. Nordmeyer, F. Berger, T. Kuehne & M. Kelm
Institute for Imaging Science and Computational Modelling in Cardiovascular Medicine, Charité-Universitätsmedizin Berlin, Augustenburger Platz 1, 13353, Berlin, Germany
S. Nordmeyer, C. B. Lee, L. Goubergrits, N. Ghorbani, M. Zhu, T. Kuehne & M. Kelm
DZHK (German Centre for Cardiovascular Research), Partner Site Berlin, Berlin, Germany
C. B. Lee, C. Knosalla, F. Berger, V. Falk, N. Ghorbani, T. Kuehne & M. Kelm
Department of Cardiothoracic and Vascular Surgery, German Heart Centre Berlin, Berlin, Germany
C. Knosalla & V. Falk
Department of Internal Medicine and Cardiology, German Heart Centre Berlin, Berlin, Germany
H. Hireche-Chikaoui & S. Kelle
Department of Internal Medicine and Cardiology, Charité–Universitätsmedizin Berlin, Berlin, Germany
S. Kelle
S. Nordmeyer
C. B. Lee
L. Goubergrits
C. Knosalla
F. Berger
V. Falk
N. Ghorbani
H. Hireche-Chikaoui
M. Zhu
T. Kuehne
M. Kelm
TK, SN and MK were responsible for conception and design of the study. SN contributed to subject recruitment. LG and CL established the measurement method. SN and CL contributed to data acquisition and analyzed and interpreted the data. SN and CL conducted the statistical analysis. SN and CL drafted the manuscript. MK and TK critically revised and reviewed the manuscript. All authors read and approved the final manuscript and agree to be accountable for all aspects of the work.
Correspondence to S. Nordmeyer.
The study was carried out according to the principles of the Declaration of Helsinki and approved by the local ethics committee (Ethics committee—Charité Universitätsmedizin Berlin). Written informed consent was obtained from the participants and/or their guardians. Trial Registration: clinicaltrials.gov NCT03172338, June 1, 2017.
The authors declared no competing interests.
Nordmeyer, S., Lee, C.B., Goubergrits, L. et al. Circulatory efficiency in patients with severe aortic valve stenosis before and after aortic valve replacement. J Cardiovasc Magn Reson 23, 15 (2021). https://doi.org/10.1186/s12968-020-00686-0
Switching mechanism-based event-triggered fuzzy adaptive control with prescribed performance for MIMO nonlinear systems
Ruitong Wu 1, Yongming Li 2,*, Jun Hu 3, Wei Liu 3, and Shaocheng Tong 4
College of Electrical Engineering, Liaoning University of Technology, Jinzhou, China
College of Science, Liaoning University of Technology, Jinzhou, China
Northeastern University, Shenyang, China
* Corresponding author: Yongming Li
Fund Project: This work is supported by National Natural Science Foundation (NNSF) of China under Grant 62173172 and Grant 61822307
This paper investigates the switching mechanism-based event-triggered fuzzy adaptive control problem for multi-input and multi-output (MIMO) nonlinear systems with prescribed performance (PP). Fuzzy logic systems (FLSs) are utilized to approximate the unknown nonlinear functions. By using the switching threshold strategy, the system has more flexibility in strategy selection, and the proposed control scheme better addresses the limitation of communication resources. Based on Lyapunov stability theory, the stability of the controlled system is proved and all signals of the controlled system are shown to be bounded. Moreover, the tracking errors converge to a small neighborhood of the origin within the prescribed performance bounds, and Zeno behavior is avoided. Finally, simulation results illustrate the effectiveness of the proposed control scheme.
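As a generic illustration of the event-triggered idea (a toy scalar loop, not the paper's switching-threshold mechanism or its fuzzy approximators), the controller below is recomputed only when the sampling error exceeds a fixed threshold:

```python
def simulate_event_triggered(threshold=0.05, dt=0.01, t_end=5.0):
    """Toy event-triggered state feedback for the scalar plant dx/dt = x + u.
    The control u = -2*x_hat uses the last transmitted sample x_hat, which is
    refreshed only when |x - x_hat| exceeds the threshold (a triggering event)."""
    steps = int(t_end / dt)
    x, x_hat, events = 1.0, 1.0, 0
    for _ in range(steps):
        if abs(x - x_hat) > threshold:   # event-triggering condition
            x_hat = x                    # transmit/sample the state
            events += 1
        u = -2.0 * x_hat                 # feedback based on the last sample
        x += dt * (x + u)                # explicit Euler step of the plant
    return x, events, steps


x_final, n_events, n_steps = simulate_event_triggered()
print(f"final state ≈ {x_final:.3f}; controller updates: {n_events} of {n_steps} steps")
```

With this kind of rule the state settles into a small neighborhood of the origin whose size is set by the threshold, while only a fraction of the steps require a controller update, which is the communication-saving effect the abstract refers to.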
Keywords: MIMO nonlinear systems, event-triggered, fuzzy adaptive control, prescribed performance.
Citation: Ruitong Wu, Yongming Li, Jun Hu, Wei Liu, Shaocheng Tong. Switching mechanism-based event-triggered fuzzy adaptive control with prescribed performance for MIMO nonlinear systems. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2021168
Figure 1. System output $y_1$ and the reference signal $y_{1,r}$
Figure 2. System tracking error $y_1 - y_{1,r}$
Figure 3. Control signal
Figure 4. The time intervals $t_{k+1} - t_k$ of triggering events
Figure 9. Tracking performance
August 2013, 7(3): 697-716. doi: 10.3934/ipi.2013.7.697
Non-Gaussian dynamics of a tumor growth system with immunization
Mengli Hao 1, Ting Gao 2, Jinqiao Duan 3, and Wei Xu 1
Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, 710129, China, China
Institute for Pure and Applied Mathematics, University of California, Los Angeles, Los Angeles, CA 90095, United States
Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL 60616
Received June 2012 Revised March 2013 Published September 2013
This paper is devoted to exploring the effects of non-Gaussian fluctuations on the dynamical evolution of a tumor growth model with immunization, subject to non-Gaussian $\alpha$-stable Lévy noise. The corresponding deterministic model has two meaningful states, which represent the state of tumor extinction and the state of stable tumor, respectively. To characterize the time that different initial densities of tumor cells stay in the domain between these two states and the likelihood of crossing this domain, the mean exit time and the escape probability are quantified by numerically solving differential-integral equations with appropriate exterior boundary conditions. The relationships between the dynamical properties and the noise parameters are examined. It is found that in different stages of the tumor, the noise parameters have different influences on the time to and the likelihood of tumor extinction. These results are relevant for determining efficient therapeutic regimes to induce the extinction of tumor cells.
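A crude Monte Carlo counterpart to the differential-integral equations mentioned above is to simulate sample paths of a Lévy-driven equation and record when they leave the domain. The sketch below uses a toy bistable drift and a symmetric 1-stable (Cauchy) Lévy motion purely for illustration; it is not the tumor-growth model or the deterministic nonlocal solver of the paper.

```python
import numpy as np

def mean_exit_time_mc(x0=0.1, a=-1.0, b=1.0, eps=0.5, dt=0.01,
                      n_paths=500, t_max=20.0, seed=0):
    """Monte Carlo estimate of the mean exit time from (a, b) for
    dX_t = f(X_t) dt + eps dL_t, where L_t is a symmetric 1-stable (Cauchy)
    Levy motion; its increment over dt is distributed as dt * Cauchy.
    Paths that have not exited by t_max are truncated (crude estimate)."""
    rng = np.random.default_rng(seed)
    f = lambda x: x - x**3                      # toy drift, not the tumor model
    exit_times = np.full(n_paths, t_max)
    for i in range(n_paths):
        x, t = x0, 0.0
        while t < t_max:
            x += f(x) * dt + eps * dt * rng.standard_cauchy()
            t += dt
            if x <= a or x >= b:
                exit_times[i] = t
                break
    return exit_times.mean()

print("estimated mean exit time ≈", round(mean_exit_time_mc(), 3))
```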
Keywords: stochastic dynamics of tumor growth, tumor growth with immunization, escape probability, non-Gaussian Lévy process, mean exit time, Lévy jump measure.
Mathematics Subject Classification: 60H15, 62P10, 65C50, 92C4.
Citation: Mengli Hao, Ting Gao, Jinqiao Duan, Wei Xu. Non-Gaussian dynamics of a tumor growth system with immunization. Inverse Problems & Imaging, 2013, 7 (3) : 697-716. doi: 10.3934/ipi.2013.7.697
Special products are multiplications of algebraic expressions that follow certain rules and patterns, so you can predict the result without carrying out the full multiplication.
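For instance, two of the classic special-product patterns (a generic worked illustration):

```latex
(a+b)^2 = a^2 + 2ab + b^2, \qquad (a+b)(a-b) = a^2 - b^2,
\qquad \text{e.g. } (x+5)(x-5) = x^2 - 25 .
```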
$\left(x+5\right)\left(x-7\right)$
$\left(x+5\right)\left(y+3\right)$
$\left(-a+1\right)\left(-a-1\right)$
$\left(x+2\right)\left(x+3\right)$
$\frac{1}{4}m+\frac{3}{4}-\frac{3}{8}\left(m+1\right)$
$\left(5x^2-3\right)\left(5x^2+3\right)$
$\left(9x+5\right)\left(9x+2\right)$
$16y^2-25x^2\left(4y-x\right)$
$25x^2-100y^2\left(5x+104\right)$
Voltammetric approach for pharmaceutical samples analysis; simultaneous quantitative determination of resorcinol and hydroquinone
Ebrahim Nabatian1,2,
Mahdi Mousavi2,
Mostafa Pournamdari3,
Mehdi Yoosefian ORCID: orcid.org/0000-0003-0096-50834 &
Saeid Ahmadzadeh ORCID: orcid.org/0000-0001-8574-94485,6
BMC Chemistry volume 16, Article number: 115 (2022)
A simple and precise analytical approach was developed for the single and simultaneous determination of resorcinol (RC) and hydroquinone (HQ) in pharmaceutical samples using a carbon paste electrode (CPE) modified with 1-ethyl-3-methylimidazolium tetrafluoroborate as ionic liquid and ZnFe2O4 nanoparticles. A significant enhancement in the peak current and sensitivity of the proposed sensor was observed when the modifiers were included in the composition of the working electrode compared to the bare CPE, in accordance with the results obtained from electrochemical impedance spectroscopy. Electrochemical investigations revealed a well-defined irreversible oxidation peak for RC over a wide concentration range from 3.0 µM to 500 µM in 0.1 M phosphate buffer solution (pH 6.0) with the linear regression equation Ip (µA) = 0.0276 CRC (µM) + 0.5508 (R² = 0.997). The limits of detection and quantification for RC analysis were found to be 1.46 µM and 4.88 µM, respectively. The SW voltammograms obtained for the simultaneous determination of RC and HQ exhibited a desirable peak separation of about 360 mV and satisfactory linear responses over the ranges of 50–700 µM and 5–350 µM, with favorable correlation coefficients of 0.991 and 0.995, respectively. The diffusion coefficient (D) of RC and the electron transfer coefficient (α) at the surface of ZnFe2O4/NPs/IL/CPE were estimated to be 2.83 × 10−4 cm s−1 and 0.76, respectively. The proposed sensor, as a promising and low-cost method, was successfully applied to the determination of RC in commercial pharmaceutical formulations such as the resorcinol cream of 2% O/W emulsion available on the market, with a recovery of 98.47 ± 0.04.
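For context, detection and quantification limits of this kind are conventionally obtained from the calibration slope and the standard deviation of the blank response (LOD = 3σ/m, LOQ = 10σ/m). The sketch below only illustrates that arithmetic; the σ value is an assumed, back-calculated figure, and the 3σ/10σ convention is assumed rather than quoted from this paper's methods.

```python
def detection_limits(sigma_blank_uA, slope_uA_per_uM):
    """LOD and LOQ from the common 3*sigma/m and 10*sigma/m conventions."""
    lod = 3.0 * sigma_blank_uA / slope_uA_per_uM
    loq = 10.0 * sigma_blank_uA / slope_uA_per_uM
    return lod, loq


slope = 0.0276        # uA per uM, from the calibration equation quoted above
sigma = 0.0134        # assumed blank standard deviation (uA), for illustration only
lod, loq = detection_limits(sigma, slope)
print(f"LOD ≈ {lod:.2f} µM, LOQ ≈ {loq:.2f} µM")
```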
Dihydroxybenzenes, important phenolic compounds with high toxicity and low degradability that are suspected carcinogens, are extensively released into environmental media, since they are used as chemical intermediates for the synthesis of a variety of pharmaceuticals and other organic compounds such as dyes, photography chemicals, plastics, flavoring agents, antioxidants, rubber, and pesticides. Therefore, they are listed as priority pollutants by environmental organizations such as the US-EPA and the EU [1, 2].
Resorcinol (RC, 1,3-dihydroxybenzene) and hydroquinone (HQ, 1,4-dihydroxybenzene) are two pharmaceutical products extensively used for the treatment of skin diseases. RC is commonly applied as an acne medication and for the treatment of chronic skin diseases such as psoriasis and hidradenitis suppurativa [3, 4]. However, at higher doses RC is toxic: it disrupts the function of the nervous system, causing acute respiratory problems, as well as that of the endocrine system, for example thyroid gland function. On the other hand, HQ, a skin-whitening product, inhibits the enzymatic pathway by which tyrosinase produces the pigment melanin from dopamine [5, 6]. Owing to its extraordinary toxicity at high concentration, HQ causes nausea, edema of internal organs, headache, dizziness, and even kidney damage in humans [3, 7].
In order to discriminate between the two mentioned dihydroxybenzene isomers, RC and HQ, which have similar properties and structures, numerous analytical procedures have been employed, including chromatography [8], fluorescence [9], spectrophotometry [10], fluorometry [3], chemiluminescence [4], and electrochemical methods [3, 6, 7].
Most of the mentioned instrumental methods are time-consuming and costly, require complicated sample preparation and expert operators, and are therefore not suitable for routine analysis. In contrast, electrochemical techniques have received extraordinary attention due to their low cost, rapid response, easy operation, low detection limits, and relatively short analysis times [11, 12]. Recently, a few modified electrochemical sensors have been developed for the simultaneous determination of RC and HQ in biological and pharmaceutical samples [3,4,5,6,7]. However, they suffered from narrow dynamic concentration ranges and undesirable detection limits.
Among modified electrodes, carbon paste electrodes (CPEs) have received extraordinary attention due to the advantages of easy preparation and renewability, generous surface chemistry, stable response, wide potential window and low ohmic resistance. In addition to all the benefits mentioned, the use of modifiers that effectively accelerate and facilitate electron transport between the analyte and the electrode has made modified carbon paste electrodes suitable candidates for the simultaneous measurement of analytes by reducing the overpotential required for the electrode reactions [13, 14].
To improve the electrochemical conductivity of the bare CPE, a room-temperature ionic liquid and synthesized nanoparticles, namely 1-ethyl-3-methylimidazolium tetrafluoroborate and ZnFe2O4, were used as modifiers to form a stable carbon paste composite in the current work. The unique physicochemical characteristics of the mentioned materials result in a better electrochemical response of the modified CPE, particularly for the quantitative determination of trace analytes [15, 16]. Ionic liquids (ILs), with remarkable chemical and thermal stability, acceptable electrochemical windows and desirable ionic conductivity, have received considerable attention for modifying CPEs. ILs provide benefits such as improving the electron transfer rate, sensitivity, and conductivity of modified CPEs compared to bare CPEs [17]. On the other hand, to provide a larger active surface area with the desired catalytic activity for facilitating electron transport between the analyte and the modified CPE surface, metal nanoparticles have been extensively applied in the fabrication of electrochemical sensors [18, 19]. Moreover, metal nanoparticles, as efficient catalysts, enhance the electrochemical reactions of electrochemical sensors and biosensors [8, 20,21,22].
Therefore, great efforts have been made herein to develop a highly selective and sensitive sensor for the quantitative determination of trace amounts of RC in commercial pharmaceutical formulations available on the market using the proposed modified CPE. To the best of our knowledge, a square wave voltammetric method for the simultaneous determination of RC and HQ is developed for the first time in the current work. The proposed sensor, as a promising and low-cost method, was successfully applied to the determination of RC in commercial pharmaceutical formulations such as the resorcinol cream of 2% O/W emulsion available on the market.
Chemicals and reagents
Analytical grade resorcinol (RC), iron (III) chloride hexahydrate (FeCl3·6H2O), zinc (II) chloride (ZnCl2), sodium hydroxide (NaOH), sodium bicarbonate (NaHCO3), calcium sulfate (CaSO4), magnesium nitrate hexahydrate (Mg(NO3)2·6H2O), potassium carbonate (K2CO3), 1-ethyl-3-methylimidazolium tetrafluoroborate (IL), extra pure fine graphite powder, and extra pure paraffin were obtained from Sigma-Aldrich. Glucose, ascorbic acid, phenylalanine, methionine, alanine, valine, isoleucine, urea, and thiourea were obtained from Merck. Phosphate buffer solutions (PBS) with the desired pH values were prepared using 0.1 M H3PO4 and 0.1 M NaOH solutions.
The electrochemical cell consisted of a conventional three-electrode system with ZnFe2O4/NPs/IL/CPE, a platinum wire, and Ag/AgCl (3 M KCl) as the working, counter, and reference electrodes, respectively. Electrochemical investigations were carried out with an Autolab PGSTAT204-Metrohm potentiostat/galvanostat programmed and controlled by NOVA 1.11 software and equipped with an FRA module for electrochemical impedance spectroscopy studies.
All experiments were carried out at room temperature. The pH adjustment was performed with a Metrohm pH meter model 827 pH lab (Metrohm AG, Switzerland). To evaluate the morphology of the synthesized ZnFe2O4 nanoparticles, field emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), and UV-Vis spectroscopy analyses were carried out using a TESCAN MIRA3 XMU FE-SEM, a Panalytical X'Pert Pro MPD X-ray diffraction system, and an Optizen 3220 UV spectrophotometer, respectively.
Nanoparticle synthesis procedure
Aqueous solutions of 0.4 M iron chloride (FeCl3·6H2O) and 0.2 M zinc chloride (ZnCl2) were prepared in distilled water. Volumes of 25 mL of each solution were mixed in an Erlenmeyer flask under stirring at 300 rpm. Afterward, 25 mL of 3.0 M NaOH solution as the precipitating agent was added dropwise to the above solution from a burette under the same stirring conditions, and the gradual formation of a precipitate was observed. The colloidal suspension obtained by this chemical coprecipitation method was filtered, and the pH of the precipitate was adjusted to 7.0 by washing with distilled water. Moreover, to improve the nucleation and growth of the nanoparticles, microwave heating and a reflux process were applied as follows. A 25 mL aqueous suspension of the collected precipitate was placed in a microwave for 30 min at 600 W. The pH of the suspension was adjusted to 7.0 by adding the required amount of 0.5 M NaOH solution. The obtained precipitate was dried at room temperature overnight. Subsequently, the precipitate was refluxed in an H2O:EtOH (1:2) binary solvent for 45 min. Finally, the resulting precipitate was filtered, its pH was adjusted to 7.0 with distilled water, and it was dried at room temperature for 24 h.
Electrode modification procedure
To prepare the ZnFe2O4/NPs/IL/CPE, an optimized proportion of 0.1 g of synthesized ZnFe2O4 nanoparticles and 0.9 g of graphite powder were ground together in a mortar. To ensure the uniformity of the resulting mixture, ethyl ether, a highly volatile and inert solvent, was added to the mixture. The mixing process continued until the solvent evaporated completely. Then, an optimized proportion of 0.2 g of 1-ethyl-3-methylimidazolium tetrafluoroborate as the ionic liquid and 0.8 g of paraffin was added to the mixture dropwise, and after each drop the mixture was ground in the mortar to obtain a uniform paste. An appropriate portion of the prepared paste was packed into a glass tube and connected to the electrochemical workstation by a copper wire. To achieve a perfectly flat and uniform working electrode surface, the paste was pushed out with the wire and the end of the glass tube was polished on a glossy sheet of paper.
Characterization of the synthesized ZnFe2O4 nanoparticle
To confirm the successful synthesis of the ZnFe2O4 nanoparticles, FE-SEM, XRD, and UV-Vis techniques were employed.
High-resolution field emission scanning electron microscopy (FE-SEM) operating at a 15 kV accelerating voltage was applied to investigate the surface details and morphology of the synthesized ZnFe2O4 nanoparticles. As demonstrated in Fig. 1A, a three-dimensional nanostructure with a high surface area was obtained. According to the micrograph, the ZnFe2O4 nanoparticles exhibit a homogeneous morphology with a spherical structure and aggregate to some extent. It can be concluded that the enhancement in the peak current of the modified carbon paste electrode is attributable to the increase in the active surface area of the working electrode resulting from the use of the ZnFe2O4 nanoparticles.
The XRD analysis was performed from 2.0° to 80.0° (2θ) and the diffraction data were analyzed using the PDF2 database. As seen from the XRD pattern of the ZnFe2O4 nanoparticles presented in Fig. 1C, the diffraction peaks at 2θ of 30.06°, 35.45°, 43.03°, 53.54°, 57.16°, 62.72°, and 73.99°, with calculated d-spacings of 0.297 nm, 0.253 nm, 0.210 nm, 0.171 nm, 0.161 nm, 0.148 nm, and 0.128 nm, can be assigned to the (220), (311), (400), (422), (511), (440), and (533) reflection planes of the regular spinel cubic structure of ZnFe2O4 with space group Fd3m (JCPDS No. 77-0011), respectively. To estimate the size of the synthesized nanoparticles, the Scherrer equation was employed. An average size of 15 nm was obtained for the ZnFe2O4 nanoparticles using the peak corresponding to the (311) reflection plane.
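As a minimal illustration of the Scherrer calculation, the crystallite size can be recovered in R from the (311) peak position; the peak width (FWHM) below is a hypothetical value chosen only to reproduce a size of about 15 nm, and Cu Kα radiation with K = 0.9 is assumed, since neither value is stated in the text.

```r
# Scherrer estimate D = K*lambda/(beta*cos(theta)) for the (311) reflection.
# The FWHM is hypothetical; Cu K-alpha radiation and K = 0.9 are assumed.
K_shape   <- 0.9       # Scherrer shape factor (assumed)
lambda_nm <- 0.15406   # X-ray wavelength in nm (Cu K-alpha, assumed)
two_theta <- 35.45     # (311) peak position in degrees (from the XRD pattern)
fwhm_deg  <- 0.56      # hypothetical peak broadening (FWHM) in degrees

beta_rad  <- fwhm_deg * pi / 180        # FWHM in radians
theta_rad <- (two_theta / 2) * pi / 180 # Bragg angle in radians
K_shape * lambda_nm / (beta_rad * cos(theta_rad))  # crystallite size, ~15 nm
```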
On the other hand, UV-Vis spectroscopy was applied to evaluate the particle size of the synthesized nanoparticles. The absorbance was recorded over the wavelength range of 250 to 600 nm with a 5 nm step size. As seen from the absorption spectrum in Fig. 1B, the maximum absorption peak was observed at 350 nm. The average particle size of the synthesized ZnFe2O4 nanoparticles was calculated using the following equation [23]:
$$\text{Particle size (nm)}=\left[\frac{-0.2963+\left(-40.1970+ \dfrac{13620}{\lambda_{p}}\right)}{-7.34+ \dfrac{2418.6}{\lambda_{p}}}\right]^{2}$$
The calculated particle size was found to be 13.5 nm, which is in excellent agreement with the sizes obtained from the FE-SEM and XRD analyses.
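The quoted empirical relation can be checked numerically; a short R sketch evaluating it at the observed absorption maximum of 350 nm reproduces the reported 13.5 nm.

```r
# Empirical particle-size relation evaluated at the absorption maximum;
# the coefficients are those quoted in the text.
particle_size_nm <- function(lambda_p) {
  ((-0.2963 + (-40.1970 + 13620 / lambda_p)) /
     (-7.34 + 2418.6 / lambda_p))^2
}
particle_size_nm(350)   # ~13.5 nm, matching the reported value
```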
Electrochemical behavior of RC at different pH values
According to the Nernst equation, the pH of the electrolyte solution and the availability of protons play an important role in the oxidation of electro-active species. Therefore, the effect of the solution pH on the electrochemical oxidation of RC was investigated. The pH of the electrolyte solution was varied from 4 to 9 using 0.1 M PBS and the oxidation peaks of RC (500 µM) were recorded with the ZnFe2O4/NPs/IL/CPE. The results revealed that the oxidation peak potential shifted to less positive values with increasing solution pH, indicating that protons are involved in the electrocatalytic oxidation of RC (see Fig. 2A). The anodic peak current increased as the solution pH rose from 4 to 6, whereas a further increase in pH from 6 to 9 led to a decrease in the anodic peak current [24]. Accordingly, pH 6, which gave the maximum oxidation current, was selected as the optimum buffer solution and applied throughout the current work.
By plotting the peak potential (Ep.a. in V) versus the solution pH, a straight line with the linear regression equation Ep (V) = −0.0585 pH + 1.1213 (R2 = 0.9863) was obtained (see Fig. 2B). Comparing the obtained slope of 0.0585 with the Nernstian slope of 0.0591 m/n, where m and n denote the numbers of protons and electrons participating in the electrochemical reaction, it can be concluded that equal numbers of protons and electrons are involved in the oxidation of RC. In accordance with the evidence presented above, Scheme 1 is suggested as the oxidation mechanism of RC.
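A one-line check of the proton-to-electron ratio implied by the pH slope is sketched below; the Nernstian reference slope of 0.0591 V per pH unit at 25 °C is assumed.

```r
# Ratio of protons (m) to electrons (n) from the pH dependence of Ep
observed_slope <- 0.0585   # V per pH unit, from Ep = -0.0585*pH + 1.1213
nernst_slope   <- 0.0591   # V per pH unit when m/n = 1 (25 C)
observed_slope / nernst_slope   # ~0.99, i.e. m = n
```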
Improvement of modified CPE electrochemical performance
To investigate the electrocatalytic effect of the modification process on the performance of the carbon paste electrode, the bare electrode was modified in several steps [25]. All investigations were conducted at the optimum solution pH of 6 in 0.1 M PBS at a scan rate of 100 mV s−1 in the presence of 500 µM RC using cyclic voltammetry. With the bare CPE, the RC oxidation peak potential was found to be around 780 mV with a peak current of 22.3 µA. However, improving the catalytic ability of the CPE by replacing a fraction of the graphite powder and paraffin with nanoparticles and ionic liquid, respectively, produced a substantial increase in the surface conductivity of the electrode, which resulted in an enhanced oxidation current and a shift of the oxidation potential to less positive values. As seen from Fig. 3A, the overvoltage of the RC oxidation process decreased at the surfaces of CPE, IL/CPE, ZnFe2O4/NPs/CPE, and ZnFe2O4/NPs/IL/CPE (curves a–d, respectively). As a result, the RC oxidation peak recorded at the surface of ZnFe2O4/NPs/CPE exhibited a significant oxidation current of 35.5 µA at around 765 mV.
Furthermore, the surface current densities of the mentioned electrodes were calculated from the corresponding oxidation peak currents and are shown in Fig. 3B. The results revealed that the applied modifiers enhanced the active surface area of the proposed electrodes, which is in accordance with the electrochemical impedance spectroscopy investigations [26].
Electrochemical impedance characterization
Electrochemical impedance spectroscopy (EIS) is a powerful diagnostic tool for characterizing the structure of the electrolyte solution/electrode interface and the nature of the electrode surface. Herein, EIS was used to describe the changes in the performance of the carbon paste electrode during its modification with ZnFe2O4/NPs and 1-ethyl-3-methylimidazolium tetrafluoroborate as the binder.
The EIS investigations were conducted over the frequency range of 10−2 to 105 Hz. As seen from Fig. 4A, the Nyquist plots obtained for CPE, ZnFe2O4/NPs/CPE, IL/CPE and ZnFe2O4/NPs/IL/CPE show that, upon modification of the CPE, the diameter of the semicircle at higher frequencies, which corresponds to the charge-transfer-limited process, decreased. This indicates that the electron transfer resistance at the surface of the proposed electrodes gradually diminished; accordingly, the highest charge transfer rate was observed at the surface of ZnFe2O4/NPs/IL/CPE.
The linear part of the Nyquist plot at lower frequencies represents the diffusion-limited process. The charge transfer resistances of CPE, ZnFe2O4/NPs/CPE, IL/CPE and ZnFe2O4/NPs/IL/CPE were found to be 16.30 kΩ, 11.30 kΩ, 9.06 kΩ, and 5.25 kΩ, respectively.
The equivalent circuit in Fig. 4B was obtained by modeling the impedance data of the Nyquist plots in terms of an electrical circuit. The proposed equivalent circuit consists of Rs, the electrolyte solution resistance, in series with a parallel combination of the Faradaic impedance Zf and the double layer capacitance Cdl. Zf is composed of two elements, the charge transfer resistance Rct and the Warburg impedance Zw.
Characterization of scan rate effect
To investigate the nature of the RC oxidation and its kinetic parameters at ZnFe2O4/NPs/IL/CPE, the relationship between the potential scan rate and the peak current was studied over the range of 5–900 mV/s in the presence of 500 µM RC using cyclic voltammetry. As seen from the voltammograms in Fig. 5A, increasing the potential scan rate resulted in a gradual increase in the oxidation peak current. At the same time, the oxidation peak potential shifted towards more positive values, indicating that the electro-oxidation of RC is irreversible.
As seen from Fig. 5B, the anodic peak current (Ip) was examined as a function of the potential scan rate (ν) and of the square root of the potential scan rate (ν1/2). A satisfactory linear relationship between Ip and ν1/2, with a correlation coefficient of 0.993, was observed, which confirmed that the RC oxidation at ZnFe2O4/NPs/IL/CPE is controlled by a diffusion mechanism [27]. The obtained correlation equation is expressed below:
$$I_{\mathrm p}\,(\mu\mathrm A) = 4.5689\,\nu^{1/2}\,(\mathrm{mV}^{1/2}\,\mathrm{s}^{-1/2}) - 8.4357 \quad (R^{2} = 0.9928)$$
Alternatively, by plotting log Ip versus log ν, the mass transport mechanism of the electrode process can be identified. Slope values around 0.5 indicate that the redox process is controlled by diffusion, whereas slope values around 1.0 indicate that the redox process is governed by adsorption. The result obtained from Fig. 5C was in accordance with the mechanism suggested in the previous section.
To determine the electron transfer coefficient (α) of the irreversible oxidation of RC, the relationship between the oxidation peak potential (Ep, V) and the Naperian logarithm of the potential scan rate (ln ν, V s−1) was investigated. The plot in Fig. 6A revealed an adequate linear relationship with the regression equation expressed as follows:
$$E_{\mathrm p}\,(\mathrm V) = 0.0275\,\ln\nu\,(\mathrm{V\,s}^{-1}) + 0.8518 \quad (R^{2} = 0.9902)$$
According to the equation proposed by Nicholson and Shain, given below, which corresponds to the plot of Ep (V) vs. ln ν (V s−1), an electron transfer coefficient (α) of 0.766 was calculated from the slope of the obtained plot, which equals m/2, where m is equal to RT/[(1–α)nαF].
$$E_{\mathrm{p.a.}}=E^{0}+m\left[0.78+\ln\left(D^{1/2}k_{\mathrm s}^{-1}\right)-0.5\ln m\right]+\left(\frac{m}{2}\right)\ln\nu$$
where Ep.a., E0, ν, and ks denote the oxidation peak potential, the formal potential, the potential scan rate, and the electron transfer rate constant, respectively. The number of electrons involved in the electro-oxidation process (n) is assumed to be equal to 2. Furthermore, R, T, and F are equal to 8.314 J mol−1 K−1, 298 K, and 96,485 C mol−1, respectively.
Additionally, the Tafel plot was constructed from the data derived from the rising part of the RC oxidation curve (current vs. potential) [28]. As seen from Fig. 6B, a linear relationship between the peak potential (Ep.a.) and the logarithm of the peak current (log I), with a satisfactory correlation coefficient of 0.999, was observed. The respective equation is expressed below:
$$E_{\mathrm p}\,(\mathrm V) = 0.136\,\log I\,(\mu\mathrm A) + 0.5051 \quad (R^{2} = 0.9998)$$
Alternatively, the electron transfer coefficient (α) can be calculated from the slope of the Tafel plot, which is equal to 2.303RT/[(1–α)nαF]. The electron transfer coefficient was found to be 0.783, which is in accordance with the value of α obtained from the plot of Ep (V) vs. ln ν (V s−1).
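Both reported values of α can be reproduced from the quoted slopes; a short R sketch is given below, taking n = 2 and T = 298 K from the text.

```r
# Electron transfer coefficient from the two reported slopes (n = 2, T = 298 K)
R_gas   <- 8.314    # J mol^-1 K^-1
T_K     <- 298      # K
Faraday <- 96485    # C mol^-1
n_e     <- 2

# (i) Nicholson-Shain plot: slope = m/2, with m = RT/((1 - alpha)*n*F)
m_ns     <- 2 * 0.0275                               # from Ep = 0.0275*ln(v) + 0.8518
alpha_ns <- 1 - R_gas * T_K / (n_e * Faraday * m_ns) # ~0.766

# (ii) Tafel plot: slope = 2.303*RT/((1 - alpha)*n*F)
slope_tafel <- 0.136                                 # V per decade
alpha_tafel <- 1 - 2.303 * R_gas * T_K / (n_e * Faraday * slope_tafel)  # ~0.783

c(alpha_ns = alpha_ns, alpha_tafel = alpha_tafel)
```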
Chronoamperometric investigation
Chronoamperometry was employed to assess the diffusion coefficient (D) of RC at the surface of ZnFe2O4/NPs/IL/CPE. The working electrode potential was set at 1000 mV vs. the reference electrode. As seen in Fig. 7A, chronoamperograms were recorded for three concentrations of resorcinol, 300, 500, and 700 µM, in 0.1 M phosphate buffer solution (pH 6.0).
Using the data derived from the mass-transport-limited part of the chronoamperograms, the Cottrell plots were obtained by plotting the current (I) versus the inverse square root of time (t−1/2). As demonstrated in Fig. 7B, the oxidation current has a linear relationship with t−1/2 at all three concentrations, which confirmed that mass transport at the working electrode surface is controlled by diffusion from the bulk solution toward the ZnFe2O4/NPs/IL/CPE surface [29]. By substituting the slopes of the Cottrell plots and the other parameters into the Cottrell equation, the average value of the diffusion coefficient was found to be 2.83 × 10−4 cm2 s−1.
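The link between the Cottrell slope and D can be illustrated as follows; the electrode area used here is hypothetical (it is not reported in the text), so the numbers only demonstrate the calculation, not the actual experimental slopes.

```r
# Cottrell relation I(t) = n*F*A*C*sqrt(D/(pi*t)); the slope of I vs t^(-1/2)
# is n*F*A*C*sqrt(D/pi).  A is hypothetical; n, C and D are taken from the text.
n_e     <- 2
Faraday <- 96485            # C mol^-1
A_cm2   <- 0.09             # hypothetical electrode area, cm^2
C_bulk  <- 500e-6 / 1000    # 500 uM expressed in mol cm^-3
D_rep   <- 2.83e-4          # reported diffusion coefficient, cm^2 s^-1

slope <- n_e * Faraday * A_cm2 * C_bulk * sqrt(D_rep / pi)   # A s^(1/2)

# Recovering D from a Cottrell slope, as done with the experimental plots
pi * (slope / (n_e * Faraday * A_cm2 * C_bulk))^2            # returns D_rep
```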
Analytical performance characterization
The performance of the fabricated ZnFe2O4/NPs/IL/CPE was investigated with regard to several operating parameters, including the linearity of the sensor response over a wide concentration range of RC, the limits of detection (LOD) and quantification (LOQ), repeatability and reproducibility, lifetime, and the effect of interfering species.
Square wave voltammetry (SWV), which has a lower background current compared to cyclic voltammetry, was adopted for the determination of RC over a wide concentration range in 0.1 M phosphate buffer solution (pH 6.0). As seen from Fig. 8, a linear relationship between the oxidation peak current (Ip) and the concentration of RC over the range from 3.0 µM to 500 µM, with a satisfactory correlation coefficient of 0.997, was observed.
The observed deviation from the linear response at higher concentrations is probably attributable to the diffusion of RC or to the accumulation of undesired oxidation products on the surface of the proposed electrode. The respective linear regression equation is expressed below:
Ip (µA) = 0.0276 CRC (µM) + 0.5508 (R2 = 0.9973) (6).
The limit of detection (LOD) of the proposed electrode was calculated according to the definition 3Sb/m, where Sb is the standard deviation of the peak current derived from 10 measurements of the blank solution (Sb = 1.348 × 10−8) and m is the slope of the linear calibration plot (m = 0.0276). The detection limit of the proposed sensor was found to be 1.46 µM RC. Furthermore, the limit of quantification (LOQ), defined as 10Sb/m, was found to be 4.88 µM RC by SWV employing ZnFe2O4/NPs/IL/CPE.
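The reported LOD and LOQ follow directly from these two numbers; a minimal R check (assuming Sb is given in amperes and the calibration slope in µA µM−1) is shown below.

```r
# LOD = 3*Sb/m and LOQ = 10*Sb/m from the values quoted in the text
Sb_uA <- 1.348e-8 * 1e6   # blank standard deviation converted from A to uA
m     <- 0.0276           # calibration slope, uA per uM
c(LOD = 3 * Sb_uA / m, LOQ = 10 * Sb_uA / m)   # ~1.46 uM and ~4.88 uM
```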
To evaluate the accuracy and precision of the fabricated ZnFe2O4/NPs/IL/CPE, the repeatability and reproducibility of the proposed electrode were assessed. The repeatability was investigated with five successive scans in one day and with five scans over five days using the same electrode; the examined solutions contained 5.0 µM RC. The results revealed satisfactory repeatability, with relative standard deviations of ± 1.33 and ± 2.70, respectively. For the reproducibility studies, five different electrodes were each used only once, and the results indicated good reproducibility of the proposed sensor with a relative standard deviation of ± 3.41.
Moreover, to evaluate the stability of the response, the ZnFe2O4/NPs/IL/CPE was immersed in aqueous solution and applied to the quantitative determination of known concentrations of RC in various samples. The results indicated that the electrode gave a stable response within 180 min, after which the background current increased. This behavior is possibly related to leakage of 1-ethyl-3-methylimidazolium tetrafluoroborate from the modified carbon paste, which increased the roughness of the fabricated electrode. Overall, the ZnFe2O4/NPs/IL/CPE showed acceptable repeatability and reproducibility together with satisfactory stability.
Resorcinol and hydroquinone simultaneous electrochemical determination
In order to provide a voltammetric approach for the simultaneous determination of RC and HQ in pharmaceutical products, the SWV technique, with its high sensitivity and ability to separate oxidation peaks, was employed in the current work. The SWV plots were recorded while simultaneously changing the RC and HQ concentrations over a wide range in 0.1 M phosphate buffer solution (pH 6.0). As seen in Fig. 9, two separate and intense oxidation peaks at 290 mV and 650 mV, related to HQ and RC, respectively, were obtained. Satisfactory linear relationships between the oxidation peak current (Ip) and the concentrations of HQ and RC over the ranges of 50–700 µM and 5–350 µM, with favorable correlation coefficients of 0.991 and 0.995, respectively, were observed.
It is apparent that a desirable peak separation, with a potential difference of about 360 mV (vs. the Ag/AgCl reference electrode), was obtained for HQ and RC at the surface of ZnFe2O4/NPs/IL/CPE. The results revealed that the proposed electrode can be applied to the concurrent determination of RC and HQ in the presence of each other without significant deviation in the electrochemical response.
Determination of RC in the presence of coexisting interfering species
Further studies were carried out to evaluate the ability of the proposed sensor to discriminate between the target analyte RC and potential interfering species present in real samples such as pharmaceutical formulations and biological fluids. Herein, the effect of interfering species, including various kinds of electrolytes, amino acids, and sugars, on the performance of ZnFe2O4/NPs/IL/CPE was investigated in the presence of 50 µM RC under the optimized experimental conditions (see Table 1). It is noteworthy that the tolerance limit of the proposed electrode was defined as the maximum amount of an interfering compound that resulted in a peak current deviation of more than 5.0% in the determination of RC compared to the square wave voltammograms of the RC solution alone.
Table 1 The effect of some coexisting substances on the determination of 50 µM resorcinol (n = 3)
The results revealed that the oxidation peak current of RC deviated from the tolerance limit only in the presence of more than a 500-fold excess of common electrolytes, including Ca2+, Mg2+, Na+, K+, SO42−, CO32−, NO3−, and HCO3−. The presence of glucose and ascorbic acid at 500-fold and 200-fold excess concentrations, respectively, did not interfere with RC detection. The oxidation peak current of RC was also evaluated in the presence of a 700-fold excess of common amino acids used in the biosynthesis of proteins, including phenylalanine, methionine, alanine, valine, and isoleucine; these concentrations did not interfere with the quantitative determination of RC.
Lastly, the effect of some substances that are excreted from the body, such as urea, on the performance of the proposed electrode was evaluated. The results showed that the ZnFe2O4/NPs/IL/CPE response deviated from the tolerance limit only in the presence of more than a 400-fold excess of urea and thiourea.
Analysis of the pharmaceutical sample
In order to evaluate the performance of the fabricated ZnFe2O4/NPs/IL/CPE for the precise determination of RC in real samples, a resorcinol cream of 2% O/W emulsion available on the market was studied. To prepare the real sample of resorcinol cream for analysis, RC, which is a weakly acidic compound, first had to be extracted from the cream by the following method.
One gram of the cream was carefully weighed and completely dispersed in 9 mL of distilled water. To extract the RC from the resulting milky diluted base, 2 g of NaCl was added for the salting-out process. Subsequently, the pH of the obtained emulsion was adjusted to about 12.5, i.e. 3 units above the pKa of RC, by adding 0.1 M NaOH. Accordingly, RC was converted into its ionized, hydrophilic form, which could easily be extracted from the lipophilic components of the emulsion and transferred into the aqueous phase. The mixture was centrifuged for 20 min at 4000 rpm. A 100 µL aliquot of the supernatant phase was transferred into the electrochemical cell and diluted to 10 mL with phosphate buffer solution (0.1 M, pH 6.0). As seen from Table 2, the commercial resorcinol cream contains 1.9% (w/w) resorcinol in the O/W emulsion.
Table 2 Determination of RC in resorcinol cream of 2% O/W emulsion available on the market by the proposed sensor
Analytical performance comparison of the proposed sensor with previous works
The analytical performance of ZnFe2O4/NPs/IL/CPE for the simultaneous determination of RC and HQ was compared with that of other reported sensors. As seen from Table 3, the detection limit of the proposed electrode was better than those of some reported graphene-based sensors [3, 4], while the carbon paste electrode used in the current work is much cheaper than the mentioned electrodes. On the other hand, the present sensor exhibited a wider dynamic linear range than most of the summarized sensors [3, 5, 6]. It can be concluded that the present electrode is comparable or superior to the other reported sensors for the simultaneous determination of RC and HQ.
Table 3 Comparison of the analytical performance of the proposed sensor for the simultaneous determination of RC and HQ with other electrochemical sensors found in the literature
The excellent electrocatalytic oxidation of RC and HQ at the surface of ZnFe2O4/NPs/IL/CPE, which was not susceptible to common interferences, provided a promising new approach for the simultaneous determination of trace amounts of RC and HQ in pharmaceutical samples using the square wave voltammetry technique. The SW voltammograms revealed two well-defined oxidation peaks with a desirable peak separation and a satisfactory linear response over wide concentration ranges of RC and HQ. The developed modified carbon paste electrode showed a considerable improvement in the kinetics of electron transfer, with excellent and reproducible analytical performance, indicating that the proposed sensor can be applied successfully to routine analysis.
A FE-SEM image of ZnFe2O4 nanoparticles. B UV-Vis absorption spectra of ZnFe2O4 nanoparticles. C Representative XRD pattern of ZnFe2O4 nanoparticles
A Cyclic voltammetric curves of 500 µM RC on the surface of ZnFe2O4/NPs/IL/CPE at different pH values of phosphate buffer solution (PBS): 4(a), 5(b), 6(c), 7(d), 8(e), and 9(f). B Peak potential dependence on solution pH for RC oxidation on the surface of ZnFe2O4/NPs/IL/CPE
A Cyclic voltammetric curves of 500 µM RC in PBS (0.1 M) pH 6 on the surface of (a) ZnFe2O4/NPs/IL/CPE, (b) IL/CPE, (c) ZnFe2O4 /NPs /CPE and (d) CPE at scan rate of 100 mV. s− 1. B The current densities derived from cyclic voltammetric curves at the same electrodes
A Nyquist diagrams of (a) ZnFe2O4/NPs/IL/CPE, (b) IL/CPE, (c) ZnFe2O4 /NPs /CPE and (d) CPE. Conditions: 500 µM RC in PBS (0.1 M) pH 6, over the frequency range of 0.1 to 100,000 Hz. B Corresponding equivalent circuits
A Cyclic voltammetric curves of the ZnFe2O4/NPs/IL/CPE at different potential scan rates of 5, 15, 25, 50, 80, 100, 150, 250, 300, 400, 600, 800 and 900 mV s− 1 in PBS (0.1 M) pH 6 containing 500 µM RC. B Peak current dependence on the square root of scan rate for RC oxidation on the surface of ZnFe2O4/NPs/IL/CPE. C Relationship between the logarithm of peak potential and logarithm of the potential scan rate
A Nicholson and Shain's plot of oxidation peak potential vs. the Naperian logarithm of different potential scan rates of 5, 15, 25, 50, 80, 100, 150, 250, 300, 400, 600, 800 and 900 mV s− 1 in PBS (0.1 M) pH 6 containing 500 µM RC. B Tafel's plot of oxidation peak potential vs. the logarithm of the peak current for the electro-oxidation of 500 µM RC on the surface of ZnFe2O4/NPs/IL/CPE at the scan rate of 25 mV s− 1 in PBS (0.1 M) pH 6 containing 500 µM RC.
A Single potential-step chronoamperometric curves of 300 (a), 500 (b) and 700 (c) µM RC in PBS (0.1 M) pH 6. B Cottrell's plot of oxidation current vs. the inverse square root of time for 300 (a), 500 (b) and 700 (c) µM RC in PBS (0.1 M) pH 6
A Square wave voltammetric curves for successive additions of RC into PBS (0.1 M) pH 6 including RC concentrations of 3, 5, 10, 20, 30, 50, 100, 150, 200, 250, 300, 350, 400, and 500 µM on the surface of ZnFe2O4/NPs/IL/CPE. B Typical calibration curve corresponding to RC additions up to 500 µM
A Square wave voltammetric curves for simultaneous additions of HQ and RC into PBS (0.1 M) pH 6; from inner to outer including HQ and RC concentrations of 50.0 + 5.0, 100.0 + 50.0, 200.0 + 100.0, 300.0 + 150.0, 400.0 + 200.0, 500.0 + 250.0, and 700.0 + 300.0 µM, respectively. B and C Typical calibration curves corresponding to HQ and RC additions up to 700 and 300 µM, respectively
Scheme 1
Mechanism of RC electro-oxidation on the surface of ZnFe2O4/NPs/IL/CPE
Adequate and clear descriptions of the applied materials and tools are provided in the materials and methods section of the manuscript. In addition, the obtained data are clearly presented in the figures and tables of the manuscript.
Hudari FF, et al. Voltammetric sensor for simultaneous determination of p-phenylenediamine and resorcinol in permanent hair dyeing and tap water by composite carbon nanotubes/chitosan modified electrode. Microchem J. 2014;116:261–8.
Gomez MaR, et al. Simultaneous determination of chloramphenicol, salicylic acid and resorcinol by capillary zone electrophoresis and its application to pharmaceutical dosage forms. Talanta. 2003;61(2):233–8.
Yang C, et al. Gold nanoparticle–graphene nanohybrid bridged 3-amino-5-mercapto-1, 2, 4-triazole-functionalized multiwall carbon nanotubes for the simultaneous determination of hydroquinone, catechol, resorcinol and nitrite. Anal Methods. 2013;5(3):666–72.
Zhang H, Bo X, Guo L. Electrochemical preparation of porous graphene and its electrochemical application in the simultaneous determination of hydroquinone, catechol, and resorcinol. Sens Actuators B. 2015;220:919–26.
Liu W, et al. Simultaneous electrochemical determination of hydroquinone, catechol and resorcinol at nitrogen doped porous carbon nanopolyhedrons-multiwall carbon nanotubes hybrid materials modified glassy carbon electrode. Bull Korean Chem Soc. 2014;35(1):204–10.
Wei C, et al. Simultaneous electrochemical determination of hydroquinone, catechol and resorcinol at Nafion/multi-walled carbon nanotubes/carbon dots/multi-walled carbon nanotubes modified glassy carbon electrode. Electrochim Acta. 2014;149:237–44.
Zhang D, et al. Application of multielectrode array modified with carbon nanotubes to simultaneous amperometric determination of dihydroxybenzene isomers. Sens Actuators B. 2009;136(1):113–21.
Ding Y-P, et al. Direct simultaneous determination of dihydroxybenzene isomers at C-nanotube-modified electrodes by derivative voltammetry. J Electroanal Chem. 2005;575(2):275–80.
Pistonesi MF, et al. Determination of phenol, resorcinol and hydroquinone in air samples by synchronous fluorescence using partial least-squares (PLS). Talanta. 2006;69(5):1265–8.
Prathap MA, Satpati B, Srivastava R. Facile preparation of polyaniline/MnO2 nanofibers and its electrochemical application in the simultaneous determination of catechol, hydroquinone, and resorcinol. Sens Actuators B. 2013;186:67–77.
Suea-Ngam A, et al. Electrochemical droplet-based microfluidics using chip-based carbon paste electrodes for high-throughput analysis in pharmaceutical applications. Anal Chim Acta. 2015;883:45–54.
Charoenkitamorn K, et al. Low-cost and disposable sensors for the simultaneous determination of coenzyme Q10 and α-lipoic acid using manganese (IV) oxide-modified screen-printed graphene electrodes. Anal Chim Acta. 2018;1004:22–31.
Bagheri H, et al. Determination of tramadol in pharmaceutical products and biological samples using a new nanocomposite carbon paste sensor based on decorated nanographene/tramadol-imprinted polymer nanoparticles/ionic liquid. Ionics. 2018;24(3):833–43.
Zeinali H, et al. Nanomolar simultaneous determination of tryptophan and melatonin by a new ionic liquid carbon paste electrode modified with SnO2-Co3O4@rGO nanocomposite. Mater Sci Eng C. 2017;71:386–94.
Ma L, Zhao G-C. Simultaneous determination of hydroquinone, catechol and resorcinol at graphene doped carbon ionic liquid electrode. Int J Electrochem. 2012;2012:1–9. https://doi.org/10.1155/2012/243031
Yin H, et al. Electrochemical behavior of catechol, resorcinol and hydroquinone at graphene–chitosan composite film modified glassy carbon electrode and their simultaneous determination in water samples. Electrochim Acta. 2011;56(6):2748–53.
Gupta VK, et al. Removal of the hazardous dye—tartrazine by photodegradation on titanium dioxide surface. Mater Sci Eng C. 2011;31(5):1062–7.
Gupta VK, et al. Chromium removal from water by activated carbon developed from waste rubber tires. Environ Sci Pollut Res. 2013;20(3):1261–8.
Yukird J, et al. ZnO@graphene nanocomposite modified electrode for sensitive and simultaneous detection of cd (II) and pb (II). Synth Met. 2018;245:251–9.
Daneshgar P, et al. Ultrasensitive flow-injection electrochemical method for detection of anticancer drug tamoxifen. Talanta. 2009;77(3):1075–80.
Zaheiritousi N, et al. Fabrication of a new modified Tm3+ - Carbon paste sensor using multi-walled carbon nanotubes (MWCNTs) and nanosilica based on 4-Hydroxy salophen. Int J Electrochem Sci. 2017;12(4):2647–57.
Boobphahom S, et al. TiO2 sol/graphene modified 3D porous Ni foam: a novel platform for enzymatic electrochemical biosensor. J Electroanal Chem. 2019;833:133–42.
Gupta VK, et al. Application of response surface methodology to optimize the adsorption performance of a magnetic graphene oxide nanocomposite adsorbent for removal of methadone from the environment. J Colloid Interface Sci. 2017;497:193–200.
Prabaharan M, Mano J. Chitosan-based particles as controlled drug delivery systems. Drug Delivery. 2004;12(1):41–57.
Pardakhty A, et al. Highly sensitive and efficient voltammetric determination of ascorbic acid in food and pharmaceutical samples from aqueous solutions based on nanostructure carbon paste electrode as a sensor. J Mol Liq. 2016;216:387–91.
Gupta VK, et al. A novel magnetic Fe@Au core–shell nanoparticles anchored graphene oxide recyclable nanocatalyst for the reduction of nitrophenol compounds. Water Res. 2014;48:210–7.
Sadegh H, et al. The role of nanomaterials as effective adsorbents and their applications in wastewater treatment. J Nanostructure Chem. 2017;7(1):1–14.
Fouladgar M, Ahmadzadeh S. Application of a nanostructured sensor based on NiO nanoparticles modified carbon paste electrode for determination of methyldopa in the presence of folic acid. Appl Surf Sci. 2016;379:150–5.
Saravanan R, et al. ZnO/Ag nanocomposite: an efficient catalyst for degradation studies of textile effluents under visible light. Mater Sci Eng C. 2013;33(4):2235–44.
Woods SW. Chlorpromazine equivalent doses for the newer atypical antipsychotics. J Clin Psychiatry. 2003;64(6):663–7.
The authors express their appreciation to Kerman University of Medical Sciences for supporting the current work (Grant No. 99001163).
Kerman University of Medical Sciences (Kerman, Iran) has provided financial support for the project (Grant No. 99001163).
Student Research Committee, Kerman University of Medical Sciences, Kerman, Iran
Ebrahim Nabatian
Department of Chemistry, Faculty of Sciences, Shahid Bahonar University of Kerman, Kerman, Iran
Ebrahim Nabatian & Mahdi Mousavi
Department of Medicinal Chemistry, Faculty of Pharmacy, Kerman University of Medical Sciences, Kerman, Iran
Mostafa Pournamdari
Department of Chemistry, Faculty of Chemistry and Chemical Engineering, Graduate University of Advanced Technology, Kerman, Iran
Mehdi Yoosefian
Pharmaceutics Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
Saeid Ahmadzadeh
Pharmaceutical Sciences and Cosmetic Products Research Center, Kerman University of Medical Sciences, Kerman, Iran
Mahdi Mousavi
SA conceived the original idea, supervised the project, and prepared the manuscript. EN carried out the experiments. MP, MY and MM contributed to the interpretation of the results and writing the manuscript. All authors discussed the results and contributed to the final manuscript. All authors read and approved the final manuscript.
Correspondence to Mehdi Yoosefian or Saeid Ahmadzadeh.
The current work was conducted in the autumn of 2021, after receiving approval from the ethics committee of Kerman University of Medical Sciences [IR.KMU.REC.1399.673].
Nabatian, E., Mousavi, M., Pournamdari, M. et al. Voltammetric approach for pharmaceutical samples analysis; simultaneous quantitative determination of resorcinol and hydroquinone. BMC Chemistry 16, 115 (2022). https://doi.org/10.1186/s13065-022-00905-y
Voltammetric analysis
Pharmaceutical samples
Modified carbon paste electrode | CommonCrawl |
Risk-adjusted CUSUM control charts for shared frailty survival models with application to hip replacement outcomes: a study using the NJR dataset
Alexander Begun ORCID: orcid.org/0000-0002-2886-21181 na1,
Elena Kulinskaya ORCID: orcid.org/0000-0002-9843-16631 &
Alexander J MacGregor ORCID: orcid.org/0000-0003-2163-23252
Continuous monitoring of surgical outcomes after joint replacement is needed to detect which brands' components have a higher than expected failure rate and should therefore no longer be recommended for use in surgical practice. We developed a monitoring method based on the cumulative sum (CUSUM) chart specifically for this application.
Our method entails the use of the competing risks model with the Weibull and the Gompertz hazard functions adjusted for observed covariates to approximate the baseline time-to-revision and time-to-death distributions, respectively. The correlated shared frailty terms for competing risks, corresponding to the operating unit, are also included in the model. A bootstrap-based boundary adjustment is then required for risk-adjusted CUSUM charts to guarantee a given probability of the false alarm rates. We propose a method to evaluate the CUSUM scores and the adjusted boundary for a survival model with the shared frailty terms. We also introduce a unit performance quality score based on the posterior frailty distribution. This method is illustrated using the 2003-2012 hip replacement data from the UK National Joint Registry (NJR).
We found that the best model included the shared frailty for revision but not for death. This means that the competing risks of revision and death are independent in NJR data. Our method was superior to the standard NJR methodology. For one of the two monitored components, it produced alarms four years before the increased failure rate came to the attention of the UK regulatory authorities. The hazard ratios of revision across the units varied from 0.38 to 2.28.
An earlier detection of failure signal by our method in comparison to the standard method used by the NJR may be explained by proper risk-adjustment and the ability to accommodate time-dependent hazards. The continuous monitoring of hip replacement outcomes should include risk adjustment at both the individual and unit level.
Continuous monitoring of healthcare, and increasingly of social care, across various providers is an important task of the healthcare regulator, such as the Care Quality Commission (CQC) in the UK. Additionally, a number of professional bodies and registers take on the same function for their clinical discipline. For instance, in regard to joint replacement, surgeon- and operating-unit-level outcomes are compiled by the National Joint Registry for England and Wales (NJR). Methods for the continuous monitoring of production quality were initially developed and employed in industrial quality control [1]. One of the most popular methods is the cumulative sum (CUSUM) chart, a graphical method based on sequential monitoring of cumulative performance over time. This method is based on sequential procedures and allows timely identification of a deterioration in performance. A number of CUSUM-based quality control systems have been implemented in various clinical disciplines, with the earliest application being in cardiothoracic surgery [2]. Currently they are used in surveillance of healthcare quality by the CQC [3] and by the Dr Foster unit at Imperial College [4]. In this paper we expand the CUSUM methodology and adapt it for monitoring the performance of hip prostheses using the NJR data.
A hip replacement is a surgical operation in which the damaged hip joint is replaced by a prosthesis. This operation is recommended to reduce pain and improve mobility of a patient after other therapies have failed. There are currently hundreds of types and brands of prosthesis components for use in hip replacement surgery, and new brands of implant continue to be introduced through technological innovation. An important aspect of an implant brand's performance is its expected time-to-revision. The current expectation is that all prostheses used as treatment for end stage arthritis should have a failure rate of less than 5% at 10 years. Because of the relatively long time-to-failure of hip prostheses, long-term premarketing clinical trials are unfeasible. Therefore, continuous monitoring methods are needed for early detection of poor performance and timely withdrawal of inferior components from clinical practice.
The first CUSUM-based methods for healthcare were based on binomial or Poisson distributions, monitoring failure rates within a fixed time interval, e.g. 30-day mortality [5] or one-year hip replacement failure rates [6]. CUSUM methods for survival data are a natural extension of the methods for binary data. Censoring, truncation, and adjustment for observed covariates and unobserved factors (frailties) can be easily included in survival models. By monitoring the individual-specific outcomes, the CUSUM score can be evaluated sequentially, changing at each individual failure. However, this method seems not to be appropriate in the case of hip replacement, where the expected time-to-revision is longer than 10 years. Hardoon et al. [7] proposed to compare the number of revisions within a certain time interval to that expected given a target revision rate and the total number of hip years in the interval. That is, patients contribute to a CUSUM score until revision or censoring (death or end of follow-up). They analysed the data from the Swedish Arthroplasty Register using the Weibull distribution to model time to revision of hip replacement.
However, time-to-revision of hip prostheses varies depending on the patient characteristics, and on the type of fixation used [8]. This necessitates the use of case mix adjusted monitoring methods. The first risk adjusted CUSUM methods for time-to-failure (survival) data were introduced by Biswas and Kalbfleisch [9]. This method was picked up by the Scottish Arthroplasty Project, where CUSUMs are used to monitor complication rates of joint replacements by surgeon and unit from 2010. This is achieved by likelihood-based scoring method with risk adjustment for age, sex, osteoarthritis (OA) and rheumatoid arthritis (RA) [10]. A Bayesian-based CUSUM method for Weibull survival time is described in Assareh et al. [11].
Although the event of interest in our study is a revision, a priori death should not be treated as noninformative censoring. We develop a general competing risk version of the survival model for NJR data, where death is a competing risk. To safeguard the properties of the CUSUM charts, the control limits for risk-adjusted CUSUMs need to be revised to accommodate the estimation error.
We propose and implement a parametric version of the approach by Gandy and Kvaløy [12], of using bootstrap to provide the control limits conditional on the estimated in-control distribution, resulting in less conservative, i.e. more powerful, procedures.
We are using the Weibull distribution for fitting the baseline revision-specific hazard function, because this distribution has a good fit to the empirical distribution of time-to-revision [7]. The Gompertz distribution is used for fitting the baseline mortality-specific hazard function. The observed covariates and the correlated frailty components at the unit level are included in the model, assuming that all patients from a unit share the same unobservable gamma distributed risks of prosthesis revision and of death after hip replacement surgery.
We develop a bootstrap-based boundary adjustment for the risk-adjusted CUSUM chart to guarantee a given conditional probability of the false alarm rates. We also propose a score characterizing the quality of the hip replacement surgery in a unit. This score is based on the estimate of the posterior conditional frailties for units given the observed data. Mathematical development of the CUSUM scores for a Weibull/Gompertz survival model with shared frailty is provided in the Appendix.
The developed methods are applied to the 2003-2012 hip replacement data from the NJR. We illustrate the use of risk-adjusted CUSUM methodology to monitor the performance of two specific hip prostheses brands: the DePuy ASR Resurfacing Cup and the Biomet M2A-38 cup, which were flagged as outliers by NJR [13].
An artificial hip includes three major components: a stem that is inserted into the femur, a head (a ball) attached to the top of the femur and a cup, also called the acetabular component, that is implanted into the pelvis. A hip resurfacing procedure is typically used in younger patients where it can delay the need for a total hip replacement, it replaces the socket with an artificial cup and resurfaces the head of the femur instead of removing it. In 2010, NJR recorded 123 brands of acetabular cups, 13 brands of resurfacing cups and 146 brands of femoral stems used in primary and revision procedures [14].
Given a vast variety of available types and brands of prosthesis components for use in the hip replacement surgery, monitoring implant quality is the main objective of the NJR implant scrutiny group that was established in 2009. According to the current NJR methodology [15], an implant is considered to be a Level 1 outlier when its Patient Time Incident Rate (PTIR) is twice the PTIR of the implant group, where the group rate is weighted by the relevant implant types. From 2009 to 2014, three hip stems, three hip acetabular components and 17 hip stem/cup combinations were reported as Level 1 outliers [13].
To test our analytical approach on real world data, our analysis will focus on two of these outlier components: (i) the DePuy ASR Resurfacing Cup (first identified as a part of an outlier head/cup combination in April 2010 and last implanted in July 2010) and (ii) the Biomet M2A-38 acetabular cup (first identified by the NJR as an outlier in 2014, and last implanted in June 2011).
A standard CUSUM chart usually has a learning period where the parameters of the relevant null distribution are estimated, and the deviation from the null of clinical concern is decided upon to calibrate the control limits. The chart is then run with these control limits. An example of this approach is by Hardoon et al. [7], 2007 who monitored a constant target revision rate in a time interval. However, the failure rates differ by implant types, the age of the patients, and other case mix characteristics. They also may vary by the site at which operations take place (the operating unit). Therefore we consider a risk-adjusted CUSUM where the target rates are estimated for the popular implants (top 80%), and experienced units (more than 1 surgery per week, on average), which requires an introduction of shared frailty terms, describing similarities within and heterogeneity between units, to our survival models, and an appropriate adjustment of the control limits.
Description of the NJR data
The NJR data were made available after a formal request to the NJR Research Committee. The dataset is related to the data cut used in the 10th NJR Annual Report [16]. The data were anonymised in respect to patient, to surgeon and to operating unit identifying details. Approval was obtained from Computing Subcommittee of the University of East Anglia Ethics Committee, reference number CMP/1718/F/10A. The NJR dataset provides the following four groups of variables used in the time-to-failure analysis of the hip replacements to risk-adjust the CUSUM boundaries.
Information on procedures, such as date of operation or revision, and side;
Institution and staff involved, such as unit and consultant IDs (anonymised), and surgeon grade;
Hip prosthesis characteristics, such as fixation type (cemented, uncemented, hybrid, resurfacing), its components (head, cup, stem, and liner brands), head size, bearing surfaces (metal, polyethylene, ceramic);
Patient characteristics, such as age, sex, ASA physical status classification [17] at 5 levels from healthy (1) to near death (5), Body Mass Index (BMI), index of multiple deprivation (IMD)[18] (a higher IMD means higher proportion of people in the area classed as deprived), and death date.
Since about half of the records had missing BMI values, this factor was excluded from further consideration. ASA scores were grouped into two categories in the further analysis: ASA 1-2, normal healthy patients and patients with mild systemic disease; ASA 3-5, patients with serious, non-incapacitating systemic disease, patients with life-threatening incapacitating systemic disease, and patients who are near death.
Data selection in SQL (elimination of duplicates, second and subsequent revisions) resulted in 504,024 records with the fields listed above. During further cleaning, the following records were additionally excluded:
Patients with bilateral operations;
Records with missing or misreported side;
Records with time to revision equal to 0;
Records with date of operation after 31 December 2012;
Patients younger than 50 years at operation day;
Records with missing values of IMD.
This process resulted in 281,265 records. Finally, all records for patients operated on in units with fewer than 52 operations per year (i.e. less than once per week, on average), and all records with implanted cup/head brands in the bottom 20% in popularity that year, as well as the cup/head brands "DePuy" and "Biomet", were excluded, resulting in an in-control dataset of 113,772 records in total. To test the efficiency of our CUSUM procedure, we also selected two test datasets including only the records with cup brands "DePuy ASR Resurfacing Cup" (1734 records) and "Biomet M2A 38" (764 records), respectively. The cases for prostheses revised within three months of implantation were censored at the time of revision to exclude failures that might be directly attributable to surgical technique or postoperative complications. Description of the three datasets is given in Table 1. We provide analysis of these data performed in R [19] in the "Results" section.
Table 1 Description of the datasets
Basics of CUSUM method for time-to-event data
The CUSUM method is a sequential analysis technique based on the calculation of the series Wi, i=0,1,2,..., defined by a simple recurrent equation
$$\begin{array}{@{}rcl@{}} \begin{aligned} W_{0} & = 0, \\ W_{i+1} & = \max \{0,\; W_{i}+X_{i}\}, \end{aligned} \end{array} $$
where index i stands for a single observation or for a group of observations and Xi is the weight or score assigned to index i. The CUSUM alerts when Wi crosses a control limit, usually chosen to guarantee a long average run length (ARL) when the process is in control, or to provide a low false alarm probability [20]. In applications to survival data, and assuming independent competing risks of revision and death, the score Xi for an individual i with time-to-revision ti and vector of covariates ui can be defined as the logarithm of the revision-specific factor of the likelihood ratio
$$\begin{array}{@{}rcl@{}} \begin{aligned} X_{i} = \log \left (\frac {f^{1}_{i}(t_{i}|\mathbf{u}_{i})^{\delta_{i}}S^{1}_{i}(t_{i}|\mathbf{u}_{i})^{1-\delta_{i}}}{f^{0}_{i}(t_{i}|\mathbf{u}_{i})^{\delta_{i}}S^{0}_{i}(t_{i}|\mathbf{u}_{i})^{1-\delta_{i}}}\right), \end{aligned} \end{array} $$
where δi is a censoring indicator, \(S^{j}_{i}(.)\) and \(f^{j}_{i}(.)\) are survival and density functions, respectively, and index j=0,1, stands for null hypothesis H0 (process is under control) and alternative hypothesis H1 (failure rate is higher than expected by a certain margin). Under the assumption of independent competing risks, the revision-specific factor of the likelihood coincides with the likelihood function that would be obtained by treating failures from any other causes as censored observations.
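A minimal sketch of the chart recursion in equation (1) is given below in R (the language used for the analysis in this paper); the scores and the control limit h are generic inputs here, and their calibration is discussed later.

```r
# CUSUM recursion W_{i+1} = max(0, W_i + X_i); signals when W crosses the
# control limit h.  'scores' is the sequence of scores X_i; h is generic here.
cusum_chart <- function(scores, h) {
  W <- numeric(length(scores) + 1)   # W[1] corresponds to W_0 = 0
  for (i in seq_along(scores)) {
    W[i + 1] <- max(0, W[i] + scores[i])
  }
  alarm <- which(W > h)[1]           # NA if the limit is never crossed
  list(W = W, alarm_at = if (is.na(alarm)) NA else alarm - 1)
}
```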
For a set I of independent individuals, the score XI can be calculated as a sum of individual scores Xi, i∈I:
$$\begin{array}{@{}rcl@{}} \begin{aligned} X_{I} = \sum_{i\in I} X_{i}. \end{aligned} \end{array} $$
Assuming proportional hazards model with the Weibull baseline distribution under hypotheses Hj, j=0, 1, the hazard functions hj(t|u)=μj(t)χ(u) are proportional to the Weibull baseline hazards μj(t) and a regressor function χ(u). The regressor function is usually specified as χ(u)= exp(β∗u) (the Cox's regression term) for a transposed column vector of unknown parameters β. The baseline hazard function under H0 corresponds to the hazard function μ0(t)=(k/λ)(t/λ)k−1 for the Weibull distribution with the shape parameter k and the scale parameter λ, and the baseline hazard function μ1(t) under the alternative hypothesis H1 is proportional to μ0, μ1(t)=HRμ0(t). The hazard ratio HR represents the departure from the target survival that we want to detect.
For consecutive time intervals T, consider a subset I=IT of NI individuals observed (prostheses in use) over the time interval T. In this case, the scores XI can be calculated as [7]
$$\begin{array}{@{}rcl@{}} \begin{aligned} X_{I}=O_{I}\log(\text{HR})-(\text{HR}-1)E_{I}, \end{aligned} \end{array} $$
where OI is the observed number of failures (revisions) occurring during the interval T and EI is the number of failures that would be expected in the same interval under hypothesis H0.
Denote by (t1i,t2i) an intersection of the interval T with the lifetime of the prosthesis i implanted at t0i. Then t1i is the maximum of the lower bound of interval T and t0i, and t2i is the minimum of the upper bound of interval T, the time of revision of prosthesis i and the time of censoring of the patient with prosthesis i. From this, the value of (t2i−t1i) is equal to the length of time when prosthesis i is in use in the time interval T. The values of EI can be computed as
$$\begin{array}{@{}rcl@{}} \begin{aligned} E_{I}=\sum_{i=1}^{N_{I}} \lambda^{-k}\left ((t_{2i}-t_{0i})^{k}-(t_{1i}-t_{0i})^{k}\right). \end{aligned} \end{array} $$
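Equations (4)-(5) translate directly into code; the sketch below computes the expected number of revisions E_I under the Weibull target and the corresponding interval score, with all arguments supplied by the user.

```r
# Interval CUSUM score X_I = O*log(HR) - (HR - 1)*E_I, with E_I from eq. (5).
# t0: implantation times; t1, t2: start and end of each prosthesis' exposure
# within the interval; O: observed revisions; k, lambda: Weibull parameters.
interval_score <- function(t0, t1, t2, O, HR, k, lambda) {
  E <- sum(lambda^(-k) * ((t2 - t0)^k - (t1 - t0)^k))
  O * log(HR) - (HR - 1) * E
}
```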
CUSUM scores for shared frailty competing risks model
Under the proportional hazards model with frailty, the hazard functions h(t|u,Z) for an observed vector of covariates u and unobserved non-negative random frailty component Z, is proportional to the baseline hazard μ(t), frailty term Z, and a regressor function χ(u)= exp(β∗u). The conditional survival function is given by
$$ {\begin{aligned} S(t|\mathbf u, Z)\,=\,\exp(-\int_{0}^{t}h (x|\mathbf{u},Z)dx)=\exp(-Z\chi (\mathbf{u})\int_{0}^{t} \mu(x)dx). \end{aligned}} $$
The marginal survival function is defined by
$$\begin{array}{@{}rcl@{}} \begin{aligned} S(t|\mathbf u)=\mathbb {E}S(t|\mathbf u, Z). \end{aligned} \end{array} $$
We will use the index f, f=r,d, to denote the types of failure (revision of implant or death of a patient without implant failure, respectively), considered as competing risks. For mathematical convenience, it is frequently assumed that frailty Zf is gamma-distributed with mean 1 and unknown variance \(\sigma _{f}^{2}\). The assumption of gamma distributed frailty is not too restrictive, as a number of authors demonstrated that gamma-based shared frailty models are robust for a wide class of frailty distributions [21, 22]. The frailty variance \(\sigma _{f}^{2}\) characterizes heterogeneity in the population.
We also assume that the baseline hazard functions are \(\phantom {\dot {i}\!}\mu _{0,r}(t)=(k_{r}/\lambda _{r})(t/\lambda _{r})^{k_{r}-1}\) and \(\mu _{0,d}(t)=\lambda _{d}\exp (k_{d}t)\phantom {\dot {i}\!}\) with the shape parameter kf and the scale parameter λf, f=r,d, for the Weibull and Gompertz distributions, respectively. In this case, the type-of-failure specific marginal survival function is given by
$$\begin{array}{@{}rcl@{}} \begin{aligned} S_{f}(t|\mathbf u_{f})=(1+\sigma_{f}^{2}e^{\beta^{*}\mathbf u_{f}}H_{f}(t))^{-1/\sigma_{f}^{2}} \end{aligned} \end{array} $$
with the type-of-failure specific baseline cumulative hazards \(H_{r}(t)=(t/\lambda _{r})^{k_{r}}\phantom {\dot {i}\!}\) and \(\phantom {\dot {i}\!}H_{d}(t)=(\lambda _{d}/k_{d})(\exp (k_{d}t)-1)\).
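The marginal survival functions in equation (8), with the Weibull and Gompertz baseline cumulative hazards defined above, can be coded as follows; the parameter values in the example call are illustrative only and are not estimates from the NJR data.

```r
# Marginal survival under gamma frailty: S(t|u) = (1 + s2*exp(b'u)*H(t))^(-1/s2)
H_weibull  <- function(t, k, lambda) (t / lambda)^k
H_gompertz <- function(t, k, lambda) (lambda / k) * (exp(k * t) - 1)

S_marginal <- function(t, u, beta, sigma2, H, ...) {
  (1 + sigma2 * exp(sum(beta * u)) * H(t, ...))^(-1 / sigma2)
}

# Illustrative call: marginal revision-free probability at 10 years
S_marginal(10, u = c(1, 0), beta = c(0.2, -0.1), sigma2 = 0.3,
           H = H_weibull, k = 1.5, lambda = 30)
```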
Correlated frailty terms for revision and death can be constructed as
$$\begin{array}{@{}rcl@{}} \begin{aligned} Z_{r}= &Y_{0}+Y_{r}, \\ Z_{d}= &\frac {m_{r}}{m_{d}}Y_{0}+Y_{d} \end{aligned} \end{array} $$
for independent gamma distributed random variables Y0∼G(l0,mr) and Yf∼G(lf,mf) with \(l_{f}=1/\sigma _{f}^{2}-l_{0}\), \(m_{f}=1/\sigma _{f}^{2}\), f=r,d; 0≤ρ≤ min(σr/σd,σd/σr). The result of this construction is that the frailties are gamma-distributed with \(\mathbb {E}Z_{f}=1\), \(\text {Var}Z_{f}=\sigma _{f}^{2}\), and Corr(Zr,Zd)=ρ. Given the frailties (Zr,Zd) and the covariates (ur, ud), type-of-failure specific instantaneous risks are assumed to be conditionally independent at any time t.
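The construction in equation (9) is easy to simulate; in the sketch below the shape parameter l0 is set to ρ/(σrσd), which is what the stated correlation Corr(Zr, Zd) = ρ implies for this construction, and the variance and correlation values are illustrative.

```r
# Simulating the correlated shared frailties (Zr, Zd) of equation (9)
set.seed(1)
sigma2_r <- 0.3; sigma2_d <- 0.2; rho <- 0.4   # illustrative values
m_r <- 1 / sigma2_r; m_d <- 1 / sigma2_d
l_0 <- rho / sqrt(sigma2_r * sigma2_d)          # implied by Corr(Zr, Zd) = rho
l_r <- m_r - l_0; l_d <- m_d - l_0              # must be non-negative

n  <- 1e5
Y0 <- rgamma(n, shape = l_0, rate = m_r)
Yr <- rgamma(n, shape = l_r, rate = m_r)
Yd <- rgamma(n, shape = l_d, rate = m_d)
Zr <- Y0 + Yr
Zd <- (m_r / m_d) * Y0 + Yd

round(c(mean(Zr), var(Zr), mean(Zd), var(Zd), cor(Zr, Zd)), 2)
# close to 1, 0.30, 1, 0.20, 0.40 as required
```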
The bivariate marginal survival function for the type-of-failure specific latent time moments (tr, td) is given by the formula
$$\begin{array}{@{}rcl@{}} {\begin{aligned} S(t_{r},t_{d}|\mathbf u_{r},\mathbf u_{d})= &\mathbb {E}S(t_{r},t_{d}|\mathbf u_{r},\mathbf u_{d},Z_{r},Z_{d}) \\ =&\mathbb {E}\exp (-Z_{r}\chi (\mathbf u_{r})H_{r}(t_{r})-Z_{d}\chi(\mathbf u_{d})H_{d} (t_{d})) \\ = & \frac {\left(1+\sigma_{r}^{2}\chi (\mathbf u_{r})H_{r}(t_{r})\right)^{-l_{r}}\left(1+\sigma_{d}^{2}\chi (\mathbf u_{d})H_{d}(t_{d})\right)^{-l_{d}}}{\left (1+\sigma_{r}^{2}\chi (\mathbf u_{r})H_{r}(t_{r})+\sigma_{d}^{2}\chi (\mathbf u_{d})H_{d}(t_{d})\right)^{l_{0}}} \end{aligned}} \end{array} $$
[23]. If left truncation is present at ages (t0r, t0d), we calculate the conditional survival function by dividing the bivariate survival function by S(t0r,t0d|ur,ud).
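A small R transcription of the closed-form bivariate survival function and of its left-truncated version may be useful for reference; the arguments A and B stand for χ(ur)Hr(tr) and χ(ud)Hd(td), A0 and B0 for the corresponding terms evaluated at the truncation ages, and all numeric values are illustrative.

```r
## Closed-form bivariate marginal survival S(t_r, t_d | u_r, u_d).
## A = chi(u_r) * H_r(t_r), B = chi(u_d) * H_d(t_d); l0 is the shape of the
## shared frailty component (see the construction above).
S_biv <- function(A, B, sigma_r2, sigma_d2, l0) {
  l_r <- 1 / sigma_r2 - l0
  l_d <- 1 / sigma_d2 - l0
  (1 + sigma_r2 * A)^(-l_r) * (1 + sigma_d2 * B)^(-l_d) /
    (1 + sigma_r2 * A + sigma_d2 * B)^l0
}

## Left truncation: condition on survival to the entry ages, i.e. divide by
## the bivariate survival evaluated at the truncation times (A0, B0).
S_trunc <- function(A, B, A0, B0, ...) S_biv(A, B, ...) / S_biv(A0, B0, ...)

## Illustrative values
S_trunc(A = 0.40, B = 0.70, A0 = 0.05, B0 = 0.10,
        sigma_r2 = 0.20, sigma_d2 = 0.10, l0 = 1.5)
```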
In the context of hip replacement, the shared frailty terms arise from the assumption that the nj patients who have undergone surgery in the same unit j, j=1,⋯,J, have the same, possibly correlated, unobserved risks of revision and death. This means that the full likelihood function for our model takes the form \({\mathcal L}=\prod _{j=1}^{J}{\mathcal L}_{j}(\bar t_{jr},\bar t_{jd}|\bar {\mathbf {u}}_{jr},\bar {\mathbf {u}}_{jd})\) with
$$\begin{array}{@{}rcl@{}} {\begin{aligned} &{\mathcal L}_{j}(\bar t_{jr},\bar t_{jd}| \bar{\mathbf{u}}_{jr}, \bar{\mathbf{u}}_{jd})=\prod_{i=1}^{n_{j}}\left (-\frac{\partial }{\partial t_{jir}}\right)^{\delta_{jir}}\left (-\frac{\partial }{\partial t_{jid}}\right)^{\delta_{jid}}S_{j}(\bar{t}_{jr},\bar t_{jd}| \bar{\mathbf{u}}_{jr},\bar{\mathbf{u}}_{jd}), \end{aligned}} \end{array} $$
where δf=0,1 is the censoring indicator with δf=0 indicating right censoring, and \(\bar t_{jf}\) and \( \bar {\mathbf {u}}_{jf}\) are the vectors of cause-specific latent times and of covariates for the patients from unit j, respectively, f=r, d, and
$$\begin{array}{@{}rcl@{}} {\begin{aligned} &S_{j}(\bar t_{jr},\bar t_{jd}| \bar{\mathbf{u}}_{jr}, \bar{\mathbf{u}}_{jd}) \\ &=\frac {\left (1+\sigma_{r}^{2}\sum_{i=1}^{n_{j}}\chi (\mathbf u_{jir})H_{r}(t_{jir})\right)^{-l_{r}}\left (1+\sigma_{d}^{2}\sum_{i=1}^{n_{j}}\chi (\mathbf u_{jid})H_{d}(t_{jid})\right)^{-l_{d}}}{\left (1+\sigma_{r}^{2}\sum_{i=1}^{n_{j}}\chi (\mathbf u_{jir})H_{r}(t_{jir})+\sigma_{d}^{2}\sum_{i=1}^{n_{j}}\chi (\mathbf u_{jid})H_{d}(t_{jid})\right)^{l_{0}}}, \end{aligned}} \end{array} $$
where the subscript i, i=1,...,nj, indexes the patients within unit j. This likelihood can be used for parameter estimation.
The proposed CUSUM scores for a competing risks model with shared frailty are based on the likelihood ratio \({\mathcal L}\). For a time interval T, let Ij(T) be the set of individuals from unit j whose implants are in use during the period T, and \(I=I(T)=\bigcup I_{j}(T)\). The scores XI(T) for the time interval T are defined as
$$\begin{array}{@{}rcl@{}} {\begin{aligned} X_{I}(T) = \sum_{j =1}^{J}\log \left (\frac {\mathbb {E}\prod_{i \in I_{j}(T)}{\mathcal L}^{1}(t_{jir},t_{jid}|\mathbf{u}_{jir},\mathbf{u}_{jid},Z_{jr},Z_{jd})}{\mathbb {E}\prod_{i \in I_{j}(T)}{\mathcal L}^{0}(t_{jir},t_{jid}|\mathbf{u}_{jir},\mathbf{u}_{jid},Z_{jr},Z_{jd})}\right), \end{aligned}} \end{array} $$
where Zjr, Zjd are the shared frailty terms for unit j, the superscript h, h=0,1, stands for hypothesis, and
$$\begin{array}{@{}rcl@{}} \begin{aligned} &{\mathcal L}^{h}(t_{jir},t_{jid}|\mathbf{u}_{jir},\mathbf{u}_{jid},Z_{jr},Z_{jd}) \\ &=\left (-\frac{\partial }{\partial t_{jir}}\right)^{\delta_{jir}}\left (-\frac{\partial }{\partial t_{jid}}\right)^{\delta_{jid}}S^{h}(t_{jir},t_{jid}|\mathbf u_{jir},\mathbf u_{jid},Z_{jr},Z_{jd}). \end{aligned} \end{array} $$
In the general case, the expression for XI(T) does not have a simple closed form. In the special case of ρ=0, the competing risks of revision and death are independent, and the score XI(T) is the sum of the respective component scores for revision and death (see Appendix). If the interest lies in the risk of revision only, death can be treated as non-informative censoring, and we concentrate on the CUSUM analysis of revision scores for the remainder of this section.
For the baseline Weibull hazard function, under the proportional alternatives μ1(t)=HRμ0(t), we can rewrite the revision component of the score (3) as
$$\begin{array}{@{}rcl@{}} {\begin{aligned} &X_{I}^{r}(T) = O_{I}\log (\text{HR})-\sum_{j =1}^{J}({\sigma_{r}^{-2}}+O_{j})\\ &\times\log \left(\frac {1+\sigma_{r}^{2}\text{HR}\sum_{i \in I_{j}(T)}e^{\beta^{*}\mathbf u_{i}}\lambda^{-k}((t_{2i}-t_{0i})^{k}-(t_{1i}-t_{0i})^{k})}{1+\sigma_{r}^{2}\sum_{i \in I_{j}(T)}e^{\beta^{*}\mathbf u_{i}}\lambda^{-k}((t_{2i}-t_{0i})^{k}-(t_{1i}-t_{0i})^{k})}\right), \end{aligned}} \end{array} $$
where Oj is the number of revisions in unit j during the period T, so that \(O_{I}=\sum _{j}O_{j}\) (see Additional file 1 for the proof).
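A hedged R transcription of the contribution of a single unit to this score is given below; all inputs are illustrative placeholders rather than NJR quantities, and the covariate-dependent-shape version introduced in the next paragraph is obtained by replacing k with kr(ui) inside the sum.

```r
## Revision CUSUM score contribution of one unit under the proportional
## alternative mu_1(t) = HR * mu_0(t). t0, t1, t2 are the per-patient time
## origin and the start/end of follow-up within T, as in the formula above.
unit_score <- function(O_j, lp, t0, t1, t2, k, lambda, sigma2, HR) {
  A <- sum(exp(lp) * ((t2 - t0)^k - (t1 - t0)^k) / lambda^k)
  O_j * log(HR) -
    (1 / sigma2 + O_j) * log((1 + sigma2 * HR * A) / (1 + sigma2 * A))
}

## One unit with three patients and one observed revision during T (made-up data)
unit_score(O_j = 1,
           lp = c(0.2, -0.1, 0.4),            # linear predictors beta' u_i
           t0 = c(0, 0, 0), t1 = c(4, 5, 3), t2 = c(5, 6, 4),
           k = 1.2, lambda = 80, sigma2 = 0.18, HR = 1.5)
## X_I^r(T) is the sum of such contributions over all units j = 1, ..., J.
```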
Often, the proportional hazards assumption is too strong; different groups of patients and prostheses do not necessarily have proportional hazard functions for the hip revision times and/or for death. We weaken this assumption by allowing the shape parameters kf(u) of the baseline Weibull and Gompertz hazard functions to depend on covariates through additional Cox-regression multipliers, \(k_{f}({\mathbf u})=\exp (\beta ^{*}_{k} \mathbf u)k_{f}\). Then the CUSUM scores for revision are calculated as
$$\begin{array}{@{}rcl@{}} {\begin{aligned} &X_{I}^{r}(T) = O_{I} \log(\text{HR})-\sum_{j=1}^{J}(\sigma_{r}^{-2}+O_{j}) \\ &\times\log \left (\frac {1\,+\,\sigma_{r}^{2}\text{HR}\sum_{i \in I_{j}(T)}e^{\beta^{*}\mathbf u_{ji}}\lambda^{\,-\,k_{r}({\mathbf u_{ji}})}((t_{j2i}\,-\,t_{j0i})^{k_{r}({\mathbf u_{ji}})}\,-\,(t_{j1i}\,-\,t_{j0i})^{k_{r}({\mathbf u_{ji}})})}{1\,+\,\sigma_{r}^{2}\sum_{i \in I_{j}(T)}e^{\beta^{*}\mathbf u_{ji}}\lambda^{\,-\,k_{r}({\mathbf u_{ji}})}((t_{j2i}\,-\,t_{j0i})^{k_{r}({\mathbf u_{ji}})}\,-\,(t_{j1i}-t_{j0i})^{k_{r}({\mathbf u_{ji}})})}\right). \\ \end{aligned}} \end{array} $$
CUSUM chart control limits for the shared frailty model for revision
The unknown parameters of the time-to-revision model under the null hypothesis H0 are estimated from the in-control (learning) dataset. These are the Cox-regression parameters β and βk, the parameters k and λ of the Weibull baseline distribution, and the variance of the frailty term σ2. The vector of unknown parameters ξ=(lnk, lnλ, lnσ2,β,βk) is estimated by maximum likelihood to obtain the estimates \(\hat \xi \). The time-to-failure distribution with these estimated parameters is then used to compute the CUSUM scores for the two test datasets and to estimate the control limits for the CUSUM chart (see Additional file 1 for details of the CUSUM score calculation). Let P=P(ξ) be the true distribution function for revision times, and let τ=τc(P;ξ) be the first time at which the chart exceeds a threshold c and alerts. The false alarm probability within T time units is \(hit(P;\xi) = \mathbb P(\tau _{c}(P;\xi) \leq T)\) for some finite T>0. To restrict the false alarm probability to α, the threshold chit(P;ξ)= inf{c>0:hit(P;ξ)≤α} for some 0<α<1 is needed. However, only \(\hat P\) and \(\hat \xi =\xi (\hat P)\) are known.
A parametric version of the bootstrap algorithm proposed by Gandy and Kvaløy [12] is used to estimate the control limits and to guarantee that the false alarm rate of a CUSUM chart with the in-control distribution P, conditional on \(\hat \xi \), is below the nominal level α with high probability 1−γ.
Define \(\tau _{c}(P|\hat \xi)\) as the first time at which the CUSUM chart conditional on \(\hat \xi \) exceeds the given value c. We are interested in the boundary \(c_{hit}(P|\hat \xi)\) defined by the equation \(c_{hit}(P|\hat \xi)=\inf \{c>0:\; \mathbb P(\tau _{c} (P|\hat \xi)\leq T)\leq \alpha \}\) for some 0<α<1. Since P is unknown, \(c_{hit}(P|\hat \xi)\) is unknown too, and the estimate \(c_{hit}(\hat P|\hat \xi)\) is usually used instead. However, such an estimate does not guarantee the required false alarm rate of the chart. Following [12], we estimate the 1−γ quantile of the threshold \(c_{hit}(P|\hat \xi)\) for some 0<γ<1 using the following algorithm.
Algorithm.
Let N be the number of records (patients) in the control dataset, NSim be the number of simulations needed to estimate \(c_{hit}(\hat P|\hat \xi)\), NBoot be the number of bootstrap replicates, and T=[Tmin,Tmax] be the observation period.
1. Calculate the maximum likelihood estimate (MLE) \(\hat \xi \) of the vector of unknown parameters ξ as well as the estimate \(\widehat {\text {Cov}}\) of the covariance matrix cov (inverse Hessian) for \(\hat \xi \), using the control dataset and the survival model with Weibull hazard described above;
2. Generate a random vector ξcur from the multivariate normal distribution with mean \(\hat \xi \) and covariance matrix \(\widehat {\text {Cov}}\);
3. Keeping the covariates in all three datasets fixed, generate new times-to-revision trev for all patients on the basis of the survival model with Weibull hazard described above and the vector ξcur. Update the censoring using the rule δ=1 if trev ≤ min{tdeath,Tmax} and δ=0 otherwise; replace trev for δ=0 by trev= min{tdeath,Tmax}. Repeat NSim times and calculate for test dataset j, j=1,2, the values of \(c_{\text {hit}}^{j}(\hat P_{cur}|\hat \xi _{cur})\) and \(c_{\text {hit}}^{j}(\hat P|\hat \xi _{cur})\);
4. To take into account multiple testing, set \(c_{\text {hit}}(\hat P_{cur}|\hat \xi _{cur})=\underset {j=1,2}\max \{c_{\text {hit}}^{j}(\hat P_{cur}|\hat \xi _{cur})\}\) and \(c_{\text {hit}}(\hat P|\hat \xi _{cur})=\underset {j=1,2}\max \{c_{\text {hit}}^{j}(\hat P|\hat \xi _{cur})\}\). Calculate \(p_{cur}=c_{\text {hit}}(\hat P_{cur}|\hat \xi _{cur})-c_{\text {hit}}(\hat P|\hat \xi _{cur})\);
5. Repeat steps 2-4 NBoot times and calculate the 1−γ empirical quantile pγ of pcur.
The estimate of the adjusted threshold is equal to \(c_{hit}(\hat P|\hat \xi)-p_{\gamma }\). This threshold guarantees that in approximately 100(1−γ)% of applications the probability of a false alarm will not exceed the value of α. In the "Results" section, we use the values NSim=100, NBoot=100, α=0.1, γ=0.1, Tmin=01.01.2005, and Tmax=31.12.2012 for the analysis of the NJR data.
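A sketch of steps 2-5 in R is shown below. The chart-specific routine that simulates NSim charts over [Tmin, Tmax] and returns the smallest threshold with false-alarm probability at most α is problem-specific and is only stubbed here as threshold_fun(gen, fit), where gen denotes the data-generating parameters (P) and fit the parameters the chart is conditioned on (ξ); this mapping of the notation c_hit(P|ξ) is our reading of the algorithm, not a statement from the original source.

```r
## Skeleton of the bootstrap adjustment of the CUSUM control limit (steps 2-5).
## threshold_fun(gen, fit) must return c_hit(P | xi) for data generated under
## parameters `gen` and a chart conditioned on parameters `fit`; it is only
## stubbed below. Requires the MASS package (for mvrnorm).
bootstrap_adjustment <- function(xi_hat, Cov_hat, threshold_fun,
                                 NBoot = 100, gamma = 0.1) {
  p_cur <- numeric(NBoot)
  for (b in seq_len(NBoot)) {
    xi_cur   <- MASS::mvrnorm(1, mu = xi_hat, Sigma = Cov_hat)   # step 2
    p_cur[b] <- threshold_fun(gen = xi_cur, fit = xi_cur) -      # steps 3-4
                threshold_fun(gen = xi_hat, fit = xi_cur)
  }
  p_gamma <- unname(quantile(p_cur, probs = 1 - gamma))          # step 5
  threshold_fun(gen = xi_hat, fit = xi_hat) - p_gamma            # adjusted limit
}

## Toy illustration with a stand-in threshold function (not a real chart):
toy_fun <- function(gen, fit) sum(abs(gen - fit)) + 2
bootstrap_adjustment(xi_hat = c(0, 0), Cov_hat = diag(2) * 0.01, toy_fun,
                     NBoot = 50, gamma = 0.1)
```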
Estimating operating unit performance
Estimating performance across surgical units is also of potential importance in the quality control setting. The posterior frailty distribution obtained from the fitted shared frailty survival model described in the "Methods" section can be used for this purpose. Given the prior gamma distribution with (shape, scale) parameters (a,b)=(σ−2,σ2), mean ab=1 and variance ab2, and the observed data Dj, the posterior frailty distribution for unit j is the gamma distribution with (shape, scale) parameters (aj,bj) equal to
$$\begin{array}{@{}rcl@{}} \begin{aligned} a_{j}&=a+O_{j}, \\ b_{j}&=\frac {b}{1+b\sum_{i \in I_{j}}H(t_{i},\mathbf{u}_{i})}, \end{aligned} \end{array} $$
where Oj is the number of observed revisions in unit j, Ij is the set of all patients from unit j, and H(ti,ui) is the cumulative hazard for individual i from unit j with time to revision (or censoring) ti and the vector of covariates ui [24].
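The conjugate update is straightforward to compute; a minimal R sketch with illustrative inputs follows.

```r
## Posterior gamma (shape, scale) parameters for one unit, following the
## conjugate update above. H_i are the estimated cumulative hazards H(t_i, u_i)
## of the patients in the unit; all numbers are illustrative.
posterior_unit <- function(O_j, H_i, sigma2) {
  a <- 1 / sigma2;  b <- sigma2          # prior shape/scale, prior mean ab = 1
  c(shape = a + O_j,
    scale = b / (1 + b * sum(H_i)))
}

posterior_unit(O_j = 4, H_i = c(0.02, 0.05, 0.01, 0.08, 0.03), sigma2 = 0.18)
```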
The effects of the units (shared frailties) are given by the conditional expectation \(\mathbb {E}(Z_{j}|D_{j})=a_{j}b_{j}\), and parameters aj and bj can be estimated by substituting the MLE estimates \(\hat \xi \) of the unknown parameters ξ [21]. Given the proportional hazards formulation, the shared frailty term can be interpreted as an excess hazard of a unit relative to the baseline hazard. Because of this interpretation, we refer to these estimated frailties as unit-level hazard ratios and denote them by HRj.
Additionally, we propose a new score characterizing the quality of the hip replacement surgery in a unit as
$$ Q_{j}= P\{Z_{j}<1\,|\,D_{j}\}, $$
where Dj is the data from the control dataset relating to unit j. A large value of Qj indicates a decreased hazard of revision in a unit, whereas a small value indicates poor performance of a unit. Since the values of Qj and HRj depend on the vector of unknown parameters ξ and only the maximum likelihood estimate \(\hat \xi \) of this vector is available, we generate a set of Naverage estimates \(\hat \xi _{l}\) from the \(\mathrm {N}(\hat \xi,\widehat {cov})\) distribution and take the average of the resulting estimates \(Q_{j}(\hat \xi _{l})\) and \(\text {HR}_{j}(\hat \xi _{l})\) over this set of parameters.
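A short R sketch of HRj and Qj with the averaging over parameter draws is given below; for brevity only σ² is redrawn here, whereas in the full procedure the entire vector ξ is redrawn from N(ξ̂, Cov-hat) and the cumulative hazards are recomputed for each draw. All inputs are illustrative.

```r
## Unit-level hazard ratio HR_j = E(Z_j | D_j) = a_j * b_j and quality score
## Q_j = P(Z_j < 1 | D_j) from the posterior gamma distribution above.
unit_summary <- function(O_j, H_i, sigma2) {
  a_j <- 1 / sigma2 + O_j
  b_j <- sigma2 / (1 + sigma2 * sum(H_i))
  c(HR = a_j * b_j,
    Q  = pgamma(1, shape = a_j, scale = b_j))
}

## Average over parameter draws to propagate parameter uncertainty
## (only sigma2 is varied here for brevity; values are illustrative).
set.seed(2)
draws <- replicate(100, unit_summary(O_j = 4,
                                     H_i = c(0.02, 0.05, 0.01, 0.08, 0.03),
                                     sigma2 = exp(rnorm(1, log(0.18), 0.2))))
rowMeans(draws)   # averaged HR_j and Q_j for this unit
```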
For the control dataset described in the "Methods" section, we estimated the unknown parameters of the competing risks model with and without shared frailty terms by maximizing the likelihood function (2). These include the parameters of the baseline hazard distributions and the coefficients of the Cox regressions for time-to-revision and time-to-death, allowing for possibly covariate-dependent shape parameters, as described at the end of the "Methods" section. Significant predictors were chosen using backward elimination in stepwise regression. The estimated coefficients and their confidence intervals for the models with and without frailty components are given in Table 2. The notation "kf" before the name of a variable means that its coefficient relates to the shape parameter kf. The baseline values for the categorical and binary regressors were: males for sex, cemented for fixation, ceramic/ceramic for cup/head bearing surfaces, and operation date before 01.01.2007.
Table 2 Description, parameter estimates and confidence intervals for the competing risks models with/without frailty
Comparing the likelihood, AIC, and BIC values in Table 2, we see that the correlation between the cause-specific frailties Zr and Zd does not differ significantly from zero, and the best (in terms of AIC and BIC) model includes a frailty term only for revision. That is, the risks of revision and death can be modelled as independent, and formula (5) can be used to calculate the CUSUM scores for revision.
Females had a decreased hazard of revision of hip prostheses compared to males on the time-to-revision interval [0, λ]. The hazard of revision decreased with age and head size. Uncemented hip prostheses had an increased hazard of revision compared to cemented or hybrid fixation. The cup/head combinations with resurfacing/metal and resurfacing/resurfacing bearing surfaces also had increased hazards compared to other types of bearings, whereas the polyethylene/ceramic bearing surfaces provided a decreased risk of revision compared to the ceramic/ceramic ones. These results agree with the findings by [8]. Those patients who underwent the surgery after 01.01.2007 had an increased hazard of revision. This may reflect the fact that early revisions were missed by the NJR due to poor data quality in the early years. We also found a significant random effect of units, with the estimated frailty variance \(\sigma ^{2}_{r}\) equal to 0.18 (confidence interval 0.12–0.28), i.e. the hazard of revision differed between units.
Patients with serious disease (ASA P3-P5) and patients from areas with high deprivation (IMD 4-5) had increased hazards of death. The cup/head combination with polyethylene/metal bearing surfaces had a significantly increased hazard of death compared to ceramic/ceramic bearing. The shape parameters for baseline hazards of death also differed by these factors and by the date of surgery before/after 01.01.2007.
Based on the fitted revision submodel with frailty under independent competing risks, and targeting hazard ratios of 1.25, 1.50 and 1.75 under the alternative hypotheses, the CUSUM scores were calculated quarterly for the period 2005-12. The bootstrap-based boundaries were calculated at the false alarm rate α=0.1 and the tolerance level 1−γ=0.9 and adjusted for multiple comparisons for the two tested hip implants. The CUSUM scores did not differ much between the models with and without the frailty component. Figure 1 presents the CUSUM charts for the two test datasets as well as the in-control dataset for the models without/with the frailty component at all three target hazard ratios. The CUSUM charts without frailty for the DePuy ASR Resurfacing Cup produced an alarm in the 4th quarter of 2009 for HRs of 1.25 and 1.75, and in the 3rd quarter of 2009 for an HR of 1.50. The charts with frailty produced an alarm somewhat later, in the 4th quarter of 2009 for all three values of the hazard ratio. This is comparable with the alarm based on PTIR by the NJR in April 2010. For the Biomet M2A 38, the CUSUM charts without frailty hit the boundary in the second quarter of 2011 for HR=1.25, in the first quarter of 2011 for HR=1.50, and in the second quarter of 2010 for HR=1.75. The CUSUM charts with frailty alarmed in the 2nd, the 1st and the 2nd quarter of 2011, respectively. This is 3 to 4 years prior to the NJR alarm issued in 2014 [8].
CUSUM charts calculated for quarterly revision rates in the three NJR datasets: DePuy ASR Resurfacing Cup (black), Biomet M2A 38 (blue) and in-control dataset (magenta), over the period 2005-12. The control bounds (solid red lines) are estimated by the parametric bootstrap
The estimates of the quality scores Qj and the hazard ratios HRj were calculated for the 269 units included in the control dataset using Naverage=100. Our results demonstrate high heterogeneity in performance. Seventeen of the 269 units had quality scores greater than 0.9; their HRs were between 0.38 and 0.67. Fifteen units had quality scores less than 0.1; their HRs varied from 1.52 to 2.28.
To check the goodness-of-fit of the chosen parametric distributions in our models for revision and mortality, we compared semiparametric estimates of the baseline cumulative hazard functions to the baseline cumulative hazards obtained from our parametric models, separately within each stratum of moderate to large size with a particular shape value. The results are shown in Fig. 2 for the Weibull baseline hazards in the revision model, and in Fig. 3 for the Gompertz baseline hazards in the mortality model. Additionally, these figures include plots of the residuals between the parametric and semiparametric estimates of the baseline hazards pooled across the strata. In Fig. 2, the larger deviations are still very small in absolute value and mostly correspond to the small number of operations performed before 2007. Figure 3 confirms the well-known fact [25] that the Gompertz distribution describes human mortality well only up to about 95 years of age, and the oldest patients in Fig. 3 are the outliers. Overall, the Weibull and the Gompertz models fit the revision and the mortality data, respectively, very well.
Comparison of the baseline cumulative hazard functions estimated using semiparametric (magenta) and parametric (Weibull model, grey) methods. Age&sex groups for revision data
Comparison of the baseline cumulative hazard functions estimated using semiparametric (magenta) and parametric (Gompertz model, grey) methods. Date of operation & cup/head bearing groups for mortality data
To assess the predictive value of our models, we also calculated Harrell's concordance index [26, 27] between the predicted and the observed survival. In the models without frailty, the estimates of the concordance were equal to 0.818 (SE=0.009) and 0.732 (SE=0.003) for the revision and mortality data, respectively. For the models with frailty, the concordance values were equal to 0.819 (SE=0.009) and 0.732 (SE=0.003), respectively.
In hip replacement surgery, continuous monitoring of the revision experience of hip prostheses is necessary due to delayed outcomes after the introduction of new brands into practice. CUSUM charts are a useful tool for early detection of changes in the revision rates after hip replacement. In standard applications of CUSUM-based monitoring, the learning dataset required for model identification is usually chosen from a preceding period. This assumes stationarity of the process and leads to a loss of information and a reduction of the period under study. Instead, we chose the in-control and the test data from the same period. This novel approach is especially beneficial for the future development of an adaptive version of the algorithm.
In the absence of a gold standard, the choices of the learning dataset and of the model describing the data play an important role in an analysis using a self-starting CUSUM. After the routine cleaning of the original dataset, we excluded the records from units with fewer than 52 hip replacements per year to ensure, to some degree, sufficient experience with the implants within surgical teams. Similarly, only the top 80% of cup/head brands in each year were included, to exclude rarely used brands for which the measure of failure rate was unlikely to be stable or robust.
Naive analysis treating competing risk events as noninformative censoring can lead to bias in estimates if competing risks are not independent. The competing risks model with dependent unobserved risk factors (frailties) is a convenient analytical tool for such data.
Two types of failure - revision and death without revision - are considered in this study. Other events during the follow-up period (e.g. loss to follow-up due to migration) are treated as noninformative censoring. In addition to observed factors, we included correlated type-of-failure-specific random effects in the competing risks model, with all patients from a unit sharing their values [28]. Sex, age, fixation, bearing surfaces, head size, and the date of operation were significantly associated with the life-time of the hip prosthesis. Bad health (ASA 3-5), high deprivation (IMD 4-5), polyethylene/metal bearing surfaces, and the date of operation were significantly associated with the higher hazards of death. These effects were robust against the frailty settings.
Identifiability of the competing risks model with random effects was studied in [29]. The main assumption for the identifiability of this model is the finite mean of the frailty. Identifiability of the bivariate survival models with time-dependent frailties given by the correlated Lévy-processes was studied in the recent publication [30]. Our methodology can be easily adapted to this scenario.
There is no consensus on whether the risks of revision and death are independent in hip replacement. Shwarzer et al. [31] showed these risks to be dependent in their data. However, a recent publication by Sayers et al. [32] argued for independence. Comparing the results from four competing risk models with and without shared frailty terms, we found that the best model included the shared frailty for revision but not for death. This means that the competing risks of revision and death are independent in the NJR data. The variance of the frailty term for revision differed significantly from zero, in other words, there were significant differences between units.
We used the classical AIC and BIC for the model selection. However, the conditional AIC (cAIC) [33–35] is more appropriate for use in frailty models, since the marginal AIC favors smaller models excluding random effects. We believe that the use of cAIC would not have changed our models because of the negligibly small values of the estimates for the variance of the frailty for mortality, the very small correlation between frailties, and the practically unchanged value of the log-likelihood compared to the model without a random effect for mortality. The cAIC methods are also very computationally intensive. However, our final model includes the random effect for revision. We intend to incorporate cAIC for model selection in our further work.
We proceeded with CUSUM monitoring of revision rates. The two cup brands, DePuy ASR Resurfacing Cup and Biomet M2A 38, were not included in the learning dataset, and their performance was monitored using CUSUM charts. We calculated the adjusted boundaries for three target values of the hazard ratio to guarantee a false alarm rate of approximately 10% with probability 0.9 during the observation period 2005-12. The estimates of the boundary calculated using the models with the frailty component were higher, i.e. more conservative, than those calculated using the model without the frailty component. This delayed two of the alarms, by three and by 12 months. The charts were comparatively robust to changes in the target HR levels. The estimated CUSUM scores of the DePuy ASR Resurfacing Cup consistently increased from mid-2009. The increase of the CUSUM scores for the Biomet cup also started in 2009 and produced alarms in 2010-11, four years before the increased failure rate came to the attention of the UK regulatory authorities [15].
Estimating the posterior frailty distribution makes it possible to compare the quality of hip replacement surgery across units. Of the 269 units included in the control dataset, 17 (6.3%) had a decreased hazard of revision with a quality score higher than 0.90 and 15 (5.6%) had an increased hazard of revision with a quality score less than 0.10. The associated hazard ratios of revision across the units varied from 0.38 to 2.28.
Due to low revision rates, the data set under study has about 90% censoring. The properties of the statistical methods in highly censored data sets are not well known. A further simulation study is required to assess the performance of our methods under varying amounts of censoring. Another limitation of this study is the choice of the gamma distribution for the correlated frailties. The advantage of the gamma frailty is the closed-form expression for its Laplace transform, which allows for simple expressions for the CUSUM scores. However, this choice results in necessarily positive correlations between the revision and mortality frailties. Other frailty distributions (e.g. log-normal) allowing possible negative correlations will be pursued in our future work.
This study developed and implemented, for the NJR data, continuous monitoring methods for surgical outcomes. We used the Weibull and the Gompertz hazard functions to describe the baseline hazards of revision and death, respectively. These functions appear to provide a good approximation to the respective type-of-failure life-time. However, adjustment for observed covariates is necessary to improve this approximation and to better understand the influence of the different factors on the life-times of the hip prosthesis and the patient.
A flexible parametrization taking into account the possible influence of observed covariates on the shape and slope parameters of the revision and mortality hazard functions, as well as the inclusion of random effects (frailties), accommodates non-proportional hazards and improves the fit of our models to the observed data.
Our results demonstrate that the competing risks of revision and death are independent in the NJR data. This finding will facilitate further development of continuous monitoring methods for these data.
We developed a novel method of CUSUM-based monitoring of revision rates. This method includes the choice of the in-control and the test data from the same period, and can be expanded for the subsequent development of an adaptive algorithm. Implementation of the special bootstrap algorithm to estimate the control limits in the CUSUM method guarantees with high probability that the false alarm rate is below a prespecified level. An earlier detection of failure signal by our method in comparison to the PTIR method may be explained by proper risk-adjustment and the ability to accommodate time-dependent hazards.
We found considerable variation in the hazard ratios of revision across the units. Therefore, the continuous monitoring of hip replacement outcomes should include risk adjustment at both the individual and unit level.
Our approach can be easily adapted to other practice areas requiring the continuous monitoring of the failure rates. Further development of the dynamic CUSUM-based methodology similar to that of [36] is needed to adapt our approach to real-time applications, where the new data are regularly updated. Additionally, more sophisticated methods are required to adjust for multiplicity if testing hundreds of various implant brands. We intend to address these further challenges elsewhere.
The NJR data are available to interested researchers subject to approval of the data access request by the Healthcare Quality Improvement Partnership (HQIP) and governance controls. The R programs used to analyse the data are available from the authors on request.
CUSUM: Cumulative sum
HR: Hazard Ratio
IMD: Index of Multiple Deprivation
NJR: National Joint Register
PTIR: Patient time incident rate
QCC:
Page E. Continuous inspection schemes. Biometrika. 1954; 14:100–15.
de Leval M, Franćois K, Bull C, Brawn W, Spiegelhalter D. Analysis of a cluster of surgical failures: Application to a series of neonatal arterial switch operations. J Thorac Cardiovasc Surg. 1994; 107(3):914–924.
Spiegelhalter D, Sherlaw-Johnson C, Bardsley M, Blunt I, Wood C, Grigg O. Statistical methods for healthcare regulation: rating, screening and surveillance. J R Stat Soc Ser A Stat Soc. 2012; 175(1):1–47.
Bottle A, Aylin P. Intelligent information: A national system for monitoring clinical performance. Health Serv Res. 2008; 43:10–31.
Grigg O, Farewell V, Spiegelhalter D. Use of risk-adjusted CUSUM and RSPRT charts for monitoring in medical contexts. Stat Methods Med Res. 2003; 12(2):147–170.
Biau D, Meziane M, Bhumbra R, Dumaine V, Babinet A, Anract P. Monitoring the quality of total hip replacement in a tertiary care department using a cumulative summation statistical method (CUSUM). J Bone Joint Surg Br. 2011; 93:1183–1188.
Hardoon S, Lewsey J, van der Meulen J. Continuous monitoring of long-term outcomes with application to hip prostheses. Stat Med. 2007; 26(28):5081–5099.
National Joint Register. 14th Annual report 2017. surgical data to 31 December 2016. 2017. https://reports.njrcentre.org.uk/Portals/6/PDFdownloads/NJR%2014th%20Annual%20Report%202017.pdf.
Biswas P, Kalbfleisch J. A risk-adjusted CUSUM in continuous time based on the Cox model. Stat Med. 2008; 27(17):3382–3406.
Macpherson G, Brenkel I, Smith R, Howie C. Outlier analysis in orthopaedics: Use of CUSUM: The Scottish Arthroplasty Project: Shouldering the burden of improvement. J Bone Joint Surg Am. 2011; 93:81–88.
Assareh H, Smith I, Mengersen K. Bayesian estimation of the time of a linear trend in risk-adjusted control charts. Int J Comput Sci. 2011; 38(4):409–417.
Gandy A, Kvaløy J. Guaranteed conditional performance of control charts via bootstrap methods. Scand Stat Theory Appl. 2013; 40:647–668.
National Joint Register. 12th Annual Report 2015. Surgical data to 31 December 2014. 2015. http://www.njrcentre.org.uk/njrcentre/Portals/0/Documents/England/Reports/12th%20annual%20report/NJR%20Online%20Annual%20Report%202015.pdf.
National Joint Register. 8th Annual Report 2011. Surgical data to 31 December 2010. 2011. http://www.njrcentre.org.uk/njrcentre/Portals/0/Documents/NJR%208th%20Annual%20Report%202011.pdf.
National Joint Register. NJR implant performance analysis methodology. 2017.
National Joint Register. 10th Annual Report 2013. Surgical data to 31 December 2012. 2013.
Owens W, Felts J, Spitznagel JE. ASA physical status classifications: a study of consistency of ratings. Anesthesiol. 1978; 49:239–43.
English Indices of Deprivation. Guidance Document. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/6222/1871538.pdf.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2016. https://www.R-project.org/.
Gandy A, Lau F-H. Non-restarting CUSUM charts and control of the false discovery rate. Biometrika. 2013; 100(1):261–8.
Glidden D, Vittinghoff E. Modelling clustered survival data from multicentre clinical trials. Stat Med. 2004; 23(3):369–88.
Gleiss A, Gnant M, Schemper M. Explained variation in shared frailty models. Stat Med. 2017; 37(9):1472–90.
Wienke A. Frailty Models in Survival Analysis. New York: Chapman & Hall; 2010.
Nielsen G, Gill R, Andersen P, Sørensen T. A counting process approach to maximum likelihood estimation in frailty models. Scand J Stat Theory Appl. 1992; 19:25–43.
Vaupel JW. Biodemography of human ageing. Nature. 2010; 464(7288):536–542.
Harrell Jr FE, Lee KL, Mark DB. Multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat Med. 1996; 15(4):361–87.
Steyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology. 2010; 21(1):128–138.
Gorfine M, Hsu L. Frailty-based competing risks model for multivariate survival data. Biometrics. 2011; 67(2):415–26.
Abbring JH. The identifiability of the mixed proportional hazards competing risks model. J R Statist Soc B. 2003; 65(3):701–10.
Begun A, Yashin A. Study of the bivariate survival data using frailty models based on Lévy processes. AStA Adv Stat Anal. 2018; 103(1):37–67. https://doi.org/10.1007/s10182-018-0322-y.
Shwarzer G, Schumacher M, Maurer T, PE O. Statistical analysis of failure times in total joint replacement. J Clin Epidemiol. 2001; 54:997–1003.
Sayers A, Evans J, Whitehouse M, Blom A. Are competing risks models appropriate to describe implant failure?. Acta Orthopaedica. 2018; 89(3):256–8.
Vaida F, Blanchard S. Conditional Akaike information for mixed-effects models. Biometrika. 2005; 92(2):351–70.
Greven S, Kneib T. On the behaviour of marginal and conditional AIC in linear mixed models. Biometrika. 2010; 97(4):773–89.
Ha ID, Jeong JH, Lee Y. Statistical Modelling of Survival Data with Random Effects. Singapore: Springer; 2017.
Zhang X, Woodall W. Dynamic probability control limits for risk-adjusted Bernoulli CUSUM charts. Stat Med. 2015; 34(25):3336–3348.
The authors thank Sophie E. Garrett and Dr Wenjia Wang for the extraction of the preliminary NJR dataset in an analysis friendly format. The authors also thank the referees, Ha Il Do and Vera Tomazalla for their useful suggestions for improving the presentation of the material of this article.
We thank the patients and staff of all the hospitals in England, Wales and Northern Ireland who have contributed data to the National Joint Registry. We are grateful to the Healthcare Quality Improvement Partnership (HQIP), the NJR Research Sub-committee and staff at the NJR Centre for facilitating this work. The authors have conformed to the NJR's standard protocol for data access and publication. The views expressed represent those of the authors and do not necessarily reflect those of the National Joint Registry Steering Committee or the Health Quality Improvement Partnership (HQIP) who do not vouch for how the information is presented.
The Healthcare Quality Improvement Partnership ("HQIP") and/or the National Joint Registry ("NJR") take no responsibility for the accuracy, currency, reliability and correctness of any data used or referred to in this report, nor for the accuracy, currency, reliability and correctness of links or references to other information sources and disclaims all warranties in relation to such data, links and references to the maximum extent permitted by legislation. HQIP and NJR shall have no liability (including but not limited to liability by reason of negligence) for any loss, damage, cost or expense incurred or arising by reason of any person using or relying on the data within this report and whether caused by reason of any error, omission or misrepresentation in the report or otherwise. This report is not to be taken as advice. Third parties using or relying on the data in this report do so at their own risk and will be responsible for making their own assessment and should verify all relevant representations, statements and information with their own professional advisers.
The work by A. Begun and E. Kulinskaya was supported by the Economic and Social Research Council [grant number ES/L011859/1]. The work by A. Begun was also supported by the Orthopaedics Trust.
Alexander Begun, Elena Kulinskaya and Alexander J MacGregor contributed equally to this work.
School of Computing Sciences, University of East Anglia, Norwich Research Park, Norwich, NR47TJ, UK
Alexander Begun
& Elena Kulinskaya
Norwich Medical School, University of East Anglia, Norwich Research Park, Norwich, NR47TJ, UK
Alexander J MacGregor
All authors have made contributions to conception, design and methodology of this study. AJM formulated the problem and obtained the data, AB and EK contributed to methods development, AB carried out the analysis, and EK drafted the first version of the manuscript. All authors have been involved in revisions, read and approved the final manuscript.
Correspondence to Elena Kulinskaya.
The NJR data were made available after a formal request to the NJR Research Committee. The data were anonymised in respect to patient, to surgeon and to operating unit identifying details. Approval was obtained from Computing Subcommittee of the University of East Anglia Ethics Committee, reference number CMP/1718/F/10A.
The authors declare that they have no competing interest.
Additional file 1: Calculation of the CUSUM score.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Begun, A., Kulinskaya, E. & MacGregor, A.J. Risk-adjusted CUSUM control charts for shared frailty survival models with application to hip replacement outcomes: a study using the NJR dataset. BMC Med Res Methodol 19, 217 (2019) doi:10.1186/s12874-019-0853-2
CUSUM charts
Baseline hazard function
Shared frailty | CommonCrawl |
Conditioning factors of test-taking engagement in PIAAC: an exploratory IRT modelling approach considering person and item characteristics
Frank Goldhammer1 (ORCID: orcid.org/0000-0003-0289-9534),
Thomas Martens2 &
Oliver Lüdtke3
Large-scale Assessments in Education volume 5, Article number: 18 (2017)
A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals in order to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to investigate whether item indicators of disengagement constitute a continuous latent person variable by domain. Second, the effects of person and item characteristics were jointly tested using explanatory item response models.
Analyses were based on the Canadian sample of Round 1 of the PIAAC, with N = 26,683 participants completing test items in the domains of literacy, numeracy, and problem solving. Binary item disengagement indicators were created by means of item response time thresholds.
The results showed that disengagement indicators define a latent dimension by domain. Disengagement increased with lower educational attainment, lower cognitive skills, and when the test language was not the participant's native language. Gender did not exert any effect on disengagement, while age had a positive effect for problem solving only. An item's location in the second of two assessment modules was positively related to disengagement, as was item difficulty. The latter effect was negatively moderated by cognitive skill, suggesting that poor test-takers are especially likely to disengage with more difficult items.
The negative effect of cognitive skill, the positive effect of item difficulty, and their negative interaction effect support the assumption that disengagement is the outcome of individual expectations about success (informed disengagement).
The validity of inferences based on (average) test scores obtained from large-scale assessments depends heavily on test-takers' engagement when taking the test, that is, the degree to which they are motivated to show what they actually know and can do, in other words, to deliver their maximum performance (Cronbach 1970). However, in low-stakes assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) (OECD 2013a), test-takers or groups of test-takers may differ in the effort they exert when taking the test (Wise and DeMars 2005). The negative consequences of this can include, inter alia, the underestimation of respondents' true proficiency levels and the introduction of construct-irrelevant variance (Finn 2015; Haladyna and Downing 2004; Kong et al. 2007; Wise 2015).
Ideally, low test-taking engagement for test instruments administered under low-stakes testing conditions should be avoided. One option is for test administrators to employ strategies that can elicit effort and decrease inattention (Lau et al. 2009). Another option is to give a monetary reward (Braun et al. 2011). However, empirical findings on whether incentives increase test-taking engagement seem to be heterogeneous, dependent on various factors, and also raise ethical issues (Finn 2015).
Alternatively, disengaged responses can be identified after the assessment and taken into account when estimating test scores and population parameters (e.g., Rios et al. 2017). For instance, the effort-moderated IRT model proposed by Wise and DeMars (2006) applies a 3-parameter logistic (3PL) IRT model for responses given in the solution behavior mode, while a constant probability model is applied for rapid-guessing behavior. Information on disengagement can also be used to fine-tune the scoring of response behavior. In the PIAAC, fast non-responses that can be understood as disengaged responses were classified as not attempted items, while non-responses taking more than 5 s were considered wrong responses (OECD 2013b).
Regardless of which strategy is chosen—avoiding disengaged responses or dealing with them in the data analysis phase—it is important to understand the process of disengaged responding and related conditioning factors. Therefore, the present study pursued two goals: First, we used a model-based approach to investigate whether behavioral item indicators of disengagement constitute a continuous latent person variable by assessment domain in PIAAC. Second, we tested the joint effects of person and item characteristics on disengagement in PIAAC using explanatory item response models.
Representing differences in test-taking engagement
Previous research has applied approaches other than (continuous) latent variable modelling to represent differences in test-taking engagement. These studies used both model-based and descriptive methods to capture differences in test-taking engagement and took item responses, item response times or both into account.
Schnipke and Scrams (1997) suggested distinguishing between two modes of response behavior: solution behavior, indicating that the test taker is engaged in the task of obtaining a correct response, and rapid-guessing behavior, indicating that the test taker is making quick responses, which can occur because he or she is running out of time, for example. In line with this distinction, the HYBRID model by Yamamoto and Everson (1997) incorporates a mixture of response processes. The regular response process is captured by an IRT model of a particular form, and the random response strategy by an alternative response model in which the (constant) probability of success is independent of ability. Solution behavior is not assumed to be known, but the switching-point from solution behavior to rapid guessing, which may differ across test-takers, is estimated as part of the model. In contrast, the effort-moderated IRT model proposed by Wise and DeMars (2006) incorporates a variable derived from response time to indicate solution behavior and whether or not the regular IRT model holds for a particular item-person combination. To identify different response modes, Schnipke and Scrams (1997) proposed a log-normal mixture model of item response time assuming two types of response-time distributions, one for rapid guessing and the other for solution behavior (expressed as a bimodal empirical response time distribution). The model has been used to, inter alia, investigate whether the proportion of guessing behavior increases with item position. Meyer (2010) combined the log-normal mixture model with a Rasch mixture model to identify the mode of response behavior using both item response times and item responses. These (mixture) item response models have proven to be beneficial for estimating model parameters accurately in the context of rapid guessing behavior.
Another line of research has directly addressed the degree to which test-takers exert effort when proceeding through a test. To detect low effort in low-stakes testing, Wise and Kong (2005) developed a continuous measure of test-taking effort called response time effort (RTE), defined as the proportion of items completed with solution behavior. Wise and colleagues used the effort measure to filter test-taker data from the data set (motivation filtering) and investigated beneficial effects on test score reliability and convergent validity.
The approach in the present study expands upon previous work by using a model-based method to define a continuous latent variable of test-taking engagement. Specifically, (item response) measurement models are used to investigate whether binary indicator variables representing (non-)solution behavior for a person and an item constitute a common continuous latent variable. The concept of RTE proposes that test-takers' engagement when proceeding through a test differs continuously. In line with this, Setzer et al. (2013) analyzed binary solution-behavior indicators by means of a hierarchical generalized linear model including random intercepts for person and institution (but without random item intercepts or explanatory person and item variables). Thus, our first goal was to test whether there are actually systematic person differences in disengagement across test items that can be captured by a latent variable. Providing evidence that a measurement model can be established would also justify summing across indicator variables, as is done when computing the RTE measure. If the uni-dimensional 1-parameter logistic (1PL or Rasch) model holds, the sum score accurately represents the 1PL person parameter (Rost 2004). A model-based approach is also beneficial for complex test designs (e.g., multi-matrix design, adaptive test design) such as PIAAC, where different test-takers complete different item sets within a domain, and summing across different sets of engagement indicator variables may not provide comparable measures.
Behavioral indicators of test-taking engagement
Self-report effort measures completed after finishing the test are sometimes used to assess test-taking engagement; however, such measures may have accuracy and validity problems (Wise and DeMars 2005; Wise and Kong 2005). An alternative approach is to infer test-taking engagement directly from test-taking behavior (see Fig. 1). Specifically, engagement as the willingness to deliver maximum performance can be derived from the amount of time taken to complete a task, as the investment of time is a necessary (although not sufficient) condition of completing a task successfully. Note that the relation between task completion time and success can be curvilinear (cf. Fig. 3). Thus, although a minimum amount of time is needed to obtain a correct solution, taking much more time can be indicative of failure and quicker responses of greater success (Goldhammer et al. 2014).
Test-taking behavior is influenced by both the to-be assessed competency and individual test-taking engagement. Test-taking behavior is used to draw inferences about competency (response data) and can also be used to judge test-taking engagement (response time data). The expectancy of solving an item successfully and the personal value of taking the test are considered antecedents of test-taking engagement
Wise and Kong (2005) proposed using item response times to distinguish between solution behavior and rapid guessing behavior (Wise 2017). Following this notion, we assume that engaged item completion (i.e., solution behavior) involves taking at least a certain minimum amount of time required to read and understand the test instructions, process the stimulus' content, and finally give a response, whereas disengaged test-taking behavior means taking less time or guessing rapidly.
Response time thresholds distinguishing between engaged and disengaged responses can be identified in various ways. The three-second rule is commonly used as a constant threshold (Kong et al. 2007; Lee and Jia 2014). The idea of item-specific thresholds relates to the assumption that engaged test-taking behavior is associated with taking a minimum amount of time to be able to respond correctly, and that this amount of time can be assumed to differ across items (Goldhammer et al. 2016). One approach to determine item-specific thresholds is to inspect the response time distribution visually (Kong et al. 2007). The goal is to identify the threshold as the response time at what is judged to be the end point of the short time spike in a bimodal response time distribution. Wise and Ma (2012) proposed an automated way to determine the threshold. Their normative threshold method defines a certain percentage (e.g., 10%) of the average item response time as the threshold and assumes a maximum threshold value of, for instance, 10 s. Lee and Jia (2014) applied another method, previously considered by Ma et al. (2011), to multiple-choice (MC) items. First, the proportion correct conditional on response time was computed for each item. The threshold was defined as the first response time which is clearly associated with a proportion correct greater than the chance level for success (e.g., 25% for a MC item with four response options).
Similarly, the present study obtains item-specific response-time thresholds by conditioning proportion correct on response time. We consider all response behavior with a response time below the threshold as disengaged, that is, rapid responses and rapid non-responses (omissions), while response behavior above the threshold was considered to be engaged, that is, slow responses and slow non-responses (Goldhammer et al. 2016; Wise and Gao 2017). Note that slow (non-)responses are not necessarily engaged responses (see "Discussion" section).
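To make the threshold rule concrete, the following R sketch determines an item-specific threshold as the lower edge of the first response-time bin whose proportion correct clearly exceeds the chance level; the one-second binning, the margin over chance, and the variable names are illustrative assumptions rather than the exact PIAAC procedure.

```r
## Illustrative item-specific response-time threshold: the lower edge of the
## first response-time bin whose proportion correct exceeds the chance level.
## `rt` (seconds) and `correct` (0/1) are the responses to one item; responses
## slower than max(breaks) are ignored when searching for the threshold.
rt_threshold <- function(rt, correct, chance = 0.25, margin = 0.05,
                         breaks = seq(0, 30, by = 1)) {
  bin <- cut(rt, breaks = breaks, include.lowest = TRUE)
  p_correct <- tapply(correct, bin, mean)
  above <- which(!is.na(p_correct) & p_correct > chance + margin)
  if (length(above) == 0) return(NA_real_)
  breaks[min(above)]       # lower edge of the first qualifying bin, in seconds
}

## (Non-)responses with rt below the item's threshold are then flagged as disengaged.
```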
Explaining differences in test-taking engagement
This section outlines how disengaged responses can occur for various reasons at the person level, the item level, or the interaction of both (Finn 2015). The term test-taking engagement already suggests that performance on a test depends not only on ability but also on motivational and emotional aspects (Asseburg and Frey 2013). Differences in test-taking motivation among test-takers influence the degree to which test scores truly reflect individual differences in competence or ability. Some test-takers may not reveal their true competence level simply because they are not motivated to comply with the instructions and, for instance, rush through the test or skip items. Theories of motivated behavior provide a conceptual framework for identifying sources of test-taking engagement. The most prominent of these is expectancy-value theory (Eccles (Parsons) et al. 1983). Thus, a basic model of test-taking motivation and engagement should include the "expectancy" of solving the test item as well as the "value" that the test-taker attaches to solving the test item (see Fig. 1). Note that expectancy and value may be positively correlated, but still additively predict performance, as might the interaction term (see Trautwein et al. 2012).
Person level
Assuming individual differences in test-taking disengagement suggests that there is an individual disposition to be more or less engaged when proceeding through a test. That is, some test-takers consistently give more disengaged answers than others. Identifying the person as a source of variation in disengagement is a descriptive step, and a precondition for explaining individual differences by way of person-level variables, as discussed in this section.
Large-scale assessments are typically experienced as a low-stakes situation (Asseburg and Frey 2013; Sundre and Kitsantas 2004; Wise 2009) in that test-taking behavior does not have consequences for the test taker. Thus, from the expectancy-value theory perspective, the "value" component should be similarly low across test-takers and therefore not related to individual differences in test-taking engagement. However, the perceived expectancy of being capable of solving an item may vary considerably across test-takers (Asseburg and Frey 2013; Cole et al. 2008). A major factor determining expectancy is ability self-concept, that is, one's perceived competence in performing specific tasks. Thus, it can be assumed that test-takers with a more positive self-concept will have more positive expectations and thus higher test-taking engagement than those with a negative self-concept.
In a recent review, Finn (2015) discussed several person-level predictors of low test-taking motivation. Test-takers who were less compliant, that is, less motivated, to take the test tended to show higher levels of reactance (Brown and Finney 2011). Another line of research has shown that boredom negatively affects test-taking effort (e.g., Asseburg and Frey 2013). With regard to gender differences, previous studies suggest that male students tend to exhibit lower levels of test-taking engagement than female students (e.g., DeMars et al. 2013). Similarly, Setzer et al. (2013) demonstrated that females exhibited greater response time effort than males, and those whose primary language is English exhibited greater response time effort than speakers of other languages. Personality measures of agreeableness and conscientiousness have also proven to be positively related to test-taking effort (DeMars et al. 2013). In a study by Penk et al. (2014), invested effort, that is, the self-reported willingness to engage with test items, was explained by individual differences in task-irrelevant cognition, specifically distraction.
It seems worthwhile to also consider findings from the field of missing data since disengagement is defined in the present study as including both rapid responding and skipping items rapidly. In fact, omitted responses are often at least partially due to a lack of test-taking motivation (Jakewerth et al. 1999; Wise and DeMars 2005). Latent variable modelling of omission propensity has revealed a negative relation with ability, that is, stronger test-takers omit fewer items (Holman and Glas 2005; Pohl et al. 2013). Köhler et al. (2015) provided some evidence that people without a migration background and with higher levels of education exhibit a lower omission propensity.
According to expectancy-value theory, a major determinant of the expectancy component at the item level is the test-taker's estimate of item difficulty (Eccles (Parsons) et al. 1983). More specifically, if perceived item difficulty is high relative to one's competence, test-taking engagement will be negatively affected. Wolf et al. (1995) demonstrated that performance differences between more highly and less motivated students can be specifically explained by item difficulty (p value), the degree to which an item is mentally taxing (expert rating), and fatigue (item position). Thus, these findings suggest that differences in test-taking engagement have a particularly strong negative effect on performance for difficult and mentally taxing items presented late in the test. Relatedly, Asseburg and Frey (2013) showed that test-taking effort was higher when items had only moderate difficulty relative to ability. In the study by Penk et al. (2014), self-reported effort invested into completing test items was accounted for by the test's perceived attractiveness, reflecting how much fun one had when taking the test, and perceived usefulness.
As suggested by the study by Wolf et al. (1995), an item's position can be assumed to be another determinant of differences in test-taking engagement. Research on item position effects has shown that an item presented at a later position in the test is more difficult than when presented at the beginning (e.g., Debeer et al. 2014). This common phenomenon is usually explained by a decrease in test-taking motivation and/or an increase in fatigue. Increased item difficulty may also be due to more and more rapid guessing towards the end of a timed test (Schnipke and Scrams 1997). Setzer et al. (2013) showed that items presented later in the test and including more text as well as ancillary reading material are more strongly associated with test-taking disengagement. Other potential item-level predictors of disengagement are suggested by research on response omissions. For instance, there is evidence that item difficulty increases the probability of omitting an item (Stocking et al. 1988).
Person and item level
Following expectancy-value theory, the present study particularly focusses on interactions between person- and item-level factors that may influence expectations about completing an item successfully. Specifically, we assume that disengaged responding depends on the test taker's ability self-concept and perceived item difficulty, and is thus the outcome of an informed decision process. Given the positive relation between self-concept and corresponding ability (Eccles (Parsons) et al. 1983; Marsh and Craven 2006) as well as between perceived item difficulty and actual item difficulty (e.g., Wolf et al. 1995), we predict that individual differences in cognitive ability and differences in item difficulty explain test-taking engagement. Moreover, we assume that the positive effect of item difficulty on test-taking disengagement is weaker for more able test-takers because the relative item difficulty is lower for them.
Research goals and hypotheses
Overall, the present study aimed to investigate the conditioning factors of disengaged responses observed in the PIAAC domains of literacy, numeracy, and problem solving. Thereby, we hope to shed some light on whether the process of disengagement is erratic or instead systematic or even strategic (i.e., informed disengagement). This was done by pursuing two related goals.
We first addressed the question of whether disengaged responses across items and by domain can be explained by a single latent person variable, which would suggest that each individual test-taker is engaged or disengaged to a consistent degree when taking a test. If a continuous latent person variable can be defined using a uni-dimensional measurement model for each domain, the next step is to investigate whether these individual differences are similar across domains. Therefore, we also explored the correlational structure of disengagement across the domains of literacy, numeracy, and problem solving.
Second, we investigated the joint effect of person and item characteristics on disengagement using explanatory item response models. At the person level, we tested the effects of educational attainment, language, gender, age, and cognitive skill on individual test-taking disengagement in literacy, numeracy, and problem solving. Given previous research, we expected that educational attainment, fluency in the test language, being female, and cognitive skill would be negatively related to test-taking disengagement. We had no hypothesis with regard to age. We further investigated how item characteristics affect disengagement. Given previous findings, we expected items completed in the second part of the PIAAC assessment to be associated with significantly greater disengagement than the same items completed in the first part. Furthermore, in accordance with the informed disengagement hypothesis presented above, we assumed that disengagement would increase with item difficulty across all three domains. This effect was hypothesized to be moderated by individual cognitive skill such that the positive effect of item difficulty on disengagement would be smaller for stronger participants. Put simply, strong test-takers stay engaged when they encounter difficult items, whereas poor test-takers give up quickly.
Several previous studies have addressed how to represent differences in test-taking disengagement and how these are related to person and item characteristics. The present study adds to them by focusing on the adult population as assessed in the Canadian sample of the PIAAC. Furthermore, unlike previous studies, we propose a model-based approach assuming both (random) person and item effects on test-taking disengagement, and incorporate explanatory variables at both the person and item levels as well as their interaction to test the hypothesis of informed disengagement.
The target population for PIAAC 2012 (Round 1) consisted of all non-institutionalized adults between the ages of 16 and 65 (inclusive) residing in each country (meaning that their usual place of residency is in that country) at the time of data collection. To address our research goals, we selected the largest PIAAC sample from Round 1, which was Canada with N = 26,683 participants. The public use file (PUF) was downloaded from the OECD webpage on 24 October 2015. Canada published age information in bands of 10 years: 17.30% of the participants were 24 years old or younger, 17.10% were 25–34 years old, 20.10% were 35–44 years old, 23.30% were 45–54 years old, and 22.10% were 55 years old or older. Among all Canadian participants, 46.60% were male and 53.40% female. Only the Canadian subsample that completed the computer-based assessment (N = 20,923) was included in the present analysis because item response times were not available for the paper-based assessment. Table 1 shows the distributions of the person-level variables used in the explanatory models for this subsample. Since the present study did not seek to describe features of the population of Canada or compare populations (see Goldhammer et al. 2016), PIAAC sampling weights were not included in the analyses.
Table 1 Distribution of person characteristics in the Canadian subsample completing the computer-based assessment (N = 20,923)
PIAAC test design
The PIAAC test design (OECD 2013b) assumed 60 min of testing time for the cognitive assessment. However, no time constraint was imposed; that is, some participants were expected to take longer. Participants first completed the background questionnaire (BQ), which asked, inter alia, about their computer experience, which was crucial to route test-takers to either the paper-and-pencil or computer-based assessment (CBA; see Fig. 2). Participants with no computer experience were given the paper-based assessment, as were participants who refused to take the assessment on the computer.
PIAAC main study design (reproduced with permission from OECD 2013b)
Participants in the computer-based condition had to pass two short tests taking about 5 min each (CBA Core Stages 1 and 2). Participants who failed this test assessing basic Information and Communication Technology (ICT) skills (CBA Core Stage 1) were rerouted to the paper-based core section. Participants who succeeded in the first task but failed the following cognitive pre-test (CBA Core Stage 2) with three literacy and three numeracy items subsequently took only the paper-based reading components. Participants who successfully completed both pre-tests were randomly assigned to one of three possible types of computer-based cognitive assessments, each consisting of two modules (see grey boxes in Fig. 2) that took about 50 min in total: (i) 50% took a random combination of the literacy (Lit) and numeracy (Num) items (Lit-Num or Num-Lit), (ii) 33% were assigned randomly to either the literacy or numeracy items plus one of the two sets of problem solving (PS) items (Lit-PS2, Num-PS2, PS1-Lit or PS1-Num), and (iii) 17% completed only the two sets of problem solving items (PS1-PS2). Only those participants who took the CBA modules were included in the present study.
Literacy and numeracy were assessed using a two-stage adaptive test design. That is, each module included two stages, each of which consisted of various testlets differing in difficulty (three testlets at Stage 1, four testlets at Stage 2). The selection of the testlet for Stage 1 depended on participants' scores in the short cognitive pre-test (three literacy and three numeracy items), language, and educational attainment; for Stage 2, the score obtained in Stage 1 was used as an additional selection criterion.
Overall, 49 literacy items, 49 numeracy items and 14 problem solving items were administered in the assessment. A test-taker completing the adaptive literacy and numeracy modules was required to respond to 20 items (9 items in Stage 1 and 11 items in Stage 2). Each of the two problem solving modules (PS1, PS2) consisted of 7 items. Thus, test-takers completed a total of 40 items (Lit, Num), 27 items (PS1 or PS2 combined with Lit or Num), or 14 items (PS1, PS2) over the course of the cognitive assessment. In the present study, all possible combinations of item sets across domains within the CBA were included in the analysis.
It follows from the test design that the literacy and numeracy items were administered at two positions (Module 1 and Module 2) in balanced order, with the order of items fixed within each module; for problem solving, the order of content was also fixed across modules. Thus, random equivalent groups completed the literacy items and numeracy items in the first or the second part of the cognitive assessment, respectively, while there was only one order for the two sets of problem solving items.
Indicator of test-taking disengagement P+ > 0%
In line with previous research (e.g., Lee and Jia 2014; Wise 2006), we used item-specific response time thresholds to distinguish between engaged and disengaged responses. The proportion correct greater than zero (P+ > 0%) method was applied to obtain the thresholds (Goldhammer et al. 2016). This is an adapted version of Lee and Jia's (2014) method of conditioning the proportion correct (P+) on response time, determining the threshold as the shortest response time at which the conditional P+ first exceeds chance level. For the PIAAC items, we assumed that the chance level of obtaining a correct response is zero. This seems justifiable because almost none of the response formats allow for rapid correct guesses. There are only five multiple-choice-like numeracy items; all other items require participants to enter numbers or interact with the stimulus, for instance, by highlighting text or clicking on a graphical element. The assumption that the rate of rapid correct guesses is negligible is supported by Goldhammer et al. (2016), who found that the average proportion correct for response behavior taking less than 3 s was 1% for literacy, 4% for numeracy, and 0% for problem solving.
To determine the P+ > 0% threshold, the proportion correct conditional on the response time was computed item by item at one-second intervals. The threshold was identified as the shortest response time associated with a proportion correct greater than zero. Figure 3 shows the proportion correct conditional on response time for a sample item. When the response time reaches the threshold of 6 s, the probability of success becomes greater than zero (P+ > 0 threshold). Thus, response behavior taking 6 s or longer was classified as engaged, while response behavior taking less than 6 s was regarded as disengaged.
Proportion correct conditional on response time (total time on task, truncated at the 95th percentile) for the sample literacy item C313413, completed in Module 2
To classify response behavior as engaged or disengaged, all items visited by the test-taker and for which response times were available were considered. Whether a response was given or not (omission) was not relevant for the classification; instead, we sought to determine how engaged the test-taker was as reflected by the time spent on the item. Thus, omissions with a response time below the threshold were classified as disengaged, while those with a response time above the threshold were classified as engaged.
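As an illustration of this procedure, the following is a minimal R sketch (not the authors' published syntax; the input vectors and the exact binning convention are assumptions): it bins the response times of one item into one-second intervals, computes the proportion correct per interval, takes the lower bound of the first interval with a non-zero proportion correct as the threshold, and classifies responses below the threshold as disengaged.

```r
# Minimal sketch of the P+ > 0% threshold for a single item (hypothetical inputs):
# 'rt' = response times in seconds, 'correct' = scored responses (1 = correct, 0 = otherwise).
p_plus_threshold <- function(rt, correct) {
  upper  <- seq(1, ceiling(max(rt)), by = 1)            # one-second intervals
  bins   <- cut(rt, breaks = c(0, upper), include.lowest = TRUE)
  p_plus <- tapply(correct, bins, mean)                 # proportion correct per interval
  first  <- which(!is.na(p_plus) & p_plus > 0)[1]       # first interval with P+ > 0
  upper[first] - 1                                      # threshold in seconds (lower bound)
}

# Classify response behaviour: 1 = disengaged (below threshold), 0 = engaged
classify_disengaged <- function(rt, threshold) as.integer(rt < threshold)
```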
Empirical properties
Since item position (Module 1 vs. Module 2) could have an impact on the location of the response time threshold (e.g., due to fatigue effects), the response time thresholds for literacy and numeracy items were determined by module (Goldhammer et al. 2016). While there were some differences between the two modules, there was high consistency overall, as indicated by the cross-module correlation of r = 0.92 (p < 0.001) for literacy and r = 0.63 (p < 0.001) for numeracy. The difference in average response time thresholds was small (Module 1 vs. Module 2: 6.51 s vs. 6.98 s for literacy, 2.27 s vs. 2.16 s for numeracy). For literacy, thresholds varied between 1 and 26 s for Module 1 and between 1 and 33 s for Module 2; for numeracy, they varied between 1 and 8 s for Module 1 and 0 and 8 s for Module 2; for problem solving, between 3 and 76 s.
As shown by Goldhammer et al. (2016), the proportion correct for some items was greater than 0% for all empirical response time intervals. In this case, all responses were considered to be engaged responses. This concerned several numeracy items (Module 1: 28.57%, Module 2: 24.49%), a few literacy items (Module 1: 4.17%, Module 2: 2.04%) and no problem solving items.
Goldhammer et al. (2016) investigated whether the P+ > 0% indicator of disengaged response behavior can be considered valid, which requires the indicator to identify responses with no chance of success (e.g., due to rapid guessing or because the time spent on the item was below the minimum allowing for success above chance level). Following the procedures described by Lee and Jia (2014), they determined the average proportion correct for engaged versus disengaged response behavior across items (and additionally by item) for each construct, as well as the correlations between score group and proportion correct for engaged and disengaged response behavior by item. For example, the proportion correct for engaged responses in literacy, numeracy, and problem solving was 0.56, 0.63, and 0.43, respectively, compared to 0 for disengaged responses, by definition, according to the P+ > 0% method. Compared to other methods specifying constant thresholds of 5000 or 3000 ms, or item-specific thresholds obtained by visually inspecting the (bimodal) response time distribution, the P+ > 0% method resulted in the greatest difference in proportion correct between engaged and disengaged responses, suggesting that the method separates disengaged and engaged responses very well. Taken together, these validity checks indicate that the P+ > 0% indicator can validly be interpreted as a measure of test-taking disengagement.
Differences in test-taking engagement were explained by the following person-level variables: gender ("male" and "female"; PUF variable GENDER_R); age group ("Aged 24 or less", "Aged 25–34", "Aged 35–54", and "Aged 55 or more"; PUF variable AGEG10LFS, collapsing the age groups "Aged 35–44" and "Aged 45–54" into "Aged 35–54"); educational attainment ("Less than high school", "High school", and "Above high school"; PUF variable B_Q01a_T); native language ("Test language same as native language", and "Test language not same as native language"; PUF variable NATIVELANG); as well as score on the cognitive pre-test (PUF variable CBA_CORE_STAGE2_SCORE) as an indicator of cognitive skill. Furthermore, to investigate a potential position effect on test taking engagement, we included a variable indicating whether literacy and numeracy items were completed in Module 2 ("LIT", and "NUM", PUF Variable CBAMOD2). Finally, we used RP67 difficulties (i.e., items are located on the scale where they have a 67% probability of being completed successfully in the target population) as provided in the PIAAC technical report (OECD 2013b) as an item variable. Item difficulties were rescaled (divided by 100) to facilitate model estimation. The (interacting) variables cognitive pre-test score and item difficulty were centered when testing the explanatory item response models to ease the interpretation of effects.
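As a brief sketch of this variable preparation (hypothetical column names in a long-format data frame 'dat'; not the original code), the RP67 difficulties are rescaled and the two interacting covariates are grand-mean centered:

```r
dat$difficulty   <- dat$rp67 / 100                                        # rescale RP67 item difficulty
dat$difficulty_c <- dat$difficulty - mean(dat$difficulty, na.rm = TRUE)   # center item difficulty
dat$skill_c      <- dat$pretest    - mean(dat$pretest,    na.rm = TRUE)   # center cognitive pre-test score
```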
To address the first research goal, we tested a 1-parameter logistic (1PL) item response model for each construct with dichotomous disengagement indicators (0 = engaged, 1 = disengaged) as item response variables. To judge the item fit, we inspected information-weighted (Infit) and unweighted (Outfit) mean squared residual-based item fit statistics. As a rule of thumb, an Infit and Outfit between 0.5 and 1.5 can be considered acceptable (de Ayala 2009; Wright and Linacre 1994). The Infit is sensitive to unexpected responses in items located close to the person parameter, while the Outfit is sensitive to unexpected responses in items located away from the person parameter (i.e., very difficult or easy items for a person). Items with a value smaller than the lower bound of 0.5 are typically not excluded since this indicates overfit (i.e., observations can be better predicted by the model than expected). In addition, we visually inspected whether the model-expected item characteristic curve fit the (non-parametric) observed item characteristic curve. Specifically, we checked whether the observed curves show humps, non-monotonicity or an unexpected asymptote (Douglas and Cohen 2001).
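A minimal sketch of this scaling step with the TAM package is given below (the person-by-item matrix of binary disengagement indicators, here called 'disengagement', and the exact call are assumptions, not the authors' published syntax):

```r
library(TAM)
mod <- tam.mml(resp = disengagement)   # 1PL model; discriminations are fixed to one
fit <- tam.fit(mod)                    # Infit/Outfit mean-square item fit statistics
# Flag items outside the 0.5-1.5 rule of thumb
subset(fit$itemfit, Infit < 0.5 | Infit > 1.5 | Outfit < 0.5 | Outfit > 1.5)
```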
For literacy and numeracy, a latent regression model for the person parameter was incorporated to take the adaptive two-stage test design into account. Including the background variables that served as selection criteria in the adaptive design (i.e., educational attainment, language, score in the cognitive pre-test) makes the assumption that the not-administered items were missing at random (MAR) more justifiable. If the propensity for disengaged responses were related to one of the selection criteria but the latter were not included in the model, the MAR assumption would be violated and the parameter estimates would be biased. For literacy and numeracy, the Stage 1 score was additionally used to select the testlet at Stage 2. However, we did not include the Stage 1 score in the model because it was highly correlated with the other selection criteria and we wanted to keep the background model constant across the three domains.
Furthermore, we also sought to test a three-dimensional 1PL model with between-item multidimensionality to explore the correlational structure of construct-specific latent engagement variables for literacy, numeracy, and problem solving. However, this model exhibited estimation problems and did not converge using numerical integration or quasi Monte Carlo integration, probably due to its complexity and the low proportion of disengaged responses for many items. To recover the latent disengagement correlations, we used plausible values from the three uni-dimensional models. In a first step, a uni-dimensional model was tested for each domain, and expected-a-posteriori (EAP) estimates were obtained as person parameter estimates. In a second step, a uni-dimensional model was tested for each domain with the EAP estimates of the other two domains as predictors in the latent regression model. Ten plausible values were drawn for each person on the basis of these domain-specific measurement models. Afterwards, correlations among domains were computed for each of the ten plausible values, and these were averaged in a final step.
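The following hedged sketch illustrates the plausible-value workaround for one domain (object and variable names are assumptions): the uni-dimensional model for literacy includes the EAP estimates of the other two domains as latent-regression predictors, and ten plausible values are drawn from it.

```r
# 'eap_num' and 'eap_ps' are hypothetical EAP person estimates from the
# uni-dimensional numeracy and problem solving models (step 1).
Y_lit   <- data.frame(eap_num, eap_ps)
mod_lit <- tam.mml(resp = disengagement_lit, Y = Y_lit)   # latent regression on the EAPs
pv_lit  <- tam.pv(mod_lit, nplausible = 10)               # draw ten plausible values
# Cross-domain correlations are then computed per plausible value and averaged.
```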
To test the joint effects of person and item characteristics on disengagement, we applied explanatory item response models using the generalized linear mixed modelling (GLMM) framework (De Boeck et al. 2011; Doran et al. 2007). The model explains the logit for the probability of making a disengaged response for person p and item i with the effects of \(K\) person covariates and \(H\) item covariates as well as their interaction:
$$\operatorname{logit}\left(P(Y_{pi} = 1)\right) = \beta_{0} + \sum_{k = 1}^{K} \gamma_{k} X_{p,k} + b_{0p} + \sum_{h = 1}^{H} \gamma_{h} Z_{i,h} + b_{0i} + \omega Z_{i,1} X_{p,1}$$
where \(\beta_{0}\) denotes the fixed intercept, \(\gamma_{k}\) the fixed effect of person covariate \(X_{p,k}\), \(\gamma_{h}\) the fixed effect of item covariate \(Z_{i,h}\), \(\omega\) the fixed interaction effect of an item covariate \(Z_{i,1}\) (i.e., item difficulty) and a person covariate \(X_{p,1}\) (i.e., cognitive skill), \(b_{0p}\) the (residual) random person intercept (person disengagement), and \(b_{0i}\) the (residual) random item intercept (item easiness with regard to disengagement). A normal distribution was assumed for the random item and person intercepts, with a mean of zero, \(b_{person} \sim N\left( {0, Var\left( {b_{0p} } \right)} \right)\), and \(b_{item} \sim N\left( {0, Var\left( {b_{0i} } \right)} \right)\).
To address the second research aim, we first tested Model 1 with person characteristics \(X_{p,k}\) as predictors, that is, educational attainment, language, gender, age group, and cognitive skill. We then tested Model 2, which included only item characteristics \(Z_{i,h}\) as predictors, that is, module position (for numeracy and literacy only) and item difficulty. Finally, the full Model 3 was tested as given in Eq. (1) with all person and item characteristics and the interaction of item difficulty with cognitive skill. We also tested a baseline model, Model 0, to investigate the extent to which the predictor variables reduce the variances of the random person and item intercepts in order to determine their explanatory power (effect size). Note that only Model 1 and the final Model 3 account for the adaptive two-stage test design by including the adaptive selection criteria.
All models were estimated in the R environment (R Core Team 2016). The TAM package (Kiefer et al. 2016) was used for scaling and dimensionality analysis; it performs a marginal maximum likelihood estimation of the model parameters. The 1PL models tested assume uni-dimensionality and equal discriminations across items (by constraining them to one). The glmer function from the lme4 package (Bates et al. 2015) was used to test explanatory item response models (GLMMs). The maximum likelihood estimation method in lme4 utilizes a Laplace approximation.
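As an illustration, the full Model 3 of Eq. (1) might be specified in lme4 roughly as follows (a sketch assuming a long-format data frame 'dat' with one row per person-item observation and hypothetical variable names; the module-position term applies to literacy and numeracy only):

```r
library(lme4)
m3 <- glmer(
  disengaged ~ educ + lang + gender + agegroup + skill_c +   # person covariates X_pk
               module2 + difficulty_c +                      # item covariates Z_ih
               skill_c:difficulty_c +                        # cognitive skill x item difficulty
               (1 | person) + (1 | item),                    # random person and item intercepts
  data = dat, family = binomial("logit")
)
summary(m3)
```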
Measurement models for test-taking engagement
Before testing measurement models, we computed the proportion of disengaged responses by item. The following analyses only include those items that exhibited variation in disengagement. Forty-eight of 49 items for literacy were included, 37 of 49 items for numeracy, and 14 of 14 items for problem solving. The proportion of disengaged responses in the remaining items varied between 0.12 and 18.67% for literacy, between 0.04 and 5.51% for numeracy, and between 0.13 and 27.77% for problem solving.
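These item-level rates can be read directly off the indicator matrix; a one-line sketch (the matrix name is an assumption):

```r
rates <- colMeans(disengagement, na.rm = TRUE) * 100   # percentage of disengaged responses per item
range(rates[rates > 0])                                # range across items with any disengagement
```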
Testing a 1PL item response model for the literacy disengagement indicators revealed an acceptable item fit for almost all items. The Infit statistic was between 0.68 and 1.47 for all items, while all Outfit values were between 0.04 and 1.40, with the exception of one item at 10.02. This item was dropped for the following analyses. Comparing the expected and observed item characteristic curves showed that the model fit the data quite well (see sample items in Fig. 4, top).
Model-expected and observed item characteristic curves of two selected items by domain (top: literacy, middle: numeracy, bottom: problem solving)
For numeracy, the Infit statistic obtained from the unidimensional 1PL model was between 0.69 and 1.37 for all items. The Outfit statistic was between 0.01 and 1.29 for all items except for two with values of 2.23 and 14.67. These two items were excluded from the following analyses. Comparing the expected and observed item characteristic curves indicated that the model fit the data acceptably (see sample items in Fig. 4, middle). Observations were only available for the lower part of the item characteristic curve for most items, indicating that items were very difficult and were completed by most participants in the mode of engagement.
Finally, the 1PL item response model for the problem solving disengagement indicators also had an acceptable fit. The Infit statistic varied between 0.76 and 1.29, and the Outfit statistic between 0.07 and 1.53. No items were dropped. Comparing expected and observed item characteristic curves also suggested that the model fit the data (see sample items in Fig. 4, bottom).
Using plausible values, the average correlations of literacy disengagement with numeracy and problem solving disengagement were 0.43 and 0.50, respectively, whereas the correlation between numeracy and problem solving disengagement was 0.46. These medium-sized correlations suggested that test-taking disengagement represents a domain-specific construct.
Explaining differences in test-taking disengagement
Literacy disengagement
The results for Model 1, testing the effects of person variables on test-taking disengagement in literacy, are presented in Table 2. Higher educational attainment was associated with lower disengagement. In addition, the main effect of cognitive skill was negative and significant, suggesting that stronger test-takers exhibit a lower level of disengagement. Disengagement was higher among test-takers whose native language was a language other than the test language. There were no significant effects of gender or age group. Model 2 reveals the relation between item properties and disengagement. Items completed later, that is, in Module 2, were associated with higher disengagement. Item difficulty also had a positive effect on disengagement. The full Model 3 reproduces the effects of the person and item variables. Contrary to expectations, the effect of item difficulty was not significantly attenuated among strong test-takers, as indicated by the non-significant negative interaction between item difficulty and cognitive skill.
Table 2 Explanation of test-taking disengagement in literacy (N = 14,039)
The decrease in variance for the random person intercept, \(Var\left( {b_{0p} } \right)\), from Model 0 to Model 3 was 21.32%, while it was 31.11% for the random item intercept, \(Var\left( {b_{0i} } \right)\). Thus, the predictors explained a substantial portion of the variance in literacy disengagement.
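This effect-size measure is simply the proportional reduction in the random-intercept variances from the baseline to the full model; a brief sketch (assuming fitted glmer objects m0 and m3 for Model 0 and Model 3):

```r
vc0 <- as.data.frame(VarCorr(m0))    # random-intercept variances under Model 0
vc3 <- as.data.frame(VarCorr(m3))    # random-intercept variances under Model 3
(vc0$vcov - vc3$vcov) / vc0$vcov     # proportional reduction for the person and item intercepts
```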
Numeracy disengagement
Table 3 presents the results for Model 1, testing the relation between person characteristics and test-taking disengagement in numeracy. The highest level of educational attainment was associated with lower disengagement, while this was not the case for a medium educational level. Disengagement was greater among participants for whom the test language was not their native language. Cognitive skill exhibited a significant negative effect. Once again, there were no significant effects of gender or age group on disengagement. Model 2 provided insights into the relation between item properties and disengagement. When items were completed in Module 2, disengagement was significantly higher than when they were completed in Module 1. Item difficulty had a significant positive effect on disengagement. Finally, Model 3 exhibited person and item variable effects very similar to those from Model 1 and Model 2. Most importantly, Model 3 revealed that item difficulty interacted significantly with cognitive skill, with the effect of item difficulty on disengagement smaller among strong test-takers than poor test-takers.
Table 3 Explanation of test-taking disengagement in numeracy (N = 13,947)
The variance in the random person intercept, \(Var\left( {b_{0p} } \right)\), decreased by 11.21% from Model 0 to Model 3, while the variance in the random item intercept, \(Var\left( {b_{0i} } \right)\), decreased by 26.09%. Thus, the amount of variance explained by person variables was only about half as much for numeracy compared to literacy.
Problem solving disengagement
The results of Model 1 (see Table 4) revealed that participants with higher educational attainment showed lower test-taking disengagement in problem solving. Test-takers with higher scores on the cognitive pre-test were less disengaged when completing the problem solving test. Disengagement was higher among participants whose native language was not the same as the test language and who were more than 24 years old. There was a particularly strong increase for the oldest group, participants aged 55 or above. However, there was no significant effect of gender. Model 2 showed that item difficulty had a positive effect on disengagement. These findings were reflected again in the full Model 3. This model additionally revealed that the positive effect of item difficulty was smaller among strong test-takers as indicated by the significant negative interaction between item difficulty and cognitive skill.
Table 4 Explanation of test-taking disengagement in problem solving (N = 10,367)
The variance of the random person intercept, \(Var\left( {b_{0p} } \right)\), was much lower for problem solving than for literacy and numeracy. Unexpectedly, this variance component did not decrease from Model 0 to Model 3 even though the person-level predictors exhibited significant effects. However, as expected, the variance of the random item intercept, \(Var\left( {b_{0i} } \right)\), fell substantially, by 32.50%.
The overall goal of the present study was to investigate the conditioning factors of disengaged responses as observed in the PIAAC domains of literacy, numeracy, and problem solving. For this purpose, binary item disengagement indicators were defined for the Canadian sample by means of response time thresholds and subjected to an item response analysis. The results showed that disengagement indicators define a latent dimension by domain. Furthermore, individual and item differences could be explained substantially by the test-taker's educational attainment, language, and cognitive skill level, and by the item's difficulty and position.
Gender did not exhibit any effect on disengagement. Previous studies reporting gender effects are based on more homogeneous and younger samples than the PIAAC, such as university students (Setzer et al. 2013). Thus, it would be interesting to explore whether the gender effect depends on other variables such as age or educational level. Interestingly, age showed differential effects on disengagement, that is, it had a significant effect for problem solving, but not for literacy or numeracy, supporting the assumption that disengagement represents a domain-specific construct. The increased disengagement of older test-takers on items assessing problem solving in technology-rich environments may be related to their lower levels of ICT experience and skills (OECD 2013a).
Applying item response models to the item disengagement indicators was challenging because the response variation was very low for many items (i.e., very low rate of disengagement and very high item difficulty). As a result, model estimations including all three domains simultaneously had serious problems and did not converge. This points to the need for an alternative measurement model for this kind of data. One option would be to use a (multi-dimensional) latent class model (Bartolucci 2007) distinguishing between engaged and disengaged respondents by means of a categorical latent person variable measured by binary disengagement indicators.
Notably, the variance in the random person and item intercepts varied across domains (see Tables 2, 3, 4). There was a much greater variation in the person intercept (individual disengagement) for literacy and numeracy than for problem solving, while the pattern was reversed for the variance in the item intercepts (item difficulty regarding disengagement). The large variance in individual disengagement for literacy and numeracy may suggest that the latent disengagement variable represents two groups of test-takers: those who are mostly engaged (by far the majority of the sample) and those who are mostly disengaged. The lower variance in individual disengagement for problem solving suggests that test-takers were less extreme in being engaged or disengaged. How this relates to characteristics of the problem solving assessment requires further investigation; for instance, more diverse and less familiar kinds of items such as multiple simulated software applications might re-engage unmotivated test-takers.
Following expectancy-value theory, we assumed that test-takers would make disengaged responses depending on their (perceived) cognitive skill and (perceived) item difficulty, which together determine the expected task success. The obtained findings support this hypothesis of informed disengagement. Specifically, the negative interaction between cognitive skill and item difficulty suggests that relative item difficulty helps determine disengagement. That is, poor test-takers encountering more difficult items are more likely to become disengaged than strong test-takers. Interpreting the results in this way requires test-takers to be able to evaluate the difficulty of an item relatively quickly, that is, below the response time threshold. This raises the question of whether the interpretation of informed disengagement is compatible with defining disengaged responses as relatively quick (non-)responses. To be sure, there may empirically be a portion of test-takers who simply rush through the test without any strategic reflection about their probability of success. However, we define the threshold as the shortest response time where the conditional probability of success first exceeds chance level. This means that it may take (almost) as much time to try to solve an item before finally judging it to be too difficult as it does to complete it correctly. Further research is needed to justify the interpretation of informed disengagement. For instance, future studies could conduct cognitive interviews after administering a test to a (sub)sample of test-takers to learn more about the decision processes resulting in disengagement. From a test-taker's perspective, informed disengagement can be regarded as an efficient test-taking strategy as long as the test-taker's perceptions of his or her ability and item difficulty are correct. Modelling approaches for intentional omissions (Mislevy and Wu 1996), that is, models of the joint distribution of item response and disengagement, can be applied to investigate the relations between the model-predicted correctness of an item response and disengaged responding. If the relationship is strong, disengaged responses carry information about the competence level to be estimated.
The proposed basic model of test-taking engagement (see Fig. 1) assumes that expectancy and value are determinants of test-taking engagement. However, as already pointed out by Eccles and Wigfield (2002), the expectancy-value approach needs to be extended by integrating concepts of action regulation, particularly volition, which describes action execution more comprehensively by assuming action phases and related volitional processes (Gollwitzer 1996). Specifically, a mind-set that supports efficient means of self-control might be helpful to prevent other intentions from distracting one from the task at hand (Kuhl 2000). From this, it can be inferred that engaged test-taking requires not only high expectancy but also a high level of self-control, while informed disengagement is the result of low expectancy and a high level of self-control. Low levels of self-control are associated with aberrant test-taking behavior.
A potential limitation of using response time thresholds to identify disengaged responses is that disengaged responses may also be associated with long response times, for instance, if the test-taker pretends to be engaged with task completion in the test situation without making any real effort. Such disengaged responses cannot be discovered using the current approach. Therefore, an interesting further development would be to extend the item-level measure of test-taking disengagement to a sequential measure considering sequences of items. Test-takers pretending to take the test seriously would not rapidly respond to items but may distribute response time erratically across items without regard for the items' difficulty or time intensity. Thus, a pattern of repeated deviations of the observed response time from the expected response time given the person's test-taking speed and the items' time intensity (see van der Linden and Guo 2008) could indicate disengaged responding. Implementing this approach would require estimating individual latent speeds using observed response times as indicators and item time intensity parameters obtained from a sample of engaged test-takers.
A more fundamental concern about binary disengagement indicators is that continuous response time information is transformed into categories (engaged vs. disengaged) even though conceptually disengagement for an item might best be regarded as a continuous phenomenon. For instance, a person is probably less engaged when their response time is only slightly above the threshold than when their response time is clearly above it. Thus, an interesting future research direction would be to incorporate observed response times into the modelling of disengagement and define a continuous indicator for engagement rather than a single cut-off. For instance, Fox and Marianti (2017) recently proposed person-fit statistics for joint models for item responses and response times to detect aberrant response behavior (e.g., guessing).
Another potential limitation of our study refers to the explanatory models. We implicitly assumed measurement invariance in the levels of grouping variables (e.g., male vs. female). If this assumption were to be violated because of differential item functioning, the group comparisons could be biased. However, we decided against testing this assumption given the low rate of disengagement, which would be even smaller if the data set were to be split into multiple groups.
An important future research direction is to investigate the impact of considering disengaged responses when modelling individual and group differences. For instance, response behavior classified as disengaged in PIAAC could be coded as not attempted/not reached regardless of whether there was a response or a non-response (omission). Criteria of interest are the reliability of the proficiency scale and the validity of test score interpretation. Regarding the latter, investigating the impact of dealing with disengagement on inferences about country differences in competencies would be of utmost interest for international large-scale assessments such as PIAAC. This would shed light on the question of whether or not the amount of observed disengagement is a severe problem given the intended use of test scores.
It should be emphasized that the design of the PIAAC study does not allow for causal inferences, particularly in terms of person-level predictors. For instance, with regard to the effect of the cognitive skill variable, it may be that lower cognitive skill causes higher disengagement, since test-takers expect that they will not be able to successfully complete the item anyway. However, higher levels of disengagement could also give rise to lower scores on the test for cognitive skills. The findings for item-level predictors are more conclusive, and this is particularly so for position, as this property was varied experimentally by the random assignment of test-takers to modules.
The present study used a model-based approach to provide empirical evidence that disengaged responding in the large-scale assessment PIAAC can be explained by individual and item differences. Thus, whether test-takers in the Canadian sample were more or less disengaged could be explained by educational attainment, native language, and cognitive skill level, as well as by age for problem solving only. In the same vein, items are more or less associated with disengaged responses depending on item difficulty and the position of the module in which the item can be found. The negative effect of cognitive skill, the positive effect of item difficulty, and their negative interaction effect support the assumption that disengagement is the outcome of individual expectations about success (informed disengagement).
Asseburg, R., & Frey, A. (2013). Too hard, too easy, or just right? The relationship between effort or boredom and ability-difficulty fit. Psychological Test and Assessment Modeling, 55(1), 92–104.
Bartolucci, F. (2007). A class of multidimensional IRT models for testing unidimensionality and clustering items. Psychometrika, 72(2), 141. https://doi.org/10.1007/s11336-005-1376-9.
Bates, D., Maechler, M., Bolker, B. M., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01.
Braun, H., Kirsch, I., & Yamamoto, K. (2011). An experimental study of the effects of monetary incentives on performance on the 12th-grade NAEP Reading assessment. Teachers College Record, 113(11), 2309–2344.
Brown, A. R., & Finney, S. J. (2011). Low-stakes testing and psychological reactance: Using the hong psychological reactance scale to better understand compliant and non-compliant examinees. International Journal of Testing, 11(3), 248–270. https://doi.org/10.1080/15305058.2011.570884.
Cole, J. S., Bergin, D. A., & Whittaker, T. A. (2008). Predicting student achievement for low stakes tests with effort and task value. Contemporary Educational Psychology, 33(4), 609–624. https://doi.org/10.1016/j.cedpsych.2007.10.002.
Cronbach, L. J. (1970). Essentials of psychological testing. New York: Harper & Row.
de Ayala, R. J. (2009). The theory and practice of item response theory. New York: Guilford Press.
De Boeck, P., Bakker, M., Zwitser, R., Nivard, M., Hofman, A., Tuerlinckx, F., et al. (2011). The estimation of item response models with the lmer function from the lme4 package in R. Journal of Statistical Software, 39(12), 1–28. https://doi.org/10.18637/jss.v039.i12.
Debeer, D., Buchholz, J., Hartig, J., & Janssen, R. (2014). Student, school, and country differences in sustained test-taking effort in the 2009 PISA reading assessment. Journal of Educational and Behavioral Statistics, 39(6), 502–523. https://doi.org/10.3102/1076998614558485.
DeMars, C. E., Bashkov, B. M., & Socha, A. B. (2013). The role of gender in test-taking motivation under low-stakes conditions. Research and Practice in Assessment, 8, 69–82.
Doran, H., Bates, D., Bliese, P., & Dowling, M. (2007). Estimating the multilevel rasch model: With the lme4 package. Journal of Statistical Software, 20, 1–18. https://doi.org/10.18637/jss.v020.i02.
Douglas, J., & Cohen, A. (2001). Nonparametric item response function estimation for assessing parametric model fit. Applied Psychological Measurement, 25(3), 234–243. https://doi.org/10.1177/01466210122032046.
Eccles (Parsons), J. S., Adler, T. F., Futterman, R., Goff, S. B., Kaczala, C. M., Meece, J. L., et al. (1983). Expectancies, values, and academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motives: Psychological and sociological approaches (pp. 75–146). San Francisco: W. H. Freeman.
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology, 53(1), 109–132. https://doi.org/10.1146/annurev.psych.53.100901.135153.
Finn, B. (2015). Measuring motivation in low-stakes assessments. ETS Research Report Series, 2015(2), 1–17. http://doi.org/10.1002/ets2.12067.
Fox, J.-P., & Marianti, S. (2017). Person-Fit statistics for joint models for accuracy and speed. Journal of Educational Measurement, 54(2), 243–262. https://doi.org/10.1111/jedm.12143.
Goldhammer, F., Martens, T., Christoph, G., & Lüdtke, O. (2016). Test-taking engagement in PIAAC (OECD Education Working Papers, Vol. 133). Paris: OECD Publishing.
Goldhammer, F., Naumann, J., Stelter, A., Tóth, K., Rölke, H., & Klieme, E. (2014). The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment. Journal of Educational Psychology, 106, 608–626. https://doi.org/10.1037/a0034716.
Gollwitzer, P. M. (1996). The Volitional Benefits of Planning. In P. M. Gollwitzer & J. A. Bargh (Eds.), The psychology of action. Linking cognition and motivation to behavior (pp. 287-312). New York, London: The Guilford Press.
Haladyna, T. M., & Downing, S. M. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17–27. https://doi.org/10.1111/j.1745-3992.2004.tb00149.x.
Holman, R., & Glas, C. A. W. (2005). Modelling non-ignorable missing-data mechanisms with item response theory models. British Journal of Mathematical and Statistical Psychology, 58(1), 1–17. https://doi.org/10.1348/000711005x47168.
Jakewerth, P. M., Stancavage, B. S., & Reed, E. D. (1999). An investigation of why students do not respond to questions. Palo Alto, CA.
Kiefer, T., Robitzsch, A., & Wu, M. (2016). TAM: Test analysis modules. R package version 1.99–6. Retrieved from http://CRAN.R-project.org/package=TAM.
Köhler, C., Pohl, S., & Carstensen, C. (2015). Investigating mechanisms for missing responses in competence tests. Psychological Test and Assessment Modeling, 57(4), 499–522.
Kong, X. J., Wise, S. L., & Bhola, D. S. (2007). Setting the response time threshold parameter to differentiate solution behavior from rapid-guessing behavior. Educational and Psychological Measurement, 67(4), 606–619. https://doi.org/10.1177/0013164406294779.
Kuhl, J. (2000). A functional-design approach to motivation and self-regulation: The dynamics of personality systems interactions. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 111–169). San Diego: Academic Press.
Lau, A. R., Swerdzewski, P. J., Jones, A. T., Anderson, R. D., & Markle, R. E. (2009). Proctors matter: strategies for increasing examinee effort on general education program assessments. The Journal of General Education, 58, 196–217. https://doi.org/10.1353/jge.0.0045.
Lee, Y.-H., & Jia, Y. (2014). Using response time to investigate students' test-taking behaviors in a NAEP computer-based study. Large-scale Assessments in Education, 2(1), 1–24. https://doi.org/10.1186/s40536-014-0008-1.
Ma, L., Wise, S. L., Thum, Y. M., & Kingsbury, G. (2011). Detecting response time threshold under the computer adaptive testing environment. Paper presented at the annual meeting of the National Council of Measurement in Education, New Orleans.
Marsh, H. W., & Craven, R. G. (2006). Reciprocal effects of self-concept and performance from a multidimensional perspective: Beyond seductive pleasure and unidimensional perspectives. Perspectives on Psychological Science, 1(2), 133–163. https://doi.org/10.1111/j.1745-6916.2006.00010.x.
Meyer, J. P. (2010). A mixture rasch model with item response time components. Applied Psychological Measurement, 34(7), 521–538. https://doi.org/10.1177/0146621609355451.
Mislevy, R. J., & Wu, P.-K. (1996). Missing responses and IRT ability estimation: Omits, choice, time limits, and adaptive testing (Vol. RR96-30). Princeton: Educational Testing Service.
OECD. (2013a). OECD skills outlook 2013: First results from the survey of adult skills. Paris: OECD Publishing.
Penk, C., Pöhlmann, C., & Roppelt, A. (2014). The role of test-taking motivation for students' performance in low-stakes assessments: an investigation of school-track-specific differences. Large-scale Assessments in Education, 2(1), 5. https://doi.org/10.1186/s40536-014-0005-4.
Pohl, S., Gräfe, L., & Rose, N. (2013). Dealing with omitted and not-reached items in competence tests: Evaluating approaches accounting for missing responses in item response theory models. Educational and Psychological Measurement. https://doi.org/10.1177/0013164413504926.
Rios, J. A., Guo, H., Mao, L., & Liu, O. L. (2017). Evaluating the impact of careless responding on aggregated-scores: to filter unmotivated examinees or not? International Journal of Testing, 17, 74–104. http://doi.org/10.1080/15305058.2016.1231193.
Rost, J. (2004). Lehrbuch Testtheorie—Testkonstruktion [Textbook Test theory—Test construction] (2nd ed.). Bern: Huber.
Schnipke, D. L., & Scrams, D. J. (1997). Modeling item response times with a two-state mixture model: A new method of measuring speededness. Journal of Educational Measurement, 34(3), 213–232. https://doi.org/10.1111/j.1745-3984.1997.tb00516.x.
Setzer, J. C., Wise, S. L., van den Heuvel, J. R., & Ling, G. (2013). An investigation of examinee test-taking effort on a large-scale assessment. Applied Measurement in Education, 26(1), 34–49. https://doi.org/10.1080/08957347.2013.739453.
Stocking, M. L., Eignor, D. R., & Cook, L. L. (1988). Factors affecting the sample invariant properties of linear and curvilinear observed- and true-score equating procedures. ETS Research Report Series, 1988(2), i–71. http://doi.org/10.1002/j.2330-8516.1988.tb00297.x.
Sundre, D. L., & Kitsantas, A. (2004). An exploration of the psychology of the examinee: Can examinee self-regulation and test-taking motivation predict consequential and non-consequential test performance? Contemporary Educational Psychology, 29(1), 6–26. https://doi.org/10.1016/S0361-476X(02)00063-2.
R Core Team. (2016). R: A language and environment for statistical computing (Version 3.1.3). Vienna, Austria: R Foundation for Statistical Computing. Retrieved from http://www.R-project.org/.
Trautwein, U., Marsh, H. W., Nagengast, B., Lüdtke, O., Nagy, G., & Jonkmann, K. (2012). Probing for the multiplicative term in modern expectancy—value theory: A latent interaction modeling study. Journal of Educational Psychology, 104(3), 763. https://doi.org/10.1037/a0027470.
van der Linden, W. J., & Guo, F. (2008). Bayesian procedures for identifying aberrant response-time patterns in adaptive testing. Psychometrika, 73(3), 365–384. https://doi.org/10.1007/s11336-007-9046-8.
Wise, S. L. (2006). An investigation of the differential effort received by items on a low-stakes computer-based test. Applied Measurement in Education, 19(2), 95–114. https://doi.org/10.1207/s15324818ame1902_2.
Wise, S. L. (2009). Strategies for managing the problem of unmotivated examinees in low-stakes testing programs. The Journal of General Education, 58(3), 152–166.
Wise, S. L. (2015). Effort analysis: Individual score validation of achievement test data. Applied Measurement in Education, 28(3), 237–252. doi:10.1080/08957347.2015.1042155.
Wise, S. L. (2017). Rapid-guessing behavior: Its identification, interpretation, and implications. Educational Measurement: Issues and Practice. https://doi.org/10.1111/emip.12165.
Wise, S. L., & DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10, 1–17. https://doi.org/10.1207/s15326977ea1001_1.
Wise, S. L., & DeMars, C. E. (2006). An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement, 43(1), 19–38. https://doi.org/10.1111/j.1745-3984.2006.00002.x.
Wise, S. L., & Gao, L. (2017). A General Approach to Measuring Test-Taking Effort on Computer-Based Tests. Applied Measurement in Education, 30, 343–354. http://doi.org/10.1080/08957347.2017.1353992.
Wise, S. L., & Kong, X. J. (2005). Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education, 18(2), 163–183. https://doi.org/10.1207/s15324818ame1802_2.
Wise, S. L., & Ma, L. (2012). Setting response time thresholds for a CAT item pool: The normative threshold method. Paper presented at the annual meeting of the National Council on Measurement in Education, Vancouver, Canada.
Wolf, L. F., Smith, J. K., & Birnbaum, M. E. (1995). Consequence of performance, test, motivation, and mentally taxing items. Applied Measurement in Education, 8(4), 341–351. https://doi.org/10.1207/s15324818ame0804_4.
Wright, B. D., & Linacre, J. M. (1994). Reasonable mean-square fit values. Rasch Measurement Transactions, 8(3), 370.
Yamamoto, K., & Everson, H. (1997). Modeling the effects of test length and test time on parameter estimation using the HYBRID model. In J. Rost & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. 89–98). Münster: Waxman.
FG originated the idea for the study, conducted the analyses, and wrote most of the manuscript. OL contributed to the development of the data analysis strategy; he also revised and reworked the manuscript. TM contributed to the conceptual framing and reworked the manuscript. All authors read and approved the final manuscript.
We are grateful to three anonymous reviewers for their helpful remarks on an earlier version of this paper.
The dataset analyzed in this paper is based on the public use file for Canada from the first round of the Programme for the International Assessment of Adult Competencies (PIAAC). The public use file is available at the OECD website http://www.oecd.org/site/piaac/publicdataandanalysis.htm.
We use data from the PIAAC Survey of Adult Skills which adheres to ethics standards stated by the OECD; see PIAAC Technical Standards and Guidelines (June 2014), http://www.oecd.org/skills/piaac/PIAAC-NPM(2014_06)PIAAC_Technical_Standards_and_Guidelines.pdf.
We thank the Centre for International Student Assessment (ZIB) for financial support.
German Institute for International Educational Research (DIPF)/Centre for International Student Assessment (ZIB), Schloßstr. 29, 60486, Frankfurt/Main, Germany
Frank Goldhammer
Medical School Hamburg, Am Kaiserkai 1, 20457, Hamburg, Germany
Thomas Martens
IPN-Leibniz Institute for Science and Mathematics Education/Centre for International Student Assessment (ZIB), Olshausenstraße 62, 24118, Kiel, Germany
Oliver Lüdtke
Correspondence to Frank Goldhammer.
R syntax for computing the binary indicator of test-taking disengagement P+>0%
R syntax for estimating the generalized linear mixed models (GLMM)
Goldhammer, F., Martens, T. & Lüdtke, O. Conditioning factors of test-taking engagement in PIAAC: an exploratory IRT modelling approach considering person and item characteristics. Large-scale Assess Educ 5, 18 (2017) doi:10.1186/s40536-017-0051-9
Test-taking disengagement
Response time threshold
Explanatory item response modelling
Person effects
Item effects | CommonCrawl |
${{\mathit t}^{\,'}}$ (5/3)-quark/hadron mass limits in ${{\mathit p}}{{\overline{\mathit p}}}$ and ${{\mathit p}}{{\mathit p}}$ collisions
VALUE (GeV) | CL% | DOCUMENT ID | TECN | COMMENT
$> 1330$ | 95 | 1 SIRUNYAN 2019T | CMS | ${{\mathit t}_{{R}}^{\,'}{(5/3)}}$ $\rightarrow$ ${{\mathit t}}{{\mathit W}^{+}}$
 | | 1 SIRUNYAN 2019T | CMS | ${{\mathit t}_{{L}}^{\,'}{(5/3)}}$ $\rightarrow$ ${{\mathit t}}{{\mathit W}^{+}}$
$\bf{> 1350}$ | 95 | 2 AABOUD 2018AW | ATLS | ${{\mathit t}^{\,'}{(5/3)}}$ $\rightarrow$ ${{\mathit t}}{{\mathit W}^{+}}$
$>1190$ | 95 | 3 AABOUD 2018CE | ATLS | ${}\geq{}2{{\mathit \ell}}$ + $\not E_T$ + ${}\geq{}1{{\mathit b}}$ j
$> 990$ | 95 | 4 SIRUNYAN 2017J | CMS | ${{\mathit t}^{\,'}{(5/3)}}$ $\rightarrow$ ${{\mathit t}}{{\mathit W}^{+}}$
 | | 7 CHATRCHYAN 2014T | CMS | ${{\mathit t}^{\,'}{(5/3)}}$ $\rightarrow$ ${{\mathit t}}{{\mathit W}^{+}}$
1 SIRUNYAN 2019T based on 35.9 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 13 TeV. Signals are searched in the final states of ${{\mathit t}^{\,'}}$ pair production, with same-sign leptons (which come from a ${{\mathit t}^{\,'}}$ decay) or a single lepton (which comes from a ${{\mathit W}}$ out of 4${{\mathit W}}$ s), along with jets, and no excess over the SM expectation is found.
2 AABOUD 2018AW based on 36.1 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 13 TeV. Limit on ${{\mathit t}^{\,'}{(5/3)}}$ in pair production assuming its coupling to ${{\mathit W}}{{\mathit t}}$ is equal to one. Lepton-plus-jets final state is used, characterized by ${{\mathit \ell}}$ + $\not E_T$ + jets (${}\geq{}$1 ${{\mathit b}}$ -tagged).
3 AABOUD 2018CE based on 36.1 fb${}^{-1}$ of proton-proton data taken at $\sqrt {s }$ = 13 TeV. Events including a same-sign lepton pair are used. The limit is for the pair-produced vector-like ${{\mathit t}^{\,'}}$ . With single ${{\mathit t}^{\,'}}$ production included, assuming ${{\mathit t}^{\,'}}{{\mathit t}}{{\mathit W}}$ coupling of one, the limit is ${\mathit m}_{{{\mathit t}^{\,'}}}$ $>$ 1.6 TeV.
4 SIRUNYAN 2017J based on 2.3 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 13 TeV. Signals are searched in the final states of ${{\mathit t}^{\,'}}$ pair production, with same-sign leptons (which come from a ${{\mathit t}^{\,'}}$ decay) or a single lepton (which comes from a ${{\mathit W}}$ out of 4${{\mathit W}}$ s), along with jets, and no excess over the SM expectation is found.
5 AAD 2015BY based on 20.3 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 8 TeV. Limit on ${{\mathit t}^{\,'}{(5/3)}}$ in pair and single production assuming its coupling to ${{\mathit W}}{{\mathit t}}$ is equal to one. Used events containing ${}\geq{}2{{\mathit \ell}}$ + $\not E_T$ + ${}\geq{}$2j (${}\geq{}$1 ${{\mathit b}}$ ) and including a same-sign lepton pair.
6 AAD 2015Z based on 20.3 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 8 TeV. Used events with ${{\mathit \ell}}$ + $\not E_T$ + ${}\geq{}$6j (${}\geq{}$1 ${{\mathit b}}$ ) and at least one pair of jets from weak boson decay, sensitive to the final state ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}^{+}}{{\mathit W}^{-}}{{\mathit W}^{+}}{{\mathit W}^{-}}$ .
7 CHATRCHYAN 2014T based on 19.5 fb${}^{-1}$ of ${{\mathit p}}{{\mathit p}}$ data at $\sqrt {s }$ = 8 TeV. Non-observation of anomaly in ${{\mathit H}_{{T}}}$ distribution in the same-sign dilepton events leads to the limit when pair produced ${{\mathit t}^{\,'}{(5/3)}}$ quark decays exclusively into ${{\mathit t}}$ and ${{\mathit W}^{+}}$ , resulting in the final state with ${{\mathit b}}{{\overline{\mathit b}}}{{\mathit W}^{+}}{{\mathit W}^{-}}{{\mathit W}^{+}}{{\mathit W}^{-}}$ .
SIRUNYAN 2019T
JHEP 1903 082 Search for top quark partners with charge 5/3 in the same-sign dilepton and single-lepton final states in proton-proton collisions at $ \sqrt{s}=13 $ TeV
AABOUD 2018CE
JHEP 1812 039 Search for new phenomena in events with same-charge leptons and $b$-jets in $pp$ collisions at $\sqrt{s}= 13$ TeV with the ATLAS detector
AABOUD 2018AW
JHEP 1808 048 Search for pair production of heavy vector-like quarks decaying into high-$p_T$ $W$ bosons and top quarks in the lepton-plus-jets final state in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
SIRUNYAN 2017J
JHEP 1708 073 Search for Top Quark Partners with Charge 5/3 in Proton-Proton Collisions at $\sqrt {s }$ = 13 TeV
AAD 2015Z
PR D91 112011 Search for Vector-Like Quarks in Events with One Isolated Lepton, Missing Transverse Momentum and Jets at $\sqrt {s }$ = 8 TeV with the ATLAS Detector
AAD 2015BY
JHEP 1510 150 Analysis of Events with ${\mathit {\mathit b}}$-Jets and a Pair of Leptons of the Same Charge in ${{\mathit p}}{{\mathit p}}$ Collisions at $\sqrt {s }$ = 8 TeV with the ATLAS Detector
CHATRCHYAN 2014T
PRL 112 171801 Search for Top-Quark Partners with Charge 5/3 in the Same-Sign Dilepton Final State
Could JWST stay at L2 "forever"?
Using only reaction wheels powered by its solar panel, and the sunshield as a sail (under continuous active attitude control) to generate thrust from solar photon pressure in the desired direction, could JWST stay in its orbit around L2 "forever" (theoretically at least)?
In this case it couldn't fulfill its main objective, which is to be a space telescope pointing at distant objects for long exposure times. But this is a hypothetical question asking about its orbital dynamics.
Anyway, could this be a practical way to set JWST on "pause" for say 2 years, without burning fuel/ejecting mass to keep its orbit around L2?
orbital-mechanics station-keeping james-webb-telescope halo-orbit lissajous-orbit
qq jkztd
Here are some different, but related questions whose answers may contain information that is also helpful here: How will JWST manage solar pressure effects to maintain attitude and station keep its unstable orbit? and also What happens to JWST after it runs out of propellant?
– uhoh
Reaction wheels have to be desaturated occasionally. That takes fuel. Solar radiation pressure is a hindrance on JWST rather than something the vehicle can use to its advantage, more than doubling the stationkeeping costs compared to a vehicle in a similar unstable orbit but without such a huge sunshield.
– David Hammen
@DavidHammen If considering a hypothetical probe with a very high sail surface/mass ratio, meant solely to keep orbit at one Lagrange point, could desaturation of the reaction wheel be done by shifting the centre of mass (reaction wheel) coplanar to the sail, inducing a counter-torque that lets the wheel slow down, thereby using no fuel?
– qq jkztd
Ok I found this about solar sail attitude control and propulsion, which goes in the direction of even getting rid of the reaction wheel system, and desaturation related issues.
@DavidHammen I suggested without proof that with enough acrobatics (pairings of maneuvers) the station-keeping could be angular momentum neutral over time. If the momentum wheels could be fairly well centered near zero to begin with, then maybe unloading could be managed with torque from photon pressure as well, since the center of mass is offset from the sunshade. Also, I am not sure the photon pressure is really a hindrance. I think you just ride several km in front of the halo orbit as if you were leaving it towards Earth along the unstable manifold, but always being pushed back.
According to Wikipedia, the delta-v requirements to stay at L1 or L2 are about 30-100 m/s per year. That seems quite high; more realistic values are around 5-16 m/s per year. The sunshield has an area of about 300 m^2. The maximum thrust is about 0.0028 N, assuming a purely reflective surface. The mass of JWST is about 6200 kg. Putting all of that together, the achievable delta-v is around 14 m/s per year, not quite enough to station keep. Also, this assumes a fully reflective sunshield pointed straight at the Sun. I'm not sure what direction of thrust would actually be required to keep it at L2, but it probably wouldn't be straight on, which would reduce this further.
Bottom line, it might work, but would require some very careful placement of the shield to maintain the proper orientation.
EDIT: Per some new information, it turns out that my source was very misleading about the size: those dimensions are the diagonals of a diamond, not the sides of a rectangle. This paper has some interesting information, showing the area is actually closer to 160 m^2, with a station-keeping budget of at most 2.25 m/s per year, taking everything into account. That means it would be entirely possible to achieve. One of the biggest sources of uncertainty is the movement of the sunshield itself; if this were controlled, the budget could likely be reduced significantly. The delta-v actually available from photon pressure is closer to 6.7 m/s per year. Given sources that say 5-16 m/s per year are typical station-keeping values, it seems likely that, to a degree at least, JWST will be controlled by sunlight, although that is very difficult to tell without a more complex analysis.
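As a quick sanity check of the arithmetic above, here is a minimal Python sketch (an editorial illustration, not part of the original answer) that recomputes the photon-pressure force and the delta-v it can accumulate in a year, for both the 300 m^2 figure assumed initially and the ~160 m^2 effective area cited in the edit. The solar flux, perfect reflectivity, and spacecraft mass are assumed round values.

```python
# Back-of-envelope check of the photon-pressure delta-v available to JWST.
# Assumptions: solar flux ~1361 W/m^2 near 1 au, perfectly reflective shield
# facing the Sun, spacecraft mass ~6200 kg.
C = 299_792_458.0          # speed of light, m/s
SOLAR_FLUX = 1361.0        # W/m^2
MASS = 6200.0              # kg
SECONDS_PER_YEAR = 3.156e7

def yearly_delta_v(area_m2: float) -> float:
    """Delta-v (m/s per year) from sunlight on a flat, Sun-facing, perfectly reflective shield."""
    power = SOLAR_FLUX * area_m2   # intercepted power, W
    force = 2.0 * power / C        # factor 2 for specular reflection, N
    return force / MASS * SECONDS_PER_YEAR

for area in (300.0, 160.0):
    print(f"{area:5.0f} m^2  ->  ~{yearly_delta_v(area):.1f} m/s per year")
# ~14 m/s/yr for 300 m^2 and ~7 m/s/yr for 160 m^2, to be weighed against a
# station-keeping budget of a few m/s per year.
```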
PearsonArtPhoto♦
That value of 30 to 100 m/s per year is a bogus number. Perhaps that's for EML1/EML2? This paper claims that "In recent years, typical annual station-keeping costs have been around 1.0 m/sec for ACE and WIND, and much less than that for SOHO." This paper, which addresses JWST directly, estimates stationkeeping costs for JWST to be 2.43 m/s per year.
Double checking, that "30 - 100 m/s per year" is completely bogus, even for EML1/EML2. Per this paper, the ARTEMIS satellites experienced stationkeeping costs in the range of 5 to 16 m/s per year.
Good feedback, have improved. Now we just need to fix Wikipedia...
– PearsonArtPhoto ♦
NASA says "Actual dimensions: 21.197 m x 14.162 m (69.5 ft x 46.5 ft)". I thought rectangle. Turns out that was a bad assumption. Huh. Interesting paper in any case, added quite a bit to my answer.
@uhoh - My first comment says exactly that (JWST stationkeeping is 2.43 m/s/yr). Note that this is high for vehicles in pseudo-orbits about SEL1 or SEL2. In my second comment I was poking deeper at the dubious values in the wikipedia article referenced in the answer, addressing the question I raised in the first comment: Are those dubious wikipedia values for Earth-Moon L1/L2? The answer is no. I picked ARTEMIS specifically because for a while they were in pseudo-orbits about EML1/L2. The costs are considerably higher than for Sun-Earth L1/L2, but not in the 30-100 m/s/yr range.
This paper by Heiligers et al. explores Earth-Moon libration point orbits with the addition of solar sail thrusting. While it is of course not directly translatable to Sun-Earth L2 (JWST), the dynamics of libration point orbits in both systems are at least comparable. The study shows that an increase in stability can be achieved for some orbits (the lunar L2 halo being one of them).
JWST is however not a typical solar sail spacecraft. These have much higher area/mass ratios and will produce more acceleration, together with a lower mass (I'm assuming also lower inertia) which means they can steer their sails much more effectively.
I would assume that the conclusions from the paper can be applied to the JWST as well, but the impact on the stability will probably be much smaller than in the case of a regular solar sail spacecraft.
Alexander Vandenberghe
That's a really beautiful paper!
tl;dr: I think there could be room to do this. However, I don't think a conclusive answer can be had through analyses of magnitudes on envelope-backs. A real answer would only come from even more detailed Monte Carlo calculations than those already outlined in Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope. Sounds like a fun project!
Let's look at this systematically using well-sourced facts.
Thrust from photon pressure on sunshield
A photon's momentum $p$ is just its energy divided by the speed of light $E/c=h\nu/c$, so the force resulting in the perfect absorption of photons would be
$$F=\frac{dp}{dt}=\frac{1}{c}\frac{dE}{dt} = \frac{P}{c}$$
where $P$ is the total power of the light hitting the absorber (in watts, for example); below, $A$ will denote the area of the absorber intercepting the incoming light field.
Since the sail is reflective rather than absorbing, there's a second beam of reflected light and a second force, and this one has a direction based on the orientation of the mirror. Let's just look at the magnitudes for now though.
Wikipedia gives the shape of the diamond-shaped sunshield as about 21 by 14 meters (the diagonals). That will have an area equal to half the product of the diagonals, or 147 m^2, agreeing nicely with Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope.
As shown in Figure 6, the effective area of the Sunshield in the Sunward direction can vary between 105 and 163 m² over the range of allowed spacecraft attitudes that prevent the telescope from being exposed to stray light.
The solar constant is about 1360 W/m^2 at 1 AU, but the L2 region is about 1% farther from the Sun, so let's use
$$P_{max}=A \times 1330\ \mathrm{W/m^2} \approx 196\ \mathrm{kW}$$
to get
$$F_{max} = 2\,\frac{P_{max}}{c} \approx 1.3\ \mathrm{mN}.$$
Acceleration is force/mass. Using 6500 kg from Wikipedia:
$$a_{max} = \frac{F_{max}}{m} \approx 2.0 \times 10^{-7}\ \mathrm{m/s^2}.$$
A year has about 31.6 million seconds, so that's about 6.3 m/s per year of delta-v available in the +z direction if the shade points mostly back towards the Sun, and somewhat less if a bit of tilt is used when acceleration perpendicular to z is needed.
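To make these magnitudes, and the effect of tilting the sunshield, concrete, here is a small Python sketch (an editorial illustration, not part of the original answer). It assumes an idealized flat, perfectly specular reflector, for which the photon force scales as cos²θ with the tilt angle θ away from Sun-normal and points along the shield normal, so it splits into a +z (anti-sunward) component and a component usable in the x-y plane.

```python
import math

C = 299_792_458.0        # speed of light, m/s
FLUX_L2 = 1330.0         # W/m^2, solar flux near Sun-Earth L2 (assumed)
AREA = 147.0             # m^2, diamond sunshield with ~21 m x ~14 m diagonals
MASS = 6500.0            # kg, approximate JWST mass
SECONDS_PER_YEAR = 3.16e7

def photon_accel(tilt_deg: float) -> tuple[float, float]:
    """(along-Sun-line, in-plane) acceleration in m/s^2 for an ideal flat
    specular reflector tilted by tilt_deg away from Sun-normal."""
    theta = math.radians(tilt_deg)
    f0 = 2.0 * FLUX_L2 * AREA / C       # face-on force, N
    f = f0 * math.cos(theta) ** 2       # projected area and reflection geometry
    return f * math.cos(theta) / MASS, f * math.sin(theta) / MASS

for tilt in (0.0, 10.0, 20.0, 30.0):
    a_z, a_xy = photon_accel(tilt)
    print(f"tilt {tilt:4.1f} deg:  dv_z ~ {a_z * SECONDS_PER_YEAR:4.1f} m/s/yr,"
          f"  dv_xy ~ {a_xy * SECONDS_PER_YEAR:4.1f} m/s/yr")
# Face-on this reproduces the ~6 m/s per year along +z derived above; a modest
# tilt trades some of it for an in-plane component comparable to the ~2.4 m/s
# per year station-keeping budget quoted below.
```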
JWST's known station-keeping budget
Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope tells us:
The results of the analysis show that the SK delta-V budget for a 10.5 year mission is 25.5 m/sec, or 2.43 m/sec per year. This SK budget is higher than the typical LPO SK budget of about 1 m/sec per year, but JWST presents challenges that other LPO missions do not face. The End-of-Box analysis was critical to the JWST mission, because it provided a realistic value for the SK delta-V budget when it was needed to establish a complete spacecraft mass budget.
So the sail provides more than double the magnitude of the station-keeping delta-v.
SOHO is an example of a spacecraft in a halo orbit (around L1) and per Roberts 2002 (from Is this what station keeping maneuvers look like, or just glitches in data? (SOHO via Horizons)) it uses a station-keeping strategy of only thrusting in the z direction (toward or away from the Sun). However, Stationkeeping Monte Carlo Simulation for the James Webb Space Telescope tells us:
In LPO dynamics it is known that the x-y plane contains the stable and unstable directions, while the z direction is neutrally stable. Because JWST does not need to remain near a reference orbit, during SK maneuvers there is no need to thrust in the z direction, and the thrust vector is chosen to lie in the x-y plane.
This doesn't mean that in our non-telescope-mode survival holding pattern the station-keeping (SK) thrust vector would also have to lie in the perpendicular x-y plane, though. I propose that in survival mode one could use some combination of modulating the z-component and adding an x-y component by tilting and angling the sunshield within its safe limits, and that this would provide enough delta-v, and enough flexibility in its direction, to perform the station-keeping.
JWST will experience a steady delta-v of about 6 m/s per year due to the constant photon pressure of sunlight reflecting back from its sunshield.
While this is of course already figured into its orbit, this will mostly result in a halo orbit just slightly in front of (sunward of) the halo orbit about L2 calculated without the effects of photon pressure. Here "slightly in front of" is probably of the order of a few kilometers or tens of kilometers only.
Aggressive tilting of the sunshield within safe limits can both modulate the +z acceleration, and add a component in the x-y plane
Rotating the spacecraft about the +z axis of the orbit in the rotating frame with a tilted sunshield will direct the component of the thrust within the x-y plane, though probably not enough to make up the full 2.4 m/s per year currently obtained from propulsive maneuvers every 21 days.
Momentum unloading
I haven't thought too much about how to do momentum unloading of JWST's momentum wheels using only solar photon pressure. The wheels will be needed not only to maintain attitude but also to execute the regular tilts and rotations needed to direct the photon pressure for station-keeping.
As soon as the spacecraft tilts a bit, the line of action of the resulting photon pressure will no longer pass through the spacecraft's center of mass, so there will be at least some torque to work with.
It is possible that these attitude maneuvers can be designed in pairs to be angular momentum-neutral such that they naturally cancel each other in terms of rotations of the wheels over time.
The short answer is no. Lagrange points are saddle points, in topological terms. They are only quasi-stable, not actually stable. Solar radiation pressure is enough to cause a nutation and precession of the Wind spacecraft's spin axis over the course of a year (it forms an ellipse on a polar graph with a diameter of about 1 degree). That force alone would nudge any spacecraft out of place eventually, so no, it could not stay indefinitely.
There are also perturbations in gravitational fields, that, over long time periods would also nudge a spacecraft off of any quasi-stable saddle point.
There is also dust from interplanetary and interstellar space that impacts spacecraft at hypersonic speeds causing small plasma plumes. The dust from interstellar space has a preferred direction, thus would slowly kick any spacecraft off of a quasi-stable saddle point.
Even with a spinning spacecraft, the orbit starts to exponentially decay after about a month. Wind performs only four station keeping maneuvers per year and each only costs ~4-10 cm/s of fuel (equivalent to something like 0.1 kg or less). The ACE, by comparison, performs maneuvers every other week or so. The difference is that ACE is a sun-pointed spinner and Wind is an ecliptic spinner. ACE needs to point in a specific direction for communication with Earth and because one of its plasma instruments is failing (so they canted the spin axis a little to rely more heavily on the less damaged anodes).
One could, in principle, just wait 9 months and re-insert Wind about the Earth-Sun L1 point, but the longer you wait, the more expensive (fuel-wise) it becomes. If we waited 2 years, Wind (or any other Lagrange-point-orbiting spacecraft) would be in a heliocentric orbit about the Sun just like Earth. If the spacecraft was at L1(L2), the spacecraft would orbit the Sun faster(slower) than Earth. This is actually what the STEREO spacecraft do.
So no, if JWST didn't use fuel for 2 years, it would be in a heliocentric orbit about the Sun.
Fun Side Note: The JWST flight operations team came up with a new set of thrusting options to try to preserve fuel, and in early 2014 they presented their idea to me and the Wind team. They wanted to use Wind as a test run to see if the thruster maneuvers would not only work but would also save fuel.
The old way was to thrust only when the spacecraft was on the Earth-Sun line, with the thrusts aligned with the Earth-Sun line (well, as close as possible). The reason being, you do not want to apply a torque to either your orbit about the Lagrange point or your orbit around the Sun. The JWST team's idea was to thrust off the Earth-Sun line, at angles to it. So I pointed out this would introduce torques into the system and I was concerned. They went and published a paper on it, found at doi:10.2514/6.2014-4304. It turns out that, because the maneuvers only use a few to a few tens of cm/s, the torques would be minimal and not critical, so we went forward with it. It reduced our typical fuel cost per maneuver by ~5-10%.
I still find this a little funny because at the time Wind had over 120 years worth of fuel left...
honeste_vivere
The paper you mention in the "Fun side" section can be downloaded from here. Interesting to note that the fuel mass of WIND is about the same as that of JWST (300 kg), but the dry mass ratio is 1:6. Can we advance a "back of an envelope" lifetime for JWST of ~ 120/6 years?
– Ng Ph
Jan 8 at 21:19
Advanced Search Results For "COMPUTER hacking"
1 - 10 of 17,710 results for "COMPUTER hacking"
Hacking: field notes for adaptive urban planning in uncertain times.
Source(s): Planning Practice & Research. Dec2022, Vol. 37 Issue 6, p721-738. 18p.
Allan, Penelope
Plant, Roel
Abstract: Planning systems rely on an element of certainty and can sometimes be ill-equipped to creatively adapt to increasingly complex system trajectories. We analyse how designers and planners deal creatively with a statutory planning system that is increasin...
COMPUTER hacking
The effects of hacking events on bitcoin.
Source(s): Journal of Public Affairs (14723891). Dec2022 Supplement 1, Vol. 22, p1-5. 5p.
Pham, Huy
Nguyen Thanh, Binh
Ramiah, Vikash
Abstract: We examine the short‐term and long‐term effects of hacking events on bitcoin return. Additionally, we attempt to find out if investors can benefit from these events by adopting and modifying the models proposed by Baur et al. (2018) [Journal of Interna...
HARD currencies
EXPORT marketing
Resurrecting the evil genius: examining the relationship between unethical behavior and perceived competence.
Source(s): Journal of Managerial Psychology. 2022, Vol. 37 Issue 6, p591-603. 13p.
Motro, Daphna
Sullivan, Daniel
Abstract: Purpose: Using the stereotype content model (SCM) as a framework, the authors examine how the negative relationship between peoples' unethical behavior and perceptions of their competence only holds when the unethical act is simple. Design/methodology/...
STEREOTYPE content model
STUDENT cheating
CLOTHES closets
Compactifications of Moduli of Points and Lines in the Projective Plane.
Source(s): IMRN: International Mathematics Research Notices. Nov2022, Vol. 2022 Issue 21, p17000-17078. 79p.
Schaffler, Luca
Tevelev, Jenia
Abstract: Projective duality identifies the moduli spaces $\textbf{B}_n$ and $\textbf{X}(3,n)$ parametrizing linearly general configurations of $n$ points in $\mathbb{P}^2$ and $n$ lines in the dual $\mathbb{P}^2$, respectively. The space $\textbf...
COMPACT spaces (Topology)
PROJECTIVE planes
HOW TO... Stop your printer being hacked.
Source(s): Computer Act!ve. 12/7/2022, Issue 646, p35-37. 3p. 7 Color Photographs.
Rawlinson, Nik
Abstract: On a Plusnet Hub One modem, click Advanced Settings and enter the admin password shown on the sticker on the back of the router itself. Type the address into your browser and, if prompted for a password, check the back of the router, where the administ...
COMPUTER printers
NETWORK routers
INTERNET protocol address
Information Security Strategies for Information-Sharing Firms Considering a Strategic Hacker.
Source(s): Decision Analysis. Jun2022, Vol. 19 Issue 2, p99-122. 24p.
Wu, Yong
Xu, Mengyao
Cheng, Dong
Abstract: Information resources have been shared to promote the business operations of firms. However, the connection of business information sharing interfaces between firms has increased the attack surface and created opportunities for the hacker. We examine t...
COMPUTER hackers
Hacking Gender Stereotypes: Girls' Participation in Coding Clubs.
Source(s): AEA Papers & Proceedings. May2022, Vol. 112, p583-587. 5p.
Carlana, Michela
Fort, Margherita
Abstract: The article offers information about hacking gender stereotypes, along with mentions the girls' participation in coding clubs. It mentions that employment opportunities and wage growth have been rising more rapidly among occupations that require high l...
Tunable control of internet of things information hacking by application of the induced chiral atomic medium.
Source(s): Soft Computing - A Fusion of Foundations, Methodologies & Applications. Oct2022, Vol. 26 Issue 20, p10643-10650. 8p.
Arif, Syed Muhammad
Bacha, Bakht Amin
Ullah, Syed Sajid
Abstract: The increasing demands of new technologies in the domain of Internet of Things (IoT) change the way of data transmission medium from air to light and is called quantum communication. For secure quantum communication among IoT devices, in the present ar...
QUANTUM communication
CLOAKING devices
How Serious Are You About Cybersecurity?
Source(s): Production Machining. May2022, Vol. 22 Issue 5, p28-31. 4p.
KORN, DEREK
Abstract: The article discusses the issue of cybersecurity. It mentions that manufacturing has experienced more ransomware attacks than the U.S. government, education, technology and health care sectors. It reveals cybersecurity risk assessment which helps an or...
PENETRATION testing (Computer security)
PASSWORD software
State hacking at the edge of code, capitalism and culture.
Source(s): Information, Communication & Society. Feb 2022, Vol. 25 Issue 2, p242-257. 16p.
Follis, Luca
Fish, Adam
Abstract: Hacking is a set of practices with code that provides the state an opportunity to defend and expand itself onto the internet. Bringing together science and technology studies and sociology scholarship on boundary objects and boundary work, we develop a...
EDGES (Geometry) | CommonCrawl |
2017, 11: 143-153. doi: 10.3934/jmd.2017007
Approximation of points in the plane by generic lattice orbits
Dubi Kelmer
Department of Mathematics, Maloney Hall, Boston College, Chestnut Hill, MA 02467-3806, USA
Received June 29, 2016 Revised December 04, 2016 Published February 2017
Fund Project: Partially supported by NSF grant DMS-1401747.
We give upper and lower bounds for Diophantine exponents measuring how well a point in the plane can be approximated by points in the orbit of a lattice $\Gamma < \mathrm{SL}_2(\mathbb{R})$ acting linearly on $\mathbb{R}^2$. Our method gives bounds that are uniform for almost all orbits.
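For orientation, the display below sketches one common way such an orbit-approximation exponent is formalized; the notation ($v$, $\mu_\Gamma$, the choice of norms) is illustrative only and may differ from the normalization used in the paper.

$$ \mu_\Gamma(x) \;=\; \sup\Bigl\{\, \mu > 0 \;:\; \|\gamma v - x\| \le \|\gamma\|^{-\mu} \ \text{for infinitely many } \gamma \in \Gamma \,\Bigr\}, \qquad x \in \mathbb{R}^2, $$

for a fixed nonzero $v \in \mathbb{R}^2$; larger $\mu_\Gamma(x)$ means that $x$ is approximated by orbit points $\gamma v$ at a faster rate relative to the size $\|\gamma\|$ of the group element used.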
Keywords: Diophantine approximation, lattice action, shrinking targets.
Mathematics Subject Classification: Primary: 11J20; Secondary: 37A17.
Citation: Dubi Kelmer. Approximation of points in the plane by generic lattice orbits. Journal of Modern Dynamics, 2017, 11: 143-153. doi: 10.3934/jmd.2017007
Synthesis of multi-branched gold nanostructures and their surface-enhanced Raman scattering properties of 4-aminothiophenol
Min He, Beibei Cao, Xiangxiang Gao, Bin Liu, Jianhui Yang
Journal: Journal of Materials Research , First View
A facile, one-pot, and environmentally friendly method was developed to synthesize multi-branched, flowerlike gold (Au) nanostructures by reducing chloroauric acid (HAuCl4) with hydrogen peroxide (H2O2) in the presence of sodium citrate. The multibranched Au nanostructures were characterized by transmission electron microscopy and ultraviolet-visible (UV-vis) absorption spectroscopy. The molar ratio of sodium citrate to HAuCl4 and the concentrations of the reagents play important roles in the formation of the multibranched Au nanostructures. The multibranched Au nanostructures with sharp tips exhibit excellent surface-enhanced Raman scattering (SERS) performance for 4-aminothiophenol (PATP). The experimental and simulated results both confirm that the photoinduced catalytic coupling of PATP to 4,4′-dimercaptoazobenzene occurs on the surface of the multibranched Au nanostructures at high laser power during the SERS measurement. It is believed that these multibranched Au nanostructures may find applications in SERS, biosensors, and photoinduced surface catalysis.
Evaluation of salinomycin isolated from Streptomyces albus JSY-2 against the ciliate, Ichthyophthirius multifiliis
Jia-Yun Yao, Ming-Yue Gao, Yong-Yi Jia, Yan-Xia Wu, Wen-Lin Yin, Zheng Cao, Gui-Lian Yang, Hai-Bin Huang, Chun-Feng Wang, Jin-Yu Shen, Zhi-Min Gu
Journal: Parasitology / Volume 146 / Issue 4 / April 2019
The present study was undertaken to investigate the antiparasitic activity of extracellular products of Streptomyces albus. Bioactivity-guided isolation of chloroform extracts afforded a compound showing potent activity. The structure of the compound was elucidated as salinomycin (SAL) by EI-MS, 1H NMR and 13C NMR. In vitro tests showed that SAL has potent anti-parasitic efficacy against theronts of Ichthyophthirius multifiliis, with 10 min, 1, 2, 3 and 4 h EC50 values (95% confidence intervals) of 2.12 (2.22–2.02), 1.93 (1.98–1.88), 1.42 (1.47–1.37), 1.35 (1.41–1.31) and 1.11 (1.21–1.01) mg L−1. In vitro antiparasitic assays revealed that SAL could be 100% effective against I. multifiliis encysted tomonts at a concentration of 8.0 mg L−1. An in vivo test demonstrated that the number of I. multifiliis trophonts on Erythroculter ilishaeformis treated with SAL was markedly lower than that of the control group at 10 days after exposure to theronts (P < 0.05). In the control group, 80% mortality was observed owing to heavy I. multifiliis infection at 10 days. In contrast, only 30.0% mortality was recorded in the group treated with 8.0 mg L−1 SAL. The median lethal dose (LD50) of SAL for E. ilishaeformis was 32.9 mg L−1.
Si-TiN alloy Li-ion battery negative electrode materials made by N2 gas milling
Y. Wang, Simeng Cao, Hui Liu, Min Zhu, M.N. Obrovac
Journal: MRS Communications / Volume 8 / Issue 3 / September 2018
Si-TiN alloys are attractive for use as negative electrodes in Li-ion cells because of the high conductivity, low electrolyte reactivity, and thermal stability of TiN. Here it is shown that Si-TiN alloys with high Si content can surprisingly be made by simply ball milling Si and Ti powders in N2(g); a reaction not predicted by thermodynamics. This offers a low-cost and simple method of synthesizing these attractive materials. The resulting alloys have smaller grain sizes than Si-TiN made by ball milling Si and TiN directly, giving them high thermal stability and improved cycling characteristics in Li cells.
A CONJECTURE ON $C$ -MATRICES OF CLUSTER ALGEBRAS
MSC 2010: Algebraic combinatorics
PEIGEN CAO, MIN HUANG, FANG LI
Journal: Nagoya Mathematical Journal , First View
For a skew-symmetrizable cluster algebra ${\mathcal{A}}_{t_{0}}$ with principal coefficients at $t_{0}$, we prove that each seed $\Sigma_{t}$ of ${\mathcal{A}}_{t_{0}}$ is uniquely determined by its $C$-matrix, which was proposed by Fomin and Zelevinsky (Compos. Math. 143 (2007), 112–164) as a conjecture. Our proof is based on the fact that the positivity of cluster variables and sign coherence of $c$-vectors hold for ${\mathcal{A}}_{t_{0}}$, which was actually verified in Gross et al. (Canonical bases for cluster algebras, J. Amer. Math. Soc. 31(2) (2018), 497–608). Further discussion is provided in the sign-skew-symmetric case so as to obtain a weak version of the conjecture in this general case.
Cristina Ros i Solé , The personal world of the language learner. London: Palgrave Macmillan, 2016. Pp. XIII, 148. Hb. £37.99.
Min Cao
Journal: Language in Society / Volume 46 / Issue 3 / June 2017
Species associations of congeneric species in a tropical seasonal rain forest of China
Guoyu Lan, Yunbing Zhang, Fangliang He, Yuehua Hu, Hua Zhu, Min Cao
Journal: Journal of Tropical Ecology / Volume 32 / Issue 3 / May 2016
In tropical plant communities with diverse species, many congeners are found to coexist. Do environmental factors or biotic interactions structure the coexistence of congeners in tropical forest communities? In this paper, we aimed to disentangle the effects of the environment (first-order effects) and species interactions (second-order effects) on the spatial distributions of tree species. We used a classification scheme and torus-translation tests to assess the first-order interactions of 48 species from 17 genera in a fully mapped 20-ha dipterocarp tropical seasonal rain-forest plot in Xishuangbanna, south-west China. Then we used heterogeneous Poisson null models to reveal significant uni- and bivariate second-order interactions. The results demonstrated that (1) 34 of the 48 studied species showed a significant relation with at least one topographic variable, confirming that topographical heterogeneity is important for the distribution of these congeners. Spatial segregation (36.6%) and partial overlap (34.8%) were the most common bivariate association types in the Xishuangbanna plot, which indicated that first-order effects (environment) were strong. (2) For small-scale associations, 51% of sapling (dbh 1 to ≤ 5 cm) species (68.8% of large-tree species, dbh > 5 cm) showed non-significant associations. For large-scale associations, 61.6% of sapling species (81.2% of large-tree species) showed non-significant associations. The lack of significant species interactions provides evidence for the unified neutral theory. In conclusion, both environment and biotic interactions structure congeneric species' coexistence in the tropical seasonal rain forest of this region.
The effect of POSS-based block copolymer as compatibilizer on POSS/epoxy composites
Yiting Xu, Cong Li, Min Chen, Jianjie Xie, Ying Cao, Yuanming Deng, Conghui Yuan, Lizong Dai
Journal: Journal of Materials Research / Volume 30 / Issue 2 / 28 January 2015
Print publication: 28 January 2015
In this study, a novel POSS-containing hybrid block copolymer (BCP), poly(methacrylisobutyl-POSS)-b-poly(methyl methacrylate) (PMAiBuPOSS-b-PMMA), was synthesized via reversible addition-fragmentation chain transfer (RAFT) polymerization. The structure and molecular weight were characterized via 1H NMR and GPC. The BCP was used as a compatibilizer to overcome the poor compatibility of epoxy and POSS in their blend system. SEM and dynamic mechanical thermal analysis (DMTA) were used to observe the surface morphology and thermal–mechanical behavior of the resultant products. We found that the number of POSS microaggregation domains decreased, while the nanoscale ones increased, as the BCP content increased. All the aggregation domains were distributed uniformly in the epoxy matrix at the nanoscale with the addition of 10 phr BCP and 5 phr POSS monomers. The results indicated that the BCP could effectively improve the compatibility between epoxy resin and POSS owing to its amphiphilicity in DGEBA. The fracture behavior of the products transformed gradually from brittle to ductile with increasing BCP, whereas the Tg and E′ decreased.
Environmental determinism of community structure across trophic levels: moth assemblages and substrate type in the rain forests of south-western China
R. L. Kitching, A. Nakamura, M. Yasuda, A. C. Hughes, Cao Min
Journal: Journal of Tropical Ecology / Volume 31 / Issue 1 / January 2015
Published online by Cambridge University Press: 13 October 2014, pp. 81-89
Soil type may drive vegetation structure. In turn, the richness, identity and diversity of arthropod herbivores may be related to plant diversity through specific host plant relationships in a location. We test the hypothesis that the soil type (calcicolous vs alluvial soils) will drive the assemblage structure of a dominant group of arthropod herbivores: the moths. We used sampling sites in rain-forest fragments in south-western China around the Xishuangbanna Tropical Botanical Gardens (21°41′N, 101°25′E) to test this hypothesis. We used Pennsylvania style light traps to take point samples of macromoths and pyraloids from four sampling sites in forest remnants on a limestone geological base and four from alluvial-based forest. A total of 3165 moths (1739 from limestone-based and 1255 from alluvium-based forests) was collected representing 1255 species. The limestone-based sites showed statistically similar levels of species richness and other alpha diversity indices to the four alluvium-based sites. Nevertheless, the sites were clearly significantly different in terms of species composition. Analysis of contrasting similarity ('beta' diversity) indices suggested that there was 'leakage' between the two classes of sites when 'rare' species were emphasized in the calculations. We used an indicator value procedure to select species that most characterized this separation. We expect that these differences reflect associated changes in plant assemblage structure acting through the herbivorous habits of larval moths. Accordingly, in any assessment of landscape level diversity the nature of the substrate and its associated vegetation is clearly of great importance. This observation also has consequences for the design of conservation programmes.
Review of the current status of fast ignition research at the IAPCM
Hong-bo Cai, Si-zhong Wu, Jun-feng Wu, Mo Chen, Hua Zhang, Min-qing He, Li-hua Cao, Cang-tao Zhou, Shao-ping Zhu, Xian-tu He
Journal: High Power Laser Science and Engineering / Volume 2 / 01 July 2014
Published online by Cambridge University Press: 31 March 2014, e6
Print publication: 01 July 2014
We review the present status and future prospects of fast ignition (FI) research of the theoretical group at the IAPCM (Institute of Applied Physics and Computational Mathematics, Beijing) as a part of the inertial confinement fusion project. Since the approval of the FI project at the IAPCM, we have devoted our efforts to improving the integrated codes for FI and designing advanced targets together with the experimental group. Recent FI experiments [K. U. Akli et al., Phys. Rev. E 86, 065402 (2012)] showed that the petawatt laser beam energy was not efficiently converted into the compressed core because of the beam divergence of relativistic electron beams. The coupling efficiency can be improved in three ways: (1) using a cone–wire-in-shell advanced target to enhance the transport efficiency, (2) using external magnetic fields to collimate fast electrons, and (3) reducing the prepulse level of the petawatt laser beam. The integrated codes for FI, named ICFI, including a radiation hydrodynamic code, a particle-in-cell (PIC) simulation code, and a hybrid fluid–PIC code, have been developed to design this advanced target at the IAPCM. The Shenguang-II upgraded laser facility has been constructed for FI research; it consists of eight beams (in total $24~ {\rm kJ}/3\omega $ , 3 ns) for implosion compression, and a heating laser beam (0.5–1 kJ, 3–5 ps) for generating the relativistic electron beam. A fully integrated FI experiment is scheduled for the 2014 project.
Nicotinamide supplementation induces detrimental metabolic and epigenetic changes in developing rats
Da Li, Yan-Jie Tian, Jing Guo, Wu-Ping Sun, Yong-Zhi Lun, Ming Guo, Ning Luo, Yu Cao, Ji-Min Cao, Xiao-Jie Gong, Shi-Sheng Zhou
Journal: British Journal of Nutrition / Volume 110 / Issue 12 / 28 December 2013
Published online by Cambridge University Press: 17 June 2013, pp. 2156-2164
Print publication: 28 December 2013
Ecological evidence suggests that niacin (nicotinamide and nicotinic acid) fortification may be involved in the increased prevalence of obesity and type 2 diabetes, both of which are associated with insulin resistance and epigenetic changes. The purpose of the present study was to investigate nicotinamide-induced metabolic changes and their relationship with possible epigenetic changes. Male rats (5 weeks old) were fed with a basal diet (control group) or diets supplemented with 1 or 4 g/kg of nicotinamide for 8 weeks. Low-dose nicotinamide exposure increased weight gain, but high-dose one did not. The nicotinamide-treated rats had higher hepatic and renal levels of 8-hydroxy-2′-deoxyguanosine, a marker of DNA damage, and impaired glucose tolerance and insulin sensitivity when compared with the control rats. Nicotinamide supplementation increased the plasma levels of nicotinamide, N1-methylnicotinamide and choline and decreased the levels of betaine, which is associated with a decrease in global hepatic DNA methylation and uracil content in DNA. Nicotinamide had gene-specific effects on the methylation of CpG sites within the promoters and the expression of hepatic genes tested that are responsible for methyl transfer reactions (nicotinamide N-methyltransferase and DNA methyltransferase 1), for homocysteine metabolism (betaine–homocysteine S-methyltransferase, methionine synthase and cystathionine β-synthase) and for oxidative defence (catalase and tumour protein p53). It is concluded that nicotinamide-induced oxidative tissue injury, insulin resistance and disturbed methyl metabolism can lead to epigenetic changes. The present study suggests that long-term high nicotinamide intake (e.g. induced by niacin fortification) may be a risk factor for methylation- and insulin resistance-related metabolic abnormalities.
Multi-wavelength study of star formation properties in barred galaxies
Zhi-Min Zhou, Chen Cao, Hong Wu
Journal: Proceedings of the International Astronomical Union / Volume 8 / Issue S295 / August 2012
Published online by Cambridge University Press: 17 July 2013, p. 323
Stellar bars are important internal drivers of the secular evolution of disk galaxies. Using a sample of nearby barred galaxies with weak and strong bars, we evaluate the correlations between star formation properties in different galactic structures and their associated bars, and try to interpret the complex process of bar-driven secular evolution. We find that weaker bars tend to associate with lower concentrical star formation activities, while stronger bars appear to have large scatter in the distribution of the global star formation activities. In general, the star formation activities in early- and late-type galaxies have different behavior, with similar star formation rate density distributions. In addition, there are only weak trends toward increased star formation activities in bulges and galaxies with stronger bars, which is consistent with previous works. Our results suggest that the different stages of the evolutionary sequence and many factors besides bars may contribute to the complexity of this process. Furthermore, significant correlations are found between the star formation activities in different galactic structures, in which barred galaxies with intense star formation in bulges tend to also have active star formation in their bars and disks. Most bulges have higher star formation densities than their associated bars and disks, indicating the presence of bar-driven evolution. Therefore, we derived a possible criterion (Figure 1) to quantify the different stages of a bar-driven evolutionary sequence. Future work is needed to improve on the uncertainties of this study.
Study of Cu–In–Ga precursor for Cu(In,Ga)Se2 thin film prepared by the two-stage process
Jiang Liu, Da-Ming Zhuang, He-Xin Luan, Min Xie, Xiao-long Li, Ming-Jie Cao
Journal: Journal of Materials Research / Volume 27 / Issue 20 / 28 October 2012
Published online by Cambridge University Press: 11 July 2012, pp. 2639-2643
Print publication: 28 October 2012
Cu–In–Ga precursor thin films were deposited onto soda lime glass by magnetron cosputtering CuIn and CuGa alloy targets. After that, Cu(In,Ga)Se2(CIGSe) absorbers were formed by selenizing those alloy precursors with Se vapor at 550 °C. The influence of the precursor temperature on the properties of CIGSe thin film was investigated. The results show that a lot of pinholes existed in the CIGSe thin film produced by selenizing the Cu–In–Ga alloy precursor, which was sputtering deposited at ambient temperature. After sputtering substrate temperature of 250 °C was applied, pinholes were avoided. The surface roughness of Cu–In–Ga precursor increased with the increase of sputtering substrate temperature. Due to the volume expansion of selenization process, even the precursor with high surface roughness could be converted to smooth and compact CIGSe thin film.
Seed dispersal of Syzygium oblatum (Myrtaceae) by two species of fruit bat (Cynopterus sphinx and Rousettus leschenaulti) in South-West China
Zhan-Hui Tang, Jian-Ling Xu, Jon Flanders, Xue-Mei Ding, Xun-Feng Ma, Lian-Xi Sheng, Min Cao
In this study we investigated the importance of two species of fruit bat (Rousettus leschenaulti and Cynopterus sphinx) as seed dispersers for a species of fruit tree (Syzygium oblatum) found in the Xishuangbanna Tropical Botanical Garden in South-West China. We found that although R. leschenaulti and C. sphinx were the two primary seed dispersers of S. oblatum over half of the fruit produced by the tree (65%) fell to the ground. Out of the fruit collected, R. leschenaulti and C. sphinx were able to disperse seeds up to 73 m from the parent tree with the highest density of feeding roosts occurring at 21.3 m (SE = 5.2 m). We found no signs that either species of bat used the parent tree as a feeding roost, instead choosing specific trees that were at lower densities compared with other trees in the forest that were not used. When comparing the viability of seeds in three different habitats (under parent tree, in forest gap, under feeding roost) survival analysis revealed that seedling survival was significantly higher in the forest gap (91.7% ± 4.41%) than under the parent tree (78.3% ± 1.67%), but was not significantly different to seedling survival underneath feeding roosts (86.7 ± 1.67%). Further work also showed that the seeds did not have to be removed from the fruit or ingested by the bat in order to germinate. We conclude that although S. oblatum is not dependent on R. leschenaulti and C. sphinx for successful germination of its seeds, these two species of bat are important seed dispersers and can move seeds to areas where there is a greater chance of germination success and survival.
Multiple positive solutions of nonhomogeneous semilinear elliptic equations in ℝN*
Cao Dao-Min, Zhou Huan-Song
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 126 / Issue 2 / 1996
We consider the following problem
where $0 \leq f(x,u) \leq c_1 u^{p-1} + c_2 u$ for all $x \in \mathbb{R}^N$, $u \geq 0$, with $c_1 > 0$, $c_2 \in (0,1)$, $2 < p < 2N/(N-2)$ if $N \geq 3$, and $2 < p < +\infty$ if $N = 2$. We prove that (*) has at least two positive solutions if
and $h \geq 0$, $h \not\equiv 0$ in $\mathbb{R}^N$, where $S$ is the best Sobolev constant and
Multiple solutions for nonhomogeneous elliptic equations involving critical Sobolev exponent*
Dao-Min Cao, Gong-Bao Li, Huan-Song Zhou
Published online by Cambridge University Press: 14 November 2011, pp. 1177-1191
We consider the following problem:
where is continuous on ℝN and h(x)≢0. By using Ekeland's variational principle and the Mountain Pass Theorem without (PS) conditions, through a careful inspection of the energy balance for the approximated solutions, we show that the problem (*) has at least two solutions for some λ* > 0 and λ ∈ (0, λ*). In particular, if p = 2, in a different way we prove that problem (*) with λ ≡ 1 and h(x) ≧ 0 has at least two positive solutions as
Star formation properties in barred galaxies
Journal: Proceedings of the International Astronomical Union / Volume 7 / Issue S284 / September 2011
Stellar bars are important structures for the internal secular evolution of galaxies. They can drive gas into the central regions of galaxies and result in an enhancement of star formation activity there. Previous studies are limited to comparisons between barred and unbarred galaxies. Here we try to investigate the connection between star formation activities and different bars, based on multi-wavelength data for a sample of barred spirals. We find that there is no clear trend of the surface star formation rates in different structures with bar strength. In addition, there is a larger scatter in the star formation properties of galaxies with middle-strength bars, which may indicate that a variety of star formation stages are more likely associated with these bars.
Hydrogenated Amorphous Silicon Photodiode Technology for Advanced CMOS Active Pixel Sensor Imagers
Jeremy A. Theil, Min Cao, Gerrit Kooi, Gary W. Ray, Wayne Greene, Jane Lin, AJ Budrys, Uija Yoon
Published online by Cambridge University Press: 17 March 2011, A14.3
Amorphous silicon photodiode technology is a very attractive option for image array integrated circuits because it enables large die-size reduction and higher light collection efficiency than c-Si arrays. We have developed a photodiode array technology that is fully compatible with a 0.35 μm CMOS process to produce image sensor arrays with 10-bit dynamic range that are 30% smaller than comparable c-Si photodiode arrays. The VGA (640×480) array demonstrated here uses common intrinsic and p-type contact layers, and makes reliable contact to those layers by use of a monolithic transparent conductor strap tied to vias in the interconnect. The work presented here will discuss performance issues and solutions that lend themselves to cost-effective high-volume manufacturing. The various methods of interconnection of the diode to the array and their advantages will be presented. The photodiode dark leakage current density is about 80 pA/cm2, and its absolute quantum efficiency peaks at about 85% at 550 nm. The effect of doped layer thickness and concentration on quantum efficiency, and the effect of a-Si:H defect concentration on diode performance, will be discussed.
The Chemical Bonding of Organic-Inorganic in Hybrid Compounds
Yadong Dai, Liling Guo, Minjie Hu, Kunyu Shi, Xinmin Min, Minhe Cao, Hanxing Liu
Published online by Cambridge University Press: 01 February 2011, FF9.31
As has been discussed, research on the electronic structure between organic and inorganic atoms in hybrid compounds has become important. In our study, the DV-Xα method was employed to calculate the electronic structure of the hybrid compound. The information obtained from the calculation included orbital charge, bond order, Fermi energy, density of states, etc. The influence of the organic and inorganic parts on the energy band structure of the hybrid compound was discussed based on the calculated Fermi energy and density of states. The chemical bonding between the organic and inorganic parts in the hybrid compound was also analyzed in detail according to the orbital charges and bond orders.
Identification of polymorphism in the goat callipyge gene (CLPG) and its associations with production traits
Cao Gui-Ling, Li Biao, Tang Hui, Tang Pei-Rong, Wang Jian-Min, Jiang Yun-Liang
Journal: Chinese Journal of Agricultural Biotechnology / Volume 6 / Issue 3 / December 2009
The Dorset ram of the callipyge phenotype presents with muscular hypertrophy in the buttocks, and its inheritance is polar overdominant. A partial DNA fragment of 250 bp was obtained from the goat (Capra hircus) callipyge gene (CLPG; GenBank accession no. EU753362), which shared 96.04% and 88.65% identity with the corresponding regions of ovine (Ovis aries) and porcine CLPG, respectively. A polymorphism in the DNA fragment was detected by polymerase chain reaction single-strand conformation polymorphism (PCR-SSCP). Sequencing results indicated no A→C mutation corresponding to the ovine CLPG gene, although one A→C transversion was located 147 bp downstream from the CLPG site. The polymorphism, named SNP216 after its position (where SNP indicates single-nucleotide polymorphism), was investigated in Boer (n=63), Laiwu Black (n=70), Lubei White×Boer Hybrid (n=40), Lubei White (n=29) and Inner Mongolia Alashan White cashmere (n=115) goat populations. The results indicated that allele A was dominant in four of the goat populations, the Inner Mongolia Alashan White cashmere goats being the exception. The first four populations were in a state of Hardy–Weinberg equilibrium (P>0.05). In Inner Mongolia Alashan White cashmere goats, least-square means of birth weight, production of cashmere and body weight gain from birth to weaning did not differ significantly between the AA and AC genotypes (P>0.5).
Seasonal variation in density and species richness of soil seed-banks in karst forests and degraded vegetation in central Yunnan, SW China
You-xin Shen, Wen-yao Liu, Min Cao, Yu-hui Li
Journal: Seed Science Research / Volume 17 / Issue 2 / June 2007
We studied seasonal variation in density and species richness of seeds in the 0–10 cm soil depth layer in primary, secondary and pine forests, and in shrubland and grassland in the Shilin Stone Forest Geographical Park, Yunnan, SW China. Soil samples were collected four times during the year at 3-month intervals. Seeds from 119 species were identified by germination tests in the soil samples. Density and species richness of seeds of herbaceous plants were greater than woody plants at all five sites throughout the year. Sampling time and site differences had significant effects on the mean number of species and on seed-bank density. Mean number of species per sample increased from February, reached the highest value in May, decreased to the lowest value in August and then increased in November. An exception was in the primary forest, where the highest number of species was found in February. Mean seed-bank density peaked in May at all five sites, and no significant differences were found between densities in February, November and August, except for the primary forest. The peak in seed-bank density in May might be due to dispersal of new seeds of spring-fruiting species, combined with persistence of seeds dispersed in previous years. This seasonal variation of individual species was due primarily to differences in species phenology rather than to differences between sites. Four seasonal seed-bank strategies were identified: two transient and two persistent. At all sites, similar numbers of seeds of herbaceous species were found between seasons, but the number of species of trees and shrubs decreased in August. | CommonCrawl |
Tag: bourbaki
Cartan meets Lacan
In the Grothendieck meets Lacan-post we did mention that Alain Connes wrote a book together with Patrick Gauthier-Lafaye "A l'ombre de Grothendieck et de Lacan, un topos sur l'inconscient", on the potential use of Grothendieck's toposes for the theory of unconsciousness, proposed by the French psychoanalyst Jacques Lacan.
A bit more on that book you can read in the topos of unconsciousness. For another take on this you can visit the blog of l'homme quantique – Sur les traces de Lévi-Strauss, Lacan et Foucault, filant comme le sable au vent marin…. There is a series of posts dedicated to the reading of 'A l'ombre de Grothendieck et de Lacan':
1. Initiation au topos
2. Rencontre d'une évidence
3. Métapsychologie du topos
4. Psychanalyse et mathématiques
5. Temps et instant
6. Mythes, fantasmes et topos classifiant
Alain Connes isn't the first (former) Bourbaki-member to write a book together with a Lacan-disciple.
In 1984, Henri Cartan (one of the founding fathers of Bourbaki) teamed up with the French psychoanalyst (and student of Lacan) Jean-Francois Chabaud for "Le Nœud dit du fantasme – Topologie de Jacques Lacan".
(Chabaud on the left, Cartan on the right, Cartan's wife Nicole in the middle)
"Dans cet ouvrage Jean François Chabaud, psychanalyste, effectue la monstration de l'interchangeabilité des consistances de la chaîne de Whitehead (communément nommée « Noeud dit du fantasme » ou du « Non rapport sexuel » dans l'aire analytique), et peut ainsi se risquer à proposer, en s'appuyant sur les remarques essentielles de Jacques Lacan, une écriture du virage, autre nom de la passe. Henri Cartan (1904-2008), l'un des Membres-fondateur de N. Bourbaki, a contribué à ce travail avec deux réflexions : la première, considère cette monstration et l'augmente d'une présentation ; la seconde, traite tout particulièrement de l'orientation des consistances. Une suite de traces d'une séquence de la chaîne précède ce cahier qui s'achève par : « L'en-plus-de-trait », une contribution à l'écriture nodale."
Lacan was not only fascinated by the topology of surfaces such as the crosscap (see the topos of unconsciousness), but also by the theory of knots and links.
The Borromean link figures in Lacan's world for the Real, the Imaginary and the Symbolic. The Whitehead link (that is, two unknots linked together) is thought to be the knot (sic) of phantasy.
In 1986, there was the exposition "La Chaine de J.H.C. Whitehead" in the
Palais de la découverte in Paris (from which also the Chabaud-Cartan picture above is taken), where la Salle de Mathématiques was filled with different models of the Whitehead link.
In 1988, the exposition was held in the Deutsches Museum in Munich and was called "Wandlung – Darstellung der topologischen Transformationen der Whitehead-Kette" (roughly: 'Metamorphosis – presentation of the topological transformations of the Whitehead chain')
The set-up in Munich was mathematically more interesting as one could see the link-projection on the floor, and use it to compute the link-number. It might have been even more interesting if the difference in these projections between two subsequent models was exactly one Reidemeister move…
You can view more pictures of these and subsequent expositions on the page dedicated to the work of Jean-Francois Chabaud: La Chaîne de Whitehead ou Le Nœud dit du fantasme Livre et Expositions 1980/1997.
Part of the first picture featured also in the Hommage to Henri Cartan (1904-2008) by Michele Audin in the Notices of the AMS. She writes (about the 1986 exposition):
"At the time, Henri Cartan was 82 years old and retired, but he continued to be interested in mathematics and, as one sees, its popularization."
Bourbaki, Brassens, Hula Hoops and Coconuts
More than ten years ago, when I ran a series of posts on pre-WW2 Bourbaki congresses, I knew most of the existing B-literature. I'm afraid I forgot most of it, thereby missing opportunities to spice up a dull post (such as yesterday's).
Right now, I need facts about the infamous ACNB and its former connection to Nancy, so I reread Liliane Beaulieu's Bourbaki a Nancy:
(page 38) : "Like a theatrical canvas, "La Tribu" often carries as its header a subtitle, the product of its editor's imagination, which brings out the theme of the congress, if necessary. There is thus a "De Nicolaıdes" congress in Nancy, "Du banc public" (reference to Brassens) that of the "Universites cogerees" (in October 68, at the time of co-management)."
The first La Ciotat congress (February 27 to March 6, 1955) was called 'the congress of the public bench' ('banc public' in French) where Serre and Cartan tried to press Bourbaki to opt for the by now standard approach to varieties (see yesterday), and the following Chicago-congress retaliated by saying that there were also public benches nearby, but of little use.
What I missed was the reference to French singer-songwriter George Brassens. In 1953, he wrote, composed and performed Bancs Public (later called 'Les Amoureux des bancs publics').
If you need further evidence (me, I'll take Liliane's word on anything B-related), here's the refrain of the song:
"Les amoureux qui s'bécotent sur les bancs publics,
Bancs publics, bancs publics,
En s'foutant pas mal du regard oblique
Des passants honnêtes,
En s'disant des "Je t'aime'" pathétiques,
Ont des p'tits gueules bien sympathiques!"
(G-translated as:
'Lovers who smooch on public benches,
Public benches, public benches,
By not giving a damn about the sideways gaze
Honest passers-by,
The lovers who smooch on the public benches,
Saying pathetic "I love you" to each other,
Have very nice little faces!')
Compare this to page 3 of the corresponding "La Tribu":
"Geometrie Algebrique : elle a une guele bien sympathique."
(Algebraic Geometry : she has a very nice face)
More Bourbaki congresses got their names rather timely.
In the summer of 1959 (from June 25th – July 8th) there was a congress in Pelvoux-le-Poët called 'Congres du cerceau'.
'Cerceau' is French for Hula Hoop, whose new plastic version was popularized in 1958 by the Wham-O toy company and became a fad.
(Girl twirling Hula Hoop in 1958 – Wikipedia)
The next summer it was the thing to carry along for children on vacation. From the corresponding "La Tribu" (page 2):
"Le congres fut marque par la presence de nombreux enfants. Les distractions s'en ressentirent : baby-foot, biberon de l'adjudant (tres concurrence par le pastis), jeu de binette et du cerceau (ou faut-il dire 'binette se jouant du cerceau'?) ; un bal mythique a Vallouise faillit faire passer la mesure."
(try to G-translate it yourself…)
The spring 1949 congress (from April 13th-25th) was held at the Abbey of Royaumont and was called 'le congres du cocotier' (the coconut-tree congress).
From the corresponding "La Tribu 18":
"Having absorbed a tough guinea pig, Bourbaki climbed to the top of the Royaumont coconut tree, and declared, to unanimous applause, that he would only rectify rectifiable curves, that he would treat rational mechanics over the field $\mathbb{Q}$, and, that with a little bit of vaseline and a lot of patience he would end up writing the book on algebraic topology."
The guinea pig at that congress was none other than Jean-Pierre Serre.
A year later (from April 5th-17th 1950) there was another Royaumont-congress called 'le congres de la revanche du cocotier' (the congress of the revenge of the coconut-tree).
"The founding members had decided to take a dazzling revenge on the indiscipline young people; mobilising all the magical secrets unveiled to them by the master, they struck down the young people with various ailments; rare were those strong enough to jump over the streams of Royaumont."
Here's what Maurice Mashaal says about this in 'Bourbaki – a secret society of mathematicians' (page 113):
"Another prank among the members was called 'le cocotier' (the coconut tree). According to Liliane Beaulieu, this was inspired by a Polynesian custom where an old man climbs a palm tree and holds on tightly while someone shakes the trunk. If he manages to hold on, he remains accepted in the social group. Bourbaki translated this custom as the following: some members would set a mathematical trap for the others. If someone fell for it, they would yell out 'cocotier'."
May I be so bold as to suggest that perhaps this sudden interest in Polynesian habits was inspired by the recent release of L'ile aux cocotiers (1949), the French translation of Robert Gibbings' book Coconut Island?
Published August 4, 2022 by lievenlb
Rereading the Grothendieck-Serre correspondence I found a letter from Serre to Grothendieck, dated October 22nd 1958, which forces me to retract some claims from the previous La Ciotat post.
Serre writes this ten days after the second La Ciotat-congress (La Tribu 46), held from October 5th-12th 1958:
"The Bourbaki meeting was very pleasant; we all stayed in the home of a man called Guérin (a friend of Schwartz's – a political one, I think); Guérin himself was in Paris and we had the whole house to ourselves. We worked outside most of the time, the weather was beautiful, we went swimming almost every day; in short, it was one of the best meetings I have ever been to."
So far so good, we did indeed find Guérin's property 'Maison Rustique Olivette' as the location of Bourbaki's La Ciotat-congresses. But, Serre was present at both meetings (the earlier one, La Tribu 35, was held from February 27th – March 6th, 1955), so wouldn't he have mentioned that they returned to that home when both meetings took place there?
From La Tribu 35:
"The Congress was held "chez Patrice", in La Ciotat, from February 27 to March 6, 1955. Present: Cartan, Dixmier, Koszul, Samuel, Serre, le Tableau (property, fortunately divisible, of Bourbaki)."
In the previous post I mentioned that there was indeed a Hotel-Restaurant "Chez Patrice" in La Ciotat, but mistakenly assumed both meetings took place at Guérin's property.
Can we locate this place?
On the backside of this old photograph
we read:
"Chez Patrice"
seul au bord de la mer
Hotel Restaurant tout confort
Spécialités Provençales
Plage privée Parc auto
Sur la route de La Ciota-Bandol
Tel 465
La Ciota (B.-d.-R.)
So it must be on the scenic coastal road from La Ciotat to Bandol. My best guess is that "Chez Patrice" is today the one Michelin-star Restaurant "La Table de Nans", located at 126 Cor du Liouquet, in La Ciotat.
Their website has just this to say about the history of the place:
"Located in an exceptional setting between La Ciotat and Saint Cyr, the building of "l'auberge du Revestel" was restored in 2016."
And a comment on a website dedicated to the nearby Restaurant Roche Belle confirms that "Chez Patrice", "l'auberge du Revestel" and "table de Nans" were all at the same place:
"Nous sommes locaux et avons découverts ce restaurant seulement le mois dernier (suite infos copains) alors que j'ai passé une partie de mon enfance et adolescence "chez Patrice" (Revestel puis chez Nans)!!!"
I hope to have it right this time: the first Bourbaki La Ciotat-meeting in 1955 took place "Chez Patrice" whereas the second 1958-congress was held at 'Maison Rustique Olivette', the property of Schwartz's friend Daniel Guérin.
Still, if you compare Serre's letter to this paragraph from Schwartz's autobiography, there's something odd:
"I knew Daniel Guérin very well until his death. Anarchist, close to Trotskyism, he later joined Marceau Prevert's PSOP. He had the kindness, after the war, to welcome in his property near La Ciotat one of the congresses of the Bourbaki group. He shared, in complete camaraderie, our life and our meals for two weeks. I even went on a moth hunt at his house and caught a death's-head hawk-moth (Acherontia atropos)."
Schwartz was not present at the second La Ciotat-meeting, and he claims Guérin shared meals with the Bourbakis whereas Serre says he was in Paris and they had the whole house to themselves.
Moral of the story: accounts right after the event (Serre's letter) are more trustworthy than later recollections (Schwartz's autobiography).
Dear Collaborators of Nicolas Bourbaki, please make all Bourbaki material (Diktat, La Tribu, versions) publicly available, certainly those documents older than 50 years.
Perhaps you can start by adding the missing numbers 36 and 49 to your La Tribu: 1940-1960 list.
Le Guide Bourbaki : La Ciotat
Two Bourbaki-congresses were organised at the Côte d'Azur, in La Ciotat, claiming to have one of the most beautiful bays in the world.
La Tribu 35, 'Congres du banc public' (February 27th – March 6th, 1955)
La Tribu 46, 'Congres du banquet auxiliaire' (October 5th-12th, 1958)
As is the case for all Bourbaki-congresses after 1953, we do not have access to the corresponding Diktat, making it hard to find the exact location.
The hints given in La Tribu are also minimal. In La Tribu 34 there is no mention of a next conference in La Ciotat; in La Tribu 45 we read on page 11:
"October Congress: It will take place in La Ciotat, and will be a rump congress ('congres-croupion'). On the program: Flat modules, Fiber carpets, Schwartz' course in Bogota, Chapter II and I of Algebra, Reeditions of Top. Gen. III and I, Primary decomposition, theorem of Cohen and consorts, Local categories, Theorems of Ad(o), and (ritually!) abelian varieties."
La Tribu 35 itself reads:
"The Congress was held "chez Patrice", in La Ciotat, from February 27 to March 6, 1955.
Present: Cartan, Dixmier, Koszul, Samuel, Serre, le Tableau (property, fortunately divisible, of Bourbaki).
The absence, for twenty-four hours, of any founding member, created a euphoric climate, consolidated by the aioli, non-cats, and sunbathing by the sea. We will ask Picasso for a painting on the theme 'Bourbaki soothing the elements'. However, some explorations were disturbed by barbed wire, wardens, various fences, and Samuel, blind with anger, declared that he could not find 'la patrice de massage'."
The last sentence seems to indicate that the clue "chez Patrice" is a red herring. There was, however, a Hotel-Restaurant Chez Patrice in La Ciotat.
But, we will find out that the congress-location was elsewhere. (Edit August 4th: wrong, see the post La Ciotat (2).)
As to that location, La Tribu 46 has this to say:
"The Congress was held in a comfortable villa, equipped with a pick-up, rare editions, tasty cuisine, and a view of the Mediterranean. In the deliberation room, Chevalley claimed to see 47 fish (not counting an object, in the general shape of a sea serpent which served as an ashtray); this prompted him to bathe; but, indisposed by a night of contemplation in front of Brandt's groupoid, he pretended to slip all his limbs into the same hole in Bruhat's bathing suit."
Present in 1958 were : Bruhat, Cartan, Chevalley, Dixmier, Godement, Malgrange
and Serre.
So far, we have not much to go on. Luckily, there are these couple of sentences in Laurent Schwartz' autobiography Un mathématicien aux prises avec le siècle:
Daniel Guérin is known for his opposition to Nazism, fascism, capitalism, imperialism and colonialism. His revolutionary defense of free love and homosexuality influenced the development of queer anarchism.
Now we're getting somewhere.
But there are some odd things in Schwartz' sentences. He speaks of 'two weeks' whereas both La Ciotat-meetings only lasted one week. Presumably, he takes the two together, so both meetings were held at Guérin's property.
Stranger seems to be that Schwartz was not present at either congress (see above list of participants). Or was he? Yes, he was present at the first 1955 meeting, masquerading as 'le Tableau'. On Bourbaki photos, Schwartz is often seen in front of their portable blackboard, as we've seen in the Pelvoux-post. Here's another picture from that 1951-conference with Weil and Schwartz discussing before 'le tableau'. (Edit August 12th : wrong, La Tribu 37 lists both Schwartz and 'Le Tableau' among those present).
Presumably, Bourbaki got invited to La Ciotat via Schwartz' connection with Guérin in 1955, and there was a repeat-visit three years later.
But, where is that property of Daniel Guérin?
I would love to claim that it is La Villa Deroze, (sometimes called the small Medici villa in La Ciotat), named after Gilbert Deroze. From the website:
"Gilbert Deroze's commitment to La Ciotat (he will be deputy mayor in 1947) is accompanied by a remarkable cultural openness. The house therefore becomes a place of hospitality and artistic and intellectual convergence. For example, it is the privileged place of reception for Daniel Guérin, French revolutionary writer, anti-colonialist, activist for homosexual emancipation, theoretician of libertarian communism, historian and art critic. But it also receives guests from the place that the latter had created nearby, the Maison Rustique Olivette, a real center of artistic residence which has benefited in particular from the presence of Chester Himes, Paul Célan, the "beat" poet Brion Gysin, or again of the young André Schwarz-Bart."
Even though the Villa Deroze sometimes received guests of Guérin, this was not the case for Bourbaki as Schwartz emphasises that the congress took place in Guérin's property near La Ciotat, which we now have identified as 'Maison (or Villa) Rustique Olivette'.
From the French wikipedia entry on La Ciotat:
"In 1953 the writer Daniel Guérin created on the heights of La Ciotat, Traverses de la Haute Bertrandière, an artists' residence in his property Rustique Olivette. In the 1950s, he notably received Chester Himes, André Schwartz-Bart, in 1957, who worked there on his book The Last of the Righteous, Paul Celan, Brion Gysin. Chester Himes returned there in 1966 and began writing his autobiography there."
Okay, now we're down from a village (La Ciotat) to a street (Traverses de la Haute Bertrandière), but which of these fabulous villas is 'Maison Rustique Olivette'?
I found one link to a firm claiming to be located at the Villa Rustique Olivette, and giving as its address: 130, Traverses de la Haute Bertrandière.
If this information is correct, we have now identified the location of the last two Bourbaki congresses in La Ciotat as 'Maison Rustique Olivette',
with coordinates 43.171122, 5.597150.
Le Guide Bourbaki : Royaumont
At least six Bourbaki-congresses were held in 'Royaumont':
La Tribu 18 : 'Congres oecumenique du cocotier', April 13th-25th 1949
La Tribu 22 : 'Congres de la revanche du cocotier', April 5th-17th 1950
La Tribu without number : 'Congres de l'horizon', October 8th-15th 1950
La Tribu 26 : 'Congres croupion', October 1st-9th 1951
La Tribu 31 : 'Congres de la revelation du reglement', June 6th-19th 1953
La Tribu 32 : 'Congres du coryza', October 2nd-9th 1953
All meetings were pre-1954, so the ACNB generously grants us all access to the corresponding Bourbaki Diktats. From Diktat 31:
"The next congress will be held at the Abbey of Royaumont, from Saturday June 6th (not from June 5th as planned) to Saturday June 20th.
We meet at 10 a.m., June 6 at the Gare du Nord before the ticket-check. Train to Viarmes (change at Monsoult at 10.35 a.m.). Do not bring a ticket: one couch can transport 4 delegates.
Bring the Bible according to the following distribution:
Cartan: livre IV. Dixmier: Alg. 3, livre VI. Godement: Alg.4-5, Top. 1-2. Koszul: Top. 5-6-7-8-9. Schwartz: Top. 10, Alg. 1-2. Serre: Top. 3-4, livre V. Weil: Alg. 6-7, Ens. R."
Royaumont Abbey is a former Cistercian abbey, located near Asnières-sur-Oise in Val-d'Oise, approximately 30 km north of Paris, France.
How did Bourbaki end up in an abbey? From fr.wikipedia Abbaye de Royaumont:
In 1947, under the direction of Gilbert Gadoffre, Royaumont Abbey became the "International Cultural Center of Royaumont", an alternative place to traditional French university institutions. During the 1950s and 1960s, the former abbey became a meeting place for intellectual and artistic circles on an international scale, with numerous seminars, symposiums and conferences under the name "Cercle culturel de Royaumont". Among its illustrious visitors came Nathalie Sarraute, Eugène Ionesco, Alain Robbe-Grillet, Vladimir Jankélévitch, Mircea Eliade, Witold Gombrowicz, Francis Poulenc and Roger Caillois.
And… less illustrious, at least according to the French edition of Wikipedia, the Bourbaki-gang.
During the 1950ties, the Bourbakistas usually scheduled three meetings in the countryside. In the spring and autumn at places not too far from Paris (Royaumont, Celles-sur-plaines, Marlotte, Amboise…), in the summer they often went to the mountains (Pelvoux, Murols, Sallieres-les-bains,…).
Being a bit autistic, they preferred to return to the same places, rather than to explore new ones: Royaumont (6 times), Pelvoux (5 times), Celles-sur-plaine (4 times), Marlotte (3 times), Amboise (3 times),…
In the past, we've tried to pinpoint the exact locations of the pre-WW2 Bourbaki-conferences: in 1935 at the Station Biologique de l'Université Blaise Pascal, Rue du Lavoir, Besse-et-Saint-Anastaise, in 1936 and 1937 at La Massotterie in Chancay, and in 1938 at l'ecole de Beauvallon (often mistakenly referred to as the 'Dieulefit-meeting').
Let's try to do the same for their conferences in the 1950ties. Making use of the recent La Tribu releases for the period 1953-1960, let's start arbitrarily with the 1955 fall meeting in Marlotte.
Three conferences were organised in Marlotte during that period:
La Tribu 37 : 'Congres de la lune', October 23-29 1955
La Tribu 43 : 'Congres de la deuxieme lune', October 6-11 1957
La Tribu 44 : 'Congres des minutes de silence', March 16-22 1958
Grothendieck was present at all three meetings, Weil at the last two. But let us return to the fight between these two ('congres des minutes de silence') regarding algebraic geometry/category theory in another post.
Today we'll just focus on the location of these meetings. At first, this looks an easy enough task as on the opening page of La Tribu we read:
"The conference was held at the Hotel de la mare aux canards' ('Hotel of the duck pond') in Marlotte, near Fontainebleau, from October 23rd till 29th, 1955".
Just one little problem, I can't find any reference to a 'Hotel de la Mare aux Canards' in Marlotte, neither at present nor in the past.
Nowadays, Bourron-Marlotte is mainly a residential village with no great need for lodgings, apart from a few 'gites' and a plush hotel in the local 'chateau'.
At the end of the 19th century though, there was an influx of painters, attracted by the artistic 'colonie' in the village, and they needed a place to sleep, and gradually several 'Auberges' and Hotels opened their doors.
Over the years, most of these hotels were demolished, or converted to family houses. The best list of former hotels in Marlotte, and their subsequent fate, I could find is L'essor hôtelier de Bourron et de Marlotte.
There's no mention of any 'Hotel de la mare aux canards', but there was a 'Hotel de la mare aux fées' (Hotel of the fairy pond), which sadly was demolished in the 1970ties.
There's little doubt that this is indeed the location of Bourbaki's Marlotte-meetings, as the text on page one of La Tribu 37 above continues as (translation by Maurice Mashaal in 'Bourbaki a secret society of mathematicians', page 109):
"Modest and subdued sunlight, lustrous bronze leaves fluttering in the wind, a pond without fairies, modules without end, indigestible stones, and pierced barrels: everything contributes to the drowsiness of these blasé believers. 'Yet they are serious', says the hotel-keeper, 'I don't know what they are doing with all those stones, but they're working hard. Maybe they're preparing for a journey to the moon'."
Bourbaki didn't see any fairies in the pond, only ducks, so for Him it was the Hotel of the duck pond.
In fact La mare aux fées is one of the best known spots in the forest of Fontainebleau, and has been an inspiration for many painters, including Pierre-Auguste Renoir:
Here's the al fresco restaurant of the Hotel de la mare aux fées:
Both photographs are from the beginning of the 20th century, but also in the 50ties it was a Hotel of some renown, as celebrities, including the actor Jean Gabin, stayed there.
The exact location of the former Hotel de la mare aux fées is 83, Rue Murger in Bourron-Marlotte.
Princeton's own Bourbaki
Published April 8, 2021 by lievenlb
In the first half of 1937, Andre Weil visited Princeton and introduced some of the postdocs present (notably Ralph Boas, John Tukey, and Frank Smithies) to Poldavian lore and Bourbaki's early work.
In 1935, Bourbaki succeeded (via father Cartan) to get his paper "Sur un théorème de Carathéodory et la mesure dans les espaces topologiques" published in the Comptes Rendus des Séances Hebdomadaires de l'Académie des Sciences.
Inspired by this, the Princeton gang decided to try to get a compilation of their mathematical ways to catch a lion in the American Mathematical Monthly, under the pseudonym H. Petard, and accompanied by a cover letter signed by another pseudonym, E. S. Pondiczery.
By the time the paper "A contribution to the mathematical theory of big game hunting" appeared, Boas and Smithies were in Cambridge pursuing their postdoc work, and Boas reported back to Tukey: "Pétard's paper is attracting attention here," generating "subdued chuckles … in the Philosophical Library."
On the left, Ralph Boas in 'official' Pondiczery outfit – Photo Credit.
The acknowledgment of the paper is in true Bourbaki-canular style.
The author desires to acknowledge his indebtedness to the Trivial Club of St. John's College, Cambridge, England; to the M.I.T. chapter of the Society for Useless Research; to the F. o. P., of Princeton University; and to numerous individual contributors, known and unknown, conscious and unconscious.
The Trivial Club of St. John's College probably refers to the Adams Society, the St. John's College mathematics society. Frank Smithies graduated from St. John's in 1933, and began research on integral equations with Hardy. After his Ph.D., and on a Carnegie Fellowship and a St John's College studentship, Smithies spent two years at the Institute for Advanced Study at Princeton, before returning 'home'.
In the previous post, I assumed that Weil's visit to Cambridge was linked to Trinity College. This should probably have been St. John's College, his contact there being (apart from Smithies) Max Newman, a fellow of St. John's. There are two letters from Weil (summer 1939, and summer 1940) in the Max Newman digital library.
The Eagle Scanning Project is the online digital archive of The Eagle, the Journal of St. John's College. Last time I wanted to find out what was going on, mathematically, in Cambridge in the spring of 1939. Now I know I just had to peruse the Easter 1939 and Michaelmas 1939 volumes of the Eagle, focussing on the reports of the Adams Society.
In the period Andre Weil was staying in Cambridge, they had a Society Dinner in the Music Room on March 9th, a talk about calculating machines (with demonstration!) on April 27th, and the Annual Business Meeting on May 11th, just two days before their punting trip to Grantchester.
The M.I.T. chapter of the Society for Useless Research is a different matter. The 'Useless Research' no doubt refers to Extrasensory Perception, or ESP. Pondiczery's initials E. S. were chosen with a future pun in mind, as Tukey said in a later interview:
"Well, the hope was that at some point Ersatz Stanislaus Pondiczery at the Royal Institute of Poldavia was going to be able to sign something ESP RIP."
What was the Princeton connection to ESP research?
Well, Joseph Banks Rhine conducted experiments at Duke University in the early 1930s on ESP using Zener cards. Amongst his test-persons was Hubert Pearce, who scored an overall 40% success rate, whereas chance would have been 20%.
Pearce and Joseph Banks Rhine (1932) – Photo Credit
In 1936, W. S. Cox tried to repeat Rhine's experiment at Princeton University but failed. Cox concluded "There is no evidence of extrasensory perception either in the 'average man' or of the group investigated or in any particular individual of that group. The discrepancy between these results and those obtained by Rhine is due either to uncontrollable factors in experimental procedure or to the difference in the subjects."
As to the 'MIT chapter of the society for useless research', a chapter usually refers to a fraternity at a University, but I couldn't find a single one on the list of MIT fraternities involved in ESP, now or back in the late 1930s.
However, to my surprise I found that there is a MIT Archive of Useless Research, six boxes full of amazing books, pamphlets and other assorted 'literature' compiled between 1900 and 1940.
The Albert G. Ingalls pseudoscience collection (its official name) comprises collections of books and pamphlets assembled by Albert G. Ingalls while associate editor of Scientific American, and given to the MIT Libraries in 1940. Much of the material rejects contemporary theories of physical sciences, particularly theoretical and planetary physics; a smaller portion builds upon contemporary science and explores hypotheses not yet accepted.
I don't know whether any ESP research is included in the collection, nor whether Boas and Tukey were aware of its existence in 1938, but it sure makes a good story.
The final riddle, the F. o. P., of Princeton University is an easy one. Of course, this refers to the "Friends of Pondiczery", the circle of people in Princeton who knew of the existence of their very own Bourbaki.
the Bourbaki code revisited
The fictitious life of Nicolas Bourbaki remains a source of fascination to some.
A few weeks ago, Michael Barany wrote an article for the JStor Daily The mathematical pranksters behind Nicolas Bourbaki.
Here's one of the iconic early Bourbaki pictures, taken at the Dieulefit-meeting in 1938. More than a decade ago I discovered the exact location of that meeting in the post Bourbaki and the miracle of silence.
Bourbaki at Beauvallon 1938 – Photo Credit
That post was one of a series on the pre-war years of Bourbaki, and the riddles contained in the invitation card of the Betti Bourbaki-Hector Petard wedding that several mathematicians in Cambridge, Princeton and Paris received in the spring of 1939.
A year ago, The Ferret made the nice YouTube clip "Bourbaki – a Tale of Mathematics, Lions and Espionage", which gives a quick introduction to Bourbaki and the people mentioned in the wedding invitation.
This vacation period may be a good opportunity to revisit some of my older posts on this subject, and add newer material I discovered since then.
For this reason, I've added a new category, tBC for 'the Bourbaki Code', and added the old posts to it.
Map of the Parisian mathematical scene 1933-39
Published January 9, 2015 by lievenlb
Michele Audin has written a book on the history of the Julia seminar (hat tip +Chandan Dalawat via Google+).
The "Julia Seminar" was organised between 1933 and 1939, on monday afternoons, in the Darboux lecture hall of the Institut Henri Poincare.
After good German tradition, the talks were followed by tea, "aimablement servi par Mmes Dubreil et Chevalley".
A perhaps surprising discovery Audin made is that the public was expected to pay an attendance fee of 50 Frs. (approx. 32 Euros, today), per year. Fortunately, this included tea…
The annex of the book contains the lists of all people who have paid their dues, together with their home addresses.
The map above contains most of these people, provided they had a Parisian address. For example, Julia himself lived in Versailles, so is not included.
As are several of the first generation Bourbakis: Dieudonne lived in Rennes, Henri Cartan and Andre Weil in Strasbourg, Delsarte in Nancy, etc.
Still, the lists are a treasure trove of addresses of "les vedettes" (the professors and the people in the Bourbaki-circle) which have green markers on the map, and "les figurants" (often PhD students, or foreign visitors of the IHP), the blue markers.
Several PhD-students gave the Ecole Normale Superieure (btw. note the 'je suis Charlie'-frontpage of the ENS today jan.9th) in the rue d'Ulm as their address, so after a few of them I gave up adding others.
Further, some people changed houses over this period. I will add these addresses later on.
The southern cluster of markers on Boulevard Jourdan follows from the fact that the university had a number of apartment blocks there for professors and visitors (hat tip Liliane Beaulieu).
A Who's Who at the Julia seminar can be found in Audin's book (pages 154-167).
Michele Audin : "Le seminaire de mathematiques 1933-1939, premiere partie: l'histoire" | CommonCrawl |
How can you calculate the tidal gradient for an orbit?
In the movie Gravity, two characters are dangling from the international space station by a long tether. I've previously wondered exactly how you could calculate the tidal forces that act on an object connected to another object in orbit.
I think I've got most of a solution... the orbital speed of a very small object orbiting a very large one is defined as $$ v=\sqrt{\frac{\mu}{a}} $$ where $\mu$ is the standard gravitational parameter for the large object and $a$ is the semi-major axis. $\mu$ for the Earth is 398,600.4418 km³/s², and the semi-major axis for the ISS is 6775 km. This gives us an orbital velocity of 7.670333 km/s. We can substitute in the new semi-major axis by subtracting the length of the tether (be very generous and say 500 meters) and get 7.670616 km/s, which would be the orbital velocity of something that is floating free at that distance from the ISS... but that doesn't really help me.
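A minimal sketch that reproduces these numbers (it simply plugs the constants quoted above into the circular-orbit formula; nothing here is specific to the tether problem):

```python
import math

mu = 398600.4418          # km^3/s^2, Earth's standard gravitational parameter (quoted above)
a_iss = 6775.0            # km, semi-major axis of the ISS orbit (quoted above)
tether = 0.5              # km, a generous 500 m tether

v_iss = math.sqrt(mu / a_iss)              # circular orbital speed at the ISS semi-major axis
v_low = math.sqrt(mu / (a_iss - tether))   # circular speed 500 m lower

print(f"ISS orbital speed : {v_iss:.6f} km/s")                # ~7.670333 km/s
print(f"speed 500 m lower : {v_low:.6f} km/s")                # ~7.670616 km/s
print(f"difference        : {(v_low - v_iss) * 1000:.3f} m/s")
```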
What I need to be able to do is calculate, given an apogee of 6774.5 km and an orbital speed of 7.670333 km/s, what the perigee height is. If I have that, I have both a distance (apogee - perigee) and a time (one half the orbital period of about 90 minutes). This will tell me how many meters per second the tethered object wants to move, but I'm not sure how to turn that into an effective acceleration, or how to calculate the orbital heights.
gravity orbital-motion tidal-effect
asked Oct 8 '13 at 4:31 by Jherico
IIRC, 2 tethered objects that would be in stable orbits if untethered have a rotational period equal to their orbital period. I cannot remember where I learned that though; it has something to do with perturbations I think... – RBarryYoung Dec 7 '13 at 16:57
@RBarryYoung yes, this stands to reason, as the tidal forces will keep the tether pointed at the center of mass of the object around which they orbit. What I'm interested in knowing is how to calculate the exact force the tether exerts on each object, which should be a function of the difference between the orbital speed they would have untethered and the orbital speed of the combined system. – Jherico Dec 7 '13 at 19:32
Actually, I believe that it rotates backwards. Remember that the lower end must always move faster than the upper end. Therefore the high point is always moving relatively backwards against the orbit to be in balance. – RBarryYoung Dec 8 '13 at 5:15
No, it doesn't rotate backwards. The axis through the two objects will always point to the center of mass of the object around which they are orbiting. It's the same effect that causes the moon to always show the same face to the earth, and is called tidal locking: en.wikipedia.org/wiki/Tidal_locking – Jherico Dec 8 '13 at 23:00
You can calculate this using the conservation of specific orbital energy and angular momentum. But I do not think that this will help you find the resulting tidal force.
The center of gravity of the system of the two assumed rigidly connected bodies will remain on the same trajectory. To find the tidal force, you just need to look at the resulting torque around this center of gravity. But without any dissipation of energy this system will not become tidally locked, since it will keep oscillating/rotating.
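A minimal sketch of that calculation for the numbers quoted in the question, using the vis-viva equation (equivalent to conservation of specific orbital energy) together with conservation of angular momentum; it assumes the quoted point is the apoapsis, so the velocity there is purely tangential:

```python
import math

mu = 398600.4418      # km^3/s^2, Earth's standard gravitational parameter (as in the question)
r_apo = 6774.5        # km, radius at apogee (from the question)
v_apo = 7.670333      # km/s, speed at that point (from the question)

# Vis-viva: v^2 = mu * (2/r - 1/a)  =>  a = 1 / (2/r - v^2/mu)
a = 1.0 / (2.0 / r_apo - v_apo**2 / mu)

# Specific angular momentum at apoapsis (velocity perpendicular to the radius there)
h = r_apo * v_apo

r_per = 2.0 * a - r_apo                     # for an ellipse, r_apo + r_per = 2a
v_per = h / r_per                           # conservation of angular momentum
T = 2.0 * math.pi * math.sqrt(a**3 / mu)    # orbital period in seconds

print(f"semi-major axis : {a:.3f} km")
print(f"perigee radius  : {r_per:.3f} km")
print(f"perigee speed   : {v_per:.6f} km/s")
print(f"orbital period  : {T / 60:.2f} min")
```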
– fibonatic
| CommonCrawl |
Cost–benefit analysis (CBA), sometimes called benefit costs analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives used to determine options which provide the best approach to achieving benefits while preserving savings (for example, in transactions, activities, and functional business requirements).[1] A CBA may be used to compare completed or potential courses of actions, or to estimate (or evaluate) the value against the cost of a decision, project, or policy. It is commonly used in commercial transactions, business or policy decisions (particularly public policy), and project investments.
CBA has two main applications:[2]
To determine if an investment (or decision) is sound, ascertaining if – and by how much – its benefits outweigh its costs.
To provide a basis for comparing investments (or decisions), comparing the total expected cost of each option with its total expected benefits.
CBA is related to cost-effectiveness analysis. Benefits and costs in CBA are expressed in monetary terms and are adjusted for the time value of money; all flows of benefits and costs over time are expressed on a common basis in terms of their net present value, regardless of whether they are incurred at different times. Other related techniques include cost–utility analysis, risk–benefit analysis, economic impact analysis, fiscal impact analysis, and social return on investment (SROI) analysis.
Cost–benefit analysis is often used by organizations to appraise the desirability of a given policy. It is an analysis of the expected balance of benefits and costs, including an account of any alternatives and the status quo. CBA helps predict whether the benefits of a policy outweigh its costs (and by how much), relative to other alternatives. This allows the ranking of alternative policies in terms of a cost–benefit ratio.[3] Generally, accurate cost–benefit analysis identifies choices which increase welfare from a utilitarian perspective. Assuming an accurate CBA, changing the status quo by implementing the alternative with the lowest cost–benefit ratio can improve Pareto efficiency.[4] Although CBA can offer an informed estimate of the best alternative, a perfect appraisal of all present and future costs and benefits is difficult; perfection, in economic efficiency and social welfare, is not guaranteed.[5]
The value of a cost–benefit analysis depends on the accuracy of the individual cost and benefit estimates. Comparative studies indicate that such estimates are often flawed, preventing improvements in Pareto and Kaldor–Hicks efficiency.[6] Interest groups may attempt to include (or exclude) significant costs in an analysis to influence its outcome.[7]
History
French engineer and economist Jules Dupuit, credited with the creation of cost–benefit analysis
The concept of CBA dates back to an 1848 article by Jules Dupuit, and was formalized in subsequent works by Alfred Marshall.[8] Jules Dupuit pioneered this approach by first calculating "the social profitability of a project like the construction of a road or bridge"[9] In an attempt to answer this, Dupuit began to look at the utility users would gain from the project. He determined that the best method of measuring utility is by learning one's willingness to pay for something. By taking the sum of each user's willingness to pay, Dupuit illustrated that the social benefit of the thing (bridge or road or canal) could be measured. Some users may be willing to pay nearly nothing, others much more, but the sum of these would shed light on the benefit of it. It should be reiterated that Dupuit was not suggesting that the government perfectly price discriminate and charge each user exactly what they would pay. Rather, their willingness to pay provided a theoretical foundation on the societal worth or benefit of a project. The cost of the project proved much simpler to calculate. Simply taking the sum of the materials and labor, in addition to the maintenance afterward, would give one the cost. Now, the costs and benefits of the project could be accurately analyzed, and an informed decision could be made.
The Corps of Engineers initiated the use of CBA in the US, after the Federal Navigation Act of 1936 mandated cost–benefit analysis for proposed federal-waterway infrastructure.[10] The Flood Control Act of 1939 was instrumental in establishing CBA as federal policy, requiring that "the benefits to whomever they accrue [be] in excess of the estimated costs."[11]
Public policy
CBA's application to broader public policy began with the work of Otto Eckstein,[12] who laid out a welfare economics foundation for CBA and its application to water-resource development in 1958. It was applied in the US to water quality,[13] recreational travel,[14] and land conservation during the 1960s,[15] and the concept of option value was developed to represent the non-tangible value of resources such as national parks.[16]
CBA was expanded to address the intangible and tangible benefits of public policies relating to mental illness,[17] substance abuse,[18] college education,[19] and chemical waste.[20] In the US, the National Environmental Policy Act of 1969 required CBA for regulatory programs; since then, other governments have enacted similar rules. Government guidebooks for the application of CBA to public policies include the Canadian guide for regulatory analysis,[21] the Australian guide for regulation and finance,[22] and the US guides for health-care[23] and emergency-management programs.[24]
Transportation investment
CBA for transport investment began in the UK with the M1 motorway project and was later used for many projects, including the London Underground's Victoria line.[25] The New Approach to Appraisal (NATA) was later introduced by the Department for Transport, Environment and the Regions. This presented balanced cost–benefit results and detailed environmental impact assessments. NATA was first applied to national road schemes in the 1998 Roads Review, and was subsequently rolled out to all transport modes. Maintained and developed by the Department for Transport, it was a cornerstone of UK transport appraisal in 2011.
The European Union's Developing Harmonised European Approaches for Transport Costing and Project Assessment (HEATCO) project, part of the EU's Sixth Framework Programme, reviewed transport appraisal guidance of EU member states and found significant national differences.[26] HEATCO aimed to develop guidelines to harmonise transport appraisal practice across the EU.[27]
Transport Canada promoted CBA for major transport investments with the 1994 publication of its guidebook.[28] US federal and state transport departments commonly apply CBA with a variety of software tools, including HERS, BCA.Net, StatBenCost, Cal-BC, and TREDIS. Guides are available from the Federal Highway Administration,[29][30] Federal Aviation Administration,[31] Minnesota Department of Transportation,[32] California Department of Transportation (Caltrans),[33] and the Transportation Research Board's Transportation Economics Committee.[34]
Accuracy
In the case of the Ford Pinto (where, because of design flaws, the Pinto was liable to burst into flames in a rear-impact collision), the company decided not to issue a recall. Ford's cost–benefit analysis had estimated that based on the number of cars in use and the probable accident rate, deaths due to the design flaw would cost it about $49.5 million in wrongful death lawsuits; a recall would cost $137.5 million. The company failed to consider the costs of negative publicity, which forced a recall and reduced Ford sales.[35]
In health economics, CBA may be an inadequate measure because willingness-to-pay methods of determining the value of human life can be influenced by income level. Variants, such as cost–utility analysis, QALY and DALY to analyze the effects of health policies, may be more suitable.[36][37]
For some environmental effects, cost–benefit analysis can be replaced by cost-effectiveness analysis. This is especially true when one type of physical outcome is sought, such as a reduction in energy use by an increase in energy efficiency. Using cost-effectiveness analysis is less laborious and time-consuming, since it does not involve the monetization of outcomes (which can be difficult in some cases).[38]
It has been argued that if modern cost–benefit analyses had been applied to decisions such as whether to mandate the removal of lead from gasoline, block the construction of two proposed dams just above and below the Grand Canyon on the Colorado River, and regulate workers' exposure to vinyl chloride, the measures would not have been implemented (although all are considered highly successful).[39] The US Clean Air Act has been cited in retrospective studies as a case in which benefits exceeded costs, but knowledge of the benefits (attributable largely to the benefits of reducing particulate pollution) was not available until many years later.[39]
Process
A generic cost–benefit analysis has the following steps:[40]
Define the goals and objectives of the action.
List alternative actions.
List stakeholders.
Select measurement(s) and measure all cost and benefit elements.
Predict outcome of costs and benefits over the relevant time period.
Convert all costs and benefits into a common currency.
Apply discount rate.
Calculate the net present value of actions under consideration.
Perform sensitivity analysis.
Adopt the recommended course of action.
Evaluation
CBA attempts to measure the positive or negative consequences of a project. A similar approach is used in the environmental analysis of total economic value. Both costs and benefits can be diverse. Costs tend to be most thoroughly represented in cost–benefit analyses due to relatively-abundant market data. The net benefits of a project may incorporate cost savings, public willingness to pay (implying that the public has no legal right to the benefits of the policy), or willingness to accept compensation (implying that the public has a right to the benefits of the policy) for the policy's welfare change. The guiding principle of evaluating benefits is to list all parties affected by an intervention and add the positive or negative value (usually monetary) that they ascribe to its effect on their welfare.
The actual compensation an individual would require to have their welfare unchanged by a policy is inexact at best. Surveys (stated preferences) or market behavior (revealed preferences) are often used to estimate compensation associated with a policy. Stated preferences are a direct way of assessing willingness to pay for an environmental feature, for example.[41] Survey respondents often misreport their true preferences, however, and market behavior does not provide information about important non-market welfare impacts. Revealed preference is an indirect approach to individual willingness to pay. People make market choices of items with different environmental characteristics, for example, revealing the value placed on environmental factors.[42]
The value of human life is controversial when assessing road-safety measures or life-saving medicines. Controversy can sometimes be avoided by using the related technique of cost-utility analysis, in which benefits are expressed in non-monetary units such as quality-adjusted life years. Road safety can be measured in cost per life saved, without assigning a financial value to the life. However, non-monetary metrics have limited usefulness for evaluating policies with substantially different outcomes. Other benefits may also accrue from a policy, and metrics such as cost per life saved may lead to a substantially-different ranking of alternatives than CBA.
Another metric is valuing the environment, which in the 21st century is typically assessed by valuing ecosystem services to humans (such as air and water quality and pollution).[43] Monetary values may also be assigned to other intangible effects such as business reputation, market penetration, or long-term enterprise strategy alignment.
Time and discounting
CBA generally attempts to put all relevant costs and benefits on a common temporal footing, using time value of money calculations. This is often done by converting the expected future streams of costs $C_t$ and benefits $B_t$ into a present value with a discount rate $r$, the net present value being defined as:

$$\text{NPV}=\sum_{t=0}^{\infty}\frac{B_{t}-C_{t}}{(1+r)^{t}}$$
The selection of a discount rate for this calculation is subjective. A smaller rate values the current generation and future generations roughly equally, while larger rates (a market rate of return, for example) reflect human present bias or hyperbolic discounting: people value money they will receive in the near future more than money they will receive in the distant future. Empirical studies suggest that people discount future benefits in a way similar to these calculations.[44] The choice makes a large difference when assessing interventions with long-term effects. An example is the equity premium puzzle, which suggests that long-term returns on equities may be higher than they should be after controlling for risk and uncertainty. If so, market rates of return should not be used to determine the discount rate because they would undervalue the distant future.[45]
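As a toy illustration of how sensitive this calculation is to $r$ (the cash flows and rates below are made-up assumptions, not values from the sources above):

```python
# Minimal sketch: how the choice of discount rate r changes the net present
# value of a hypothetical project with an upfront cost and a constant annual
# benefit stream.

def npv(benefits, costs, r):
    """Net present value: sum of (B_t - C_t) / (1 + r)**t over periods t."""
    return sum((b - c) / (1.0 + r) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical project: 100 spent in year 0, benefit of 8 per year for 30 years.
benefits = [0.0] + [8.0] * 30
costs = [100.0] + [0.0] * 30

for r in (0.01, 0.03, 0.07):
    print(f"r = {r:.0%}: NPV = {npv(benefits, costs, r):8.1f}")
# A low rate makes the long-lived benefits dominate; a market-like rate can
# flip the sign of the NPV, which is why the choice of r is contentious.
```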
Methods for choosing a discount rate
For publicly traded companies, it is possible to find a project's discount rate by using an equilibrium asset pricing model to find the required return on equity for the company and then assuming that the risk profile of a given project is similar to that which the company faces. Commonly used models include the capital asset pricing model (CAPM):
$$r=r_{f}+\beta\left[\mathbb{E}(r_{M})-r_{f}\right]$$
and the Fama-French model:
$$r=\underbrace{r_{f}}_{\text{Risk-Free Rate}}+\beta_{M}\underbrace{\left[\mathbb{E}(r_{M})-r_{f}\right]}_{\text{Market Risk}}+\beta_{SMB}\underbrace{\left[\mathbb{E}(r_{S})-\mathbb{E}(r_{B})\right]}_{\text{Size Factor}}+\beta_{HML}\underbrace{\left[\mathbb{E}(r_{H})-\mathbb{E}(r_{L})\right]}_{\text{Value Factor}}$$
where the $\beta_{i}$ terms correspond to the factor loadings. A generalization of these methods can be found in arbitrage pricing theory, which allows for an arbitrary number of risk premiums in the calculation of the required return.
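A minimal sketch of these two formulas (the risk-free rate, betas and factor premiums below are placeholder assumptions, not recommended values):

```python
# Minimal sketch: required return on equity under the CAPM and the
# three-factor Fama-French model, used as a candidate project discount rate.

def capm(rf, beta, market_premium):
    # r = r_f + beta * (E[r_M] - r_f)
    return rf + beta * market_premium

def fama_french(rf, beta_m, mkt_prem, beta_smb, smb_prem, beta_hml, hml_prem):
    # r = r_f + beta_M*(E[r_M]-r_f) + beta_SMB*SMB + beta_HML*HML
    return rf + beta_m * mkt_prem + beta_smb * smb_prem + beta_hml * hml_prem

rf = 0.02                                                  # assumed risk-free rate
print(capm(rf, beta=1.1, market_premium=0.05))             # 0.075
print(fama_french(rf, 1.1, 0.05, 0.3, 0.02, -0.2, 0.03))   # 0.075
```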
Risk and uncertainty
Risk associated with project outcomes is usually handled with probability theory. Although it can be factored into the discount rate (to have uncertainty increasing over time), it is usually considered separately. Particular consideration is often given to agent risk aversion: preferring a situation with less uncertainty to one with greater uncertainty, even if the latter has a higher expected return.
Uncertainty in CBA parameters can be evaluated with a sensitivity analysis, which indicates how results respond to parameter changes. A more formal risk analysis may also be undertaken with the Monte Carlo method.[46] However, even low parameter uncertainty does not guarantee the success of a project.
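A minimal sketch of such an analysis (all distributions and parameter values below are illustrative assumptions, not from the sources above), combining Monte Carlo draws for uncertain costs and benefits with a simple sensitivity check on the discount rate:

```python
# Minimal sketch: Monte Carlo evaluation of NPV uncertainty, with uncertain
# annual benefit and capital cost draws, plus a one-at-a-time sensitivity
# check on the discount rate.
import random

def npv(cash_flows, r):
    return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows))

def simulate(n_draws=10_000, r=0.05, horizon=20, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(n_draws):
        capital_cost = rng.normalvariate(100.0, 15.0)    # year-0 cost
        annual_benefit = rng.normalvariate(9.0, 3.0)     # uncertain yearly benefit
        flows = [-capital_cost] + [annual_benefit] * horizon
        results.append(npv(flows, r))
    return results

draws = simulate()
mean_npv = sum(draws) / len(draws)
p_loss = sum(d < 0 for d in draws) / len(draws)
print(f"mean NPV {mean_npv:.1f}, P(NPV < 0) = {p_loss:.1%}")

# Sensitivity analysis: repeat the simulation with a higher discount rate.
print(sum(simulate(r=0.08)) / 10_000)
```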
Principle of maximum entropy
Suppose that we have sources of uncertainty in a CBA that are best treated with the Monte Carlo method, and the distributions describing uncertainty are all continuous. How do we go about choosing the appropriate distribution to represent the sources of uncertainty? One popular method is to make use of the principle of maximum entropy, which states that the distribution with the best representation of current knowledge is the one with the largest entropy, defined for continuous distributions as:

$$H(X)=\mathbb{E}\left[-\log f(X)\right]=-\int_{\mathcal{S}}f(x)\log f(x)\,dx$$

where $\mathcal{S}$ is the support set of a probability density function $f(x)$. Suppose that we impose a series of constraints that must be satisfied:

$f(x)\geq 0$, with equality outside of $\mathcal{S}$

$\int_{\mathcal{S}}f(x)\,dx=1$

$\int_{\mathcal{S}}r_{i}(x)f(x)\,dx=\alpha_{i},\quad i=1,\ldots,m$
where the last equality is a series of moment conditions. Maximizing the entropy with these constraints leads to the functional:[47]
$$J=\max_{f}\;\int_{\mathcal{S}}\left(-f\log f+\lambda_{0}f+\sum_{i=1}^{m}\lambda_{i}r_{i}f\right)dx$$
where the $\lambda_{i}$ are Lagrange multipliers. Maximizing this functional leads to the general form of a maximum entropy distribution:

$$f(x)=\exp\left[\lambda_{0}-1+\sum_{i=1}^{m}\lambda_{i}r_{i}(x)\right]$$
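For example (a standard worked case, included here for concreteness rather than taken from the source), imposing only a mean constraint $\int_{\mathcal{S}} x f(x)\,dx=\mu$ and a variance constraint $\int_{\mathcal{S}} (x-\mu)^{2} f(x)\,dx=\sigma^{2}$ over $\mathcal{S}=(-\infty,\infty)$ gives

$$f(x)=\exp\left[\lambda_{0}-1+\lambda_{1}x+\lambda_{2}(x-\mu)^{2}\right],$$

and solving the normalization, mean and variance constraints for the multipliers yields $\lambda_{1}=0$, $\lambda_{2}=-\tfrac{1}{2\sigma^{2}}$ and $e^{\lambda_{0}-1}=\tfrac{1}{\sqrt{2\pi\sigma^{2}}}$, i.e. the normal density $f(x)=\tfrac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left[-\tfrac{(x-\mu)^{2}}{2\sigma^{2}}\right]$.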
There is a direct correspondence between the form of a maximum entropy distribution and the exponential family. Examples of commonly used continuous maximum entropy distributions in simulations include:
Uniform distribution: no constraints are imposed over the support set $\mathcal{S}=[a,b]$; it is assumed that we have maximum ignorance about the uncertainty.
Exponential distribution: specified mean $\mathbb{E}(X)$ over the support set $\mathcal{S}=[0,\infty)$.
Gamma distribution: specified mean $\mathbb{E}(X)$ and log mean $\mathbb{E}(\log X)$ over the support set $\mathcal{S}=[0,\infty)$; the exponential distribution is a special case.
Normal distribution: specified mean $\mathbb{E}(X)$ and variance $\operatorname{Var}(X)$ over the support set $\mathcal{S}=(-\infty,\infty)$; if the mean and variance are specified on the log scale instead, the lognormal distribution is the maximum entropy distribution.
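To make this concrete, a minimal sketch (the input names and parameter values below are illustrative assumptions, not from the source) of drawing Monte Carlo inputs from the maximum-entropy distribution that matches what is known about each input:

```python
# Minimal sketch: pick each simulation input's distribution according to the
# maximum entropy principle -- uniform (bounds only), exponential (mean only),
# normal (mean and variance).
import random

rng = random.Random(42)

# Only bounds known: uniform on [a, b].
unit_cost = rng.uniform(80.0, 120.0)

# Only the mean known, support [0, inf): exponential with that mean.
delay_years = rng.expovariate(1.0 / 0.5)          # mean delay of 0.5 years

# Mean and variance known, support (-inf, inf): normal.
demand_growth = rng.normalvariate(0.02, 0.01)

print(unit_cost, delay_years, demand_growth)
```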
CBA under US administrations
The increased use of CBA in the US regulatory process is often associated with President Ronald Reagan's administration. Although CBA in US policy-making dates back several decades, Reagan's Executive Order 12291 mandated its use in the regulatory process. After campaigning on a deregulation platform, he issued the 1981 EO authorizing the Office of Information and Regulatory Affairs (OIRA) to review agency regulations and requiring federal agencies to produce regulatory impact analyses when the estimated annual impact exceeded $100 million. During the 1980s, academic and institutional critiques of CBA emerged. The three main criticisms were:[48]
That CBA could be used for political goals. Debates on the merits of cost and benefit comparisons can be used to sidestep political or philosophical goals, rules and regulations.
That CBA is inherently anti-regulatory, and therefore a biased tool. The monetization of policy impacts is an inappropriate tool for assessing mortality risks and distributional impacts.
That the length of time necessary to complete CBA can create significant delays, which can impede policy regulation.
These criticisms continued under the Clinton administration during the 1990s. Clinton furthered the anti-regulatory environment with his Executive Order 12866.[49] The order changed some of Reagan's language, requiring benefits to justify (rather than exceed) costs and adding "reduction of discrimination or bias" as a benefit to be analyzed. Criticisms of CBA (including uncertainty valuations, discounting future values, and the calculation of risk) were used to argue that it should play no part in the regulatory process.[50] The use of CBA in the regulatory process continued under the Obama administration, along with the debate about its practical and objective value. Some analysts oppose the use of CBA in policy-making, and those in favor of it support improvements in analysis and calculations.
Example: rail transit in the United States
As of 2016, only 5.1% of Americans used any form of public transit to commute to work;[51] this number has remained flat since 1980.[52] Moreover, the vast majority of U.S. metro rail systems lose money on each trip[53] and projects tend to have significant cost overruns.[54] The combination of low ridership and cost overruns suggests that, on average, the benefits of rail transit are exaggerated and the costs are understated.[55] Below are several examples of projects where either the costs are significantly higher than projected or the promised benefits have not materialized:
High-speed rail
Main article: California High-Speed Rail § Economic projections
Only two high-speed rail lines in the world are profitable without subsidies: Tokyo-Osaka and Paris-Lyon[56]
Light rail
The Detroit People Mover operates at 2.6% capacity[57]
The QLine has failed to meet ridership expectations[58]
The Honolulu Rail Transit project was initially budgeted at $3.7 billion, but costs have ballooned to $9 billion[59][60]
It is likely that the completed project would relieve only 1% of road congestion in Honolulu[61]
Subways
Bay Area Rapid Transit (BART)
Nobel Laureate Daniel McFadden predicted that 6.3% of Bay Area commuters would use BART, as compared to the official forecast of 15%.[62] The true number was 6.2%.[63]
New York City Subway
The cost of building subway extensions in New York, in particular the Second Avenue Subway, is the highest in the world[64][65]
Only 40% of revenue for the MTA comes from fares, with 42% coming from taxes and subsidies[66]
Tren Urbano (San Juan, Puerto Rico)
The system has been operating at approximately 30% capacity since inception[67][68][69]
Distributional issues
CBA has been criticized in some disciplines because it relies on the Kaldor-Hicks criterion, which does not take distributional issues into account. This means that positive net benefits are decisive, regardless of who benefits and who loses when a certain policy or project is put into place. Phaneuf and Requate (2016, p. 649) phrased it as follows: "CBA today relies on the Kaldor-Hicks criteria to make statements about efficiency without addressing issues of income distribution. This has allowed economists to stay silent on issues of equity, while focussing on the more familiar task of measuring costs and benefits".[70]
The main criticism stems from the diminishing marginal utility of income.[71][72] Without using weights in the CBA, it is not the case that everyone "matters" equally; rather, people who value money less (by assumption, people with more money) effectively receive a higher weight. One reason is that one monetary unit is worth less to high-income people than to low-income people, so they are more willing to give up one unit in order to bring about a change that is favourable to them.[73] This means that agents are not treated symmetrically. A second reason is that any welfare change, whether positive or negative, affects people with a lower income more strongly than people with a higher income.
Taken together, this means that not using weights is a decision in itself: richer people de facto receive a bigger weight. To compensate for this difference in valuation and to take distributional issues into account, different methods can be used. The two most common are taxation (e.g., through a progressive tax) and the addition of weights into the CBA itself. There are a number of approaches for calculating these weights. Often, a Bergson-Samuelson social welfare function is used and weights are calculated according to people's willingness to pay.[74][75]
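One simple way such weights are sometimes operationalized (the isoelastic form and the numbers below are illustrative assumptions, not a method prescribed by the sources above) is to scale each group's net benefits by $(y_{\text{ref}}/y_i)^{\eta}$, where $y_i$ is the group's income and $\eta$ is an inequality-aversion parameter:

```python
# Minimal sketch: applying isoelastic distributional weights
# w_i = (y_ref / y_i)**eta to group-level net benefits before summing them.

def weighted_net_benefit(groups, y_ref=40_000.0, eta=1.0):
    # groups: list of (average income, unweighted net benefit) pairs
    return sum(((y_ref / y) ** eta) * nb for y, nb in groups)

groups = [(20_000.0, -50.0), (40_000.0, 10.0), (120_000.0, 60.0)]
print(weighted_net_benefit(groups, eta=0.0))   # 20.0  (no weighting: simple sum)
print(weighted_net_benefit(groups, eta=1.0))   # -70.0 (weighting reverses the verdict)
```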
See also
Calculus of negligence
Downside risk
Efficient contract theory
Guns versus butter model
Have one's cake and eat it too
Shadow price
Statistical murder
Tax choice
Trade-off
Triple bottom line cost–benefit analysis
Uncertainty quantification
References
^ David, Rodreck; Ngulube, Patrick; Dube, Adock (16 July 2013). "A cost–benefit analysis of document management strategies used at a financial institution in Zimbabwe: A case study". SA Journal of Information Management. 15 (2). doi:10.4102/sajim.v15i2.540.
^ [1]Archived October 16, 2008, at the Wayback Machine
^ Cellini, Stephanie Riegg; Kee, James Edwin. "Cost-Effectiveness and Cost–Benefit Analysis" (PDF). Archived from the original (PDF) on 2013-05-26. Retrieved 2012-09-24.
^ "Archived copy" (PDF). Archived from the original (PDF) on 2012-11-02. Retrieved 2012-09-20.
^ Weimer, D.; Vining, A. (2005). Policy Analysis: Concepts and Practice (Fourth ed.). Upper Saddle River, NJ: Pearson Prentice Hall. ISBN 978-0-13-183001-1.
^ Pamela, Misuraca (2014). "The Effectiveness of a Costs and Benefits Analysis in Making Federal Government Decisions: A Literature Review" (PDF). The MITRE Corporation.
^ Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 8, "The Positive Biases of Technology Assessments and Cost Benefit Analyses", New Society Publishers, Gabriola Island, British Columbia, Canada, ISBN 0865717044, 464 pp.
^ Wiener, Jonathan B. (2013). "The Diffusion of Regulatory Oversight". In Livermore, Michael A.; Revesz, Richard L. (eds.). The Globalization of Cost–Benefit Analysis in Environmental Policy. New York: Oxford University Press. ISBN 978-0-199-93438-6.
^ Sandmo, Agnar (2011). Economics evolving : a history of economic thought. Princeton University Press. ISBN 9780691148427. OCLC 799499179.
^ "History of Benefit-Cost Analysis"(PDF). Proceedings of the 2006 Cost Benefit Conference. Archived from the original(PDF) on 2006-06-16.
^ Guess, George M.; Farnham, Paul G. (2000). Cases in Public Policy Analysis. Washington, DC: Georgetown University Press. pp. 304–308. ISBN 978-0-87840-768-2.
^ Eckstein, Otto (1958). Water Resource Development: The Economics of Project Evaluation. Cambridge: Harvard University Press.
^ Kneese, A. V. (1964). The Economics of Regional Water Quality Management. Baltimore: Johns Hopkins Press.
^ Clawson, M.; Knetsch, J. L. (1966). Economics of Outdoor Recreation. Baltimore: Johns Hopkins Press.
^ Krutilla, J. V. (1967). "Conservation Reconsidered". American Economic Review. 57 (4): 777–786. JSTOR 1815368.
^ Weisbrod, Burton A. (1964). "Collective-Consumption Services of Individual-Consumption Goods". Quarterly Journal of Economics. 78 (3): 471–477. doi:10.2307/1879478. JSTOR 1879478.
^ Weisbrod, Burton A. (1981). "Benefit-Cost Analysis of a Controlled Experiment: Treating the Mentally Ill". Journal of Human Resources. 16 (4): 523–548. doi:10.2307/145235. JSTOR 145235.
^ Plotnick, Robert D. (1994). "Applying Benefit-Cost Analysis to Substance Abuse Prevention Programs". International Journal of the Addictions. 29 (3): 339–359. doi:10.3109/10826089409047385. PMID 8188432.
^ Weisbrod, Burton A.; Hansen, W. Lee (1969). Benefits, Costs, and Finance of Public Higher Education. Markham.
^ Moll, K. S.; et al. (1975). Hazardous wastes: A Risk-Benefit Framework Applied to Cadmium and Asbestos. Menlo Park, CA: Stanford Research Institute.
^ Canadian Cost–Benefit Guide: Regulatory Proposals, Treasury Canada, 2007. [2]
^ Australian Government, 2006. Introduction to Cost–Benefit Analysis and Alternative Evaluation Methodologies and Handbook of Cost–Benefit Analysis, Finance Circular 2006/01. http://www.finance.gov.au/publications/finance-circulars/2006/01.html Archived 2014-02-01 at the Wayback Machine
^ US Department of Health and Human Services, 1993. Feasibility, Alternatives, And Cost/Benefit Analysis Guide, Administration for Children and Families, and Health Care Finance Administration. http://www.acf.hhs.gov/programs/cb/systems/sacwis/cbaguide/index.htm
^ Federal Emergency Management Administration, 1022. Guide to Benefit Cost Analysis. http://www.fema.gov/government/grant/bca.shtm
^ Hugh Coombs; Ellis Jenkins; David Hobbs (18 April 2005). Management Accounting: Principles and Applications. SAGE Publications. pp. 278–. ISBN 978-1-84787-711-6.
^ "HEATCO project site". Heatco.ier.uni-stuttgart.de. Archived from the original on 2015-05-24. Retrieved 2013-04-21.
^ [3] Guide to Cost–Benefit Analysis of Major Projects. Evaluation Unit, DG Regional Policy, European Commission, 2008.
^ Guide to Benefit-Cost Analysis in Transport Canada. Transport Canada. Economic Evaluation Branch, Transport Canada, Ottawa, 1994 [4] Archived 2013-12-21 at the Wayback Machine
^ US Federal Highway Administration: Economic Analysis Primer: Benefit-Cost Analysis 2003 [5]
^ US Federal Highway Administration: Cost–Benefit Forecasting Toolbox for Highways, Circa 2001 [6]
^ US Federal Aviation Administration: Airport Benefit-Cost Analysis Guidance, 1999 [7] [8]
^ Minnesota Department of Transportation: Benefit Cost Analysis. MN DOT Office of Investment Management [9] Archived 2009-08-13 at the Wayback Machine
^ California Department of Transportation: Benefit-Cost Analysis Guide for Transportation Planning [10]
^ Transportation Research Board, Transportation Economics Committee: Transportation Benefit-Cost Analysis [11]
^ "Ford Fuel Fires". Archived from the original on July 15, 2011. Retrieved 29 December 2011.
^ Phelps, Charles (2009). Health Economics (4th ed.). New York: Pearson/Addison-Wesley. ISBN 978-0-321-59457-0.
^ Buekers, J (2015). "Health impact model for modal shift from car use to cycling or walking in Flanders: application to two bicycle highways". Journal of Transport and Health. 2 (4): 549–562. doi:10.1016/j.jth.2015.08.003.
^ Tuominen, Pekka; Reda, Francesco; Dawoud, Waled; Elboshy, Bahaa; Elshafei, Ghada; Negm, Abdelazim (2015). "Economic Appraisal of Energy Efficiency in Buildings Using Cost-effectiveness Assessment". Procedia Economics and Finance. 21: 422–430. doi:10.1016/S2212-5671(15)00195-1.
^ a b Ackerman; et al. (2005). "Applying Cost–Benefit to Past Decisions: Was Environmental Protection Ever a Good Idea?". Administrative Law Review. 57: 155.
^ Boardman, N. E. (2006). Cost–benefit Analysis: Concepts and Practice (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-143583-4.
^ Field, Barry C; Field, Martha K (2016). ENVIRONMENTAL ECONOMICS: AN INTRODUCTION, SEVENTH EDITION. America: McGraw-Hill. p. 144. ISBN 978-0-07-802189-3.
^ Campbell, Harry F.; Brown, Richard (2003). "Valuing Traded and Non-Traded Commodities in Benefit-Cost Analysis". Benefit-Cost Analysis: Financial and Economic Appraisal using Spreadsheets. Cambridge: Cambridge University Press. ISBN 978-0-521-52898-6. Ch. 8 provides a useful discussion of non-market valuation methods for CBA.
^ Dunn, William N. (2009). Public Policy Analysis: An Introduction. New York: Longman. ISBN 978-0-13-615554-6.
^ Newell, R. G. (2003). "Discounting the Distant Future: How Much Do Uncertain Rates Increase Valuations?". Journal of Environmental Economics and Management. 46 (1): 52–71. doi:10.1016/S0095-0696(02)00031-1. hdl:10161/9133.
^ Campbell, Harry F.; Brown, Richard (2003). "Incorporating Risk in Benefit-Cost Analysis". Benefit-Cost Analysis: Financial and Economic Appraisal using Spreadsheets. Cambridge: Cambridge University Press. ISBN 978-0-521-52898-6. Ch. 9 provides a useful discussion of sensitivity analysis and risk modelling in cost benefits analysis. CBA.
^ Cover, Thomas M.; Thomas, Joy A. (2006). Elements of Information Theory. Wiley-Interscience (2nd ed.). Hoboken, NJ: John Wiley & Sons. pp. 409–412. ISBN 0471241954.
^ http://regulation.huji.ac.il/papers/jp5.pdf
^ "Executive Order 12866: Regulatory Planning and Review". govinfo.library.unt.edu.
^ Heinzerling, L. (2000), "The Rights of Statistical People", Harvard Environmental Law Review 24, 189–208.
^ Tomer, Adie (2017-10-03). "America's commuting choices: 5 major takeaways from 2016 census data". Brookings Institution. Retrieved 2019-08-18.
^ Stromberg, Joseph (2015-04-29). "The utter dominance of the car in American commuting". Vox. Retrieved 2019-08-18.
^ Jaffe, Eric (2015-06-08). "How Much Money U.S. Transit Systems Lose Per Trip, in 1 Chart". CityLab. Retrieved 2019-08-18.
^ "Rail Transit Cost Overruns – The Antiplanner". ti.org. Retrieved 2019-08-18.
^ O'Toole, Randal (2014-06-03). "The Worst of Both: The Rise of High-Cost, Low-Capacity Rail Transit" (PDF). Policy Analysis. Cato Institute. 750.
^ Feigenbaum, Baruch (2013). "High-Speed Rail in Europe and Asia: Lessons for the United States" (PDF). Reason Foundation.
^ "Detroit People Mover | Detroit Historical Society". detroithistorical.org. Retrieved 2019-08-18.
^ Neavling, Steve (2019-05-01). "Two years in, Detroit's QLine falls far short of expectations". Detroit Metro Times. Retrieved 2019-08-18.
^ Matar, Sharif (2013-08-01). "Rail to Nowhere: Honolulu, Hawaii's Train Boondoggle". Reason.com. Retrieved 2019-08-18.
^ Frosch, Dan; Overberg, Paul (2019-03-22). "How a Train Through Paradise Turned Into a $9 Billion Debacle". The Wall Street Journal. ISSN 0099-9660. Retrieved 2019-08-18.
^ Lyte, Brittany (2017-07-19). "Honolulu's Rapid Transit Crisis". CityLab. Retrieved 2019-08-18.
^ McFadden, Daniel (1974-11-01). "The measurement of urban travel demand". Journal of Public Economics. 3 (4): 303–328. doi:10.1016/0047-2727(74)90003-6. ISSN 0047-2727.
^ McFadden, Daniel L. (2002). "The Path to Discrete-Choice Models" (PDF). Access Magazine.
^ Rosenthal, Brian M. (2017-12-28). "The Most Expensive Mile of Subway Track on Earth". The New York Times. ISSN 0362-4331. Retrieved 2019-08-18.
^ Harris, Connor (2018-08-25). "Why building trains in New York costs more than any other city". New York Post. Retrieved 2019-08-18.
^ Rivoli, Dan (2018-05-25). "MTA Budget: Where does the money come from?". NY Daily News. Retrieved 2019-08-18.
^ Green, Ariana (2005-11-19). "A Hesitant Puerto Rico Tries Commuting by Train". The New York Times. ISSN 0362-4331. Retrieved 2019-08-18.
^ "Before and After Studies of New Starts Projects"(PDF). Federal Transit Administration. 2008.
^ Honore, Marcel (2018-07-09). "What Honolulu Rail Planners Can Learn From Puerto Rico". Honolulu Civil Beat. Retrieved 2019-08-18.
^ Phaneuf, Daniel J.; Requate, Till (2016-12-24). A Course in Environmental Economics: Theory, Policy, and Practice. Cambridge University Press. ISBN 9781316867358.
^ Nurmi, Väinö; Ahtiainen, Heini (2018-08-01). "Distributional Weights in Environmental Valuation and Cost-benefit Analysis: Theory and Practice". Ecological Economics. 150: 217–228. doi:10.1016/j.ecolecon.2018.04.021. ISSN 0921-8009.
^ Persky, Joseph (November 2001). "Retrospectives: Cost-Benefit Analysis and the Classical Creed". Journal of Economic Perspectives. 15 (4): 199–208. doi:10.1257/jep.15.4.199. ISSN 0895-3309.
^ Brekke, Kjell Arne (1997-04-01). "The numéraire matters in cost-benefit analysis". Journal of Public Economics. 64 (1): 117–123. doi:10.1016/S0047-2727(96)01610-6. ISSN 0047-2727.
^ Boadway, Robin (2006). "Principles of Cost-Benefit Analysis". Public Policy Review. 2 (1): 1–44.
^ Samuelson, P. A. (1977). "Reaffirming the Existence of "Reasonable" Bergson-Samuelson Social Welfare Functions". Economica. 44 (173): 81–88. doi:10.2307/2553553. ISSN 0013-0427. JSTOR 2553553.
Further reading
Campbell, Harry; Brown, Richard (2003). Benefit-Cost Analysis: Financial and Economic Appraisal Using Spreadsheets. Cambridge University Press. ISBN 978-0-521-82146-9.
Chakravarty, Sukhamoy (1987). "Cost–benefit analysis". The New Palgrave: A Dictionary of Economics. 1. London: Macmillan. pp. 687–690. ISBN 978-0-333-37235-7.
David, R., Ngulube, P. & Dube, A., 2013, "A cost–benefit analysis of document management strategies used at a financial institution in Zimbabwe: A case study", SA Journal of Information Management 15(2), Art. #540, 10 pages.
Dupuit, Jules (1969). "On the Measurement of the Utility of Public Works". In Arrow, Kenneth J.; Scitovsky, Tibor (eds.). Readings in Welfare Economics. London: Allen and Unwin. ISBN 978-0-04-338038-3.
Eckstein, Otto (1958). Water-resource Development: The Economics of Project Evaluation. Cambridge: Harvard University Press.
Folland, Sherman; Goodman, Allen C.; Stano, Miron (2007). The Economics of Health and Health Care (Fifth ed.). New Jersey: Pearson Prentice Hall. pp. 83–84. ISBN 978-0-13-227942-0.
Ferrara, A. (2010). Cost–Benefit Analysis of Multi-Level Government: The Case of EU Cohesion Policy and US Federal Investment Policies. London and New York: Routledge. ISBN 978-0-415-56821-0.
Frank, Robert H. (2000). "Why is Cost–Benefit Analysis so Controversial?". Journal of Legal Studies. 29 (S2): 913–930. doi:10.1086/468099.
Hirshleifer, Jack (1960). Water Supply: Economics, Technology, and Policy. Chicago: University of Chicago Press.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 8, "The Positive Biases of Technology Assessments and Cost Benefit Analyses", New Society Publishers, Gabriola Island, British Columbia, Canada, ISBN 0865717044, 464 pp.
Maass, Arthur, ed. (1962). Design of Water-resource Systems: New Techniques for Relating Economic Objectives, Engineering Analysis, and Governmental Planning. Cambridge: Harvard University Press.
McKean, Roland N. (1958). Efficiency in Government through Systems Analysis: With Emphasis on Water Resources Development. New York: Wiley.
Nas, Tevfik F. (1996). Cost–Benefit Analysis: Theory and Application. Thousand Oaks, CA: Sage. ISBN 978-0-8039-7133-2.
Richardson, Henry S. (2000). "The Stupidity of the Cost–Benefit Analysis". Journal of Legal Studies. 29 (S2): 971–1003. doi:10.1086/468102.
Quigley, John; Walls, Lesley (2003). "Cost–Benefit Modelling for Reliability Growth" (PDF). Journal of the Operational Research Society. 54 (12): 1234–1241. doi:10.1057/palgrave.jors.2601633.
Sen, Amartya (2000). "The Discipline of Cost–Benefit Analysis". Journal of Legal Studies. 29 (S2): 931–952. doi:10.1086/468100.
External links
Benefit-Cost Analysis Center at the University of Washington's Daniel J. Evans School of Public Affairs
Intro to Cost–Benefit Analysis
Benefit-Cost Analysis site maintained by the Transportation Economics Committee of the Transportation Research Board(TRB).
From Encyclopedia of Mathematics
2010 Mathematics Subject Classification: Primary: 26A09 Secondary: 30B10 [MSN][ZBL]
When the center $x_0$ is equal to $0$, the series is also known as the Maclaurin series. The series was published by B. Taylor in 1715, whereas a series reducible to it by a simple transformation was published by Johann I. Bernoulli in 1694.
One real variable
Let $U$ be an open set of $\mathbb R$ and consider a function $f: U \to \mathbb R$. If $f$ is infinitely differentiable at $x_0$, its Taylor series at $x_0$ is the power series given by \begin{equation}\label{e:Taylor_series} \sum_{n=0}^\infty \frac{f^{(n)} (x_0)}{n!} (x-x_0)^n\, , \end{equation} where we use the convention that $0^0=1$.
The partial sums \[ P_k(x) := \sum_{n=0}^k \frac{f^{(n)} (x_0)}{n!} (x-x_0)^n \] of a Taylor series are called Taylor polynomial of degree $k$ and the "remainder" $f(x)- P_k (x)$ can be estimated in several ways, see Taylor formula.
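For instance, for $f(x)=e^{x}$ and $x_0=0$ one obtains
\[
e^{x}=\sum_{n=0}^{\infty}\frac{x^{n}}{n!}=1+x+\frac{x^{2}}{2!}+\frac{x^{3}}{3!}+\cdots,
\]
and the Lagrange form of the remainder gives $|e^{x}-P_{k}(x)|\leq \frac{e^{|x|}\,|x|^{k+1}}{(k+1)!}$ for every $x\in\mathbb R$, so in this case the Taylor series converges to the function on all of $\mathbb R$.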
Analyticity
The property of being infinitely differentiable does not guarantee the convergence of the Taylor series to the function $f$: a well-known example is given by the function \[ f (x) = \left\{\begin{array}{ll} e^{-1/x^2} \quad &\mbox{if } x\neq 0\\ 0 \quad &\mbox{otherwise.} \end{array} \right. \] Indeed the function $f$ defined above is infinitely differentiable everywhere, its Taylor series at $0$ vanishes identically, but $f(x)>0$ for any $x\neq 0$.
If the Taylor series of a function $f$ at $x_0$ converges to the values of $f$ in a neighborhood of $x_0$, then $f$ is real analytic (in a neighborhood of $x_0$). The Taylor series is also unique in the following sense: if for some given function $f$ defined in a neighborhood of $x_0$ there is a power series $\sum a_n (x-x_0)^n$ which converges to the values of $f$, then such series coincides necessarily with the Taylor series.
With the aid of the formulas for the difference $f (x) - P_n (x)$ (see Taylor formula) one can establish several criterions for the analyticity of $f$. A popular one is the existence of positive constants $C$, $R$ and $\delta$ such that \[ |f^{(n)} (x)| \leq C n! R^n \qquad \forall x\in ]x_0-\delta, x_0+\delta[\quad \forall n\in \mathbb N\, . \]
One complex variable
If $U$ is an open subset of the complex plane and $f:U\to \mathbb C$ a holomorphic function (i.e. complex differentiable at every point of $U$), then the Taylor series at $x_0\in U$ is given by the same formula, where $f^{(n)} (x_0)$ denotes the complex $n$-th derivative. The existence of all derivatives is guaranteed by the holomorphy of $f$, which also implies the convergence of the power series to $f$ in a neighborhood of $x_0$ (in sharp contrast with the real case!), see Analytic function.
Several variables
The Taylor series can be generalized to functions of several variables. More precisely, if $U\subset \mathbb R^n$ and $f:U\to \mathbb R$ is infinitely differentiable at $\alpha\in U$, the Taylor series of $f$ at $\alpha$ is given by \begin{equation}\label{e:power_series_nd} \sum_{k_1, \ldots, k_n =0}^\infty \frac{1}{k_1!\ldots k_n!} \frac{\partial^{k_1+\ldots + k_n} f}{\partial x_1^{k_1} \ldots \partial x_n^{k_n}} (\alpha)\, (x_1-\alpha_1)^{k_1} \ldots (x_n - \alpha_n)^{k_n}\, \end{equation} (see also Multi-index notation for other ways of expressing \eqref{e:power_series_nd}). The (real) analyticity of $f$ is defined by the property that such series converges to $f$ in a neighborhood of $\alpha$. An entirely analogous formula can be written for holomorphic functions of several variables (see Analytic function).
Further generalizations
The Taylor series can be generalized to the case of mappings of subsets of linear normed spaces into similar spaces.
[Di] J.A. Dieudonné, "Foundations of modern analysis" , Acad. Press (1960) (Translated from French)
[IS] V.A. Il'in, V.A. Sadovnichii, B.Kh. Sendov, "Mathematical analysis" , Moscow (1979) (In Russian)
[Ni] S.M. Nikol'skii, "A course of mathematical analysis" , 1–2 , MIR (1977) (Translated from Russian)
[Ru] W. Rudin, "Principles of mathematical analysis" , McGraw-Hill (1976) pp. 75–78 MR0385023 Zbl 0346.26002
[St] K.R. Stromberg, "Introduction to classical real analysis" , Wadsworth (1981)
How to Cite This Entry:
Taylor series. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Taylor_series&oldid=31211
This article was adapted from an original article by L.D. Kudryavtsev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
Fast convergence in $L^1$ implies convergence almost everywhere
This is a proof-verification request.
Claim: Let $(X,\mathscr M,\mu)$ be a measure space. Let $f_n$ ($n\in\mathbb N$) and $f$ be measurable, integrable, real-valued functions such that $(f_n)_{n\in\mathbb N}$ converges to $f$ in $L^1$ at a rate $O(1/n^p)$, where $p>1$. Then, $f_n\to f$ almost everywhere.
Note: If no assumption is made on the rate of convergence, then the best one can establish is the existence of a $\textit{sub}$sequence $(f_{n_k})_{k\in\mathbb N}$ converging to $f$ almost everywhere (Corollary 2.32 in Folland, 1999).
Proof of the Claim: Suppose there exists $M>0$ such that $\int|f_n-f|\,\mathrm d\mu\leq M/n^p$ for all $n\in\mathbb N$. For each $\varepsilon>0$ and $n\in\mathbb N$, let $$E(n,\varepsilon)\equiv \big\{x\in X\,\big|\,|f_n(x)-f(x)|\geq\varepsilon\big\}.$$ Then, $$\frac {M}{n^p}\geq\int|f_n-f|\,\mathrm d\mu\geq \int_{E(n,\varepsilon)}|f_n-f|\,\mathrm d\mu\geq\varepsilon\,\mu(E(n,\varepsilon))$$ for each $n\in\mathbb N$, so that $$\mu(E(n,\varepsilon))\leq\frac{M}{n^p\varepsilon}.$$ Defining $$E(\varepsilon)\equiv\bigcap_{m=1}^{\infty}\bigcup_{n=m}^{\infty}E(n,\varepsilon),$$ one has that $$\mu(E(\varepsilon))\underset{\forall m\in\mathbb N}{\leq}\mu\left(\bigcup_{n=m}^{\infty}E(n,\varepsilon)\right)\leq\sum_{n=m}^{\infty}\mu(E(n,\varepsilon))\leq\frac{M}{\varepsilon}\sum_{n=m}^{\infty}\frac{1}{n^p}\to0\quad\text{as $m\to\infty$},$$ since the series $\sum_{n=1}^{\infty}1/n^{p}=\zeta(p)$ converges. Therefore, $\mu(E(\varepsilon))=0$ for any $\varepsilon>0$, which implies also that $$\mu\left(\bigcup_{q\in\mathbb Q\cap(0,\infty)}E(q)\right)=0.$$
Now, if $x\in X$ is such that $f_n(x)\not\to f(x)$, then there exists some $q>0$ such that $q\in\mathbb Q$ and for each $m\in\mathbb N$, there exists some $n\geq m$ so that $|f_n(x)-f(x)|\geq q$. That is, $x\in E(q)$. Hence, the set where pointwise convergence fails is a subset of $\bigcup_{q\in\mathbb Q\cap(0,\infty)}E(q)$, completing the proof. $\quad\blacksquare$
functional-analysis measure-theory proof-verification
triple_sec
There is a typo in $E(\varepsilon)\equiv\bigcap_{m=1}^{\infty}\bigcup_{m=n}^{\infty}E(m,\varepsilon),$ since you've written $E(\varepsilon,m)$ earlier, i.e., the real argument comes first, plus you can't use the same variable of summation $m$ in two successive operators $\cap\cup$. In the next line $E(m,\varepsilon)$ looks like it ought to be $E(\varepsilon,n)$.
– ForgotALot
You could clarify where the quadratic convergence is actually needed. Any exponent p > 1 works just as fine.
– shuhalo
Your proof is fine. In fact, it can be easily generalized to the case $(f_n)_{n\in\mathbb N}$ converges to $f$ in $L^r$ at a rate $O(1/n^p)$, where $p>1$ and $r\geqslant 1$.
– Ramiro
Simpler proof: Monotone convergence shows $\int\sum|f_n-f| = \sum\int|f_n-f|<\infty$. Hence $\sum|f_n-f|<\infty$ almost everywhere, which implies $|f_n-f|\to0$ almost everywhere.
– David C. Ullrich
@triple_sec The $r$ in $L^r$ affects the rate at which $f_n \to f$ converges in measure, because of the definition of the norm in $L^r$. Here are the details: If $(f_n)_{n\in\mathbb N}$ converges to $f$ in $L^r$ at a rate $O(1/n^p)$, where $p>1$ and $r\geqslant 1$, then $$\Vert f_n - f\Vert_r \leq \frac{M}{n^p}$$ which means $$\left (\int\vert f_n - f\vert ^r d\mu \right )^\frac{1}{r} \leq \frac{M}{n^p}$$ So we have $$\int\vert f_n - f\vert ^r d\mu \leq \frac{M^r}{n^{pr}}$$ So we get $$ \mu(E(n,\varepsilon)) \leq \frac{M^r}{n^{pr} \varepsilon ^r}$$
Yes, your proof is correct.
Moreover, instead of $\frac{1}{n^p}$ the bound could have been any function $f$ such that $\sum_{n = 1}^\infty f(n)$ converges.
Chain Markov
What do high-frequency expenditure network data reveal about spending and inflation during COVID‑19?
Staff Analytical Note 2020-20 (English)
Kim Huynh, Helen Lao, Patrick Sabourin, Angelika Welte
Data available as: CSV, JSON, and XML
The COVID‑19 pandemic and the containment measures that followed have shifted what Canadians buy. For example, since the pandemic began, Canadians have travelled less and bought more cleaning products and non-perishable foods compared with before. The official consumer price index (CPI) uses a fixed basket of goods and services, based on expenditures reported by Canadians in the Survey of Household Spending (SHS). The CPI basket was last updated in January 2019 using data from the 2017 SHS.1 Quantifying the cost of a fixed basket over time allows for consistent measurement of pure price change. Although there is a small measurement bias in the CPI, the fixed-basket approach has worked relatively well under normal economic conditions.2
Concerns have been raised, however, that focusing on a fixed basket of goods and services might be less useful now since consumers have changed their spending patterns because of COVID‑19. Given these changes, CPI inflation may not fully capture the expenditures that a typical Canadian consumer makes currently. There is some evidence that Canadians' views of inflation may differ from the official measure: in the Canadian Survey of Consumer Expectations, consumer expectations for one-year-ahead inflation rose slightly in the second quarter of 2020 despite a sharp decline in the officially reported inflation rate.3, 4
To address this potential gap between Canadians' perceptions and the official inflation rate, the Bank of Canada partnered with Statistics Canada to construct an adjusted CPI to better gauge the typical basket purchased by consumers during the COVID‑19 pandemic (a real-time basket versus the fixed basket). To construct our index, we combine anonymized and aggregated datasets on consumer credit card purchases with data from Statistics Canada's Monthly Retail Trade Survey and transaction data from Canadian grocery retailers. Then, we use these data to map changes in the shares, or weights, of consumer spending on the goods and services in the CPI basket. Figure 1 outlines our process.
Figure 1: Constructing the adjusted CPI measure
The rest of this note is organized as follows. Section 1 describes the card payments data, known as high-frequency expenditure network (HFEN) data, collected from major Canadian payment card providers. Section 2 outlines the method used to calculate inputs to the adjusted CPI from these data. Section 3 covers the implied changes in spending shares and CPI weights. Section 4 shows the results of the adjusted CPI measure based on these data along with the implications for CPI inflation, and section 5 offers some concluding remarks.
1. High-frequency expenditure network data
National statistical agencies require timely data from surveys and other data sources on the size and composition of consumer spending. Extreme events, which may introduce unpredictable and large shocks, highlight the need for timely data. While Diewert and Fox (2020) advocate for a continual consumer survey to address this need, the International Monetary Fund (2020) points out that it can take up to a year to update weights after the data have been collected. Alternative data sources may thus offer a more feasible approach. Galbraith and Tkacz (2013) propose using debit card transaction data to investigate the effect of extreme events on total consumer spending. To understand the effect of payment disruptions on the real economy, they study what happened in Canada after two extreme events, the September 11, 2001, terrorist attack and the SARS outbreak in 2002. Similarly, HFEN data on debit card purchases may be useful in tracking the effects of COVID‑19 on the level of household spending. In addition, COVID‑19 has affected credit and debit card spending across industries in an uneven way, in terms of both the direction and the magnitude of the change.5 Thus, HFEN data on individual spending categories could potentially be helpful for determining the shares of spending on consumption that underpin the CPI.6 This potential has also been demonstrated by Cavallo (2020), who uses debit and credit card transaction data to estimate the impact of COVID‑19 on the US CPI.
The Bank has access to HFEN datasets from payment service providers. These datasets provide weekly statistics on transaction values processed in Canada, covering close to three-quarters of the value of payment card purchases in Canada. For 2019, that amounts to around $600 billion. The data are available on an aggregate level and by merchant type.7
2. Mapping the CPI components
Four practical considerations arise when we map the HFEN data to the expenditure categories of the various components that make up the CPI. First, the market segments and merchant categories used by the payment service providers do not align one to one with the CPI components. Second, the datasets do not cover all payment methods. Cash, cheques, bank transfers and some other payment methods are not included in the data, yet they play an important role for some expenditure categories, such as rent, utilities and car purchases. Third, while the HFEN data contain transactions from all major card networks, they may not cover all industries and consumer demographics evenly. Analysis of consumer data by Henry, Huynh and Welte (2018) and of retailer data by Kosse et al. (2017) shows that the adoption of card networks differs across consumer groups and industries. Fourth, growth in card payments may not be the same as growth in spending, and some networks have grown at a faster pace than others. While these considerations are partially addressed by our methodology, they also indicate the need for complementary data and research to calculate the weights.8
The proposed solution
Since HFEN data are available by merchant type, we can map HFEN data to CPI components based on the main type of product sold by merchant type. Since most merchants sell a group of products, HFEN data are useful for mapping larger groups of sub-categories such as food purchased from stores but not for mapping granular breakdowns into specific commodities such as bread or cereal.
The CPI basket is made up of eight major categories (see Mitchell 2019). Of these, we exclude shelter because it is not well represented in the HFEN data.9 The remaining categories are food; household operations, furnishings and equipment; clothing and footwear; transportation; health and personal care; recreation, education and reading; and alcoholic beverages, tobacco products and recreational cannabis.
Some of these major categories are broken down further into intermediate categories. For example, food includes both food purchased from stores and food from restaurants. Purchases of motor vehicles are included in transportation; however, since these retail purchases are commonly made with cheques or through financing, mapping the weights for that CPI component is done using alternative sources of data.10
We grouped the categories and their sub-categories according to the type of business that sells those products and services. For example, food from restaurants was grouped with alcohol in restaurants.
This proposed solution addresses the first consideration noted above by setting up a concordance between CPI components and merchant types in the HFEN data. To address the second consideration, we exclude categories where most transactions are likely made with non-card payment methods. These include shelter, and purchase and leasing of passenger vehicles; alternative sources of data for these categories are described below. Further research is planned to understand the impact of COVID‑19 on the potential composition changes in other payment methods, in particular cash, and to understand the role of heterogeneity in payment choice across demographics and across product and industry categories.11
The coverage of HFEN and other sources of data
We calculate final HFEN weights for 64 percent of the CPI basket. The sum of the final HFEN weights for January to May 2020 is equal to a total weight within 1 percent of the sum of the original CPI basket weights (64 percent). For the months of January and February 2020, the HFEN weights of individual components are close to the original basket weights.
To map the remaining 36 percent of the CPI basket and to help overcome the four challenges highlighted above, we use additional sources of data to complement the HFEN data in the calculation of real-time weights. These include Statistics Canada's monthly retail trade survey and transaction data from Canadian grocery retailers, along with Statistics Canada's expertise.
Estimating the adjusted CPI weights
We compute the relative changes in the adjusted CPI weights from relative changes in the HFEN data. For example, the adjusted CPI expenditure weight of a component x would increase by a factor of 2 if the share in the HFEN data doubles. We focus on changes in year-over-year values to control for seasonality.
The CPI basket weight of a given component \(x_{t}\) in 2019 is denoted by \(w_{t}\). We denote \(g_{t}\) to be the share of the component x in HFEN data, where the time index is measured in months.
In step 1 of our procedure, the alternative CPI expenditure shares (hereafter HFEN weights) are estimated as \(\hat{w}_{t+12}=\frac{g_{t+12}}{g_{t}}\,w_{t}\), anchoring the HFEN weights to the 2019 CPI basket weights. With this estimation, the relative increase of the alternative CPI expenditure share matches the relative increase of the share in the data provided by payment service providers.12 To estimate the overall change in spending, the basket weights are scaled by the observed expenditure changes in the HFEN data. Taking the sum of the scaled weights gives an estimate of the overall change in spending.
For step 2, Statistics Canada requires dollar values as inputs. Combining the total value \(PQ_{t}\) of the mapped basket at time \(t\)13 with the HFEN weights \(\hat{w}_{t+12}\) and the estimate of the overall change in spending, we compute the estimated values \(PQ_{t+12}^{x}\) for each mapped component.14
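As a rough illustration of the step-1 formula (the component names and share values below are toy numbers, not the actual HFEN data or Statistics Canada's implementation):

```python
# Minimal sketch: step 1 of the weight update,
# w_hat_{t+12} = (g_{t+12} / g_t) * w_t,
# which rescales each component's 2019 basket weight by the year-over-year
# change in its share of card spending.

def hfen_weights(basket_w, g_prev, g_now):
    return {k: basket_w[k] * g_now[k] / g_prev[k] for k in basket_w}

basket_w = {"food_stores": 11.31, "restaurants": 5.17, "gasoline": 3.50}  # w_t
g_prev   = {"food_stores": 0.20,  "restaurants": 0.12, "gasoline": 0.08}  # g_t
g_now    = {"food_stores": 0.30,  "restaurants": 0.05, "gasoline": 0.04}  # g_{t+12}

print(hfen_weights(basket_w, g_prev, g_now))
# food_stores rises to ~16.97, restaurants falls to ~2.15, gasoline to 1.75
```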
3. Estimated changes in spending patterns
The HFEN data indicate a significant contraction in spending from March to May 2020 compared with the same period in 2019: for the mapped component, spending overall was down 12 percent in March, 31 percent in April and 14 percent in May. As the economy reopens, the HFEN data can also be used to monitor how spending resumes, overall and by market segment.
Using weekly data from January to May 2020, Chart 1 and Chart 2 illustrate that the proposed methodology suggests the alternative weights (\(\hat{w}_{t}\)) for food, health and personal care goods, gasoline, and clothing were close to the fixed CPI basket weights before COVID‑19. This stability can also be observed for other expenditure categories. However, sharp changes occur for several expenditure categories after the outbreak of the pandemic.15 We notice an increase in the spending shares for food purchased from stores and personal and health care goods, while the spending shares for food from restaurants, clothing and footwear, and gasoline declined.
Chart 1: HFEN weights for some CPI components have increased in weight
Note: CPI is consumer price index; HFEN is high-frequency expenditure network.
Sources: Statistics Canada (Table 18-10-0007-01) and Bank of Canada calculations Last observation: July 26, 2020
Chart 2: HFEN weights for other CPI components have declined in weight
4. Adjusted CPI measure suggests slightly higher inflation
Using the adjusted national weights listed in Table 1, Statistics Canada compiled the adjusted CPI measure based on the chained Laspeyres method.16 This adjusted CPI measure shows slightly less downward pressure than the official CPI in the early months of the pandemic. The year-over-year growth of the adjusted CPI measure is about 1.0 percent in March, 0.0 percent in April and -0.1 percent in May, compared with 0.9 percent, -0.2 percent and -0.4 percent for the official CPI (Chart 3). This is because the weights of components with higher inflation, such as food purchased from stores, have increased, while weights for components with lower inflation, such as transportation, have declined.
Table 1: CPI component weights based on high-frequency data
CPI component | CPI basket weight | March adjusted weight | April adjusted weight | May adjusted weight
Food (in stores and at restaurants) | 16.48 | 16.54 | 20.68 | 20.84
Food in stores | 11.31 | 11.38 | 16.86 | 17.94
Food at restaurants | 5.17 | 5.18 | 3.82 | 2.91
Shelter | 27.36 | 27.70 | 31.23 | 37.12
Household operations and furnishings | 12.80 | 12.66 | 13.04 | 13.99
Clothing and footwear | 5.17 | 5.00 | 3.30 | 2.22
Transportation | 19.95 | 19.04 | 15.01 | 12.14
Health and personal care | 4.79 | 4.85 | 5.61 | 4.96
Recreation, education and reading | 10.24 | 11.62 | 7.97 | 5.18
Alcoholic beverages and tobacco products | 3.21 | 2.60 | 3.15 | 3.55
Difference between the adjusted and official measures (year-over-year change in percentage points): March 0.10, April 0.20, May 0.30
Note: The adjusted measure is calculated based on the chained Laspeyres method and uses the one-month lagged weights as inputs. For example, the adjusted measure for May 2020 uses the real-time weights from April 2020. See Mitchell et al. (2020) for Statistics Canada's version.
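The note above describes a chained Laspeyres index in which each monthly link uses the previous month's adjusted weights; a minimal sketch of that chaining logic (with made-up price relatives and weights, not the published figures) might look like this:

```python
# Minimal sketch: a chained Laspeyres index whose monthly links use
# one-month-lagged weights. price_rel[m][k] is the month-over-month price
# relative of component k in month m; weights[m][k] is the weight used for
# that link.

def chained_laspeyres(price_rel, weights, base=100.0):
    index = [base]
    for rel_m, w_m in zip(price_rel, weights):
        link = sum(w_m[k] * rel_m[k] for k in w_m) / sum(w_m.values())
        index.append(index[-1] * link)
    return index

price_rel = [{"food": 1.010, "transport": 0.970},   # toy March links
             {"food": 1.015, "transport": 0.940}]   # toy April links
weights   = [{"food": 16.5,  "transport": 19.0},    # February-based weights
             {"food": 16.5,  "transport": 15.0}]    # March adjusted weights
print(chained_laspeyres(price_rel, weights))
```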
Chart 3: The adjusted CPI measure suggests a slightly higher inflation rate
Note: CPI is consumer price index.
Sources: Statistics Canada and Bank of Canada calculations Last observation: May 2020
Because the Bank uses total CPI inflation as the target in its monetary policy framework, these results provide useful insights on the measurement risk around that target.17 However, even when the changes in spending patterns during the pandemic have been accounted for, it is evident that the COVID‑19 shock is disinflationary in the short run. This is because despite consumers spending more on components with a relatively higher inflation rate, the price declines from components with lower demand, such as transportation, more than offset upward price pressures from components such as food purchased from stores.
Our results suggest that inflation adjusted for changes in spending patterns during the pandemic is only slightly higher than the official CPI inflation so far in 2020. It is evident that the COVID‑19 shock is disinflationary in the short run even when changes in consumption patterns are accounted for. Statistics Canada and the Bank will continue to update this new index to calculate an adjusted CPI inflation measure during the recovery period. The difference between the two measures of inflation may dissipate if the changes in consumption patterns reverse.
1. Since 2015, the CPI basket is typically updated every two years. The last basket update was in February 2019 with the January 2019 CPI update. See Statistics Canada (2019) for an overview of the methodology for the Canadian CPI.[←]
2. See the Bank of Canada's Understanding the consumer price index.[←]
3. See the Canadian Survey of Consumer Expectations—Second Quarter of 2020.[←]
4. Such a divergence is not new, and gaps like these are not uncommon. However, the difference between households' expectations in the second quarter of 2020 and CPI inflation for May was particularly acute.[←]
5. See, for example, RBC Economics.[←]
6. These are the input shares used for calculating CPI component weights at the level of aggregation where these weights are published by Statistics Canada.[←]
7. Given the sensitive nature of the data, we do not identify the payment service providers, and we suppress certain statistics to prevent their identification.[←]
8. See Mitchell et al. (2020) for more details on the other sources of data. [←]
9. For shelter, Statistics Canada assumes that the dollar values (price updates of quantities purchased on average in 2017) for each sub-component other than home repairs are unchanged. This means that in percentage terms, the share of these components is also shifting due to a rescaling effect, as the dollar values for the rest of the basket are altered with the introduction of alternative weights.[←]
10. Durables (e.g., purchases of motor vehicles) and shelter are not covered by HFEN data. These account for 36 percent of the CPI basket. The mapping for durables is done using data from Statistics Canada's monthly retail trade survey.[←]
11. See Chen et al. (2020) for more on cash use and demand in April 2020. Kaplan and Schulhofer-Wohl (2017) document inflation heterogeneity at the household level using consumer scanner data for Canada. Jaravel and O'Connell (2020) investigate heterogeneity in inflation at the household level in the United Kingdom during the COVID‑19 lockdown restrictions.[←]
12. HFEN CPI basket weights are computed for each dataset provided by payment service providers. The weighted average of the HFEN CPI basket weights is then calculated and depends on the coverage, representativeness and reliability of the data source.[←]
13. This \(PQ\) is based on updating the prices of the goods and services in the basket (prices could be updated every month for a fixed quantity estimated in 2017). \(P\) stands for price and \(Q\) for quantity.[←]
14. Statistics Canada applies a similar mapping method by combining its different sources of data.[←]
15. A statistical test rejects the hypothesis that the HFEN weights in March, April, May and June are random outliers, given the distribution of the HFEN weights before COVID‑19. Similar charts can be produced for other commodities.[←]
16. The Laspeyres formula is a basic method for calculating price indexes and is consistent with the CPI fixed-basket concept. For more information, see Mitchell et al. (2020).[←]
17. This paper examines measurement risks around CPI from changes to the consumption basket. Other measurement risks around CPI inflation during the pandemic could also arise; for example, quality adjustment could be more challenging as an increasing number of sampled products may be out of stock and replaced with products of different quality, and outlet substitution bias could increase as consumers increasingly shift to online shopping. See Kryvtsov (2016) and Sabourin (2012) for a discussion of these measurement biases.[←]
Cavallo, A. 2020. "Inflation with Covid Consumption Baskets." NBER Working Paper No. 27352.
Chen, H., W. Engert, K. Huynh, G. Nicholls, M. Nicholson and J. Zhu. 2020. "Cash and COVID‑19: The Impact of the Pandemic on Demand for and Use of Cash." Bank of Canada Staff Discussion Paper No. 2020-6.
Diewert, W. E. and K. J. Fox. 2020. "Measuring Real Consumption and CPI Bias under Lockdown Conditions." NBER Working Paper No. 27144.
Galbraith, J. and G. Tkacz. 2013. "Analyzing Economic Effects of September 11 and Other Extreme Events Using Debit and Payments System Data." Canadian Public Policy 39 (1): 119–134.
Henry, C., K. Huynh and A. Welte. 2018. "2017 Methods-of-Payment Survey Report." Bank of Canada Staff Discussion Paper No. 2018-17.
International Monetary Fund. 2020. "Consumer Price Index Manual: Concepts and Methods." (Draft). Inter-Secretariat Working Group on Price Statistics.
Jaravel, X. and M. O'Connell. 2020. "Inflation Spike and Falling Product Variety During the Great Lockdown." CEPR Discussion Paper No. DP14880.
Kaplan, G. and S. Schulhofer-Wohl. 2017. "Inflation at the Household Level." Journal of Monetary Economics 91: 19–38.
Kosse, A., H. Chen, M.-H. Felt, V. Dongmo Jiongo, K. Nield and A. Welte. 2017. "The Costs of Point-of-Sale Payments in Canada." Bank of Canada Staff Discussion Paper No. 2017-4.
Kryvtsov, O. 2016. "Is There a Quality Bias in the Canadian CPI? Evidence from Microdata." The Canadian Journal of Economics 49 (4): 1401–1424.
Sabourin, P. 2012. "Measurement Bias in the Canadian Consumer Price Index: An Update." Bank of Canada Review (Summer): 1–11.
Statistics Canada. 2019. "The Canadian Consumer Price Index Reference Paper."
Mitchell, T. 2019. "An Analysis of the 2019 Consumer Price Index Basket Update, Based on 2017 Expenditures."
Mitchell, T., G. O'Donnell, R. Taves, Z. Weselake-George and A. Xu. 2020. "Consumer Expenditures During COVID‑19: An Exploratory Analysis of the Effects of Changing Consumption Patterns on Consumer Price Indexes. Statistics Canada.
We would like to thank Russell Barnett, Erik Ens, Marc-André Gosselin and Oleksiy Kryvtsov for helpful comments. We would like to acknowledge the collaboration of colleagues at Statistics Canada's Consumer Price Division. We express our gratitude to Michele Sura and her colleagues in Knowledge and Information Services, Scott Jones, Alison Layng, Olga Mkhitarova and Katherine Shrives, for support in data acquisition and curation. We also thank Carole Hubbard and Meredith Fraser-Ohman for editorial assistance and Vivian Chu, April Dang and Ceciline Steyn for technical assistance.
Bank of Canada staff analytical notes are short articles that focus on topical issues relevant to the current economic and financial context, produced independently from the Bank's Governing Council. This work may support or challenge prevailing policy orthodoxy. Therefore, the views expressed in this note are solely those of the authors and may differ from official Bank of Canada views. No responsibility for them should be attributed to the Bank.
Content Type(s): Staff research, Staff analytical notes
Topic(s): Inflation and prices, Payment clearing and settlement systems
JEL Code(s): D, D1, D12, E, E3, E31, E4, E42, E5, E52
DOI: https://doi.org/10.34989/san-2020-20
SVDNVLDA: predicting lncRNA-disease associations by Singular Value Decomposition and node2vec
Jianwei Li1,2,
Jianing Li1,2,
Mengfan Kong1,2,
Duanyang Wang1,2,
Kun Fu1,2 &
Jiangcheng Shi3
Numerous studies on discovering the roles of long non-coding RNAs (lncRNAs) in the occurrence, development and prognosis of various human diseases have drawn substantial attention. Since only a tiny portion of lncRNA-disease associations have been properly annotated, an increasing number of computational methods have been proposed for predicting potential lncRNA-disease associations. However, traditional prediction models lack the ability to precisely extract features of biomolecules, so it is urgent to find a model that can identify potential lncRNA-disease associations with both efficiency and accuracy.[←]
In this study, we proposed a novel model, SVDNVLDA, which obtains the linear and non-linear features of lncRNAs and diseases with Singular Value Decomposition (SVD) and node2vec, respectively. The integrated features were constructed by concatenating the linear and non-linear features of each entity, which effectively enriches the semantics contained in the final representations. An XGBoost classifier was then employed to identify potential lncRNA-disease associations.
We propose a novel model to predict lncRNA-disease associations. This model is expected to identify potential relationships between lncRNAs and diseases and further explore the disease mechanisms at the lncRNA molecular level.
Since the central dogma of molecular biology was proposed, RNA has been treated as an intermediary between protein-coding genes and proteins. However, protein-coding genes account for only ~ 1.5% of the human genome, and more than 98% of the human genome cannot encode proteins [1,2,3]. Most non-coding genes are transcribed into non-coding RNAs (ncRNAs). As their name implies, ncRNAs cannot be directly translated into proteins, so for decades they were often considered as the "noise" of genome transcription without any biological function. According to the length of their nucleotide sequences, ncRNAs can be further divided into small ncRNAs (< 200 nucleotides) and long ncRNAs (> 200 nucleotides) [4, 5]. Following the discovery of the lncRNAs H19 and XIST in the early 1990s [6, 7], and aided by the rapid development of scientific methodologies and experimental techniques, researchers have identified thousands of lncRNAs in eukaryotes ranging from nematodes to humans [8, 9]. Abundant evidence has demonstrated that lncRNAs play important roles in many fundamental and critical biological processes, such as transcriptional and post-transcriptional regulation, epigenetic regulation and chromosome dynamics [10,11,12,13,14]. Previous studies showed that the mutation or dysregulation of lncRNAs is closely related to a variety of human diseases. For instance, MALAT1, also known as NEAT2, was found upregulated in non-small cell lung cancer tissues and could serve as an early prognostic biomarker [15]; lncRNA HOTAIR has been explored as a potential biomarker for the detection of hepatocellular carcinoma relapse [16].
The complex and precise regulatory functions of lncRNAs have largely explained the complexity of the genome and opened a new chapter for scientists to understand the diversity of living organisms from the perspective of gene expression regulatory networks. However, the exact mechanisms behind these various regulatory relationships remain to be further explored; the general characteristics of lncRNAs, such as the relationships between their spatial structures and functions, the realization of transcriptional regulation, and the molecular-level mechanisms in various biological processes or diseases, are still unknown. The identification of lncRNA-disease associations can not only help us better understand the underlying mechanisms of lncRNAs in various human diseases, but also accelerate the discovery of potential biomarkers which may benefit the diagnosis, treatment and prognosis of many complex diseases. The exploration of associations between lncRNAs and diseases has attracted more and more researchers' attention and has become a prevalent topic in current lncRNA research. Because the number of newly discovered lncRNAs is growing rapidly every year, identifying lncRNA-disease associations purely based on clinical information and biological experiments has encountered bottlenecks: such approaches consume enormous amounts of time and money, and they cannot predict the associations of unrecorded diseases or lncRNAs, which undoubtedly limits the development of lncRNA-related studies. In contrast, computational methods based on biological data can rapidly and efficiently quantify the correlation probability of interesting lncRNA-disease pairs automatically, which can significantly reduce the time and cost of biological experiments. Therefore, it is a significant and urgent task to develop efficient and robust computational methods that are capable of predicting potential lncRNA-disease associations and providing candidates for future experimental verification.
Many researchers have proposed algorithms and models for predicting potential lncRNA-disease associations over the years. All these methods can be broadly divided into three groups: biological network-based methods, machine learning-based methods and others. Based on the hypothesis that lncRNAs with similar functions are more likely to be associated with diseases with similar phenotypes [17], a significant number of biological network-based methods have been proposed that integrate multi-source biological information networks to detect potential disease-related lncRNAs. Sun et al. [18] proposed a global network-based computing method, RWRLNCD. By integrating a lncRNA-disease association network and a disease similarity network into a lncRNA functional similarity network, RWRLNCD adopted the Random Walk with Restart (RWR) algorithm on the constructed lncRNA functional similarity network to conduct predictions. Yao et al. [19] proposed a predictive model named LNCPricNet, which was based on a multi-layer composite network fusing different data of phenotype-phenotype interactions, lncRNA-lncRNA interactions and gene–gene interactions with disease-ncRNA relationships. The RWR algorithm was applied to predict potential lncRNA-disease associations. LNCPricNet could still achieve a decent performance when the known lncRNA-disease association data was insufficient, largely because the multi-layer composite network, interwoven with abundant information, offset the insufficiency of any particular type of data. Ding et al. [20] proposed a model named TPGLDA, which built a lncRNA-disease-gene tripartite graph and applied a resource allocation algorithm to obtain promising lncRNA-disease associations. Zhao et al. [21] built a multi-heterogeneous network which integrated the lncRNA functional similarity network, genetic similarity network, disease semantic similarity network and the association networks among these three kinds of biological entities, and subsequently realized the prediction of underlying lncRNA-disease associations through the RWR algorithm on their heterogeneous network. Xie et al. [22] adopted an unbalanced bi-random walk on their heterogeneous network to reconstruct the lncRNA-disease association matrix, which reflected the latent lncRNA-disease associations. After that, they proposed the NCPHLDA model [23], which constructed two cosine similarity networks for all lncRNAs and diseases separately and combined the network consistency projection scores of each similarity network as the association probability of the corresponding lncRNA-disease pairs. Most of these biological network-based methods adopt random walk-based algorithms on the established heterogeneous networks, which essentially take the underlying topology information of nodes in the heterogeneous networks as the basis for potential association prediction. The prediction performance of network-based methods heavily depends on whether the built network accurately and comprehensively reflects the interactions among real biomolecules. Meanwhile, the rigid neighborhood relationship utilized by the random walk algorithm or its derivatives limits the information richness of molecular features.
In recent years, machine learning and deep learning techniques have been widely adopted in lncRNA-disease association prediction. Most machine learning methods for disease-related lncRNA candidate selection typically train classifiers with the acquired features of experimentally confirmed lncRNA-disease associations and candidates of interest, and then rank the candidate associations according to the classification results. Chen et al. [17] proposed a computational model, LRLSLDA (Laplacian Regularized Least Squares for lncRNA-Disease Association), based on the "guilt by association" assumption that similar diseases tend to be associated with lncRNAs possessing similar functions. They developed a semi-supervised learning framework to predict potential disease-lncRNA associations. However, too many parameters are involved in their model, and how to adjust them was not well addressed. In addition, the same lncRNA-disease pair may get different scores from the lncRNA space and the disease space, respectively, and how to properly combine these scores is a tricky problem. Liu et al. [24] designed a computational model by integrating known human disease genes, human lncRNAs and gene expression profiles without relying on any known human lncRNA-disease relationships. However, this model could not predict disease-associated lncRNAs that have no associated gene records. Guo et al. [25] integrated the Gaussian interaction profile kernel similarity of lncRNAs and diseases with disease semantic similarity, and utilized an autoencoder to obtain lower-dimensional features of lncRNA-disease pairs. Finally, a rotation forest classifier was adopted to obtain the prediction results. Beyond that, several deep learning-based models have been developed in the lncRNA-disease prediction field. Zeng et al. [26] initially combined a matrix factorization method with a two-hidden-layer neural network architecture to capture the linear and non-linear features of lncRNAs and diseases, respectively. Subsequently, they proposed a deep learning framework named DMFLDA [27], which adopted deep matrix factorization to learn the representations of lncRNAs and diseases. They also proposed the SDLDA model [28], which mixed a matrix factorization method with a neural network framework to extract different features of lncRNAs and diseases.
In addition to biological network and machine learning methods, plenty of statistical methods have also been adopted to predict latent lncRNA-disease associations. Chen et al. [29] proposed the HGLDA model based on the hypergeometric distribution, in which the functional similarity of lncRNAs was calculated by integrating disease semantic similarity, miRNA-disease associations, and miRNA-lncRNA interactions. By testing whether the number of common miRNAs shared by the disease and the lncRNA of the same lncRNA-disease pair exceeded a certain threshold, HGLDA performed hypergeometric distribution tests for each lncRNA-disease pair. Lu et al. [30] proposed a matrix factorization-based model, SIMCLDA. Based on known lncRNA-disease and gene-disease associations, gene–gene interactions and the functional similarities of diseases, the Gaussian interaction kernel of lncRNAs was calculated, and a matrix decomposition method was introduced to predict the potential lncRNA-disease associations. However, it did not tackle the problem of data sparsity, and further studies are needed to improve its performance. Apart from statistical methods, many other novel algorithms can be applied for potential association prediction. For example, Fan et al. [31] introduced graph convolutional matrix completion to predict potential lncRNA-disease associations. Fusing verified lncRNA-disease associations and similarity data, they constructed an encoder-decoder model to learn node embeddings and score associations, respectively.
In this paper, we propose an integrated feature extraction model, the Singular Value Decomposition (SVD) and Node2Vec based LncRNA-Disease Association prediction model (SVDNVLDA), to predict potential lncRNA-disease associations. The rest of this paper is arranged as follows:
The results and discussions section exhibits the influence of hyperparameters in SVDNVLDA, the results of the model comparison, robustness tests and case studies, as well as an in-depth analysis of the limitations of SVDNVLDA and further improvement directions.
The conclusion section gives an overview of the workflow of SVDNVLDA and its first-class prediction capability for practical applications.
The methods section introduces the acquisition and preprocessing of the experimental data, the prediction process of SVDNVLDA, and the theoretical details of the SVD and node2vec methods involved in our model.
Results and discussions
Evaluation metrics
Unless otherwise specified, all the numerical experimental results in this paper were generated under tenfold cross-validation. The evaluation metrics used in the classifier selection and parameter adjustment processes were Accuracy (Acc), Sensitivity (Sen), Specificity (Spec), Precision (Prec), and the Matthews correlation coefficient (MCC) [32, 33]. In the contrast experiments, the average AUC and AUPR values over the ten testing sets of each model were obtained, and the corresponding ROC curves and PR curves were drawn from the results of the tenfold cross-validations [34, 35].
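For readers who want to reproduce these metrics, the sketch below (not the authors' code, and using hypothetical toy data) shows one way to compute them from binary labels, predictions and scores with scikit-learn.

```python
# Minimal sketch: computing Acc, Sen, Spec, Prec, MCC, AUC and AUPR with scikit-learn.
import numpy as np
from sklearn.metrics import (confusion_matrix, matthews_corrcoef,
                             roc_auc_score, average_precision_score)

def evaluate(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "Acc":  (tp + tn) / (tp + tn + fp + fn),
        "Sen":  tp / (tp + fn),                 # sensitivity (recall)
        "Spec": tn / (tn + fp),                 # specificity
        "Prec": tp / (tp + fp),                 # precision
        "MCC":  matthews_corrcoef(y_true, y_pred),
        "AUC":  roc_auc_score(y_true, y_score),            # area under the ROC curve
        "AUPR": average_precision_score(y_true, y_score),  # area under the PR curve
    }

# toy example
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.4, 0.3, 0.6, 0.7, 0.1])
print(evaluate(y_true, (y_score >= 0.5).astype(int), y_score))
```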
Classifier selection and parameter tuning
After obtaining the linear feature matrices \(U\) and \(V^{T}\) from SVD, we found a huge decay gap, from \(10^{ - 1}\) to \(10^{ - 14}\), between the 173rd and the 174th dimensions of the importance matrix \(\Sigma\) (Additional file 1). In light of the principle of SVD, the linear features of the entities are mainly concentrated in the top 173 dimensions. Therefore, the linear feature vectors of lncRNAs and diseases were fixed to 173 dimensions. As node2vec is a highly encapsulated node representation learning method, most of its inner parameters were kept constant, and the dimension of the nonlinear vectors acted as the hyperparameter of our model. The 16-, 32-, 64-, and 128-dimensional nonlinear feature representations were obtained, respectively.
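The decay gap mentioned above can be located by inspecting the singular value spectrum directly; the short sketch below illustrates the idea on a random placeholder matrix (the actual cut-off of 173 comes from the paper's association matrix, not from this toy example).

```python
# Illustrative sketch: locating a sharp relative drop in the singular value spectrum
# to choose how many linear feature dimensions to retain.
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((861, 437))               # placeholder for the lncRNA-disease matrix

s = np.linalg.svd(R, compute_uv=False)   # singular values in descending order
ratios = s[1:] / s[:-1]                  # consecutive decay ratios
k = int(np.argmin(ratios)) + 1           # index just before the largest relative drop
print("suggested number of retained dimensions:", k)
```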
In the selection process of machine learning classifiers, Linear Regression (LR), Naive Bayes (NB) [36], Random Forest (RF) [37], AdaBoost (ADB) [38] and XGBoost (XGB) [39] were tested on the different integrated features, respectively. The ACC and MCC values of all classifiers are shown in Tables 1 and 2. The column named "SVD" represents the features extracted with the SVD method alone. Analogously, "N2V16" represents the 16-dimensional features extracted with node2vec, "SN2V16" represents the integrated features combining the SVD features with the 16-dimensional node2vec features, and so on. For results on the other evaluation indexes Sen, Spec and Prec, refer to Additional file 2, Additional file 3 and Additional file 4, respectively. All of the above classifiers were imported from the scikit-learn library and implemented in Python, and all inner-classifier parameters were set to their defaults.
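A minimal sketch of this comparison is given below; it assumes the integrated feature matrix X and labels y are already available (random placeholders are used here), and it reads "LR" as scikit-learn's logistic regression, which is an assumption for this binary classification task rather than a detail stated in the paper.

```python
# Sketch: ten-fold cross-validated comparison of the candidate classifiers (default parameters).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from xgboost import XGBClassifier

X = np.random.rand(200, 189)              # placeholder: 173 SVD dims + 16 node2vec dims
y = np.random.randint(0, 2, 200)          # placeholder labels

classifiers = {
    "LR":  LogisticRegression(max_iter=1000),
    "NB":  GaussianNB(),
    "RF":  RandomForestClassifier(),
    "ADB": AdaBoostClassifier(),
    "XGB": XGBClassifier(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean ACC = {acc.mean():.3f}")
```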
Table 1 The ACC results of different features on classifiers
Table 2 The MCC results of different features on classifiers
As can be seen from Tables 1 and 2, the combination of the linear features and the 16-dimensional node2vec features obtained the optimal classification results with the XGBoost classifier (bolded in Tables 1, 2). Moreover, for most classifiers, the prediction results based on the integrated features were better than those based on the linear features alone or the corresponding nonlinear features alone, which demonstrates that the combination of SVD and node2vec does enhance the expressiveness of the integrated feature vectors in the majority of classifiers.
Model contrast
After the model construction, we compared the proposed model with five state-of-the-art lncRNA-disease prediction methods: LDASR [25], LDA-LNSUBRRW [22], NCPHLDA [23], SDLDA [28], and TPGLDA [20]. The ROC and PR curves under tenfold cross-validations as well as relevant AUC and AUPR values are shown in Figs. 1 and 2 respectively.
The ROC curves of comparison test
The PR curves of comparison test
As shown in Figs. 1 and 2, both the AUC value and the AUPR value of SVDNVLDA are the highest among the tested models, which indicates the superior performance of SVDNVLDA. In terms of AUC, compared with the NCPHLDA model, which obtained the best result in the contrast group, our model improved the AUC value by about 5%. Moreover, the excellent AUPR value shows that our model also has first-class classification ability on unbalanced data sets.
Since all parameters of the XGBoost classifier were set to their defaults, to test whether the AUC and AUPR results of SVDNVLDA were affected by overfitting, we further separated 10% of the samples as a validation set and trained the classifier without using the validation set. The ROC and PR curves of the training set and the validation set are exhibited in Figs. 3 and 4, respectively. SVDNVLDA achieved remarkable results with an AUC of 0.9798 and an AUPR of 0.9723 on the validation set, indicating that its performance was not a result of overfitting.
The ROC curves of train set and validation set
The PR curves of train set and validation set
Robustness testing
The robustness of a predictive model means that it can give a stable performance on data sets of different scales. To evaluate the robustness of SVDNVLDA, we applied it to three data sets of varying scale, which had been adopted by other open-source lncRNA-disease association identification models. Similarly, under tenfold cross-validation, the ROC and PR curves of SVDNVLDA on these data sets are plotted in Figs. 5 and 6, respectively. The data set used in Yao's model [19] includes 2697 lncRNA-disease associations, 1002 lncRNA-miRNA associations, and 13,562 miRNA-disease associations. The data leveraged in Zhanghui's model [40] contains 1151 lncRNA-disease associations, 10,102 lncRNA-miRNA associations and 4634 miRNA-disease associations. It is worth mentioning that miRNA entities were replaced with genes in the data set of MHRWR [21], which included 264 lncRNA-gene associations, 855 lncRNA-disease associations, and 9997 gene-disease associations. The experimental results show that SVDNVLDA achieved excellent prediction results on all data sets; in particular, the prediction results on the MHRWR data remained good even after miRNAs were replaced with another type of biological entity. All these results suggest that SVDNVLDA can flexibly accommodate data of different scales and even of different content.
The ROC curves of robustness test
The PR curves of robustness test
To further evaluate the performance of the SVDNVLDA model in practical applications, we selected lung cancer, breast cancer and pancreatic cancer as case studies. The general process of each case study was as follows: first, all lncRNA-disease association data and the same number of negative samples were used to train an XGBoost classifier. Then, all lncRNAs with no recorded association to the disease of interest in the experimental data were screened, and each lncRNA feature vector was combined with the feature vector of the current disease. Finally, all these lncRNA-disease feature pairs were input into the trained classifier, and the output scores were taken as the correlation probabilities between the lncRNAs and the corresponding disease. After sorting these scores in descending order, the top ten lncRNA-disease associations were selected, and the validity of the selected associations was verified by searching the relevant literature in the PubMed database. The results of the case studies (Tables 3, 4, 5) and a brief analysis of each disease are as follows.
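As an illustration of the procedure just described, the sketch below ranks candidate lncRNAs for one disease; all variable names are hypothetical, and the negative-sampling details are assumptions rather than the authors' exact script.

```python
# Sketch of the case-study ranking step (hypothetical names, not the authors' script).
import numpy as np
from xgboost import XGBClassifier

def rank_candidates(lnc_feats, dis_feats, known_pairs, disease_idx, top_n=10, seed=0):
    """lnc_feats: (n_lnc, d) array; dis_feats: (n_dis, d) array;
    known_pairs: set of (lncRNA index, disease index) with verified associations."""
    rng = np.random.default_rng(seed)
    n_lnc, n_dis = lnc_feats.shape[0], dis_feats.shape[0]

    # positives: all verified pairs; negatives: an equal number of random unknown pairs
    pos = [np.hstack([lnc_feats[i], dis_feats[j]]) for i, j in known_pairs]
    neg = []
    while len(neg) < len(pos):
        i, j = int(rng.integers(n_lnc)), int(rng.integers(n_dis))
        if (i, j) not in known_pairs:
            neg.append(np.hstack([lnc_feats[i], dis_feats[j]]))
    X = np.vstack(pos + neg)
    y = np.array([1] * len(pos) + [0] * len(neg))
    clf = XGBClassifier().fit(X, y)

    # score every lncRNA not yet linked to the disease of interest
    cand = [i for i in range(n_lnc) if (i, disease_idx) not in known_pairs]
    Xc = np.vstack([np.hstack([lnc_feats[i], dis_feats[disease_idx]]) for i in cand])
    scores = clf.predict_proba(Xc)[:, 1]
    order = np.argsort(scores)[::-1][:top_n]
    return [(cand[k], float(scores[k])) for k in order]
```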
Table 3 Case study results of breast neoplasms
Table 4 Case study results of lung neoplasms
Table 5 Case study results of pancreatic neoplasms
[Breast Cancer] According to the latest data on the global cancer burden in 2020 [41], there were 2.26 million new cases of breast cancer worldwide in 2020, accounting for 11.7% of all new cancer cases that year and ranking first among all cancers. Symptoms of breast cancer include lumps in the breast, changes in the shape of the breast, depressions in the skin, bone pain, swollen lymph nodes, tachypnea or yellowing of the skin. Table 3 shows the top-10 previously unknown lncRNA-disease associations predicted by SVDNVLDA for breast cancer.
[Lung Cancer] Lung cancer is a malignant lung tumor caused by uncontrolled cell growth in lung tissues; the malignant growth can spread beyond the lungs by metastasizing to nearby tissues or other parts of the body. In 2020, there were 2.2 million new cases of lung cancer worldwide, accounting for 11.4% of all new cancer cases and ranking second among all cancers [41]. The most common symptoms of lung cancer include coughing, weight loss, shortness of breath and chest pain. Most lung cancer cases are caused by long-term smoking. Table 4 illustrates the top-10 lncRNAs for lung cancer predicted by SVDNVLDA.
[Pancreatic Cancer] The common signs and symptoms of pancreatic cancer include yellow skin, abdominal or back pain, unexplained weight loss and loss of appetite. There are usually no obvious symptoms in the early stages of pancreatic cancer, and by the time the symptoms are clear enough to suggest the disease, it is generally at an advanced stage; at the time of diagnosis, pancreatic cancer has usually spread to other parts of the body. In the global statistics of cancer deaths in 2020, pancreatic cancer caused 466,000 deaths, and more than half of these clinical cases of pancreatic cancer were over 79 years old [41]. Table 5 presents the top-10 potential lncRNAs predicted for pancreatic cancer.
Among the results for the three diseases, recent PubMed literature support was found for 8, 9 and 8 of the top-10 predicted lncRNAs with the highest correlation probabilities, respectively. This clearly indicates that our model performs well in the prediction of actual disease-related lncRNAs and possesses potential application value and scientific significance. The full results for the three cancers are given in Additional file 5, Additional file 6 and Additional file 7.
In this paper, we proposed an integrated feature extraction model, SVDNVLDA, for predicting potential lncRNA-disease associations. In SVDNVLDA, the network representation learning method node2vec and matrix decomposition method SVD were originally integrated to predict the potential lncRNA-disease associations. It also can be regarded as an open framework, in which more feature extraction methods can be flexibly applied.
However, there are still some potential weaknesses in our model, which mainly stem from the limitations of the data used in this paper. Specifically, relying solely on association data can hardly reflect comprehensively the complex interactions between lncRNAs and other biomolecules. Meanwhile, in the heterogeneous network LMDN, the node representations obtained by node2vec have been proven capable of retaining the topology information of nodes in the network, yet they fail to retain the information on different node types, which is abundant and valuable in heterogeneous networks. These issues could be addressed by expanding the experimental data and introducing more advanced representation learning methods in future studies.
In SVDNVLDA, the linear feature representations of lncRNAs and diseases, containing their linear interaction information, were obtained by the matrix decomposition method SVD; the nonlinear features, containing network topology information, were obtained by node2vec. The integrated feature vectors built from the aforementioned features were input into a machine learning classifier, which transformed lncRNA-disease association prediction into a binary classification problem. The AUC and AUPR values of SVDNVLDA are higher than those of five popular prediction methods under tenfold cross-validation. The prediction performance on data sets of different scales shows that SVDNVLDA can be adapted to a range of data sets and possesses strong robustness. In addition, the case studies of three common cancers indicate its effectiveness in practical applications.
Overview of SVDNVLDA
The matrix decomposition method SVD and the network embedding method node2vec were integrated in a novel way in SVDNVLDA to obtain the linear and nonlinear representations of both lncRNA and disease entities, respectively. By combining the different features of each lncRNA and each disease, integrated feature vectors were constructed which fuse the linear features of interaction information and the nonlinear features of network topology information. These feature vectors served as the inputs of a machine learning classifier, and the corresponding prediction results were obtained in the end (Fig. 7).
The flowchart of SVDNVLDA
Step 1: Data processing and construction of the lncRNA-disease association matrix and the lncRNA-miRNA-disease association network (LMDN). Step 2: Apply SVD on the association matrix to get linear features. Step 3: Apply node2vec on LMDN to get nonlinear features. Step 4: Feature integration. Step 5: Use the XGBoost classifier to predict associations.
Data preprocessing
The study mainly included lncRNA-disease association data, lncRNA-miRNA association data and miRNA-disease association data. The experimentally confirmed lncRNA-disease association data were downloaded from LncRNADisease v2.0 [42] and Lnc2Cancer v3.0 [43]. All disease names were converted into standard MeSH disease terms, and duplicate records were filtered so that only one copy was retained. To reduce errors originating from the downloaded data, lncRNAs with one or no association were removed. In the end, a total of 4518 associations between 861 lncRNAs and 253 diseases were obtained.
The known lncRNA-miRNA association data were downloaded from the ENCORI [44] and NPInter v4.0 [45] databases. After eliminating redundancy, only records of lncRNAs common to the lncRNA-disease data and miRNAs common to the miRNA-disease data were selected. Finally, a total of 8172 lncRNA-miRNA associations were obtained, involving 338 lncRNAs and 285 miRNAs.
The miRNA-disease association data were obtained from the HMDD v3.2 database [46]. The original data include two types of association records, namely miRNAs with a causal role in diseases and miRNAs that change passively during the course of diseases. The studies of miRNAs in a causal relationship with diseases are more valuable for exploring pathogenesis and searching for new biomarkers. In our experiment, only the records with causal relationships in the HMDD database were selected. All disease names were transformed to standardized names based on the MeSH glossary, and the lncRNAs associated with only one disease were removed from the original data. Ultimately, a total of 861 lncRNAs, 437 miRNAs and 431 diseases were involved in our experiment. The statistical overview of the resulting data, which is also the statistical overview of LMDN, is documented in Additional file 8.
Construct lncRNA-disease association matrix and LMDN
Firstly, the lncRNA-disease association matrix was constructed. For lncRNA \(i\), if there is a known association with disease \(j\) in our collected data, the corresponding element of the association matrix \(R_{M \times N}\) is \(1\); otherwise, it is \(0\). The formula is written as:
$${\text{R}}_{{{\text{M}} \times {\text{N}}}} \left( {\text{i,j}} \right){ = }\left\{ {\begin{array}{*{20}l} {1,} \hfill & {{\text{if}}\,{\text{i}}\,{\text{and}}\,{\text{j}}\,{\text{have}}\,{\text{association}}} \hfill \\ {0,} \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right.$$
In our experiment, the real matrix \({\text{R}}_{{{\text{M}} \times {\text{N}}}}\) had a shape of 861 × 437.
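A short sketch of how such a binary association matrix can be assembled from a list of verified pairs is given below (hypothetical input format, not the authors' preprocessing script).

```python
# Sketch: building the binary lncRNA-disease association matrix R from a pair list.
import numpy as np

def build_association_matrix(pairs, lncRNAs, diseases):
    """pairs: iterable of (lncRNA name, disease name) tuples with verified associations."""
    l_idx = {name: i for i, name in enumerate(lncRNAs)}
    d_idx = {name: j for j, name in enumerate(diseases)}
    R = np.zeros((len(lncRNAs), len(diseases)))
    for l, d in pairs:
        R[l_idx[l], d_idx[d]] = 1.0        # verified association
    return R                               # all other entries stay 0
```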
After the construction of the association matrix, the lncRNA-disease association data were combined with the lncRNA-miRNA and miRNA-disease association data to construct the lncRNA-miRNA-disease heterogeneous association network (LMDN). Among the three types of vertices in LMDN, namely lncRNAs, miRNAs and diseases, an edge was added between two vertices with an association record; otherwise the two vertices were left unconnected. The heterogeneous network was a sparse network with 1769 nodes and 16,878 edges, as detailed in Additional file 8.
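One possible way to assemble such a heterogeneous network with networkx is sketched below; the node-name prefixes are an assumption used to keep the three entity types apart, not a convention from the paper.

```python
# Sketch: assembling the lncRNA-miRNA-disease heterogeneous network (LMDN) with networkx.
import networkx as nx

def build_lmdn(ld_pairs, lm_pairs, md_pairs):
    """Each argument is an iterable of (name, name) association tuples."""
    G = nx.Graph()
    for l, d in ld_pairs:
        G.add_edge(f"lnc:{l}", f"dis:{d}")
    for l, m in lm_pairs:
        G.add_edge(f"lnc:{l}", f"mir:{m}")
    for m, d in md_pairs:
        G.add_edge(f"mir:{m}", f"dis:{d}")
    return G   # with the paper's data this should hold about 1769 nodes and 16,878 edges
```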
Linear feature extraction based on singular value decomposition
SVD is a matrix decomposition method which has been widely used in recommender systems [47, 48]. In SVD, a matrix is commonly decomposed into the product of three matrices:
$$R_{M \times N} = U_{M \times C} \cdot \Sigma_{C \times C} \cdot V_{C \times N}^{T}$$
In a typical SVD-based collaborative filtering recommendation system, the initial matrix \(R\) is a rating matrix of \(M\) users' ratings on \(N\) goods. Among the resulting matrices, \(U\) represents the interest levels of the \(M\) users in \(C\) features of the goods, namely the users' characteristics or commodity affinity, while \(\Sigma\) represents the importance of each feature of the goods, specified as a non-negative diagonal matrix in which the diagonal elements are arranged in descending order. \(V^{T}\) represents the distribution of the \(C\) features over the \(N\) goods [49].
Analogously, by applying SVD to the lncRNA-disease association matrix \(R_{M \times N}\), the obtained matrices \(U\), \(\Sigma\) and \(V^{T}\) can represent the lncRNA feature matrix, the feature weight matrix and the disease feature matrix, respectively. For dimensionality reduction, only the top \(k\) features with the largest values in \(\Sigma\) were kept, and \(R\) is then expressed as:
$$R_{M \times N} \approx U_{M \times k} \cdot {\Sigma }_{k \times k} \cdot V_{k \times N}^{T}$$
In fact, the binary matrix \(R\) is not an ideal initial matrix. In recommendation systems, \(0\) (or blank) elements in a rating matrix do not actually represent ratings of products; more likely, they are due to missing user evaluations. Similarly, in the lncRNA-disease association matrix \(R\), the value \(0\) usually indicates that the corresponding association has not yet been confirmed. Therefore, for computational convenience and considering the biological meaning, all the \(0\) elements in the original binary matrix \(R\) were replaced by \(10^{ - 6}\) in our experiment. Based on the theory of SVD, each row of \(U_{M \times k}\) represents a \(k\)-dimensional linear feature vector of a certain lncRNA. Similarly, each column of \(V_{k \times N}^{T}\) represents a \(k\)-dimensional linear feature vector of a certain disease (Fig. 8).
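The two steps described above, replacing unverified zeros by \(10^{-6}\) and truncating the decomposition to \(k\) dimensions, can be sketched as follows (illustrative code, not the authors' implementation).

```python
# Sketch: extracting k-dimensional linear features of lncRNAs and diseases with truncated SVD.
import numpy as np

def svd_linear_features(R, k=173, eps=1e-6):
    R = np.where(R == 0, eps, R)             # unverified associations -> small positive value
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    lnc_linear = U[:, :k]                    # one k-dimensional row per lncRNA
    dis_linear = Vt[:k, :].T                 # one k-dimensional row per disease
    return lnc_linear, dis_linear, s[:k]
```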
The illustration of applying SVD on lncRNA-disease association matrix
Nonlinear feature extraction based on Node2vec
Network representation learning (NRL), also known as network embedding, refers to mapping nodes into a continuous low-dimensional space while preserving the characteristics of the nodes in the original network. Given a network \(G = \left( {V,E} \right)\), where \(V = \left\{ {v_{i} } \right\}\) represents the collection of nodes and \(E = \left\{ {e_{i} } \right\} \subset V \times V\) represents the collection of edges, the mathematical expression of NRL is: \(\forall v_{i}\), find a map \(f:V \to R^{d}\), with \(d \ll \left| V \right|\). Ideal learned node representations should quantify the characteristics of the nodes in the network; intuitively, topological neighbor nodes should have a small distance between their numerical vectors, and the representations of nodes in the same community should be more similar to each other than to those of nodes outside the community. Up to now, many NRL methods have been widely used to solve problems such as node classification, community discovery, link prediction and data visualization [50].
As a semi-supervised network feature learning method, node2vec [51] innovatively proposed a biased random walk on the basis of the word representation method [52] and DeepWalk [53], and defined a more flexible way to select the next node in a random walk. More specifically, node2vec trades off two kinds of random walk strategies: Breadth-first search (BFS) and Depth-first search (DFS), which are shown in Fig. 9. Unlike the original random walk, node2vec can explicitly control the degree of BFS and DFS by adjusting parameters according to the needs of the actual application scenario. A detailed description of the simple random walk and the modified biased random walk in node2vec follows (Fig. 10).
The illustration of distinctions between BFS and DFS
The bias random walk on node2vec
For a given starting node \(u\), simulate a simple unbiased random walk of length \(l\). \(c_{i}\) represents the \(i^{th}\) node in the random walk. Let \(c_{0} = u\); then the transition probability of the node reached in the \(i^{th}\) step is:
$${\text{P(c}}_{{\text{i}}} = {\text{x|c}}_{{{\text{i}} - {1}}} = {\text{v)}} = \left\{ {\begin{array}{*{20}l} {\frac{{\uppi _{{{\text{vx}}}} }}{{\text{Z}}}{,}} \hfill & {{\text{if}}\,{\text{(v,x)}} \in {\text{E}}} \hfill \\ {0,} \hfill & {{\text{otherwise}}} \hfill \\ \end{array} } \right.$$
of which \(\pi_{vx}\) is the unnormalized transition probability between nodes \(v\) and \(x\), \(Z\) represents a normalized constant term.
As for the biased random walk in node2vec, as shown in Fig. 10, if the previous position of the random walk was node \(t\) and the walk has just moved through edge \(\left( {t,v} \right)\) to the current node \(v\), the unnormalized transition bias towards a candidate next node \(x\) is set as follows:
$$\alpha_{pq} \left( {t,x} \right) = \left\{ {\begin{array}{*{20}l} \frac{1}{p} \hfill & { if\,d_{tx} = 0} \hfill \\ 1 \hfill & {if\,d_{tx} = 1} \hfill \\ \frac{1}{q} \hfill & { if\,d_{tx} = 2} \hfill \\ \end{array} } \right.$$
\(d_{tx}\) represents the shortest distance between nodes \(t\) and \(x\), and the possible values of \(d_{tx}\) are 0, 1 and 2. As shown in Fig. 10, the parameter \(p\) controls the probability that the next step of the walk returns to the previous node: if \(p\) is greater than \(1\), the random walk will have less tendency to turn back. The value of \(q\) controls the preference between BFS and DFS and thereby guides the bias of the random walk. If \(q\) is greater than \(1\), the random walk is more inclined to BFS, that is, to stay among the neighbors of the previous node; if \(q\) is less than \(1\), the random walk is more inclined to DFS, that is, to move away from the previous node. When the values of \(p\) and \(q\) are both equal to \(1\), node2vec is equivalent to DeepWalk.
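To make the bias concrete, the sketch below implements the second-order transition rule on an unweighted networkx graph; it is an illustration of the formulas above, not the node2vec reference implementation.

```python
# Sketch: node2vec's second-order bias alpha_pq and one biased step on an unweighted graph.
import random
import networkx as nx

def step_probabilities(G, prev, cur, p=1.0, q=1.0):
    """Unnormalized pi_vx = alpha_pq(t, x) (edge weights assumed to be 1), then normalized."""
    probs = {}
    for x in G.neighbors(cur):
        if x == prev:                       # d_tx = 0: step back to the previous node
            probs[x] = 1.0 / p
        elif G.has_edge(prev, x):           # d_tx = 1: stay close to the previous node (BFS-like)
            probs[x] = 1.0
        else:                               # d_tx = 2: move away from it (DFS-like)
            probs[x] = 1.0 / q
    Z = sum(probs.values())
    return {x: w / Z for x, w in probs.items()}

def biased_step(G, prev, cur, p=1.0, q=1.0):
    probs = step_probabilities(G, prev, cur, p, q)
    nodes, weights = zip(*probs.items())
    return random.choices(nodes, weights=weights, k=1)[0]
```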
In the constructed LMDN, node2vec was adopted to obtain the corresponding representations for vertices. The representations of lncRNA and disease nodes generated by node2vec retain the topological information of the nodes in LMDN. The experimental results demonstrate that the obtained nonlinear features could effectively enhance the SVD based linear features and improve the information richness in integrated features.
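A compact way to obtain such embeddings is the open-source node2vec package built on gensim; the snippet below is one possible setup (a tiny toy graph replaces LMDN so the sketch runs, and the walk settings are assumptions rather than the paper's exact values apart from the 16 dimensions).

```python
# Sketch: learning 16-dimensional node representations with the node2vec package.
import networkx as nx
from node2vec import Node2Vec

# G would be the LMDN heterogeneous network; a tiny toy graph is used here instead.
G = nx.Graph([("lnc:H19", "dis:breast"), ("lnc:H19", "mir:21"),
              ("mir:21", "dis:breast"), ("lnc:HOTAIR", "dis:breast")])

n2v = Node2Vec(G, dimensions=16, walk_length=20, num_walks=10, p=1, q=1, workers=1)
model = n2v.fit(window=10, min_count=1)      # gensim Word2Vec under the hood
lnc_nonlinear = {n: model.wv[n] for n in G.nodes if n.startswith("lnc:")}
dis_nonlinear = {n: model.wv[n] for n in G.nodes if n.startswith("dis:")}
```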
Feature integration
Based on the decomposition of \(R_{M \times N}\) and NRL method node2vec, we have obtained the linear feature matrixes \(U\), \(V^{T}\), and the nonlinear feature representations of lncRNA and disease nodes in LMDN. For each lncRNA \(i\) and disease \(j\), the feature integration rules are as follows:
The linear feature of lncRNA \(i\) is the \(i\)th row of \(U\), which is denoted as \(LL_{i}\) after being converted into a column vector. Similarly, the linear feature of disease \(j\) is the \(j\)th column of \(V^{T}\), denoted as \(LD_{j}\). The nonlinear feature of \(i\) is denoted as \(NL_{i}\), and the nonlinear feature of \(j\) is denoted as \(ND_{j}\). The final integrated features of \(i\) and \(j\) are expressed as:
$$FL_{i} = \left[ {\begin{array}{*{20}c} {LL_{i} } \\ {NL_{i} } \\ \end{array} } \right]$$
$$FD_{j} = \left[ {\begin{array}{*{20}c} {LD_{j} } \\ {ND_{j} } \\ \end{array} } \right]$$
where [ ] denotes the vector concatenation operation.
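In code, the integration amounts to simple vector concatenation; the sketch below also forms the final classifier input for one lncRNA-disease pair (hypothetical function names).

```python
# Sketch: building the integrated representations FL_i, FD_j and one classifier input sample.
import numpy as np

def integrate(linear_vec, nonlinear_vec):
    return np.concatenate([linear_vec, nonlinear_vec])    # [LL_i; NL_i] or [LD_j; ND_j]

def pair_feature(FL_i, FD_j):
    return np.concatenate([FL_i, FD_j])                   # sample fed to the classifier
```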
The source code and data of SVDNVLDA are available at https://github.com/iALKing/SVDNVLDA.
SVD:
Singular Value Decomposition
RWR:
Random walk with restart
LRLSLDA:
Laplacian Regularized Least Squares for lncRNA-Disease Association
SVDNVLDA:
Singular Value Decomposition and Node2Vec based LncRNA-Disease Association prediction model
Sen:
Sensitivity
Prec:
Precision
MCC:
Matthews correlation coefficient
NB:
Naïve Bayes
RF:
Random Forest
ADB:
AdaBoost
XGB:
XGBoost
LMDN:
LncRNA-miRNA-disease interaction heterogeneous network
NRL:
Network representation learning
BFS:
Breadth-first search
DFS:
Depth-first search
Guttman M, Amit I, Garber M, French C, Lin MF, Feldser D, Huarte M, Zuk O, Carey BW, Cassady JP. Chromatin signature reveals over a thousand highly conserved large non-coding RNAs in mammals. Nature. 2009;458(7235):223–7.
Xue M, Zhuo Y, Shan B. MicroRNAs, long noncoding RNAs, and their functions in human disease. Methods Mol Biol. 2017;1617:1–25.
DiStefano JK. The emerging role of long noncoding RNAs in human disease. Methods Mol Biol. 2018;1706:91–110.
Chen J, Shishkin AA, Zhu X, Kadri S, Maza I, Guttman M, Hanna JH, Regev A, Garber M. Evolutionary analysis across mammals reveals distinct classes of long non-coding RNAs. Genome Biol. 2016;17(1):1–17.
McDonel P, Guttman M. Approaches for understanding the mechanisms of long noncoding RNA regulation of gene expression. Cold Spring Harb Perspect Biol. 2019;11(12):a032151.
Tsang W, Kwok T. Riboregulator H19 induction of MDR1-associated drug resistance in human hepatocellular carcinoma cells. Oncogene. 2007;26(33):4877–81.
Li Y, Zhuang L, Wang Y, Hu Y, Wu Y, Wang D, Xu J. Connect the dots: a systems level approach for analyzing the miRNA-mediated cell death network. Autophagy. 2013;9(3):436–9.
Munos B. Lessons from 60 years of pharmaceutical innovation. Nat Rev Drug Discov. 2009;8(12):959–68.
Lalevée S, Feil R. Long noncoding RNAs in human disease: emerging mechanisms and therapeutic strategies. Epigenomics. 2015;7(6):877–9.
Mercer TR, Qureshi IA, Gokhan S, Dinger ME, Li G, Mattick JS, Mehler MF. Long noncoding RNAs in neuronal-glial fate specification and oligodendrocyte lineage maturation. BMC Neurosci. 2010;11(1):1–15.
Mercer TR, Mattick JS. Structure and function of long noncoding RNAs in epigenetic regulation. Nat Struct Mol Biol. 2013;20(3):300–7.
Quinodoz S, Guttman M. Long noncoding RNAs: an emerging link between gene regulation and nuclear organization. Trends Cell Biol. 2014;24(11):651–63.
Bhan A, Mandal SS. Long noncoding RNAs: emerging stars in gene regulation, epigenetics and human disease. ChemMedChem. 2014;9(9):1932–56.
Engreitz JM, Ollikainen N, Guttman M. Long non-coding RNAs: spatial amplifiers that control nuclear structure and gene expression. Nat Rev Mol Cell Biol. 2016;17(12):756–70.
Gutschner T, Hämmerle M, Eißmann M, Hsu J, Kim Y, Hung G, Revenko A, Arun G, Stentrup M, Groß M. The noncoding RNA MALAT1 is a critical regulator of the metastasis phenotype of lung cancer cells. Can Res. 2013;73(3):1180–9.
Topel H, Bagirsakci E, Comez D, Bagci G, Cakan-Akdogan G, Atabey N. lncRNA HOTAIR overexpression induced downregulation of c-Met signaling promotes hybrid epithelial/mesenchymal phenotype in hepatocellular carcinoma cells. Cell Commun Signal. 2020;18(1):1–19.
Chen X, Yan G-Y. Novel human lncRNA–disease association inference based on lncRNA expression profiles. Bioinformatics. 2013;29(20):2617–24.
Sun J, Shi H, Wang Z, Zhang C, Liu L, Wang L, He W, Hao D, Liu S, Zhou M. Inferring novel lncRNA–disease associations based on a random walk model of a lncRNA functional similarity network. Mol BioSyst. 2014;10(8):2074–81.
Yao Q, Wu L, Li J, guang Yang L, Sun Y, Li Z, He S, Feng F, Li H, Li Y. Global prioritizing disease candidate lncRNAs via a multi-level composite network. Sci Rep. 2017;7(1):1–13.
Ding L, Wang M, Sun D, Li A. TPGLDA: Novel prediction of associations between lncRNAs and diseases via lncRNA-disease-gene tripartite graph. Sci Rep. 2018;8(1):1–11.
Zhao X, Yang Y, Yin M. MHRWR: prediction of lncRNA-disease associations based on multiple heterogeneous networks. IEEE/ACM Trans Comput Biol Bioinforma. 2020;PP(99):1–1.
Xie G, Jiang J, Sun Y. LDA-LNSUBRW: lncRNA-disease association prediction based on linear neighborhood similarity and unbalanced bi-random walk. IEEE/ACM Trans Comput Biol Bioinform. 2020;PP(99):1–1.
Xie G, Huang Z, Liu Z, Lin Z, Ma L. NCPHLDA: a novel method for human lncRNA-disease association prediction based on network consistency projection. Mol Omics. 2019;15(6):442–50.
Liu M-X, Chen X, Chen G, Cui Q-H, Yan G-Y. A computational framework to infer human disease-associated long noncoding RNAs. PLoS ONE. 2014;9(1):e84408.
Guo Z-H, You Z-H, Wang Y-B, Yi H-C, Chen Z-H. A learning-based method for LncRNA-disease association identification combing similarity information and rotation forest. IScience. 2019;19:786–95.
Zeng M, Lu C, Zhang F, Lu Z, Wu F-X, Li Y, Li M. LncRNA–disease association prediction through combining linear and non-linear features with matrix factorization and deep learning techniques. In: 2019 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE; 2019. pp. 577–582.
Zeng M, Lu C, Fei Z, Wu F, Li Y, Wang J, Li M. DMFLDA: a deep learning framework for predicting lncRNA-disease associations. IEEE/ACM Trans Comput Biol Bioinform. 2020;PP(99):1–1.
Zeng M, Lu C, Zhang F, Li Y, Wu F-X, Li Y, Li M. SDLDA: lncRNA-disease association prediction based on singular value decomposition and deep learning. Methods. 2020;179:73–80.
Chen X. Predicting lncRNA-disease associations and constructing lncRNA functional similarity network based on the information of miRNA. Sci Rep. 2015;5(1):1–11.
Lu C, Yang M, Luo F, Wu F-X, Li M, Pan Y, Li Y, Wang J. Prediction of lncRNA-disease associations based on inductive matrix completion. Bioinformatics. 2018;34(19):3357–64.
Fan Y, Chen M, Pan X. GCRFLDA: scoring lncRNA-disease associations using graph convolution matrix completion with conditional random field. Brief Bioinform. 2021;22:438–450.
Boughorbel S, Jarray F, El-Anbari M. Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric. PLoS ONE. 2017;12(6):e0177678.
Chicco D, Tötsch N, Jurman G. The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation. BioData Min. 2021;14(1):1–22.
Hanczar B, Hua J, Sima C, Weinstein J, Bittner M, Dougherty ER. Small-sample precision of ROC-related estimates. Bioinformatics. 2010;26(6):822–30.
Saito T, Rehmsmeier M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE. 2015;10(3):e0118432.
Rish I. An empirical study of the naive Bayes classifier. In: IJCAI 2001 workshop on empirical methods in artificial intelligence. 2001. pp. 41–46.
Breiman L. Random forests. Mach Learn. 2001;45(1):5–32.
Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci. 1997;55(1):119–39.
Chen T, Guestrin C. Xgboost: a scalable tree boosting system. In: Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining. 2016. pp. 785–794.
Zhang H, Liang Y, Peng C, Han S, Du W, Li Y. Predicting lncRNA-disease associations using network topological similarity based on deep mining heterogeneous networks. Math Biosci. 2019;315:108229.
Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71(3):209–49.
Bao Z, Yang Z, Huang Z, Zhou Y, Cui Q, Dong D. LncRNADisease 2.0: an updated database of long non-coding RNA-associated diseases. Nucleic Acids Res. 2019;47(D1):D1034–7.
Gao Y, Shang S, Guo S, Li X, Zhou H, Liu H, Sun Y, Wang J, Wang P, Zhi H. Lnc2Cancer 3.0: an updated resource for experimentally supported lncRNA/circRNA cancer associations and web tools based on RNA-seq and scRNA-seq data. Nucleic Acids Res. 2021;49(D1):D1251–8.
Li J-H, Liu S, Zhou H, Qu L-H, Yang J-H. starBase v2.0: decoding miRNA-ceRNA, miRNA-ncRNA and protein–RNA interaction networks from large-scale CLIP-Seq data. Nucleic Acids Res. 2014;42(D1):D92–7.
Teng X, Chen X, Xue H, Tang Y, Zhang P, Kang Q, Hao Y, Chen R, Zhao Y, He S. NPInter v4.0: an integrated database of ncRNA interactions. Nucleic Acids Res. 2020;48(D1):D160–5.
Huang Z, Shi J, Gao Y, Cui C, Zhang S, Li J, Zhou Y, Cui Q. HMDD v3.0: a database for experimentally supported human microRNA–disease associations. Nucleic Acids Res. 2019;47(D1):D1013–7.
Vozalis MG, Margaritis KG. Applying SVD on item-based filtering. In: 5th international conference on intelligent systems design and applications (ISDA'05). IEEE. 2005. pp. 464–469.
Vozalis MG, Margaritis KG. Using SVD and demographic data for the enhancement of generalized collaborative filtering. Inf Sci. 2007;177(15):3017–37.
Cheng W, Yin G, Dong Y, Dong H, Zhang W. Collaborative filtering recommendation on users' interest sequences. PLoS ONE. 2016;11(5):e0155739.
Yang C, Sun M, Liu Z, Tu C. Fast network embedding enhancement via high order proximity approximation. In: IJCAI: 2017. pp. 3894–3900.
Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016. pp. 855–864.
Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. 2013. arXiv:13013781
Perozzi B, Al-Rfou R, Skiena S. Deepwalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and data mining. 2014. pp. 701–710.
This study was supported by National Natural Science Foundation of China (81672113, 62072154) and Natural Science Foundation of Hebei Province (C2018202083).
Institute of Computational Medicine, School of Artificial Intelligence, Hebei University of Technology, Tianjin, 300401, China
Jianwei Li, Jianing Li, Mengfan Kong, Duanyang Wang & Kun Fu
Hebei Province Key Laboratory of Big Data Calculation, Hebei University of Technology, Tianjin, 300401, China
School of Life Sciences, Tiangong University, Tianjin, 300387, China
Jiangcheng Shi
Jianwei Li
Jianing Li
Mengfan Kong
Duanyang Wang
Kun Fu
JS and KF conceived, led the project, evaluated the methods, suggested improvements and analyzed the results. JL (Jianwei) and JL (Jianing) conducted the experiments and wrote the manuscript. MK and DW collected, organized data and modified the manuscript. All authors read and approved the final manuscript.
Correspondence to Kun Fu or Jiangcheng Shi.
Consent to publish
Additional file 1. The numerical values of diagonal elements in the importance matrix Σ.
Additional file 2. The sensitivity results of different features in classifiers.
Additional file 3. The specificity results of different features in classifiers.
Additional file 4. The precision results of different features in classifiers.
Additional file 5. The complete case study results of breast cancer.
Additional file 6. The complete case study results of lung cancer.
Additional file 7. The complete case study results of pancreatic cancer.
Additional file 8. The statistical overview of experimental data.
Li, J., Li, J., Kong, M. et al. SVDNVLDA: predicting lncRNA-disease associations by Singular Value Decomposition and node2vec. BMC Bioinformatics 22, 538 (2021). https://doi.org/10.1186/s12859-021-04457-1
LncRNA-disease association prediction
node2vec
XGBoost classifier
Construction of Serre Spectral Sequence
I'm trying to follow Hopkins' construction of the Serre Spectral Sequence, but some "obvious" things are not that obvious to me.
He starts with considering a double complex $C_{\bullet,\bullet}$ with $C_{p,q}$ to be a free $\mathbb{Z}$-module generated by the maps $\Delta[p]\times\Delta[q]\rightarrow E$ ($E$ is a total space of Serre fibration over $B$) which fit into the diagram
\begin{matrix} \Delta[p]\times\Delta[q] & \to & E \\\ \downarrow & & \downarrow \\\ \Delta[p] & \to & B \end{matrix}
with obvious differentials (coordinate by coordinate differentials as in normal singular complex).
There are two filtrations; he uses the first one (by rows) to determine the homology of the total complex, and the second one to get $E^2_{p,q}=H_p(B,\underline{H_q}(F))$. I have a problem with the second part. He fixes $p$ and a map $c$ in the bottom row and interprets the diagram as a map $\Delta[q]\rightarrow F_c$ where $F_c$ is the image of $\Delta[p]\times\Delta[q]$ in $E$ such that the diagram commutes, which basically means that $C_{p,q}=\bigoplus_c \mathbb{Z}[F_c]$ (as a module, that's true). Now he calculates the homology of the column and says that it's $\bigoplus_c H_*(F_c)$. Why can we apply the vertical differential here? Do we need to check some compatibility condition?
Next, we want to use that $E^1_{p,q}$ is a module of singular $p$-chains with coefficients in a local system $\underline{H_q}(F)$ and say that the horizontal differential is just the regular differential to get the desired output. But how do we know that this differential works in a nice way?
at.algebraic-topology tag-removed spectral-sequences
mathdonk
At the time this construction was first published, Mike Hopkins went to elementary school: A Dress, "Zur Spectralsequenz von Faserungen", Inventiones Math. 3 (1967).
– Johannes Ebert
Yes, it's based on Dress' article, but it seems to be much easier and simplified proof. Maybe I should have written "Hopkins' article".
– mathdonk
A good warm up would be Hopkins' definition of singular homology. I would then move on to Hopkins' definition of fibration. There must be a mistake in terminology: it should not be called, "Serre fibration." The correct terminology should be "Hopkin's fibration."
– John Klein
@mathdonk: You got stuck at a point where Hopkins writes "From this one easily checks..." (if we are talking about the same text). Maybe you should have a look at Dress' paper, he gives full details. P.S.: why do you say that a proof is "much easier and simplified" if you fail to understand the main step?
@Dan: I believe it is isites.harvard.edu/fs/docs/icb.topic880873.files/…
The double complex has a horizontal differential $\partial'$ and a vertical differential $\partial''$ such that $\partial'\partial''=\partial''\partial'$. This gives rise to a total complex $TC_n=\bigoplus_{p+q=n}C_{pq}$ with differential $\partial|C_{pq}=\partial'+(-1)^p\partial''$. This can be filtered by $F_p=\bigoplus_{i\le p}C_{i,n-i}$ and so we get a spectral sequence $E^r$ converging to $H_\ast(TC)$, with $E^0_{pq}=C_{pq}$ and $d^0=\partial''$ (up to sign). Thus $E^1_{pq}=H_q(C_{p\ast})$ is the vertical homology of our double complex. Now the differential $d^1$ is induced by the chain map $\partial'$: any element of $E^1_{pq}$ is given by a $c\in C_{pq}$ with $\partial''c=0$, and that means $\partial c=\partial'c$. From here we see that $E^2$ is the horizontal homology of the vertical homology of our double complex.
Hope I didn't miss the bulk of your question. For more information I suggest Ken Brown's amazing textbook Cohomology of Groups.
Chris Gerig
I know how the spectral sequence of the double complex works, I just don't get why one can calculate the homology (of rows and columns) after doing these identifications. You definitely have to check something, but I don't know what exactly...
The treatment of double complexes in Bott and Tu, "Differential Forms in Algebraic Topology", is also very good.
– Mark Grant
I wanted to post the following as a comment, but it's too long.
It might help to realize where the differential comes from. Let $p: E \to B$ be a Hurewicz fibration. Assume $B$ is a connected CW complex. Then $B$ has a cellular filtration $B_k \subset B_{k+1} \dots$. If we pull back $p$ along this filtration we obtain a filtration (in fact, by cofibrations!) of $E$ $$ E_0 \subset E_1 \subset \cdots $$ I claim that the homology spectral sequence of this filtration is then the Serre spectral sequence. The $d_1$-differential of the above filtration is given by the composition $$ E_k/E_{k-1} \overset{\delta}\to \Sigma E_{k-1} \to \Sigma E_{k-1}/E_{k-2} \qquad (\ast) $$ where the map $\delta$ is the Barratt-Puppe extension of the cofibration $E_{k-1} \to E_k$ and the second displayed map is given by collapsing $E_{k-2}$ to a point.
Furthermore, it's easy to check that $$ E_{k}/E_{k-1} \simeq F_+ \wedge B_k/B_{k-1} \qquad (\ast\ast) $$ So I guess that your question, in the end, amounts to the following: With respect to the equivalence $(\ast\ast)$, what does the map $E_k/E_{k-1} \to \Sigma E_{k-1}/E_{k-2}$ look like when considered as a map $$ F_+ \wedge B_k/B_{k-1} \to F_+ \wedge \Sigma B_{k-1}/B_{k-2} \quad ? $$ In other words, on the level of homology, is this map of the form $\text{id}\wedge c$, where $c:B_k/B_{k-1} \to \Sigma B_{k-1}/B_{k-2} $ is the map for the filtration $B_k \subset B_{k+1} \dots$ constructed like the one in $(\ast)$?
John Klein
And the answer to this question is that the map in question is $\alpha \wedge c$, where $\alpha$ is an admissible map for the fibration. An $H_*$-orientable fibration is one in which all admissible maps induce the same map, so it does induce $\mathrm{id}\wedge c$.
– Jeff Strom
A combination of climate, tree diversity and local human disturbance determine the stability of dry Afromontane forests
Hadgu Hishe ORCID: orcid.org/0000-0002-4026-59571,2,
Louis Oosterlynck1,
Kidane Giday2,
Wanda De Keersmaecker1,3,
Ben Somers1 &
Bart Muys1
Anthropogenic disturbances are increasingly affecting the vitality of tropical dry forests. The future condition of this important biome will depend on its capability to resist and recover from these disturbances. So far, the temporal stability of dryland forests is rarely studied, even though identifying the important factors associated with the stability of the dryland forests could serve as a basis for forest management and restoration.
In a degraded dry Afromontane forest in northern Ethiopia, we explored remote sensing derived indicators of forest stability, using MODIS satellite derived NDVI time series from 2001 to 2018. Resilience and resistance were measured using the anomalies (remainders) after time series decomposition into seasonality, trend and remainder components. Growth stability was calculated using the integral of the undecomposed NDVI data. These NDVI derived stability indicators were then related to environmental factors of climate, topography, soil, tree species diversity, and local human disturbance, obtained from a systematic grid of field inventory plots, using boosted regression trees in R.
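The abstract only summarizes how the stability indicators were derived; the hedged sketch below illustrates one common anomaly-based approach on a synthetic monthly NDVI series (written in Python for consistency with the other code in this document, although the study itself used R, and the proxy definitions are assumptions rather than the authors' exact metrics).

```python
# Hedged sketch: decomposing a monthly NDVI series and deriving simple
# anomaly-based stability proxies (illustrative definitions, not the paper's exact metrics).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(1)
idx = pd.date_range("2001-01-01", "2018-12-31", freq="MS")
ndvi = pd.Series(0.5 + 0.15 * np.sin(2 * np.pi * idx.month / 12)
                 + rng.normal(0, 0.03, len(idx)), index=idx)

dec = seasonal_decompose(ndvi, model="additive", period=12)
anom = dec.resid.dropna()                            # remainder component (anomalies)

resistance_proxy = 1.0 / anom.var()                  # smaller anomalies -> higher resistance
resilience_proxy = 1.0 / abs(anom.autocorr(lag=1))   # faster anomaly decay -> higher resilience
growth_stability_proxy = ndvi.sum()                  # integral of the undecomposed NDVI
print(resistance_proxy, resilience_proxy, growth_stability_proxy)
```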
Resilience and resistance were adequately predicted by these factors with an R2 of 0.67 and 0.48, respectively, but the model for growth stability was weaker. Precipitation of the wettest month, distance from settlements and slope were the most important factors associated with resilience, explaining 51% of the effect. Altitude, temperature seasonality and humus accumulation were the significant factors associated with the resistance of the forest, explaining 61% of the overall effect. A positive effect of tree diversity on resilience was also important, except that the impact of species evenness declined above a threshold value of 0.70, indicating that perfect evenness reduced the resilience of the forest. Precipitation of the wettest month was the most important factor explaining 43.52% of the growth stability variation.
A combination of climate, topographic factors and local human disturbance controlled the stability of the dry forest. Tree diversity is also an important stability component that should be considered in the management and restoration programs of such degraded forests. If local disturbances are alleviated, the recovery time of dryland forests could be shortened, which is vital to maintain the ecosystem services these forests provide to local communities and to global climate change mitigation.
A significant area of the globe (41%) is covered with drylands, and a large part of the human population (35%) resides in them (Safriel and Adeel 2008). Among dryland ecosystems, the dry forest biome covers an estimated 1079 million ha (Bastin et al. 2017), accounting for almost half of the (sub) tropical forests (Aide et al. 2013). Dryland forests are very important for biodiversity conservation, as they are known for their high level of endemism (Myers et al. 2000); also for deep aquifer recharge, as they show high infiltration rates in a water-scarce environment (Bargués-Tobella et al. 2020), and for moderating high temperatures. Dryland forests are among the most threatened by human degradation, and therefore maintaining the remnant forests is crucial for a sustainable environment and as a seed source for possible restoration (Safriel et al. 2005; Díaz et al. 2018).
Dry forests are among the most threatened ecosystems (Bognounou et al. 2010), as they are found in regions of low productivity, supporting populations with some of the highest birth rates, where poverty prevails (Safriel and Adeel 2008). Dry forests have high conversion rates to other land uses, and the remaining parts are degraded and fragmented (Sánchez-Azofeifa et al. 2005).
Due to climate change and other anthropogenic causes, desertification is widespread in drylands and is impacting the overall well-being of dwellers (Yan et al. 2011). Climate change-induced prolonged dryness could change the vegetation composition of dryland forests, which might further complicate the socioeconomic situation in these areas (Huang et al. 2016). Local disturbance factors such as illegal logging, uncontrolled browsing and grazing, and fire incidences are adding on to, and are possibly interfering with, the effect of global climate change on dryland forests (Lloret et al. 2007; Jacob et al. 2014; Abrha and Adhana 2019; Hishe et al. 2020). Understanding how forests respond to increasing climate change and local human pressure is crucial to maintain a sustained flow of ecosystem services and ecosystem stability (Jactel et al. 2006; Bauhus et al. 2017; Duffy et al. 2017) and should be an essential component of forest management (Huang et al. 2016). This is important as not all forests respond in the same way to global and local disturbances. Their responses are modulated by local landscape characteristics such as species composition, altitude, slope and edaphic factors. Diverse versus monoculture stands, for example, are reported to respond differently to disturbance (Johnson et al. 1996; Van Ruijven and Berendse 2007; De Keersmaecker et al. 2018). While a number of studies reported that tree diversity has a positive effect on the production, health and stability of forests, others have reported either a neutral or negative effect of diversity, which indicates the need for further study (Waide et al. 1999; McCann 2000). As a consequence, restoration planning protocols will need context-specific information.
Different metrics have been proposed to define and quantify the responses of forests to disturbances (Webb 2007; Yan et al. 2011). Among these, growth stability, resilience and resistance have been used widely (Verbesselt et al. 2016; De Keersmaecker et al. 2018). Many definitions have been given to these stability concepts (Nikinmaa et al. 2020). Resilience is defined as the recovery rate after a disturbance (Dakos et al. 2012). Resistance, on the other hand, is the capacity of the forest to remain unchanged regardless of disturbances (Grimm and Wissel 1997). Growth stability is considered as a steady continuity of growth irrespective of external disturbance (Chen et al. 2019).
Ecosystem stability is affected by different factors, such as climate, topography and species diversity, among others (Yan et al. 2011; Hutchison et al. 2018). Insight into the response of the ecosystem to change in these factors is valuable for management and restoration purposes. In the absence of long-term ecological experiments, remote sensing data analysis provides an opportunity to monitor long-term forest dynamics (Wang et al. 2004). Typically, vegetation indices based on the ratio between the reflectance in the red and near-infrared (NIR) bands, such as the Normalized Difference Vegetation Index (NDVI) (Kogan 1995), are used to characterize vegetation properties (Lu et al. 2016). NDVI time series thus provide valuable information on forest dynamics and their response to external pressures (Lhermitte et al. 2011; Verbesselt et al. 2016; De Keersmaecker et al. 2018).
Forest stability metrics can be derived by applying statistical analysis to the entire NDVI time series (a holistic approach), which takes into consideration possible recurrent stochastic perturbation events such as drought and other environmental variations in an open environment (Verbesselt et al. 2016; Hutchison et al. 2018). Within the holistic approach, the temporal autocorrelation (TAC) (Verbesselt et al. 2016), the depth of the anomalies (De Keersmaecker et al. 2014) and the standard deviation of the anomalies (Pimm 1984) from a decomposed time series are commonly used as indicators of forest resilience and resistance, respectively. TAC is based on the assumption that forests with lower resilience will recover more slowly, and growth progress is dependent on previous performances (Verbesselt et al. 2016). Hence, higher TAC values indicate a slow forest response to these perturbations, showing a lower recovery rate of the system. TAC is thus a measure of the slowness of forest response after disturbances and a direct indicator of resilience (Verbesselt et al. 2016). TAC can be used to assess how close a system is to a critical transition point (CTP) to another stable system: the higher the autocorrelation (close to one), the closer the system is to the CTP (Leemput et al. 2018). Subtracting the TAC from one, on the other hand, indicates how close a system is to its prior disturbance state (the recovery rate in its broad sense), which could be considered as the resilience of the system (Verbesselt et al. 2016).
Similarly, as resistance is defined as the ability to withstand external shocks, where highly resistant forests will deviate less than forests with low resistance during perturbations, the depth of the deviation is considered an indicator of resistance (De Keersmaecker et al. 2014). In addition, growth stability can be measured by calculating the area under the curve of the undecomposed NDVI on a yearly basis, and is quantified by the inverse of the coefficient of variation (mean divided by the standard deviation) of these yearly values of the time series (Isbell et al. 2009).
Apart from quantifying the degree of stability of forests to disturbances, understanding and predicting the effect of environmental factors strengthening or weakening forest stability is little explored (Yan et al. 2011). Therefore, this research aims at quantifying the effect of different explanatory variables describing tree species diversity, local degradation indicators and climate on forest resilience, resistance and growth stability over time using MODIS NDVI time series. Such information will be crucial for planning successful restoration and forest management (Anjos and De Toledo 2018). With this respect, the study strives to test the following hypotheses: 1) precipitation and temperature play a vital role in the stability of dry forests, 2) topographic and edaphic factors and local land degradation indicators further modulate the difference in the stability of forests, 3) stands with multispecies composition have more growth stability, resistance and resilience under climate fluctuation and human disturbances than monocultures.
Study area description
The study was carried out in Desa'a Forest, a large degraded dry Afromontane forest situated in the Tigray and Afar regions in the north of Ethiopia, for which an ambitious restoration plan is ongoing. The altitudes range from 900 m in Afar lowlands to 3000 m in the highlands of Tigray (Fig. 1). Due to the large difference in topography and long north-south extension along the escarpment, the geologic formation of the forest area is diverse (Asrat 2002). The bedrock in Desa'a Forest is mainly made up of a Precambrian basement in the northern part and the Hintalo limestone dotted with Adigrat Sandstone in the southern landscape (Williams 2016).
Location of Desa'a Forest in Ethiopia, and the position of the sampling points
The precipitation pattern of the study area is influenced by topography and rain-bearing winds and is dominated by a large inter-annual variability (Nyssen et al. 2005). Data from a nearby meteo-station and WorldClim (http://worldclim.org/version2) (Fick and Hijmans 2017) indicate that the average annual temperature and precipitation of the study area range between 13 °C and 25 °C and between 400 and 700 mm, respectively. Drought has a long history in the area and has caused regular famines, including in recent times. Recent droughts have been recorded for 2000, 2002, 2004 and 2009 (Gebrehiwot and van der Veen 2013). In a recent study, 2012 and 2013 were added among the driest years in the region (Tefera et al. 2019).
Desa'a Forest is most often classified as a dry Afromontane forest with a long dry season, where Juniperus procera Hochst. ex Endl. and Olea europaea subsp. cuspidata (Wall. ex G. Don) Cif. are the dominant species (Friis et al. 2010) in the canopy and understory, respectively. In Aynekulu et al. (2012), dry Afromontane forest (Juniper-Olea-Tarchonanthus group), semi-deciduous shrubland (Cadia-Acacia group), open acacia woodland and semi-desert shrubland (Balanites group) were identified from top to bottom along the altitude gradient. The forest is under strong degradation pressure by livestock and overcutting and is undergoing fast species composition change (Aynekulu et al. 2011), with a 500 m upward shift in the tree line for juniper and olive species so far (Aynekulu et al. 2011). Desa'a Forest covers an area of 150,000 ha.
The ground data were collected by systematic sampling based on a 2 km by 2 km grid. At the corners of the grid cells, a total of 303 plots of 400 m2 were established, on which all woody species, shrubs and trees, were identified following the nomenclature of the Ethiopian flora (Tesemma 2007) and counted. For each tree, the diameter at breast height (DBH) at 1.3 m above ground was measured using a calliper. For shrubs, the diameter at stump height (DSH) at 30 cm above ground was measured. Trees with at least 5 cm in DBH and shrubs with at least 1 cm in DSH were considered. Only plots with a vegetation cover above 10%, following the FAO definition of forest, were retained, resulting in 131 plots (FAO 2010). For the shrub and tree layers, canopy cover was estimated by a group of three experts and an average was recorded.
For each plot, slope, aspect and altitude were extracted from the 30 m spatial resolution ASTER Digital Elevation Model. The 19 standard Bioclimatic variables for 30 years (1970–2000) were extracted at 1 km resolution from the WorldClim WebPortal (http://worldclim.org/version2) (Fick and Hijmans 2017). The definition and nature of the bioclimatic variables are well documented in Fick and Hijmans (2017) and (O'Donnell and Ignizio 2012).
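As an illustration (not the authors' exact workflow), the following minimal R sketch shows how such terrain attributes and bioclimatic layers could be extracted at the plot locations with the raster package; the file names, folder and column names are assumptions:

library(raster)
dem   <- raster("dem.tif")                                # ASTER DEM, 30 m (assumed file name)
topo  <- terrain(dem, opt = c("slope", "aspect"), unit = "degrees")
bio   <- stack(list.files("worldclim", pattern = "\\.tif$", full.names = TRUE))  # 19 bioclim layers
plots <- read.csv("plots.csv")                            # assumed columns: plot id, lon, lat
xy    <- plots[, c("lon", "lat")]
plots <- cbind(plots,
               altitude = extract(dem, xy),               # plot-level altitude
               extract(topo, xy),                         # slope and aspect
               extract(bio, xy))                          # 19 bioclimatic variables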
Distance to nearby settlements and roads were extracted from a Euclidean distance raster constructed from a digitized road and settlement shapefiles. The shapefiles were obtained from a combination of data digitized from Google earth, and GPS tracked major and feeder roads, towns and centre of encompassing villages.
In every plot, local disturbance indicators such as fire incidence, grazing and logging severity were estimated (see Appendix 1, supplementary material) following Aynekulu et al. (2011). In each of the diversity inventory plots, soil depth was measured by penetrating a metal rod until the bedrock was reached. The thickness of the forest floor (ectorganic humus layer) was measured after cutting a profile with a spade (Eriksson and Holmgren 1996) (Table 1).
Table 1 Categorical environmental factors collected in the field (Lower rank indicates better forest condition and higher values indicate bad forest condition; while soil depth, humus depth and erosion status were assessed into five ranks, grazing, cutting and fire incidence were ranked into four)
Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data, i.e. the global MOD13Q1 data product with a temporal resolution of 16 days and a spatial resolution of 250 m, were used. MODIS NDVI time series from 2001 to 2018 were downloaded from Google Earth Engine (Hird et al. 2017). Upon downloading, low data quality observations, such as pixels covered by clouds, were masked (Hird et al. 2017). NDVI values were extracted for the pixels covering each inventory plot for every scene, yielding a matrix of 16-day NDVI observations over the 18 years in R software.
Time series decomposition
The time series were decomposed into trend, seasonality and remainder (anomalies) components using Seasonal-Trend decomposition using Loess (STL) (Abbes et al. 2018) in R software. The trend component indicates long-term forest development, while the seasonal component depicts annual growth variations (Quan et al. 2016). The remainder is the difference obtained when the trend and seasonality are subtracted from the original time series (Verbesselt et al. 2016) (Fig. 2).
An example of an NDVI time series of Desa'a forest, study area, decomposed using the STL algorithm
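The decomposition step can be illustrated with a minimal R sketch (not the authors' exact script), assuming 'ndvi' is the quality-masked NDVI vector of one plot with 23 observations per year (2001–2018):

library(zoo)                                   # na.approx() to interpolate masked observations
ndvi_ts   <- ts(na.approx(ndvi), start = c(2001, 1), frequency = 23)
fit       <- stl(ndvi_ts, s.window = "periodic")
trend     <- fit$time.series[, "trend"]        # long-term forest development
seasonal  <- fit$time.series[, "seasonal"]     # annual growth variation
remainder <- fit$time.series[, "remainder"]    # anomalies used for the stability metrics
plot(fit)                                      # panels analogous to Fig. 2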
Deriving ecosystem stability metrics from the NDVI time series
Three stability metrics were used to describe forest dynamics: resilience, resistance, and growth stability. While resilience and resistance were based on the anomalies of the NDVI time series (De Keersmaecker et al. 2014), growth stability was based on the integrals of the undecomposed NDVI time series (Isbell et al. 2009).
Resilience (Fig. 3) was computed using the temporal auto-correlation (TAC) of the anomaly. TAC and resilience are given in the following formulas (Dakos et al. 2012), Eqs. 1 and 2, respectively. Highly correlated events (= high TAC) represent a slow recovery rate (= low resilience).
The concepts of resilience (A) and resistance (B) as used in this study, based on the remainder of the time series decomposition. Resilience is the recovery rate of the community; resistance is the net change in the community
$$ \mathrm{TAC}=\frac{\sum_{t=1}^{n-1}\left({X}_t-\overline{X}\right)\left({X}_{t+1}-\overline{X}\right)}{\sum_{t=1}^n{\left({X}_t-\overline{X}\right)}^2} $$
$$ \mathrm{Resilience}=1-\mathrm{TAC} $$
where TAC is the temporal autocorrelation at lag 1, $X_t$ stands for the observation at time $t$, $\overline{X}$ is the mean of the observations and $n$ equals the total number of observations.
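A minimal R sketch of Eqs. 1 and 2, applied to the anomaly (remainder) series of a single plot obtained above, could look as follows:

x   <- as.numeric(remainder)
n   <- length(x)
xb  <- mean(x)
tac <- sum((x[1:(n - 1)] - xb) * (x[2:n] - xb)) / sum((x - xb)^2)   # Eq. 1, lag-1 autocorrelation
resilience <- 1 - tac                                               # Eq. 2
tac_acf <- acf(x, lag.max = 1, plot = FALSE)$acf[2]                 # same estimate via acf()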
The resistance was calculated as the lowest 5th percentile of the remainder (anomalies) per year (De Keersmaecker et al. 2014) (Fig. 3). Values of this metric close to zero represent highly resistant forests, i.e. forests that will deviate to a small extent during perturbations.
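A corresponding sketch for the resistance metric is given below; whether the yearly percentiles were subsequently averaged to a single value per plot is our assumption, not stated explicitly:

years <- floor(as.numeric(time(remainder)))          # calendar year of each 16-day observation
resistance_yearly <- tapply(as.numeric(remainder), years, quantile, probs = 0.05)
resistance <- mean(resistance_yearly)                 # plot-level summary (assumption)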
Growth stability
The growth stability was calculated from the integral of the undecomposed NDVI time series (Yin et al. 2012). The area under the curve of yearly based NDVI time series was considered as a good proxy for the net primary production (growth) of the forest. This area under the curve was obtained based on the top 75% of the yearly NDVI response to avoid the possible effect of seasonal variation in vegetation properties such as leaf sheds (Fig. 4). The growth stability was then calculated as the inverse of the coefficient of variation (i.e. a ratio of mean to standard deviation) of the area under the curve.
Fraction of the yearly NDVI (75%) used to extract growth stability for Desa'a forest
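A minimal sketch of the growth stability calculation, under our reading that the "top 75%" rule retains the highest 75% of each year's NDVI values, could be (using 'ndvi_ts' as built above):

yr  <- floor(as.numeric(time(ndvi_ts)))
auc <- tapply(as.numeric(ndvi_ts), yr, function(v) {
  v <- sort(v, decreasing = TRUE)
  sum(v[seq_len(ceiling(0.75 * length(v)))])          # discrete approximation of the yearly integral
})
growth_stability <- mean(auc) / sd(auc)               # inverse coefficient of variation across years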
Tree diversity
Species diversity based on basal area (BA) was derived using the Shannon-Wiener diversity index (H′) and evenness index (J) equations (Shannon 1948), Eqs. 3 and 4, respectively.
$$ {H}^{\prime }=-{\sum}_{i=1}^S\mathrm{B}{\mathrm{A}}_i\ln \left(\mathrm{B}{\mathrm{A}}_i\right) $$
$$ J=\frac{H^{\prime }}{{H^{\prime}}_{\max }}=\frac{-{\sum}_{i=1}^S\mathrm{B}{\mathrm{A}}_i\ln \left(\mathrm{B}{\mathrm{A}}_i\right)}{\ln (S)} $$
where H′ is the Shannon-Wiener diversity index, J is the Shannon-Wiener evenness index, BAi is the proportional basal area of the ith species (the basal area of species i divided by the total basal area of all species in the plot), and S is the number of species (species richness). These diversity indices were later used as explanatory variables in the regression analysis.
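For a single plot, Eqs. 3 and 4 can be computed with a few lines of R, assuming 'ba' is the vector of per-species basal areas:

p <- ba / sum(ba)            # proportional basal area BA_i of each species
H <- -sum(p * log(p))        # Shannon-Wiener diversity H' (Eq. 3)
J <- H / log(length(p))      # evenness J = H' / ln(S) (Eq. 4)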
The three forest stability metrics were modeled against climate, tree species diversity, edaphic and topographic variables and land degradation indicators. Boosted Regression Trees (BRT) were applied as a regression model (Elith et al. 2008) for each metric to explain the dynamics of the forest as a system and to identify the most important factors predicting each metric.
BRT allows handling of complex interactions while allowing simplicity of ecological interpretation (Elith et al. 2008; Aertsen et al. 2012). BRT combines the power of regression trees and boosting. It continuously partitions the data into homogeneous parts and fits a specific model to each partition. This avoids the loss of unexplained data that would occur if a single regression model were fitted to such complex interactions. In the R environment, BRT was run using the gbm.step function developed by Elith et al. (2008), which is an extension of the "gbm" package (Ridgeway 2007), and explanatory variables can be simplified to concentrate on the most meaningful and important ones using gbm.simplify to boost the power of the model (Elith et al. 2008).
The different variables used in the analyses were checked for multi-collinearity using the variance inflation factor (VIF) and Pearson correlation. Variables with a high VIF (> 5) or a high Pearson correlation (> 0.7) with other predictors were not included in the reported outputs (Aertsen et al. 2012). BRT was run for the different stability metrics by varying the learning rate (0.001–0.05), tree complexity (1–5) and bag fraction (0.50–0.75). Model performance was measured using R-squared, AIC and root mean square error (RMSE). In BRT, the cross-validation (CV) statistic is the most important measure to evaluate the results (Elith et al. 2008). The cross-validation correlation is the mean correlation of the predicted data, computed iteratively based on the number of folds (Elith et al. 2008). The higher the correlation, the higher the predictive power of the model. Because the algorithm is of a stochastic nature, a portion of the data determined by the bag fraction (the default is 75%; here 50% was used) is used to train the model and the remainder to test its predictive capability. Variable importance is determined by averaging the number of times a variable is selected in the iterative splitting of the data, weighted by the squared improvement to the BRT model (Gu et al. 2019). Variables whose contribution is above the median contribution in the model are considered highly important (significant), and those below the median are less important in the model (Gu et al. 2019). Results were also supported by partial dependence plots to ease the ecological interpretation of the effect trends of the factors considered.
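A minimal sketch of the BRT fit with gbm.step from the dismo package is given below; the data frame 'dat', the column indices and the chosen tuning values are illustrative assumptions within the ranges reported above:

library(dismo)                                            # provides gbm.step, gbm.simplify, gbm.plot
brt <- gbm.step(data = dat, gbm.x = 2:12, gbm.y = 1,      # predictor and response columns (assumed)
                family = "gaussian",
                tree.complexity = 3, learning.rate = 0.005, bag.fraction = 0.5)
summary(brt)                       # relative influence of each predictor
gbm.plot(brt, n.plots = 6)         # partial dependence plots (cf. Figs. 9-11)
simplified <- gbm.simplify(brt)    # drop predictors that do not improve the model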
To generate a wall-to-wall map of the stability metrics over the forest, kriging interpolation in ArcMap 10.6 was applied to the stability metrics obtained at plot level. Similarly, the stability metrics were summarized on an annual basis to show the stability status of the forest over the study period. A summary of the methodological approach is presented in the flow chart below (Fig. 5).
Methodological flowchart showing the workflow used in this paper. DI is Diversity indices, VIF is variance inflation factor, TAC is temporal autocorrelation, SD is the standard deviation, and BRT is boosted regression tree. Same colours show similar work stages; blue represents final input variables and green data sources, for example
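Although the mapping described above was done in ArcMap 10.6, an equivalent open-source sketch of the ordinary kriging step with the gstat package (assuming 'plots' holds projected coordinates x, y and a plot-level resilience value) could be:

library(sp)
library(gstat)
grid <- expand.grid(x = seq(min(plots$x), max(plots$x), by = 250),
                    y = seq(min(plots$y), max(plots$y), by = 250))
coordinates(plots) <- ~ x + y
coordinates(grid)  <- ~ x + y
gridded(grid) <- TRUE
v   <- variogram(resilience ~ 1, plots)
vf  <- fit.variogram(v, vgm("Sph"))                    # fit a spherical variogram model
map <- krige(resilience ~ 1, plots, grid, model = vf)  # ordinary kriging surface
spplot(map["var1.pred"])                               # wall-to-wall prediction map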
Stability status of Desa'a forest and correlation of the metrics
The resilience, resistance, and growth stability of Desa'a forest from 2001 to 2018 depict a similar trend (Figs. 6 and 7, Table 2). The resilience index showed lows in the years 2001, 2007 and 2015 (Fig. 6). The resistance showed minima in 2004, 2008, 2009 and 2015. The growth stability, however, was declining throughout the study period except for a sudden rise in 2016 (Fig. 7). Additionally, the spatial distribution of the three metrics showed similar patterns (Fig. 8), where vegetation in the south was more stable while in the center of the study area it was less stable. In the north, however, it was more stable except for the resilience metric.
The NDVI derived resilience and resistance of Desa'a Forest between 2001 and 2018. The solid line is the average of each metric over all plots in a particular year and the broken line is the linear trendline of each metric
Table 2 The relative influence of the variables determining resilience in Desa'a forest (in bold are significant factors)
Growth stability in Desa'a Forest, 2001 to 2018. The solid line is the average growth stability of all plots in a particular year and the broken line is the linear trend of the growth stability
Spatial distribution of resilience, resistance, and growth stability in Desa'a Forest
The correlation analysis between the stability metrics shows that resilience (r = 0.56) and resistance (r = 0.46) correlated significantly with growth stability. However, the correlation between resistance and resilience was weak (0.23). The correlations among resilience, resistance and growth stability were all positive.
Drivers of stability
Drivers of resilience
Resilience was influenced by a combination of biophysical and climatic factors. In general, precipitation of the wettest month, species evenness, distance from the settlement and slope were the most effective variables explaining the resilience of Desa'a forest. The other factors had a similar share of influence (Table 2).
The partial dependencies of the variables in the model indicated that three main types of responses could be observed. First, the precipitation of the wettest month, annual precipitation, annual temperature, Shannon diversity, distance to settlement, and annual temperature range showed a similar trend. Their influence increased up to a certain optimal condition and levelled off afterwards. In all except the precipitation of the wettest month, visible reductions in resilience were observed before the ultimate increase was recorded. Second, the effect of both species evenness and slope showed a unimodal shape, high at mid values and lower at the two ends. Third, temperature seasonality and stoniness showed a negative effect on the resilience of the forest (Fig. 9).
Partial dependencies of factors affecting resilience in Desa'a forest. The relative importance of variables in the model (% out of 100) is given in brackets. Fitted functions are centred around the mean of the resilience and plotted on a common scale. Rug plots (ticks in X-axis) show the distribution of sample measurements. PWem stands for precipitation of the wettest month, DiSet for distance from the settlement, AP for annual precipitation, ShanI for Shannon index, TS for temperature seasonality and TAR for temperature annual range
Drivers of resistance
Temperature seasonality and temperature of the driest quarter, forest floor thickness and precipitation of the wettest month were the variables that influenced the resistance of the forest most, with a total contribution of 53.6% (Table 3).
Table 3 The relative influence of the variables determining resistance in Desa'a Forest (in bold are significant factors)
The partial dependency plots revealed two general effect trends among the important variables affecting resistance. First, the influence of temperature seasonality ultimately followed a decreasing trend, though the response varied along the gradient. The resistance of the forest was lower in areas where temperature seasonality was below 180 (1.8 °C), peaked at around 220 (2.2 °C), and declined again where temperature seasonality increased further. Second, the effects of the mean temperature of the driest quarter, humus depth and precipitation of the wettest month followed a positive trend. Around 185 mm of precipitation in the wettest month appears optimal to keep a resistant forest in this dry Afromontane environment (Fig. 10).
Partial dependencies of factors affecting resistance in Desa'a forest. The relative importance of variables in the model (% out of 100) is given in brackets. Fitted functions are centred around the mean of the resilience and plotted on a common scale. Rug plots (ticks in X-axis) show the distribution of sample measurements. TS stands for temperature seasonality, MTDQ for a mean temperature of the driest quarter, HumusDh for humus depth, PWeM for precipitation of the wettest month, and TAR for temperature annual range
Drivers of growth stability
Growth stability was governed dominantly by precipitation of the wettest month, taking about 44% of the total effect. Annual temperature range, precipitation of the warmest quarter and distance to settlement had similar effect strength accounting for 56% of the total (Table 4).
Table 4 The relative influence of the variables determining growth stability in Desa'a forest (in bold are significant factors)
The partial dependencies of the factors influencing growth stability (Fig. 11) show that the stability of the forest increased with all the important factors. However, the rate of increase differed across the factors. The growth stability remained low up to around 155 mm of precipitation of the wettest month, then increased exponentially and ultimately levelled off at 180 mm (Fig. 11).
Partial dependencies of factors affecting growth stability in Desa'a forest. The relative importance of variables in the model (% out of 100) is given in brackets. Fitted functions are centred around the mean of the growth stability and plotted on a common scale. Rug plots (ticks in X-axis) show the distribution of sample measurements. PWeM stands for precipitation of the wettest month, TAR for temperature annual range, PWaQ for precipitation of the warmest quarter and DiSet for distance to settlement
Model strength of the different stability metrics
The performance of the model fit to the different stability metrics is given in Table 5. Modelling growth stability with the variables used was difficult compared to the other response variables, resulting in the lowest performance for all goodness-of-fit criteria used (Table 5).
Table 5 Stability metrics and their model performance (TDC is training data correlation, CVC is cross-validation correlation)
Resilience, resistance and growth stability status of Desa'a forest
Over the study period, Desa'a Forest remained more or less resistant but not resilient, with a significant decrease in resilience in 2001, 2007 and 2015. A slight drop below the average resistance was also observed in 2004, 2008, 2009, and 2015. The frequent and acute drought occurrences might explain these drops in both resilience and resistance in the region. In the study period, reported droughts occurred in 2000, 2002, 2004 (Gebrehiwot and van der Veen 2013), 2012 and 2013 (Tefera et al. 2019), and 2015 (Ahmed et al. 2017). The resilience range of Desa'a forest (0.3–0.6) is considerably lower than that of other African tropical forests (0.7–1.0) reported by Verbesselt et al. (2016), which might be explained by the severe and repetitive anthropogenic pressure the forest is facing (Aynekulu et al. 2011). The growth stability, however, was continuously decreasing over the study period, which might be linked to continuous degradation in the forest, as reflected in the dieback of the dominant species, olive and juniper trees (Aynekulu et al. 2011), and the browsing and lopping of various species (Giday et al. 2018). The frequent drought occurrences that were linked to the declined resilience of the forest might also be a reasonable explanation for the decreased growth stability. A clear increase in growth was, however, observed in 2016. This might be attributed to the increased rainfall recorded in 2016 (Berhane et al. 2020). Because there was an acute drought in 2015 (Ahmed et al. 2017) and a significant increase in precipitation in 2016, the recovered rainfall might have positively affected the biomass production in the forest.
Among the determinants of resilience, resistance, and growth stability, those above the median in the contribution of the factors are considered important (significant) factors (Gu et al. 2019) and are discussed.
Drivers of forest resilience
Precipitation of the wettest month was the most important factor associated with resilience. Although dry forests in the tropics are generally considered more resilient, their recovery is heavily dependent on the amount of precipitation (Álvarez-Yépiz et al. 2018), which is in line with the results of this study. A similar result was also reported in a wide range of tropical forest ecosystems where extended drought and low precipitation slows the recovery of forests in different continents (Verbesselt et al. 2016) and Amazon mountain forests (Nobre and Borma 2009).
Generally, tree diversity was associated with resilience, yet the Shannon and evenness indicators had a different impact. In the literature, there are contradicting findings on the effect of diversity on stability: a positive effect of species diversity has been reported in grasslands (Tilman et al. 2006; Van Ruijven and Berendse 2010) and in forests across Europe (Guyot et al. 2016; Sousa-Silva et al. 2018; Vannoppen et al. 2019), while others argue that no true positive diversity effect on resilience has been found so far (Bauhus et al. 2017). We found a positive association of Shannon diversity with resilience, although the effect eventually saturated. The positive effect of diversity on resilience might be explained by the insurance effect, where different species respond differently to disturbances, stabilizing the overall resilience of the system regardless of the lowered performance of certain member species (Loreau 2004). However, the effect of evenness was unimodal, with the highest evenness values resulting in a lower forest resilience. In this forest, dominant species might be needed to some extent to keep the forest community more resilient. Such species could have particular functional traits that play a significant role in the stability of the forest community (Yan et al. 2011). However, diversity indices lack information to indicate the functional role of species (Yan et al. 2011) and limit the identification of the species that are disadvantaged when sites become more even. In Desa'a Forest, such late successional species could be those that are less competitive, such as the juniper tree (Alshahrani 2008), which are disadvantaged when they grow in even proportion to others, reducing the total resilience of the forest community.
Proximity to a settlement increases the probability of anthropogenic disturbance such as grazing and cutting, which are predominant in the forest (Giday et al. 2018). Our results confirm that the resilience of the vegetation located further than 5 km from settlements was considerably increased. The anthropogenic disturbance could affect resilience by affecting species composition, which might introduce an artificial dominance of a certain tree species and reduce species richness. That could have a direct impact on the resilience of the forest (Hillebrand et al. 2008).
The negative effect of slope on the resilience might be linked to its effect on soil depth, moisture content and susceptibility to degradation: steep slopes and exposed rocky areas have little growing medium for plants due to erosion (Zhang et al. 2015), and when disturbances prevail, they are more affected than areas with good soil conditions and gentle slopes. In general, in line with our hypotheses, the combination of tree diversity, local human impact, topographic position and climate (mainly precipitation) controlled resilience in this dry Afromontane forest.
Drivers of forest resistance
While temperature seasonality was negatively associated with resistance, the mean temperature of the driest quarter, humus thickness and precipitation of the wettest month were positively associated. In contrast to resilience, the resistance of forests depends more on their productivity before a disturbance (Wang et al. 2007; Van Ruijven and Berendse 2010). Therefore, forest communities growing in productive sites, having favourable environmental conditions, are expected to show higher resistance (Wang et al. 2007). In line with this argument, our results indicated that vegetation growing on sites with thicker humus had higher resistance, whereas vegetation on more stony sites had lower resistance. The negative effect of increased temperature seasonality on forest resistance might be a general attribute of tropical forests, which have developed under relatively stable climatic conditions (Blach-Overgaard et al. 2010). Therefore, in response to their narrow climatic tolerance, as the seasonality of temperature increases, forests might lose the capacity to rearrange (to adapt quickly) themselves, so reducing their resilience capability (Blach-Overgaard et al. 2010). Our results indicate that higher temperature seasonality and a larger annual temperature range were associated with lower resistance. In the highland parts of Desa'a Forest, where it is relatively colder and dominated by climax species, a negative correlation between temperature and the growth of juniper and olive trees was reported (Mokria et al. 2017; Siyum et al. 2019). Temperature seasonality between 1.8 °C and 2.2 °C and an annual temperature range between 21 °C and 22 °C were associated with higher resistance. Increased temperature seasonality and annual temperature range prolong the disturbance, slow the recovery and break the resistance (Anjos and De Toledo 2018) due to increased fluctuation and excessive evapotranspiration (Schroth et al. 2009).
In contrast to the resilience indicator, no association between biodiversity and resistance could be found. This is in line with the findings of Van Ruijven and Berendse (2010), who reported a positive effect of biodiversity on community resilience after a drought but found no association with resistance. This is further strong evidence that resistance to disturbance depends on the prior forest condition (production, health, etc.), whereas the post-disturbance response of the forest could be supported by its constituents, such as diversity (Van Ruijven and Berendse 2010). While our hypothesis on the positive effect of climate and good edaphic properties on resistance holds true, the effect of tree diversity was not supported by our results.
Drivers of growth stability
The growth stability was mainly controlled by climate, in particular the precipitation of the wettest month. No effect of tree diversity was observed, and only distance to settlement, as an indicator of human impact, was detected, though it was not significant. In dry forests, precipitation is the most important factor for the growth of trees and increased biomass (Hiltner et al. 2016). Dry forests are affected by high evapotranspiration due to the high temperature and low precipitation (Souza et al. 2016), and when precipitation gets higher, the growth of the forests is positively affected. The results are in line with findings from different tropical forests: a subtropical forest in China (Gu et al. 2019), dry tropical montane forests of Ethiopia (Hiltner et al. 2016), and dry Afromontane forests (Gebru et al. 2020). The effect of anthropogenic disturbances can be mediated and suppressed by the effect of precipitation, which initiates more growth and system repair in forests (Rito et al. 2017), which could be the reason for the non-significant effect of disturbance on the growth stability of this forest.
The relationship among resilience, resistance and growth stability in Desa'a forest
Forest stability has been successfully characterized using resilience and resistance derived from remotely sensed imagery in different forests (Sousa-Silva et al. 2018; Frazier et al. 2018). In Desa'a, a dry tropical Afromontane forest, the three stability metrics were modeled. The correlation analysis between the metrics showed that the correlation between resilience and resistance was very weak but positive. This is in line with the concept of DeRose and Long (2014), who argued that resistance and resilience act upon ecosystems differently. While resilience is related to the influence of disturbance on the structure and composition of the ecosystem, resistance is related to the influence of the structure and composition of an ecosystem on disturbance. In support of our results, Gazol et al. (2018) reported that forests with low resistance were more resilient across different biomes. In contrast to our findings, a negative correlation was found between resistance and recovery rate in another tropical dry forest (Bhaskar et al. 2018). The difference in the correlation results might be due to the difference in the interaction of climate and local degradation factors (Bhaskar et al. 2018).
The dry Afromontane forest of Desa'a was generally resistant but less resilient, experiencing a continuous decline in growth stability in the last two decades. Climate variability played a pivotal role in the resilience and resistance of the forest. While the precipitation of the wettest month was the most important factor for all the stability metrics, a temperature seasonality above 2 °C was enough to degrade the resilience and resistance of the forest. Furthermore, tree species diversity was important to enhance the resilience of the dry Afromontane forest, but no evidence of tree diversity effects was found for resistance and growth stability. We found a threshold (0.7) above which tree species evenness leads to less resilience. Experimental research might be needed to investigate at what level of evenness species identity becomes important to promote resilience in dry forests. Moreover, distance to the settlement, which is an indicator of degradation, and slope were also important to promote resilience. Climate, both precipitation and temperature, edaphic factors, local human disturbance indicators and tree diversity were important for one or all of the stability metrics investigated in the dry Afromontane forest.
The datasets generated during and/or analyzed during the current study are available in the KU Leuven repository, and are accessible according to the regulation of the University.
Abbes A, Bounouh O, Farah IR, de Jong R, Martínez B (2018) Comparative study of three satellite image time-series decomposition methods for vegetation change detection. Eur J Remote Sens 51(1):607–615. https://doi.org/10.1080/22797254.2018.1465360
Abrha H, Adhana K (2019) Desa'a national forest reserve susceptibility to fire under climate change. Forest Sci Tech 15(3):140–146. https://doi.org/10.1080/21580103.2019.1628109
Aertsen W, Kint V, de Vos B, Deckers J, van Orshoven J, Muys B (2012) Predicting forest site productivity in temperate lowland from forest floor, soil and litterfall characteristics using boosted regression trees. Plant and Soil 354(1–2):157–172. https://doi.org/10.1007/s11104-011-1052-z
Ahmed H, Tessema Z, Adugna T, Diriba K (2017) Interconnection between El-Niño-southern oscillation induced rainfall variability, livestock population dynamics and pastoralists adaptation strategies in eastern Ethiopia. Proc Int Conf Impact El Niño Biodivers, Agric Food Security 7(February):19–36 http://www.haramaya.edu.et. Accessed 6 Sep 2020
Aide TM, Clark ML, Grau HR, López-Carr D, Levy MA, Redo D, Bonilla-Moheno M, Riner G, Andrade-Núñez MJ, Muñiz M (2013) Deforestation and reforestation of latin america and the caribbean (2001–2010). Biotropica 45(2):262–271. https://doi.org/10.1111/j.1744-7429.2012.00908.x
Alshahrani TS (2008) Effect of aqueous extract of the invasive species tobacco (Nicotiana glauca L.) on seedlings growth of Juniper (Juniperus procera L.). Emir J Food Agr 20(2):10–17
Álvarez-Yépiz JC, Martínez-Yrízar A, Fredericksen TS (2018) Special issue: resilience of tropical dry forests to extreme disturbance events. Forest Ecol Manag 426:1–6. https://doi.org/10.1016/j.foreco.2018.05.067
Anjos LJS, De Toledo PM (2018) Measuring resilience and assessing the vulnerability of terrestrial ecosystems to climate change in South America. PLoS One 13(3):1–15. https://doi.org/10.1371/journal.pone.0194654
Asrat A (2002) The rock-hewn churches of Tigrai, northern Ethiopia: a geological perspective. Geoarchaeol 17(7):649–663. https://doi.org/10.1002/gea.10035
Aynekulu E, Aerts R, Moonen P, Denich M, Gebrehiwot K, Vågen TG, Wolde W, Boehmer HJ (2012) Altitudinal variation and conservation priorities of vegetation along the great Rift Valley escarpment, northern Ethiopia. Biodivers Conserv 21(10):2691–2707. https://doi.org/10.1007/s10531-012-0328-9
Aynekulu E, Denich M, Tsegaye D, Aerts R, Neuwirth B, Boehmer HJ (2011) Dieback affects forest structure in a dry Afromontane forest in northern Ethiopia. J Arid Environ 75(5):499–503. https://doi.org/10.1016/j.jaridenv.2010.12.013
Bargués-Tobella A, Hasselquist NJ, Bazié HR, Bayala J, Laudon H, Ulrik Ilstedt U (2020) Trees in African drylands can promote deep soil and groundwater recharge in a future climate with more intense rainfall. Land Degrad Dev 31(1):81–95. https://doi.org/10.1002/ldr.3430
Bastin JF, Nora B, Alan G, Danae M, Danilo M, Rebecca M, Chiara P, Nicolas P, Ben S, Elena MA, Kamel A, Ayhan A, Fabio AB, Çağlar B, Adia B, Monica G, Luis GG, Nikée G, Greg G, Lars L, Andrew JL, Bako M, Giulio M, Paul P, Marcelo R, Stefano R, Ignacio S, Alfonso SD, Fred S, Venera S, Rene C (2017) The extent of forest in dryland biomes. Science 358(6365):635–638. https://doi.org/10.1126/science.aao1309
Bauhus J, Forrester DI, Gardiner B, Jactel H, Vallejo R, Pretzsch H (2017) Ecological stability of mixed-species forests. In: Pretzsch H, Forrester DI, Bauhus J (eds) Mixed-Species Forests: Ecology and Management. Springer Berlin Heidelberg, Berlin. https://doi.org/10.1007/978-3-662-54553-9_7
Berhane A, Hadgu G, Worku W, Abrha B (2020) Trends in extreme temperature and rainfall indices in the semi-arid areas of Western Tigray, Ethiopia. Environ Syst Res 9(1). https://doi.org/10.1186/s40068-020-00165-6.
Bhaskar R, Arreola F, Mora F, Martinez-Yrizar A, Martinez-Ramos M, Balvanera P (2018) Response diversity and resilience to extreme events in tropical dry secondary forests. Forest Ecol Manag 426:61–71. https://doi.org/10.1016/j.foreco.2017.09.028
Blach-Overgaard A, Svenning JC, Dransfield J, Greve M, Balslev H (2010) Determinants of palm species distributions across Africa: the relative roles of climate, non-climatic environmental factors, and spatial constraints. Ecography 33(2):380–391. https://doi.org/10.1111/j.1600-0587.2010.06273.x
Bognounou F, Tigabu M, Savadogo P, Thiombiano A, Boussim IJ, Oden PC, Guinko S (2010) Regeneration of five Combretaceae species along a latitudinal gradient in Sahelo-Sudanian zone of Burkina Faso. Ann Forest Sci 67(3):10. https://doi.org/10.1051/forest/2009119
Chen C, He B, Yuan W, Guo L, Zhang Y (2019) Increasing interannual variability of global vegetation greenness. Environ Res Lett. https://doi.org/10.1088/1748-9326/ab4ffc
Dakos V, Carpenter SR, Brock WA, Ellison AM, Guttal V, Ives AR, Kéfi S, Livina V, Seekell DA, Van Nes EH, Scheffer M (2012) Methods for detecting early warnings of critical transitions in time series illustrated using simulated ecological data. PLoS One 7(7). https://doi.org/10.1371/journal.pone.0041010
De Keersmaecker W, Lhermitte S, Honnay O, Farifteh J, Somers B, Coppin P (2014) How to measure ecosystem stability? An evaluation of the reliability of stability metrics based on remote sensing time series across the major global ecosystems. Glob Chang Biol 20(7):2149–2161. https://doi.org/10.1111/gcb.12495
De Keersmaecker W, Lhermitte S, Tits L, Honnay O, Somers B, Coppin P (2018) Resilience and the reliability of spectral entropy to assess ecosystem stability. Glob Chang Biol 24(1):e393–e394. https://doi.org/10.1111/gcb.12799
DeRose RJ, Long JN (2014) Resistance and resilience: a conceptual framework for silviculture. For Sci 60(6):1205–1212. https://doi.org/10.5849/forsci.13-507
Díaz S, Pascual U, Stenseke M, Martín-López B, Watson RT, Molnar Z, Hill R, Chan KMA, Baste IA, Brauman KA, Polasky S, Church A, Lonsdale M, Larigauderie A, Leadley PW, van Oudenhoven APE, van der Plaat F, Schroter M, Lavorel S, Aumeeruddy-Thomas Y, Bukvareva E, Davies K, Demissew S, Erpul G, Failler P, Guerra CA, Hewitt CL, Keune H, Lindley S, Shirayama Y (2018) Assessing nature's contributions to people. Science 359(6373):270–272. https://doi.org/10.1126/science.aap8826
Duffy JE, Godwin CM, Cardinale BJ (2017) Biodiversity effects in the wild are common and as strong as key drivers of productivity. Nature 549(7671):261–264. https://doi.org/10.1038/nature23886
Elith J, Leathwick JR, Hastie T (2008) A working guide to boosted regression trees. J An Ecol 77(Ml):802–813. https://doi.org/10.1111/j.1365-2656.2008.01390.x
Eriksson CP, Holmgren P (1996) Estimating stone and boulder content in forest soils - evaluating the potential of surface penetration methods. Catena 28(1–2):121–134. https://doi.org/10.1016/S0341-8162(96)00031-8
FAO (2010) Global Forest Resources Assessment 2010, Rome http://www.fao.org/forestry/20360-0381a9322cbc456bb05cfc9b6a7141cdf.pdf. Accessed 6 Sep 2020
Fick SE, Hijmans RJ (2017) WorldClim 2: new 1-km spatial resolution climate surfaces for global land areas. Int J Climatol 37(12):4302–4315. https://doi.org/10.1002/joc.5086
Frazier RJ, Coops NC, Wulder MA, Hermosilla T, White JC (2018) Analyzing spatial and temporal variability in short-term rates of post-fire vegetation return from Landsat time series. Remote Sens Environ 205:32–45. https://doi.org/10.1016/j.rse.2017.11.007
Friis I, Demissew S, Breugel PV (2010) Atlas of the potential vegetation of Ethiopia. The Royal Danish Academy of Sciences and Letters https://academic.oup.com/aob/article-lookup/doi/10.1093/aob/mcq242. Accessed 6 Sep 2020
Gazol A, Camarero JJ, Vicente-Serrano SM, Sánchez-Salguero R, Gutiérrez E, de Luis M, Sangüesa-Barreda G, Novak K, Rozas V, Tíscar PA, Linares JC, Martín-Hernández N, Martínez DCE, Ribas M, García-González I, Silla F, Camisón A, Génova M, Olano JM, Longares LA, Hevia A, Tomás-Burguera M, Galván JD (2018) Forest resilience to drought varies across biomes. Glob Chang Biol 24(5):2143–2158. https://doi.org/10.1111/gcb.14082
Gebrehiwot T, van der Veen A (2013) Climate change vulnerability in Ethiopia: disaggregation of Tigray region. J East Afr Stud 7(4):607–629. https://doi.org/10.1080/17531055.2013.817162
Gebru BM, Lee KW, Khamzina A, Wang WS, Cha S, Song C, Lamchin M (2020) Spatiotemporal multi-index analysis of desertification in dry afromontane forests of northern Ethiopia. Environ Develop Sust. https://doi.org/10.1007/s10668-020-00587-3
Giday K, Humnessa B, Muys B, Taheri F, Azadi H (2018) Effects of livestock grazing on key vegetation attributes of a remnant forest reserve: the case of Desa'a forest in northern Ethiopia. Global Ecol Conserv 14:e00395. https://doi.org/10.1016/j.gecco.2018.e00395
Grimm V, Wissel C (1997) Babel, or the ecological stability discussions: an inventory and analysis of terminology and a guide for avoiding confusion. Oecologia 109(3):323–334. https://doi.org/10.1007/s004420050090
Guyot V, Castagneyrol B, Vialatte A, Deconchat M, Jactel H (2016) Tree diversity reduces pest damage in mature forests across Europe. Biology Letters 12(4):0–4. https://doi.org/10.1098/rsbl.2015.1037
Gu H, Wang J, Ma L, Shang Z, Zhang Q (2019) Insights into the BRT (boosted regression trees) method in the study of the climate-growth relationship of Masson pine in subtropical China. Forests 10(3):1–20. https://doi.org/10.3390/f10030228
Hillebrand H, Bennett DM, Cadotte MW (2008) Consequences of dominance: a review of evenness effects on local and regional ecosystem processes. Ecology 89(6):1510–1520
Hiltner U, Bräuning A, Gebrekirstos A, Huth A (2016) Impacts of precipitation variability on the dynamics of a dry tropical montane forest. Ecol Model 320(2016):92–101. https://doi.org/10.1016/j.ecolmodel.2015.09.021
Hird JN, DeLancey ER, McDermid GJ, Kariyeva J (2017) Google earth engine, open-access satellite data, and machine learning in support of large-area probabilisticwetland mapping. Remote Sens (Basel) 9(12). https://doi.org/10.3390/rs9121315
Hishe H, Giday K, Van Orshoven J, Muys B, Taheri F, Azadi H, Feng L, Zamani O, Mirzaei M, Witlox F (2020) Analysis of land use land cover dynamics and driving factors in Desa'a forest in Northern Ethiopia. LAND USE POLICY. https://doi.org/10.1016/j.landusepol.2020.105039.
Huang J, Yu H, Guan X, Wang G, Guo R (2016) Accelerated dryland expansion under climate change. Nat Clim Chang 6(2):166–171. https://doi.org/10.1038/nclimate2837
Hutchison C, Gravel D, Guichard F, Potvin C (2018) Effect of diversity on growth, mortality, and loss of resilience to extreme climate events in a tropical planted forest experiment. Sci Rep 8(1):1–10. https://doi.org/10.1038/s41598-018-33670-x
Isbell FI, Polley HW, Wilsey BJ (2009) Biodiversity, productivity and the temporal stability of productivity: patterns and processes. Ecol Lett 12(5):443–451. https://doi.org/10.1111/j.1461-0248.2009.01299.x
Jacob M, Annys S, Frankl A, De Ridder M, Beeckman H, Guyassa E, Nyssen J (2014) Tree line dynamics in the tropical African highlands - identifying drivers and dynamics. J Veg Sci 26(1):9–20. https://doi.org/10.1111/jvs.12215
Jactel H, Menassieu P, Vetillard F, Gaulier A, Samalens JC, Brockerhoff EG (2006) Tree species diversity reduces the invasibility of maritime pine stands by the bast scale. Can J For Res 323:314–323. https://doi.org/10.1139/X05-251
Johnson KH, Kristiina AV, Heidi JC, Oswald JS, Daniel JV (1996) Biodiversity and the productivity and stability of ecosystems. Trends Ecol Evol 11(9):372–377. https://doi.org/10.1016/0169-5347(96)10040-9
Kogan FN (1995) Droughts of the late 1980s in the United States as derived from NOAA polar-orbiting satellite data. Bull Am Meteorol Soc 76(5):655–668. https://doi.org/10.1175/1520-0477(1995)076<0655:DOTLIT>2.0.CO;2
Leemput IA, Dakos V, Scheffer M, van Nes EH (2018) Slow recovery from local disturbances as an indicator for loss of ecosystem resilience. Ecosystems 21(1):141–152. https://doi.org/10.1007/s10021-017-0154-8
Lhermitte S, Verbesselt J, Verstraeten WW, Coppin P (2011) A comparison of time series similarity measures for classification and change detection of ecosystem dynamics. Remote Sens Environ 115(12):3129–3152. https://doi.org/10.1016/j.rse.2011.06.020
Lloret F, Lobo A, Estevan H, Maisongrande P, Vayreda J, Terradas J, Maisongrande P, Vayreda J (2007) Woody plant richness and NDVI response to drought events in Catalonian (northeastern Spain) forests. Ecology 88(9):2270–2279
Loreau M (2004) Does functional redundancy exist? Oikos 104(3):606–611. https://doi.org/10.1111/j.0030-1299.2004.12685.x
Lu D, Chen Q, Wang G, Liu L, Li G, Moran E (2016) A survey of remote sensing-based aboveground biomass estimation methods in forest ecosystems. Int J Dig Earth 9(1):63–105. https://doi.org/10.1080/17538947.2014.990526
McCann K (2000) The diversity–stability debate. Nature 405(6783):228–233. https://doi.org/10.1038/35012234
Mokria M, Gebrekirstos A, Abiyu A, Van Noordwijk M, Bräuning A (2017) Multi-century tree-ring precipitation record reveals increasing frequency of extreme dry events in the upper Blue Nile River catchment. Glob Chang Biol. https://doi.org/10.1111/gcb.13809
Myers N, Mittermeier RA, Mittermeier CG, da Fonseca GAB, Kent J (2000) Biodiversity hotspots for conservation priorities. Nature 403(6772):853–858. https://doi.org/10.1038/35002501
Nikinmaa L, Lindner M, Cantarello E, Jump AS, Seidl R, Winkel G, Muys B (2020) Reviewing the use of resilience concepts in forest sciences. Curr Forest Report 6(2):61–80. https://doi.org/10.1007/s40725-020-00110-x
Nobre CA, Borma LDS (2009) "Tipping points" for the Amazon forest. Curr Opin Environ Sustain 1(1):28–36. https://doi.org/10.1016/j.cosust.2009.07.003
Nyssen J, Vandenreyken H, Poesen J, Moeyersons J, Deckers J, Haile M, Salles C, Govers G (2005) Rainfall erosivity and variability in the northern Ethiopian highlands. J Hydrol 311(1–4):172–187. https://doi.org/10.1016/j.jhydrol.2004.12.016
O'Donnell MS, Ignizio DA (2012) Bioclimatic predictors for supporting ecological applications in the conterminous United States. U.S Geological Survey Data Series 691
Pimm SL (1984) The complexity and stability of ecosystems. Nature 307:321–326. https://doi.org/10.1038/307321a0
Quan J, Zhan W, Chen Y, Wang M, Wang J (2016) Time series decomposition of remotely sensed land surface temperature and investigation of trends and seasonal variations in surface urban heat islands. J Geophys Res Atm 175(121):2638–2657. https://doi.org/10.1002/2015JD024354
Ridgeway G (2007) Generalized boosted models: a guide to the gbm package. Computer 1(4):1–12. https://doi.org/10.1111/j.1467-9752.1996.tb00390.x
Rito KF, Arroyo-Rodríguez V, Queiroz RT, Leal IR, Tabarelli M (2017) Precipitation mediates the effect of human disturbance on the Brazilian Caatinga vegetation. J Ecol 105(3):828–838. https://doi.org/10.1111/1365-2745.12712
Safriel U, Adeel Z (2008) Development paths of drylands: thresholds and sustainability. Sustain Sci 3(1):117–123. https://doi.org/10.1007/s11625-007-0038-5
Safriel U, Niemeijer D, Puigdefabregas J, White R, Lal R, Winslow M, Prince S, Archer E, King C, Shapiro B, Wessels K, Nielsen T, Portnov B, Reshef I, Lachman E, Mcnab D (2005) Dryland systems. In: Hassan R, Scholes R, Ash N (eds) Ecosystems and human well-being: current state and trends: findings of the condition and trends (the millennium ecosystem assessment series). Island Press, Washington: DC
Sánchez-Azofeifa GA, Quesada M, Rodríguez JP, Nassar JM, Stoner KE, Castillo A, Garvin T, Zent EL, Calvo-Alvarado JC, Kalacska MER, Fajardo L, Gamon JA, Cuevas-Reyes P (2005) Research priorities for neotropical dry forests. Biotropica 37(4):477–485. https://doi.org/10.1111/j.1744-7429.2005.00066.x
Schroth G, Laderach P, Dempewolf J, Philpott S, Haggar J, Eakin H, Castillejos T, Moreno JG, Pinto LS, Hernandez R, Eitzinger A, Ramirez-Villegas J (2009) Towards a climate change adaptation strategy for coffee communities and ecosystems in the Sierra Madre de Chiapas, Mexico. Mitig Adapt Strat Glob Chang 14(7):605–625. https://doi.org/10.1007/s11027-009-9186-5
Shannon CE (1948) A mathematical theory of communication. Bell Syst Tech J 27:379–423. https://doi.org/10.1145/584091.584093
Siyum ZG, Ayoade JO, Onilude MA, Feyissa MT (2019) Climate forcing of tree growth in dry Afromontane forest fragments of northern Ethiopia: evidence from multi-species responses. Forest Ecosyst 6:15. https://doi.org/10.1186/s40663-019-0178-y
Sousa-Silva R, Verheyen K, Ponette Q, Bay E, Sioen G, Titeux H, Peer TV, Van Meerbeek K, Muys B (2018) Tree diversity mitigates defoliation after a drought-induced tipping point. Glob Chang Biol 24:4304–4315
Souza R, Feng X, Antonino A, Montenegro S, Souza E, Porporato A (2016) Vegetation response to rainfall seasonality and interannual variability in tropical dry forests. Hydrol Process 30(20):3583–3595. https://doi.org/10.1002/hyp.10953
Tefera AS, Ayoade JO, Bello NJ (2019) Drought occurrence pattern in Tigray Region, Northern Ethiopia. J Appl Sci Environ Manag 23(7):1341. https://doi.org/10.4314/jasem.v23i7.23
Tesemma AB (2007) Useful trees and shrubs of Ethiopia: identification, propagation, and management for 17 agroclimatic zones. RELMA in ICRAF Project, World Agroforestry Centre, Eastern Africa Region, Nairobi http://books.google.com.et/books?id=15UfAQAAIAAJ. Accessed 6 Sep 2020
Tilman D, Reich PB, Knops JMH (2006) Biodiversity and ecosystem stability in a decade-long grassland experiment. Nature 441(7093):629–632. https://doi.org/10.1038/nature04742
Van Ruijven J, Berendse F (2007) Contrasting effects of diversity on the temporal stability of plant populations. Oikos 116:1323–1330. https://doi.org/10.1111/j.2007.0030-1299.16005.x
Van Ruijven J, Berendse F (2010) Diversity enhances community recovery, but not resistance, after drought. J Ecol 98(1):81–86. https://doi.org/10.1111/j.1365-2745.2009.01603.x
Vannoppen A, Kint V, Ponette Q, Verheyen K, Muys B (2019) Tree species diversity impacts average radial growth of beech and oak trees in Belgium, not their long-term growth trend. Forest Ecosyst 6:10. https://doi.org/10.1186/s40663-019-0169-z
Verbesselt J, Umlauf N, Hirota M, Holmgren M, Van Nes EH, Herold M, Zeileis A, Scheffer M (2016) Remotely sensed resilience of tropical forests. Nat Clim Chang 6(11):1028–1031. https://doi.org/10.1038/nclimate3108
Waide RB, Willig MR, Steiner CF, Mittelbach G, Gough L, Dodson SI, Juday GP, Parmnter R (1999) The relationship between productivity and species richness. Annu Rev Ecol Evol Syst 30:257–300.
Wang J, Rich PM, Price KP, Kettle WD (2004) Relations between NDVI and tree productivity in the central Great Plains. Int J Remote Sens 25(16):3127–3138. https://doi.org/10.1080/0143116032000160499
Wang Y, Yu S, Wang J (2007) Biomass-dependent susceptibility to drought in experimental grassland communities. Ecol Lett 10(5):401–410. https://doi.org/10.1111/j.1461-0248.2007.01031.x
Webb CT (2007) What is the role of ecology in understanding ecosystem resilience? BioScience 57(6). https://doi.org/10.1641/B570606
Williams FM (2016) Understanding Ethiopia. In: Bobrowsky PT, Burnaby BC, Martínez-Frías J (eds) Wolfgang Eder AV. Springer International Publishing, Australia. https://doi.org/10.1007/978-3-319-02180-5
Yan H, Zhan J, Zhang T (2011) The resilience of forest ecosystems and its influencing factors. Procedia Environ Sci 10:2201–2206. https://doi.org/10.1016/j.proenv.2011.09.345
Yin H, Udelhoven T, Fensholt R, Pflugmacher D, Hostert P (2012) How normalized difference vegetation index (NDVI) trends from advanced very high resolution radiometer (AVHRR) and système probatoire d'observation de la terre vegetation (SPOT VGT) time series differ in agricultural areas: an inner Mongolian case study. Remote Sens (Basel) 4(11):3364–3389. https://doi.org/10.3390/rs4113364
Zhang Z, Sheng L, Yang J, Chen XA, Kong L, Wagan B (2015) Effects of land use and slope gradient on soil erosion in a red soil hilly watershed of southern China. Sustainability 7(10):14309–14325. https://doi.org/10.3390/su71014309
Forest inventory data were obtained from WeForest, an international nonprofit nongovernmental organization working on the restoration of Desa'a forest in collaboration with different national and international institutes (https://www.weforest.org/project/ethiopia-desaa).
Data collection was supported by a PhD IRO grant from KU Leuven and by WeForest Ethiopia. This work forms one chapter of a PhD research project, and no specific grant ID is attached to the funds.
KU Leuven, Department of Earth and Environmental Sciences, Division Forest, Nature and Landscape, Celestijnenlaan 200E, P.O. Box 2411, 3001, Leuven, Belgium
Hadgu Hishe, Louis Oosterlynck, Wanda De Keersmaecker, Ben Somers & Bart Muys
Department of Land Resource Management and Environmental Protection, Mekelle University, College of Dryland Agriculture and Natural Resources, P.O. Box 231, Mekelle, Tigray, Ethiopia
Hadgu Hishe & Kidane Giday
Laboratory of Geo-Information Science and Remote Sensing, Wageningen University, 6708 PB, Wageningen, The Netherlands
Wanda De Keersmaecker
Hadgu Hishe
Louis Oosterlynck
Kidane Giday
Ben Somers
Bart Muys
Conceptualization: HH & BM; Methodology: HH & BM; Formal Analysis: HH & LO; Writing – original draft: HH; Writing – review & editing: BM, BS, WD & KG; Supervision: BM. The author(s) read and approved the final manuscript.
Correspondence to Hadgu Hishe.
Appendix 1. Methodological protocol for humus, soil depth, and local human disturbance indicators assessment.
Hishe, H., Oosterlynck, L., Giday, K. et al. A combination of climate, tree diversity and local human disturbance determine the stability of dry Afromontane forests. For. Ecosyst. 8, 16 (2021). https://doi.org/10.1186/s40663-021-00288-x
Biodiversity function
A novel framework for cross-spectral iris matching
Mohammed A. M. Abdullah ORCID: orcid.org/0000-0002-3340-8489 1,2,
Satnam S. Dlay1,
Wai L. Woo1 &
Jonathon A. Chambers1
IPSJ Transactions on Computer Vision and Applications volume 8, Article number: 9 (2016)
Previous work on iris recognition has focused on either visible light (VL) imaging, near-infrared (NIR) imaging, or their fusion. However, only a limited number of works have investigated cross-spectral matching or compared iris biometric performance under both the VL and NIR spectra using unregistered iris images taken from the same subject. To the best of our knowledge, this is the first work to propose a framework for cross-spectral iris matching using unregistered iris images. To this end, three descriptors are proposed, namely Gabor-difference of Gaussian (G-DoG), Gabor-binarized statistical image features (G-BSIF), and Gabor-multi-scale Weberface (G-MSW), to achieve robust cross-spectral iris matching. In addition, we explore the differences in iris recognition performance across the VL and NIR spectra. The experiments are carried out on the UTIRIS database, which contains iris images acquired under both the VL and NIR spectra for the same subjects. Experimental and comparison results demonstrate that the proposed framework achieves state-of-the-art cross-spectral matching. Moreover, the results indicate that VL and NIR images provide complementary features for the iris pattern, and their fusion notably improves recognition performance.
Among the various traits used for human identification, the iris pattern has gained increasing attention for its accuracy, reliability, and noninvasive nature. In addition, iris patterns possess a high degree of randomness and uniqueness, even between identical twins, and the iris remains stable throughout adult life [1, 2].
The initial pioneering work on iris recognition, which is the basis of many functioning commercial systems, was conducted by Daugman [1]. The performance of iris recognition systems is impressive, as demonstrated by Daugman [3], who reported false acceptance rates of only 10^-6 in a study of 200 billion cross-comparisons. Additionally, the potential of iris biometrics has been affirmed by tests with 1.2 trillion comparisons carried out by the National Institute of Standards and Technology (NIST), which confirmed that iris biometrics offers the best balance between accuracy, template size, and speed compared to other biometric traits [4].
Iris recognition technology is nowadays widely deployed in various large-scale applications, such as the border crossing system in the United Arab Emirates, the Mexico national ID program, and the Unique Identification Authority of India (UIDAI) project [5]. As a case in point, more than one billion residents have been enrolled in the UIDAI project, where about 10^15 all-to-all check operations are carried out daily for identity de-duplication using iris biometrics as the main modality [5, 6].
Nearly all currently deployed iris recognition systems operate predominantly in the near-infrared (NIR) spectrum, capturing images at 800–900 nm wavelength. This is because there are fewer reflections from the cornea and dark-pigmented irides appear clearer under NIR light. In addition, external factors such as shadows and diffuse reflections are reduced under NIR light [7, 8].
The color of the irides is governed by the relative amounts of two pigments: eumelanin (black/brown) and pheomelanin (red/yellow). Dark-pigmented irides have a high concentration of eumelanin. As eumelanin strongly absorbs visible light (VL), the stromal features of such irides are only revealed under NIR and remain hidden in VL, so texture-related rather than pigmentation-related information is revealed. On the other hand, pheomelanin is dominant in light-pigmented irides. Capturing such irides under NIR light eliminates most of the rich pheomelanin information because the chromophore of the human iris is only visible under VL [8, 9]. Consequently, capturing iris images under different lighting conditions reveals different textural information.
Research in VL iris recognition has gained increasing attention in recent years owing to the interest in iris recognition at a distance [10, 11]. In addition, competitions such as the Noisy Iris Challenge Evaluation (NICE) [12] and the Mobile Iris Challenge Evaluation [13] focus on the processing of VL iris images. This attention to visible-wavelength iris recognition is driven by several factors: (1) visible-range cameras can acquire images from a long distance and are cheaper than NIR cameras, and (2) surveillance systems operate in the visible range, capturing images of the body, face, and iris that can later be used for authentication [14].
Since both VL and NIR iris recognition systems are now widely deployed, studying the performance difference of iris recognition systems exploiting NIR and VL images is important because it gives insight into the essential features in each wavelength which in turn helps to develop a robust automatic identification system. On the other hand, cross-spectral iris recognition is essential in security applications when matching images from different lighting conditions is desired.
In this paper, we therefore propose a method for cross-spectral iris image matching. To the best of our knowledge, this attempt is amongst the first in the literature to investigate the problem of VL-to-NIR iris recognition (and vice versa) using unregistered iris images belonging to the same subject. In addition, we investigate the difference in iris recognition performance between NIR and VL imaging. In particular, we investigate iris performance in each channel (red, green, blue, and NIR) and the feasibility of cross-channel authentication (i.e., NIR vs. VL). Furthermore, iris recognition performance is enhanced through multi-channel fusion.
In summary, the main contributions of the paper are as follows:
A novel framework for cross-spectral iris recognition capable of matching unregistered iris images captured under different lighting conditions
Filling the gap in multi-spectral iris recognition by exploring the performance difference in iris biometrics under NIR and VL imaging
Boosting iris recognition performance with multi-channel fusion
The rest of this paper is organized as follows: related works are given in Section 2. The proposed framework for cross-spectral iris matching is explained in Section 3. Section 4 presents the experimental results and the discussion while Section 5 concludes this paper.
Iris recognition technology has witnessed rapid development over the last decade, driven by its wide applications around the world. At the outset, Daugman [1] proposed the first working iris recognition system, which was later adopted by several commercial companies such as IBM, Iridian, and Oki. In this work, the integro-differential operator is applied for iris segmentation and 2D Gabor filters are utilized for feature extraction, while the Hamming distance scores serve as a comparator. The second algorithm is due to Wildes [15], who applied the Hough transform for localizing the iris and the Laplacian pyramid to encode the iris pattern. However, this algorithm has a high computational demand.
Another interesting approach was proposed by Sun and Tan [2] exploiting ordinal measures for iris feature representation. Unlike the traditional approaches that use quantitative values, the ordinal measure focuses on qualitative values to represent features. The multi-lobe differential filters have been applied for iris feature extraction to generate a 128-byte ordinal code for each iris image. Then, the error rates have been calculated based on the measured Hamming distances between two ordinal templates of the same class.
All the previous work assessed iris recognition performance under NIR. The demand for more accurate and robust biometric systems has increased with the expanded deployment of large-scale national identity programs. Hence, researchers have investigated iris recognition performance under different wavelengths or the possibility of fusing NIR and VL iris images to enhance recognition performance. Nevertheless, inspecting the correlation of NIR and VL iris images has been understudied, and the problem of cross-spectral iris recognition is still unsolved.
Boyce et al. [16] explored iris recognition performance under different wavelengths on a small multi-spectral iris database consisting of 120 images from 24 subjects. According to the authors, higher accuracy was achieved for the red channel compared to the green and blue channels. The study also suggested that cross-channel matching is feasible. However, the iris images were fully registered and captured under ideal conditions. In [17], the authors employed a feature fusion approach to enhance the recognition performance for iris images captured under both VL and NIR. The wavelet transform and discrete cosine transform were used for feature extraction, while the features were augmented with the ordered weighted average method to enhance the performance.
In Ngo et al. [18], a multi-spectral iris recognition system was implemented employing eight wavelengths ranging from 405 to 1550 nm. The results on a database of 392 iris images showed that the best performance was achieved at a wavelength of 800 nm. Cross-spectral experiments demonstrated that performance degraded with larger wavelength differences. Ross et al. [19] explored the performance of iris recognition at wavelengths beyond 900 nm. In their experiments, they investigated the possibility of observing different iris structures under different wavelengths and the potential of multi-spectral fusion for enhancing iris recognition performance. Similarly, Ives et al. [20] examined the performance of iris recognition over a wide range of wavelengths between 405 and 1070 nm. The study suggests that illumination wavelength has a significant effect on iris recognition performance. Hosseini et al. [8] proposed a feature extraction method for iris images taken under VL using a shape analysis method. A potential improvement in recognition performance was reported when combining features from both NIR and VL iris images taken from the same subject.
Recently, Alonso-Fernandez et al. [21] conducted comparisons on the iris and periocular modalities and their fusion under NIR and VL imaging. However, the images were not taken from the same subjects as the experiments were carried out on different databases (three databases contained close-up NIR images, and two others contained VL images). Unfortunately, this may not give an accurate indication about the iris performance as the images do not belong to the same subject. In [22], the authors suggested enhancing iris recognition performance in non-frontal images through multi-spectral fusion of iris pattern and scleral texture. Since the scleral texture is better seen in VL and the iris pattern is observed in NIR, multi-spectral fusion could improve the overall performance.
In terms of cross-spectral iris matching, the authors in [14] proposed an adaptive method to predict the NIR channel image from VL iris images using neural networks. Similarly, Burge and Monaco [23, 24] proposed a model to predict NIR iris images using features derived from the color and structure of the visible light iris images. Although the aforementioned approaches ([14, 23, 24]) achieved good results, their methods require the iris images to be fully registered. Unfortunately, this is not applicable in reality because it is very difficult to capture registered iris images from the same subject simultaneously.
In our previous work [25], we explored the differences in iris recognition performance across the VL and NIR spectra. In addition, we investigated the possibility of cross-channel matching between the VL and NIR imaging. The cross-spectral matching turns out to be challenging with an equal error rate (EER) larger than 27 %. Lately, Ramaiah and Kumar [26] emphasized the need for cross-spectral iris recognition and introduced a database of registered iris images and conducted experiments on iris recognition performance under both NIR and VL. This database is not available yet. The results of cross-spectral matching achieved an EER larger than 34 % which confirms the challenge of cross-spectral matching. The authors concluded their paper by: "it is reasonable to argue that cross-spectral iris matching seriously degrades the iris matching accuracy".
Proposed cross-spectral iris matching framework
Matching across iris images captured in VL and NIR is a challenging task because there are considerable differences among such images pertaining to different wavelength bands. Although, the appearance of different spectrum iris images looks different, the structure is the same as they belong to the same person. Therefore, we exploited various photometric normalization techniques and descriptors to alleviate these differences. In this context, we employed the Binarized Statistical Image Features (BSIF) descriptor [27], DoG filtering in addition to a collection of the photometric normalization techniques available from the INface Toolbox1 [28, 29]: adaptive single scale retinex, non-local means, wavelet based normalization, homomorphic filtering, multi-scale quotient, Tan and Triggs normalization, and multi-scale Weberface (MSW).
Among these illumination techniques and descriptors, the DoG, BSIF, and MSW are noticed to reduce the iris cross-spectral variations. These models are described in the next subsections.
Difference of Gaussian (DoG)
The DoG is a feature enhancement technique which depends on the difference of Gaussians filter to generate a normalized image by acting as a bandpass filter. This is achieved by subtracting two blurred versions of the original images from each other [30]. The blurred versions G(x,y) are obtained by convolving the original image I(x,y) with two Gaussian kernels having differing standard deviations as shown in Eq. (1):
$$ D\left(x,y|\sigma_{0},\sigma_{1}\right)=\left[G\left(x,y|\sigma_{0}\right)-G\left(x,y|\sigma_{1}\right)\right]*I(x,y), $$
where * is the convolution operator and G(x, y|σ) denotes a Gaussian kernel with standard deviation σ, defined as
$$ G(x,y|\sigma)=\frac{1}{2\pi\sigma^{2}}e^{-\left(x^{2}+y^{2}\right)/2\sigma^{2}} $$
Here, σ0 < σ1 to construct a bandpass filter. The values of σ0 and σ1 are empirically set to 1 and 2, respectively. The DoG filter has a low computational complexity and is able to alleviate illumination variation and aliasing. As there are variations in frequency content between VL and NIR images, the DoG filter is effective because it suppresses these variations and alleviates noise and aliasing, which paves the way for better cross-spectral matching [30].
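As a concrete illustration, the DoG normalization can be sketched in a few lines. This is not the authors' implementation, only a minimal example assuming the normalized iris image is available as a 2D floating-point array, using SciPy's Gaussian filtering with the σ0 = 1, σ1 = 2 values quoted above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_normalize(img, sigma0=1.0, sigma1=2.0):
    """Band-pass (DoG) normalization: subtract two Gaussian-blurred copies
    of the image, with sigma0 < sigma1 as described in the text."""
    img = np.asarray(img, dtype=np.float64)
    return gaussian_filter(img, sigma0) - gaussian_filter(img, sigma1)
```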
Binarized statistical image features (BSIF)
The BSIF descriptor [27] has been employed due to its ability to tolerate image degradations such as rotation and blurring. Generally speaking, feature extraction methods filter the image with a set of linear filters and then quantize the responses of those filters. In this context, the BSIF filters are learned by exploiting the statistics of natural images rather than being built manually. This has produced promising results for classifying texture in different biometric traits [31, 32].
For an image patch X of size l×l pixels and a linear filter W_i of the same size, the filter response s_i is obtained by

$$ s_{i}=\sum_{u,v} W_{i}(u,v)X(u,v)={w^{T}_{i}}x. $$

The binarized feature b_i is obtained from the response values by setting b_i = 1 if s_i > 0 and b_i = 0 otherwise. The filters are learned from natural images using independent component analysis by maximizing the statistical independence of the s_i. Two parameters control the BSIF descriptor: the number of filters (the length n of the bit string) and the filter size l. In our approach, we used the default set of filters2, which were learned from 5000 patches. Empirical results demonstrated that a filter size of 7×7 with 8 bits gives the best results.
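The encoding step can be sketched as follows; this is only an illustration of the filter-then-binarize idea, with random zero-mean filters standing in for the ICA-learned 7×7, 8-bit filter bank that the paper actually uses.

```python
import numpy as np
from scipy.ndimage import convolve

def bsif_encode(img, filters):
    """BSIF-style code: convolve the image with each filter, binarize the
    response at zero and pack the resulting bits into one integer per pixel."""
    img = np.asarray(img, dtype=np.float64)
    code = np.zeros(img.shape, dtype=np.int64)
    for i, w in enumerate(filters):
        response = convolve(img, w, mode='nearest')
        code = code | ((response > 0).astype(np.int64) << i)   # set bit i
    return code

# Placeholder bank: 8 random zero-mean 7x7 filters (NOT the learned BSIF filters).
rng = np.random.default_rng(0)
filters = [f - f.mean() for f in rng.standard_normal((8, 7, 7))]
```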
Multi-scale Weberfaces (MSW)
Inspired by Weber's law which states that the ratio of the increment threshold to the background intensity is a constant [33], the authors in [34] showed that the ratio between local intensity of a pixel and its surrounding variations is constant. Hence, in [34], the face image is represented by its reflectance and the illumination factor is normalized and removed using the Weberface model. Following this, we applied the Weberface model to the iris images to remove the illumination variations that result from the differences between the VL and NIR imaging, thus making the iris images illumination invariant.
Following the works of [28, 29], the Weberface algorithm has been applied with three scales using the following values: σ= [1 0.75 0.5], Neighbor=[9 25 49] and alfa= [2 0.2 0.02]. The steps of the Weberface algorithm are listed in Algorithm 1.
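A simplified sketch of the multi-scale Weberface computation is given below. The INface toolbox routine referred to in the text is more elaborate; here the sigma values are used for Gaussian pre-smoothing, the neighbourhood sizes (9, 25 and 49 pixels, read as 3×3, 5×5 and 7×7 windows) and alfa values follow the text, and the three scales are simply averaged. The window interpretation and the combination rule are assumptions, not details from the toolbox.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def weberface(img, sigma, nn, alfa, eps=1e-6):
    """Single-scale Weberface: arctan of alfa times the sum of differences
    between a pixel and its neighbours, divided by the pixel intensity."""
    i = gaussian_filter(np.asarray(img, dtype=np.float64), sigma)
    k = int(round(np.sqrt(nn)))                  # nn = 9 -> 3x3 window, etc.
    neighbour_sum = uniform_filter(i, size=k) * (k * k)
    diff_sum = (k * k) * i - neighbour_sum       # sum over window of (centre - neighbour)
    return np.arctan(alfa * diff_sum / (i + eps))

def multi_scale_weberface(img, sigmas=(1, 0.75, 0.5),
                          nns=(9, 25, 49), alfas=(2, 0.2, 0.02)):
    """Combine the three single-scale Weberfaces by simple averaging."""
    return np.mean([weberface(img, s, n, a)
                    for s, n, a in zip(sigmas, nns, alfas)], axis=0)
```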
Proposed scheme
The variations in iris appearance due to different sensors, spectral bands, and illumination conditions are believed to significantly degrade iris recognition performance. To overcome these artifacts, a robust method must be carefully designed. Extensive experiments demonstrated that using any one of the aforementioned methods alone is not sufficient to achieve acceptable iris recognition performance (EER > 17 %). Moreover, using the phase information of the Gabor filter rather than its amplitude is known to provide robustness to variations such as illumination changes, imaging contrast, and camera gain [7]. Hence, we propose to integrate the 1D log-Gabor filter [35] with DoG, BSIF, and MSW to produce the G-DoG, G-BSIF, and G-MSW descriptors (where G stands for Gabor), combined with decision-level fusion, to achieve robust cross-spectral iris recognition. The block diagram of the proposed framework is depicted in Fig. 1.
Block diagram of the proposed cross-spectral matching framework
Unlike previous works [14, 23, 24] in which they require fully registered iris images and learn models that lack the ability of generalization, our framework does not require any training and works on unregistered iris images. This combination along with its decision level fusion achieved encouraging results as illustrated in the next section.
In this work, our aim is to ascertain true cross-spectral iris matching using images taken from the same subject under the VL and NIR spectra. In addition, we investigate the iris biometric performance under different imaging conditions and the fusion of VL+NIR images to boost the recognition performance. The recognition performance is measured with the EER and the receiver operating characteristic (ROC) curves.
The experiments are conducted on the UTIRIS database [8] from the University of Tehran. This database contains two sessions with 1540 images; the first session was captured under VL while the second session was captured under NIR. Each session has 770 images taken from the left and right eye of 79 subjects where each subject has an average of five iris images.
Pre-processing and feature extraction
Typically, an iris recognition system operates by extracting and comparing the pattern of the iris in the eye image. After image acquisition, this involves four main steps, namely iris segmentation, normalization, feature extraction, and matching [7].
The UTIRIS database includes two types of iris images, half of which are captured in the NIR spectrum while the other half are captured under the VL spectrum. The VL session contains images in the sRGB color space which then are decomposed to the red, green, and blue channels. To segment the iris in the eye image, the circular Hough transform (CHT) is applied because the images used in our experiments were captured under a controlled environment so they can be segmented with circular approaches [36, 37].
It is noticed that the red channel gives the best segmentation results because the pupil region in this channel contains the smallest amount of reflection as shown in Figs. 2 and 3. The images in the VL session were down-sampled by two in each dimension to obtain the same size as the images in the NIR session. The segmented iris images are normalized with a resolution of 60×450 using the rubber sheet method [7].
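The rubber-sheet unwrapping onto the 60×450 grid can be sketched as below; this assumes, for simplicity, concentric circular pupil and iris boundaries (the hypothetical pupil_xy, pupil_r and iris_r arguments would come from the CHT segmentation step) and uses nearest-neighbour sampling rather than interpolation.

```python
import numpy as np

def rubber_sheet(eye, pupil_xy, pupil_r, iris_r, radial=60, angular=450):
    """Map the annular iris region between the pupil and iris boundaries onto
    a fixed radial x angular rectangle (Daugman's rubber-sheet model)."""
    cx, cy = pupil_xy
    thetas = np.linspace(0.0, 2.0 * np.pi, angular, endpoint=False)
    out = np.zeros((radial, angular), dtype=np.float64)
    for i, t in enumerate(np.linspace(0.0, 1.0, radial)):
        rho = pupil_r + t * (iris_r - pupil_r)   # radius between the two boundaries
        xs = np.clip(np.round(cx + rho * np.cos(thetas)).astype(int), 0, eye.shape[1] - 1)
        ys = np.clip(np.round(cy + rho * np.sin(thetas)).astype(int), 0, eye.shape[0] - 1)
        out[i, :] = eye[ys, xs]
    return out
```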
Green-yellow iris image decomposed into red, green, blue, and grayscale with the NIR counterpart
Brown iris image decomposed into red, green, blue, and grayscale with the NIR counterpart
After feature extraction, the Hamming distance is used to find the similarity between two IrisCodes in order to decide if the vectors belong to the same person or not. Then, the ROC curves and the EER are used to judge the iris recognition performance for the images in each channel as illustrated in the next subsections.
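A minimal masked Hamming-distance comparator in this spirit is sketched below; the shift range, the optional noise masks and the array layout are illustrative assumptions rather than the exact settings used in the paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None, max_shift=8):
    """Fractional Hamming distance between two boolean IrisCodes, taking the
    minimum over small horizontal shifts to absorb eye rotation."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        a = np.roll(code_a, s, axis=1)
        valid = np.ones_like(code_b, dtype=bool)
        if mask_a is not None and mask_b is not None:
            valid = np.roll(mask_a, s, axis=1) & mask_b
        n = valid.sum()
        if n:
            hd = np.count_nonzero(np.logical_xor(a, code_b) & valid) / n
            best = min(best, hd)
    return best
```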
NIR vs. VL performance
For feature extraction, the normalized iris image is convolved with the 1D log-Gabor filter, and the output of the filter is phase-quantized to four levels to form the binary iris vector [35].
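The row-wise 1D log-Gabor encoding can be sketched as follows; the centre wavelength and bandwidth values below are typical defaults (e.g. from Masek's open-source implementation [35]) and are assumptions here, not the paper's exact parameters.

```python
import numpy as np

def log_gabor_encode(norm_iris, wavelength=18.0, sigma_on_f=0.5):
    """Filter every row of the normalized iris with a 1D log-Gabor filter and
    quantize the phase of the complex response into two bits per sample."""
    cols = norm_iris.shape[1]
    freqs = np.fft.fftfreq(cols)                      # signed frequencies
    f0 = 1.0 / wavelength                             # filter centre frequency
    G = np.zeros(cols)
    pos = freqs > 0
    G[pos] = np.exp(-(np.log(freqs[pos] / f0) ** 2) /
                    (2.0 * np.log(sigma_on_f) ** 2))
    response = np.fft.ifft(np.fft.fft(norm_iris, axis=1) * G, axis=1)
    # Four-level phase quantization: one bit for the sign of each component.
    return np.stack([response.real > 0, response.imag > 0], axis=-1)
```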
We carried out experiments on each channel (i.e., NIR, red, green, and blue) and measured the performance using ROC curves and the EER. Figure 4 and Table 1 illustrate the EER and ROC curves for each channel. It can be seen that the best performance is achieved in the red channel with EER = 2.92 %, followed by the green channel with EER = 3.11 % and the grayscale channel with EER = 3.26 %, while the blue channel achieved the worst results with EER = 6.33 %. It is also noticed that the NIR images did not give the best performance for this database (EER = 3.45 %).
The performance of the iris recognition under red, green, blue, and NIR spectra
Table 1 EER (%) of different channels comparison on the UTIRIS database
This can be explained by the pigmentation of the irides: the red channel images achieved better results than the NIR images because most of the iris images in the UTIRIS database are light pigmented (see the discussion in the next subsection). Figure 5 shows the color distribution of the irides in the UTIRIS database.
The color distributions of the irides of the 79 subjects in the UTIRIS database
Light-eyed vs. dark-eyed
As mentioned before, capturing iris images under NIR light eliminates most of the rich melanin information because the chromophore of the human iris is only visible under VL [8, 9]. Therefore, light-pigmented irides exhibit more information under visible light. Figure 2 shows a green-yellow iris image captured under NIR and VL. It can be seen that the red channel reveals more information than the NIR image. So, intuitively, the recognition performance would be better for such images in the VL rather than the NIR spectrum.
In contrast, for dark-pigmented irides, the stromal features of the iris are only revealed under NIR and become hidden in VL, so texture-related rather than pigmentation-related information is revealed, as shown in Fig. 3. Therefore, for dark-pigmented irides, better recognition performance would be expected if the images were captured under the NIR spectrum.
Cross-spectral experiments
Cross-spectral study is important because it shows the feasibility of performing iris recognition in several security applications such as information forensics, security surveillance, and hazard assessment. Typically a person's iris images are captured under NIR but most of the security cameras operate in the VL spectrum. Hence, NIR vs. VL matching is desired.
In this context, we carried out the following comparisons using the traditional 1D log-Gabor filter: NIR vs. red, NIR vs. green, and NIR vs. blue. Figure 6 depicts the ROC curves of these comparisons. According to Fig. 6, the green and blue channels resulted in poor performance due to the large gap in the electromagnetic spectrum between these channels and the NIR band.
Cross-channel matching
On the contrary, the red channel gave the best performance compared to the green and blue channels. This can be attributed to the small gap in the wavelength of the red channel (780 nm) compared to the NIR (850 nm). Therefore, the comparisons of red vs. NIR is considered as the baseline for cross-spectral matching. Table 1 shows the EER of cross-channel matching experiments.
Cross-spectral matching
Cross-spectral performance turned out to be a challenging task with EER >27 % which is attributable to matching unregistered iris images from different spectral bands. Hence, to achieve an efficient cross-spectral matching, adequate transformations before the feature extraction are needed.
Different feature enhancement techniques are employed, out of which the DoG, MSW, and BSIF recorded the best results as shown in Table 2. Therefore, our proposed framework, which is depicted in Fig. 1, is based on these descriptors.
Table 2 Experiments on different descriptors for cross-spectral matching
For all cross-spectral experiments, we have adopted the leave-one-out approach to obtain the comparison results [38]. Hence, for each subject with (m) iris samples, we have set one sample as a probe and the comparison is repeated iteratively by swapping the probe with the remaining (m−1) samples. The experiments for each subject are repeated (m(m−1)/2) times, and the final performance is measured in terms of EER by taking the minimum of the obtained comparison scores of each subject.
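The comparison protocol just described can be expressed compactly; `compare` below is a stand-in for whichever cross-spectral comparator (e.g. the Hamming distance between a VL and an NIR template) is being evaluated.

```python
from itertools import combinations

def subject_scores(samples, compare):
    """All m(m-1)/2 pairwise comparisons among one subject's samples; the
    minimum score per subject is retained, as described in the text."""
    scores = [compare(a, b) for a, b in combinations(samples, 2)]
    return min(scores), scores
```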
Cross-spectral fusion
To further enhance the performance of cross-spectral matching, the fusion of the G-DoG, G-BSIF, and G-MSW is considered. Different fusion methods are investigated namely, feature fusion, score fusion and decision fusion, out of which the decision fusion is observed to be the most effective.
Table 3 shows the performance of different fusion strategies for cross-spectral matching in terms of EER. Feature fusion gave poor results, with the EER varying from 14 to 18 %. Score-level fusion with the minimum rule achieved better results. The AND-rule decision-level fusion achieved the best results with EER = 6.81 %.
Table 3 Experiments on different fusion strategies for cross-spectral matching
A low false accept rate (FAR) is preferred to achieve a secure biometric system. To enhance the performance of our system and reduce the FAR, a fusion at the decision level is performed. Thus, the conjunction "AND" rule is used to combine the decisions from the G-DoG, G-BSIF, and G-MSW. This means that a false accept can only happen when all the previous descriptors produce a false accept [39].
Let P_D(FA), P_S(FA), and P_M(FA) represent the probabilities of a false accept using G-DoG, G-BSIF, and G-MSW, respectively. Similarly, P_D(FR), P_S(FR), and P_M(FR) represent the probabilities of a false reject. Under the AND rule, the combined probability of a false accept P_C(FA) is the product of the three probabilities of the descriptors:

$$ P_{C}(FA)=P_{D}(FA)\,P_{S}(FA)\,P_{M}(FA). $$

On the other hand, the combined probability of a false reject P_C(FR) is the complement of the probability that none of the descriptors produces a false reject:

$$ \begin{aligned} P_{C}(FR)&=1-\left(1-P_{D}(FR)\right)\left(1-P_{S}(FR)\right)\left(1-P_{M}(FR)\right) \\ &=P_{D}(FR)+P_{S}(FR)+P_{M}(FR) \\ &\quad-P_{D}(FR)P_{S}(FR)-P_{D}(FR)P_{M}(FR)-P_{S}(FR)P_{M}(FR) \\ &\quad+P_{D}(FR)P_{S}(FR)P_{M}(FR). \end{aligned} $$
It can be seen from the previous equations that the joint probability of false rejection increases while the joint probability of false acceptance decreases when using the AND conjunction rule.
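As a quick numeric illustration of this trade-off (the per-descriptor error rates below are invented for the example and are not results from the paper):

```python
# Hypothetical per-descriptor error rates for G-DoG, G-BSIF and G-MSW.
fa = [0.01, 0.01, 0.01]     # false-accept rates
fr = [0.10, 0.10, 0.10]     # false-reject rates

fa_and = fa[0] * fa[1] * fa[2]                         # accepted only if all accept
fr_and = 1 - (1 - fr[0]) * (1 - fr[1]) * (1 - fr[2])   # rejected if any one rejects

print(fa_and)   # 1e-06  -> the combined FAR drops sharply
print(fr_and)   # 0.271  -> at the cost of a higher combined FRR
```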
All the previous descriptors (G-DoG, G-BSIF, and G-MSW) are considered as local descriptors. It can be argued that the fusion of local and global features could enhance the performance further. We wish to remark that fusing the local and global features would require further stages to augment the resultant global and local scores as they will be in different range/type [40]. Such stages will increase the complexity of the cross-spectral framework. We have carefully designed the proposed framework so that all three descriptors (G-DoG, G-BSIF, and G-MSW) generate homogenous scores (binary template). Therefore, a single comparator (Hamming distance) can be quickly used for score matching.
Multi-spectral iris recognition
The VL and NIR images in the UTIRIS database are not registered; therefore, they provide different iris texture information. The cross-channel comparisons demonstrated that the red and NIR channels are the most suitable candidates for fusion, as they gave the lowest EER compared to the other channels (Figs. 4 and 6), so it is natural to fuse them in order to boost recognition performance. Score-level fusion is adopted in this paper due to its efficiency and low complexity [41]. Hence, we combined the matching scores (Hamming distances) from the red and NIR images using sum rule-based fusion with equal weights to generate a single matching score. After that, the recognition performance was evaluated again with ROC curves and the EER.
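The equal-weight sum-rule fusion of the red-channel and NIR Hamming distances amounts to a one-liner; the sketch below simply averages the two scores for each comparison (the 0.5 weights reflect the equal-weight assumption stated in the text).

```python
import numpy as np

def fuse_sum_rule(hd_red, hd_nir, w_red=0.5, w_nir=0.5):
    """Combine two matching scores (Hamming distances) into one by a weighted sum."""
    return w_red * np.asarray(hd_red) + w_nir * np.asarray(hd_nir)
```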
It is evident from Fig. 7 that such fusion is useful to the iris biometric as there is a significant improvement in the recognition performance after the fusion with EER of only 0.54 % compared to 2.92 and 3.45 % before the fusion.
ROC curves showing the iris recognition performance before and after fusing the information of the red and NIR channel
Comparisons with related work
Although the previous works [14, 23, 24] reported good results in terms of cross-spectral iris matching, it must be noted that these works adopted fully registered iris images and learned models that lack the ability to generalize.
In the works of [25, 42], the results of cross-spectral matching on unregistered iris images were reported. However, no models were proposed to enhance the cross-spectral iris matching. Table 4 shows the comparison results of the aforementioned works compared to our method.
Table 4 Cross-spectral matching comparison with different methods
All experiments were conducted on a 3.2-GHz Core i5 PC with 8 GB of RAM under the Matlab environment. The proposed framework consists of four main components, namely the BSIF, DoG, and MSW descriptors and the 1D log-Gabor filter. The processing times of the 1D log-Gabor filter, BSIF, and DoG are 10, 20, and 70 ms, respectively, while the MSW processing time is 330 ms. Therefore, the total computation time of the proposed method is less than half a second, which implies its suitability for real-time applications.
In this paper, a novel framework for cross-spectral iris matching was proposed. In addition, this work highlights the applications and benefits of using multi-spectral iris information in iris recognition systems. We investigated iris recognition performance under different imaging channels: red, green, blue, and NIR. The experiments were carried out on the UTIRIS database, and the performance of the iris biometric was measured.
We drew the following conclusions from the results. According to Table 2, among a variety of descriptors, the difference of Gaussian (DoG), BSIF, and multi-scale Weberface (MSW) were found to give good cross-spectral performance after integrating them with the 1D log-Gabor filter. Table 4 and Fig. 6 showed a significant improvement in the cross-spectral matching performance using the proposed framework.
In terms of multi-spectral iris performance, Fig. 4 showed that the red channel achieved better performance compared to other channels or the NIR imaging. This can be attributed to the large number of the light-pigmented irides in the UTIRIS database. It was also noticed from Fig. 6 that the performance of the iris recognition varied as a function of the difference in wavelength among the image channels. Fusion of the iris images from the red and NIR channels notably improved the recognition performance. The results implied that both the VL and NIR imaging were important to form a robust iris recognition system as they provided complementary features for the iris pattern.
1 http://luks.fe.uni-lj.si/sl/osebje/vitomir/face_tools/INFace/
2 http://www.ee.oulu.fi/~jkannala/bsif/bsif.html
Daugman J (1993) High confidence visual recognition of persons by a test of statistical independence. Pattern Anal Mach Intell IEEE Trans 15(11): 1148–1161.
Sun Z, Tan T (2009) Ordinal measures for iris recognition. IEEE Trans Pattern Anal Mach Intell 31(12): 2211–2226.
Daugman J (2006) Probing the uniqueness and randomness of IrisCodes: results from 200 billion iris pair comparisons. Proc IEEE 94(11): 1927–1935.
Grother PJ, Quinn GW, Matey JR, Ngan ML, Salamon WJ, Fiumara GP, Watson CI (2012) IREX III: performance of iris identification algorithms, Report, National Institute of Standards and Technology.
Jain AK, Nandakumar K, Ross A (2016) 50 Years of biometric research: accomplishments, challenges, and opportunities. Pattern Recognit Lett 79: 80–105.
Daugman J (2007) Evolving methods in iris recognition. IEEE International Conference on Biometrics: Theory, Applications, and Systems, (BTAS07), (online). http://www.cse.nd.edu/BTAS_07/John_Daugman_BTAS.pdf. Accessed Sept 2016.
Daugman J (2004) How iris recognition works. IEEE Trans Circ Syst Video Technol 14(1): 21–30.
Hosseini MS, Araabi BN, Soltanian-Zadeh H (2010) Pigment melanin: pattern for iris recognition. IEEE Trans Instrum Meas 59(4): 792–804.
Meredith P, Sarna T (2006) The physical and chemical properties of eumelanin. Pigment Cell Res 19(6): 572–594.
Dong W, Sun Z, Tan T (2009) A design of iris recognition system at a distance In: Chinese Conference on Pattern Recognition, (CCPR 2009), 1–5.. IEEE, Nanjing. http://ieeexplore.ieee.org/document/5344030/.
Proenca H, Filipe S, Santos R, Oliveira J, Alexandre LA (2010) The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance. IEEE Trans Pattern Anal Mach Intell 32(8): 1529–1535.
Bowyer KW (2012) The results of the NICE.II iris biometrics competition. Pattern Recognit Lett 33(8): 965–969.
De Marsico M, Nappi M, Riccio D, Wechsler H (2015) Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit Lett 57(0): 17–23.
Jinyu Z, Nicolo F, Schmid NA (2010) Cross spectral iris matching based on predictive image mapping In: Fourth IEEE International Conference on Biometrics: Theory Applications and Systems (BTAS'10), 1–5.. IEEE, Washington D.C.
Wildes RP (1997) Iris recognition: an emerging biometric technology. Proc IEEE 85(9): 1348–1363.
Boyce C, Ross A, Monaco M, Hornak L, Xin L (2006) Multispectral iris analysis: a preliminary study In: Computer Vision and Pattern Recognition Workshop, 51–51.. IEEE, New York, doi:10.1109/CVPRW.2006.141. http://www.cse.msu.edu/~rossarun/pubs/RossMSIris_CVPRW06.pdf. Accessed Sept 2016.
Tajbakhsh N, Araabi BN, Soltanianzadeh H (2008) Feature fusion as a practical solution toward noncooperative iris recognition In: 11th International Conference on Information Fusion, 1–7.. IEEE, Cologne.
Ngo HT, Ives RW, Matey JR, Dormo J, Rhoads M, Choi D (2009) Design and implementation of a multispectral iris capture system In: Asilomar Conference on Signals, Systems and Computers, 380–384.. IEEE, Pacific Grove.
Ross A, Pasula R, Hornak L (2009) Exploring multispectral iris recognition beyond 900 nm In: IEEE 3rd International Conference on Biometrics: Theory, Applications, and Systems. (BTAS'09), 1–8.. IEEE, Washington D.C.
Ives RW, Ngo HT, Winchell SD, Matey JR (2012) Preliminary evaluation of multispectral iris imagery In: IET Conference on Image Processing (IPR 2012), 1–5.. IET, London.
Alonso-Fernandez F, Mikaelyan A, Bigun J (2015) Comparison and fusion of multiple iris and periocular matchers using near-infrared and visible images In: 2015 International Workshop on Biometrics and Forensics (IWBF), 1–6.. IEEE, Gjøvik.
Crihalmeanu SG, Ross AA (2016) Multispectral Ocular Biometrics. In: Bourlai T (ed)Face Recognition Across the Imaging Spectrum, 355–380.. Springer International Publishing, Cham. ISBN:978-3-319-28501-6. doi:10.1007/978-3-319-28501-6_15.
Burge MJ, Monaco MK (2009) Multispectral iris fusion for enhancement, interoperability, and cross wavelength matching In: Proceeding of SPIE 7334, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV, Vol. 7334.. SPIE. pp 73341D–1–73341D–8. doi:10.1117/12.819058. http://spie.org/Publications/Proceedings/Paper/10.1117/12.819058. Accessed Sept 2016.
Burge M, Monaco M (2013) Multispectral iris fusion and cross-spectrum matching. Springer, London. pp 171–181.
Abdullah MAM, Chambers JA, Woo WL, Dlay SS (2015) Iris biometric: is the near-infrared spectrum always the best? In: 3rd Asian Conference on Pattern Recognition (ACPR2015), 816–819.. IEEE, Kuala Lumpur, doi:10.1109/ACPR.2015.7486616, http://ieeexplore.ieee.org/document/7486616/. Accessed Sept 2016.
Ramaiah NP, Kumar A (2016) Advancing Cross-Spectral Iris Recognition Research Using Bi-Spectral Imaging. In: Singh R, Vatsa M, Majumdar A, Kumar A (eds)Machine Intelligence and Signal Processing, 1–10.. Springer India, New Delhi. ISBN:978-81-322-2625-3. doi:10.1007/978-81-322-2625-3_1.
Kannala J, Rahtu E (2012) BSIF: binarized statistical image features In: 21st International Conference on Pattern Recognition (ICPR), 1363–1366.. IEEE, Tsukuba Science City.
Štruc V, Pavesic N (2009) Gabor-based kernel partial-least-squares discrimination features for face recognition. Informatica 20(1): 115–138.
Štruc V, Pavesic N (2011) Photometric Normalization Techniques for Illumination Invariance. Advances in Face Image Analysis: Techniques and Technologies. IGI Global. pp 279–300.
Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19(6): 1635–1650.
Arashloo SR, Kittler J (2014) Class-specific kernel fusion of multiple descriptors for face verification using multiscale binarised statistical image features. IEEE Trans Inf Forensic Secur 9(12): 2100–2109.
Li X, Bu W, Wu X (2015) Palmprint Liveness Detection by Combining Binarized Statistical Image Features and Image Quality Assessment. In: Yang J, Yang J, Sun Z, Shan S, Zheng W, Feng J (eds)Biometric Recognition: 10th Chinese Conference, CCBR 2015, Tianjin, China, November 13-15, 2015, Proceedings, 275–283.. Springer International Publishing, Cham. ISBN:978-3-319-25417-3. doi:10.1007/978-3-319-25417-3_33.
Jain AK (1989) Fundamentals of digital image processing. Prentice-Hall, Inc., Upper Saddle River.
Wang B, Li W, Yang W, Liao Q (2011) Illumination normalization based on Weber's law with application to face recognition. IEEE Signal Process Lett 18(8): 462–465.
Masek L, Kovesi P (2003) MATLAB source code for a biometric identification system based on iris patterns.
Abdullah MAM, Dlay SS, Woo WL, Chambers JA (2016) Robust iris segmentation method based on a new active contour force with a noncircular normalization. IEEE Trans Syst Man Cybern Syst PP(99): 1–14. doi:10.1109/TSMC.2016.2562500. http://ieeexplore.ieee.org/document/7473859/. Accessed Sept 2016.
Abdullah MAM, Dlay SS, Woo WL (2014) Fast and accurate pupil isolation based on morphology and active contour In: The 4th International conference on Signal, Image Processing and Applications, Vol. 4, 418–420.. IACSIT, Nottingham.
Raja KB, Raghavendra R, Vemuri VK, Busch C (2015) Smartphone based visible iris recognition using deep sparse filtering. Pattern Recogn Lett 57: 33–42.
Maltoni D, Maio D, Jain A, Prabhakar S (2003) Multimodal biometric systems. Springer, New York. pp 233–255.
Fang Y, Tan T, Wang Y (2002) Fusion of global and local features for face verification In: 16th International Conference on Pattern Recognition, Vol. 2, 382–385.. IEEE, Quebec City.
He M, Horng S-J, Fan P, Run R-S, Chen R-J, Lai J-L, Khan MK, Sentosa KO (2010) Performance evaluation of score level fusion in multimodal biometric systems. Pattern Recognit 43(5): 1789–1800.
Wild P, Radu P, Ferryman J (2015) On fusion for multispectral iris recognition In: 2015 International Conference on Biometrics (ICB), 31–37.. IEEE, Phuket.
Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7): 971–987.
The first author would like to thank the Ministry of Higher Education and Scientific Research (MoHESR) in Iraq for supporting this work.
MA carried out the design of the iris cross-spectral matching framework, performed the experiments, and drafted the manuscript. SD participated in the design of the framework and helped in drafting the manuscript. WW participated in the comparison experiments and helped in drafting the manuscript. JC guided the work, supervised the experimental design, and helped in drafting the manuscript. All authors read and approved the final manuscript.
ComS2IP Group, School of Electrical and Electronic Engineering, Newcastle University, England, UK
Mohammed A. M. Abdullah, Satnam S. Dlay, Wai L. Woo & Jonathon A. Chambers
Department of Computer and Information Engineering, Ninevah University, Nineveh, Iraq
Mohammed A. M. Abdullah
Satnam S. Dlay
Wai L. Woo
Jonathon A. Chambers
Correspondence to Mohammed A. M. Abdullah.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Abdullah, M.A., Dlay, S.S., Woo, W.L. et al. A novel framework for cross-spectral iris matching. IPSJ T Comput Vis Appl 8, 9 (2016). https://doi.org/10.1186/s41074-016-0009-9
Multi-spectral recognition
Photometric normalization
Score fusion
May 2019, 18(3): 1177-1203. doi: 10.3934/cpaa.2019057
The Cauchy problem for a family of two-dimensional fractional Benjamin-Ono equations
Eddye Bustamante, José Jiménez Urrea and Jorge Mejía
Departamento de Matemáticas, Universidad Nacional de Colombia, A. A. 3840 Medellín, Colombia
Received: March 2018; Revised: August 2018; Published: November 2018
In this work we prove that the initial value problem (IVP) associated to the fractional two-dimensional Benjamin-Ono equation

$\left. \begin{array}{rl} u_t+D_x^{\alpha} u_x +\mathcal{H}u_{yy} +uu_x & = 0, \qquad (x, y)\in\mathbb{R}^2, \; t\in\mathbb{R}, \\ u(x, y, 0)& = u_0(x, y), \end{array} \right\},$

where $0 < \alpha\leq 1$, $D_x^{\alpha}$ denotes the operator defined through the Fourier transform by

$(D_x^{\alpha}f)\widehat{\;\;}(\xi, \eta) := |\xi|^{\alpha}\widehat{f}(\xi, \eta), \qquad (0.1)$

and $\mathcal{H}$ denotes the Hilbert transform with respect to the variable $x$, is locally well posed in the Sobolev space $H^s(\mathbb{R}^2)$ with $s>\frac32+\frac14(1-\alpha)$.
Keywords: Benjamin-Ono equation.
Mathematics Subject Classification: 35Q53.
Citation: Eddye Bustamante, José Jiménez Urrea, Jorge Mejía. The Cauchy problem for a family of two-dimensional fractional Benjamin-Ono equations. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1177-1203. doi: 10.3934/cpaa.2019057
Eddye Bustamante José Jiménez Urrea Jorge Mejía | CommonCrawl |
Effect of sample orientation on the microstructure and microhardness of additively manufactured AlSi10Mg processed by high-pressure torsion
Shahir Mohd Yusuf, Mathias Hoegden & Nong Gao
The International Journal of Advanced Manufacturing Technology (2020)
For the first time, high-pressure torsion (HPT) was applied to additively manufactured AlSi10Mg built in two directions (vertical and horizontal) by selective laser melting (SLM), and the influence of extreme torsional strain on the porosity, microstructure and microhardness of the alloy was investigated. ImageJ analysis indicates that significant porosity reduction is achieved by 1/4 HPT revolution (low strain). Optical microscopy (OM) and scanning electron microscopy (SEM) observations reveal the steady distortion and elongation of the melt pools, the continuous elongation of the cellular-dendritic Al matrix and breakage of the eutectic Si phase network with increased HPT revolutions. Microhardness measurements indicate that despite the significant increase in hardness attained from HPT processing, hardness saturation and microstructural homogeneity are not achieved even after 10 HPT revolutions. X-ray diffraction (XRD) line broadening analysis demonstrates increased dislocation densities with increased HPT revolutions, which contributes to the considerably higher hardness values compared to as-received samples.
Powder bed fusion laser additive manufacturing (L-PBF AM) techniques, such as selective laser melting (SLM) and electron beam melting (EBM), have emerged as attractive methods of fabricating metallic components for a wide range of applications, including automotive and aerospace. These techniques are able to manufacture engineering components with intricate structures and tailored microstructures [1]. Compared to traditional manufacturing routes, AM also offers short lead times and low material wastage while maintaining accuracy and high resolution for the built structure [2,3,4]. SLM is one of the most common L-PBF AM technologies; it builds metallic parts by selectively laser scanning a powder bed in a layer-by-layer fashion (a bottom-up approach), guided by a computer-aided design (CAD) model, until a complete three-dimensional (3D) structure is formed.
To date, a wide range of materials have been processed by SLM, including stainless steels [5,6,7], Ni-based alloys [8,9,10], Ti-based alloys [11,12,13] and Al-based alloys [14,15,16]. Various reports have suggested improvements in mechanical properties for AM-fabricated parts compared to their traditional counterparts, including higher yield and tensile strengths [17, 18], better corrosion resistance [19,20,21] and enhanced fatigue life [22]. Such improvements are attributed to the unique and very fine microstructure that results from rapid solidification due to the short laser–material interaction time and high cooling rates of AM processes (10³–10⁸ K s⁻¹) [23,24,25]. However, the high residual stress, porosity and other defects that often exist in as-manufactured parts cause some concern, which means that some form of post-processing, e.g. hot isostatic pressing (HIP) or heat treatment, is usually required before the parts are ready for service [26,27,28].
On the other hand, severe plastic deformation (SPD) is a unique metal processing technique that introduces large amounts of strain into bulk metallic materials to achieve extreme grain refinement down to the sub-micron (0.1–1 μm) or even nanoscale (< 100 nm), collectively termed ultrafine-grained (UFG) microstructures [29, 30]. High-pressure torsion (HPT) is one of the most effective SPD techniques for attaining UFG microstructures with large fractions of high-angle grain boundaries by imposing significantly high torsional strains on disk-shaped materials. Various studies have shown improvements in the properties of materials processed by SPD, in particular by HPT, compared to cast or wrought materials. Examples include higher yield and tensile strengths [31, 32], enhanced wear and corrosion performance [33, 34] and superplasticity at room temperature [35, 36]. The high strengths in SPD-processed metals are largely attributed to the Hall-Petch effect from the UFG microstructure and to dislocation strengthening from the large amounts of dislocations introduced during SPD processing. Furthermore, reports have also emerged on attaining pore-free metallic materials via HPT, e.g. in pure Cu [37] and β-Ti alloys [38].
AlSi10Mg is a traditional hypoeutectic Al–Si-based cast alloy that is highly attractive for aerospace, marine and automotive applications due to its light weight, high specific strength, good corrosion resistance and low thermal expansion [39,40,41,42]. AlSi10Mg possesses good weldability due to its near Al–Si eutectic composition, in which the size and morphology of the eutectic Si significantly influence the overall mechanical properties of the alloy [43, 44]. Coarse, acicular eutectic Si often causes cracks in cast AlSi10Mg (cooling rate < 10² K s⁻¹) when placed under tensile load, resulting in poor mechanical performance [45]. Therefore, rapid solidification (10³–10⁸ K s⁻¹) is desired to attain a refined Si phase distributed homogeneously in the Al matrix and thus better mechanical properties [46, 47]. This can be readily achieved by AM processes, and various studies have indeed shown enhancements in the strength, ductility and other properties of AM-fabricated AlSi10Mg compared to cast or wrought counterparts [48,49,50,51].
For example, two distinct microstructures can be observed in SLM-fabricated AlSi10Mg as a result of the rapid heating/cooling cycle: (a) cellular-dendritic α-Al structures and (b) a fine, fibrous eutectic Si phase network around the α-Al phase [48, 49, 52]. Such unique microstructures contribute to the improved mechanical properties of SLM AlSi10Mg, e.g. strength and ductility under quasi-static loading [50, 51, 53]. Kempen et al. [48] established a process parameter window comprising laser power, P, and scan speed, v, to achieve optimum densification levels in SLM AlSi10Mg. Wei et al. [49] found that the porosity in SLM AlSi10Mg can be controlled (but not eliminated) by adjusting the energy density parameters, particularly the laser power, P, and laser scan spacing, h. Furthermore, Hadadzadeh et al. [54], Chen et al. [55] and Li et al. [56] all agree that the strengthening in SLM AlSi10Mg is caused by Orowan strengthening, the Hall-Petch effect, dislocation hardening, or a combination of these mechanisms.
On the other hand, while AlSi10Mg has not yet been processed by HPT, there is a vast amount of literature on the HPT processing of other Al–Si-based alloys, e.g. Al-2Si, Al-7Si and Al-12Si [57,58,59,60]. For example, Wang et al. [57] attributed the enhanced hardness and corrosion performance of HPT-processed Al-7Si to the breakage of coarse Si particles and intermetallic phases, microstructural homogeneity and increased active sites due to HPT. Mungole et al. [58] found a correlation between microhardness and microstructural homogeneity in HPT-processed Al-7Si. Furthermore, El Aal et al. [59] attributed the improved wear properties of HPT-processed Al-7wt.%Si to increased microhardness due to grain refinement down to the nano-scale and the uniform dispersion of Si particles.
In fact, it is only very recently that the combination of SLM and HPT for 316L stainless steel was studied by the current authors [20]. It was found that HPT processing is able to substantially reduce porosity even at low strain levels, achieving nano-scale grain sizes and resulting in enhanced microhardness and corrosion performance. Accordingly, the objective of this study is to investigate the influence of HPT processing on the porosity, microstructure and microhardness of SLM-manufactured AlSi10Mg built in both vertical and horizontal orientations and processed through various numbers of HPT revolutions (strain levels) at room temperature. The microstructural evolution of the HPT-processed alloy is characterized by optical microscopy (OM), scanning electron microscopy (SEM), Vickers microhardness (HV) measurements and X-ray diffraction (XRD).
Gas-atomised AlSi10Mg powder supplied by Concept Laser was used in this study with the chemical composition shown in Table 1.
Table 1 Chemical composition of AlSi10Mg used in this study (wt%)
A Concept Laser M2 SLM machine was used to additively manufacture the AlSi10Mg samples. The processing parameters were those recommended by Concept Laser, as listed in Table 2. The samples were built in a chamber with an N2 gas environment at room temperature, and an alternating bi-directional scan strategy was used in this study [61].
Table 2 Processing parameters used to fabricate AlSi10Mg via SLM in this study
To investigate the influence of build orientation on the microstructure and microhardness of HPT-processed SLM AlSi10Mg, two 200 mm long cylindrical rods with a diameter of 10 mm each were manufactured in vertical and horizontal orientations, respectively, as shown in Fig. 1. In both cases, the laser beam was scanned on the powder bed along the x-y plane and moved vertically upwards along the z-axis.
Schematic of sample orientations, build direction and laser beam moving plane for the two cylindrical rods
For HPT processing, the cylindrical rods were sliced into thin disks with a thickness of 1 mm and a diameter of ~ 9.8 mm each, before being ground down with 800 grit SiC paper to a thickness of ~ 0.85 mm. Quasi-constrained HPT processing was conducted at room temperature using an HPT facility with two anvils, each containing a circular cavity of 0.25 mm depth and 10 mm diameter [62,63,64]. Each disk was placed inside the cavity of the lower anvil before the lower anvil was pushed upwards into the cavity of the upper anvil via a compressive force. A small gap between the anvils allows some material outflow during HPT torsional straining. The disks were then deformed under a pressure of 6 GPa by rotating the lower anvil at 1 rpm, from 1/4 to 10 revolutions. The equivalent von Mises strain, εeq, imposed by HPT straining can be calculated using the following equation [30]:
$$ {\varepsilon}_{eq}=\frac{2\pi NR}{h\sqrt{3}} $$
where N is the number of revolutions, R the distance from disk centre and h the initial disk thickness.
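As a quick check on the strain levels discussed later, the short sketch below evaluates this relation numerically. It is only a minimal illustration: the effective disk thickness of ~ 0.8 mm used here is an assumed value (slightly below the initial 0.85 mm, since quasi-constrained HPT permits some material outflow), not a measured one.

```python
import math

def eq_strain(n_rev: float, r_mm: float, h_mm: float) -> float:
    """Equivalent von Mises strain imposed by HPT: eps_eq = 2*pi*N*R / (h*sqrt(3))."""
    return (2.0 * math.pi * n_rev * r_mm) / (h_mm * math.sqrt(3.0))

h_eff = 0.8  # assumed effective disk thickness in mm (hypothetical)
for n in (0.25, 1, 10):
    edge = eq_strain(n, 3.0, h_eff)   # 3 mm from the disk centre
    print(f"N = {n:>4}: eps_eq = 0 at the centre, ~{edge:.1f} at R = 3 mm")
# With these assumptions, N = 1 gives eps_eq ~ 13.6 and N = 10 gives ~ 136 at R = 3 mm,
# consistent with the strain values quoted in the microstructure discussion below.
```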
In this paper, disk samples sliced from the vertically built rod are denoted with N-V, while disk samples sliced from the horizontally built rod are denoted as N-H, where N corresponds to the number of HPT revolutions; e.g. 1/4-V (1/4 HPT revolution, vertical), 1-H (1 HPT revolution, horizontal). On the other hand, as-received disks are denoted as AR-V (vertical) and AR-H (horizontal) respectively.
The porosity content in the as-received and HPT-processed disks was analysed using ImageJ analysis software based on images taken from OM and SEM observations following the procedures described in Ref. [20]. The microstructures of the as-received and the HPT-processed disks were characterized via OM and SEM after being ground using different grits of SiC paper, polished using 3 and 1 μm Al2O3 suspensions to a mirror-like surface finish and finally etched with Keller's reagent. OM and SEM observations were conducted on the disk surface, i.e. the circular cross-section of the rods. Thus, for vertically built samples, the disk surface is parallel to the x-y plane, while the disk surface for horizontally built samples is parallel to the x-z plane.
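Conceptually, the ImageJ porosity analysis reduces to thresholding the polished-surface micrographs and measuring the area fraction and equivalent diameter of the dark pore regions. The Python sketch below illustrates this idea only; the file name, pixel calibration and grey-level threshold are hypothetical placeholders rather than values used in this study.

```python
import numpy as np
from scipy import ndimage
from imageio.v3 import imread

img = imread("polished_surface.png")        # hypothetical micrograph file
if img.ndim == 3:                           # collapse RGB to a single grey channel
    img = img.mean(axis=-1)

um_per_px = 0.5                             # assumed pixel calibration (um/pixel)
threshold = 60                              # pores appear dark; value is image-dependent

pores = img < threshold                     # binary pore mask
labels, n_pores = ndimage.label(pores)      # label connected pore regions
areas_px = ndimage.sum(pores, labels, index=np.arange(1, n_pores + 1))

porosity_pct = 100.0 * pores.mean()                        # pore area fraction
eq_diam_um = 2.0 * np.sqrt(areas_px / np.pi) * um_per_px   # equivalent circular diameters

print(f"Porosity: {porosity_pct:.3f} %, mean pore diameter: {eq_diam_um.mean():.1f} um")
```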
Two different sets of microhardness measurements were recorded using a Future Tech FM-300 Vickers hardness testing machine under an applied load of 500 gf with a dwell time of 15 s. The results are expressed in two ways: as plots of hardness against distance from the centre of the disk, and as 2D colour-coded contour maps over the disk surface. First, a series of HV measurements was taken across the diameter of the disks with a spacing of 0.3 mm between indentation points. To improve the accuracy of the measurements, four further HV measurements were made at distances of 0.15 mm around each initial indentation point; the five readings were averaged and the error bars recorded. Second, microhardness measurements were taken over the whole surface of the disks in a rectilinear grid mapping pattern determined by x- and y-coordinates, with the coordinates (0, 0) defined as the centre of the disk. The indentation points were spaced 0.3 mm apart.
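The bookkeeping behind these two data sets is simple: averaging five indents per radial position, and collecting indents on a 0.3 mm rectilinear grid for the contour maps. The sketch below illustrates it with synthetic hardness numbers; none of the values are measurements from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Radial profile: for each nominal position (0.3 mm apart), the initial indent plus
# four neighbouring indents (0.15 mm away) are averaged and an error bar recorded.
positions_mm = np.arange(-4.5, 4.5 + 1e-9, 0.3)
hv_readings = rng.normal(180.0, 8.0, size=(positions_mm.size, 5))   # placeholder HV data
hv_mean = hv_readings.mean(axis=1)
hv_err = hv_readings.std(axis=1, ddof=1)

# Surface map: indents on a 0.3 mm grid centred at (0, 0); points falling outside the
# ~9.8 mm diameter disk are masked before plotting the colour-coded contours.
x = y = np.arange(-4.5, 4.5 + 1e-9, 0.3)
xx, yy = np.meshgrid(x, y)
hv_map = rng.normal(180.0, 8.0, size=xx.shape)                      # placeholder HV data
hv_map = np.where(xx**2 + yy**2 <= 4.9**2, hv_map, np.nan)
```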
The phase composition and dislocation density, ρ, of the as-received and HPT-processed disks were determined by X-ray diffraction (XRD) analysis using a Rigaku SmartLab X-ray diffractometer equipped with a graphite monochromator and CuKα radiation, with 10 steps per degree, a count time of 1 s per step and a slit length of 5 mm. The phase composition was determined from the XRD peaks and peak broadening data, while the dislocation density was estimated from the microstrain, ε, and crystallite size, Dc, using the Materials Analysis Using Diffraction (MAUD) software based on the Rietveld refinement method [65,66,67].
Porosity evolution
Spherical and non-spherical pores were observed on the polished samples shown in Fig. 2a. The spherical pores are also known as gas-induced porosity, which could be caused by the entrapment of inert gas in the melt pool during the melting of powder or may already exist inside the initial raw powder and then remain in the finished structure [68, 69]. The irregular-shaped pores are known as process-induced porosity, which could arise from incomplete melting of the powder as the result of insufficient energy, or due to spatter ejection from the powder bed upon contact with the laser beam [25].
a OM image showing types of porosity in the as-received AlSi10Mg. b SEM image showing unmelted powder region. c Dashed circle showing gas-induced porosity in AR-V sample. d Process-induced porosity at the melt pool boundaries (MPB) in AR-H sample
The SEM image in Fig. 2b shows an unmelted powder region causing lack-of-fusion porosity (a type of process-induced porosity) as the result of inadequate energy input at that particular region of the powder bed. The etched OM image in Fig. 2c shows a mix of gas-induced pores (dashed yellow circle) and process-induced pores within the melt pool microstructure of the vertically built samples (AR-V), while the etched OM image in Fig. 2d indicates that process-induced pores (solid red circle) are more apparent in the horizontally built samples (AR-H). The lack of gas-induced porosity in the AR-H samples may be the result of the continuous melting and re-melting of successive layers during the SLM process, which can cause the entrapped gas to be ejected from the melt pool upon contact with the laser beam [70]. On the other hand, when scanning a single layer of the powder bed for the AR-V samples, it could be more difficult for the entrapped gas to move out of the melt pool since its movement is restricted by the compact powder distribution within the layer.
Porosity measurement was conducted on polished but unetched AR-V and AR-H samples using the ImageJ software to determine the initial porosity level before HPT processing. The average porosity content is 0.766 ± 0.023% for AR-V samples and 0.695 ± 0.019% for AR-H samples. Despite the slight discrepancy between the porosity levels in AR-V and AR-H samples, both indicate that high densification levels (> 99%) were attained for samples built in either orientation. Figures 3a and b show the distribution of pore diameter in the as-received disks before HPT processing, which indicates a wide, but roughly similar, distribution of pore diameters within the 0–90 μm range for both sample orientations. The slightly larger pore diameters in AR-H samples may be caused by the larger process-induced pores in the horizontally built samples compared to those in the vertically built samples. The average pore diameter is 19.38 ± 15.64 μm for AR-V and 19.50 ± 17.7 μm for AR-H. Most of the pores lie between 5 and 10 μm for AR-V and between 5 and 15 μm for AR-H.
Comparison of pore diameter in as-received samples a AR-V and b AR-H, and in samples processed through 1/4 HPT revolutions c 1/4-V and d 1/4-H
After 1/4 revolution of HPT processing, the porosity content is reduced to 0.045 ± 0.011% and 0.037 ± 0.008% for 1/4-V and 1/4-H, respectively, a decrease of ~ 94% compared to the as-received samples. Figures 3c and d show the distribution of pore diameter for the disks processed through 1/4 HPT revolution. The results indicate a much narrower distribution of pore diameters, ranging from 0 to 40 μm, and significantly reduced pore sizes, with an average of 5.52 ± 4.81 μm for 1/4-V and 4.33 ± 3.62 μm for 1/4-H. The majority of the pores are 0–5 μm in size and, to a much lesser extent, 5–15 μm. Only small numbers of pores in the 15–40 μm range (< 10 counts) were observed for both 1/4-V and 1/4-H disks. Such a dramatic reduction in porosity level was also observed by Yusuf et al. [20] in their study of SLM-fabricated and then HPT-processed 316L SS. These observations indicate that the HPT-imposed shear strain is able to effectively 'close' the pores, significantly reducing the size of larger pores (40–90 μm) and possibly eliminating smaller pores (0–20 μm).
It is widely accepted that the generation and growth of pores in metals mainly depend on the large levels of stress triaxiality experienced under hydrostatic tension, either under loading or during processing [71, 72]. On the other hand, the application of hydrostatic pressure has been found to be crucial in eliminating porosity. For example, Nakasaki et al. [73] observed that rolling a steel billet in multiple passes under a constant hydrostatic pressure was able to eliminate the centre pores that initially existed in the billet. Wang et al. [74] found that the rate of pore closure during hot rolling of steels is influenced by the rolling process parameters and the pore location relative to the rolling contact surfaces.
In addition to hydrostatic pressure, the application of shear strain also promotes the collapse and closure of pores, enabling a sound bonding to be achieved throughout the pores after they come into close contact atomically as the result of both hydrostatic pressure and shear strain [75, 76]. Hydrostatic pressure and shear strain are two main features in severe plastic deformation (SPD) techniques to yield UFG microstructures and promote strong metallic bonding within the processed materials at both atomic and macroscopic scales [77, 78].
In addition, Qi et al. [37] systematically studied the generation and healing of porosity in HPT processing of pure Cu and explained the pore elimination mechanism in terms of grain refinement. The compressive pressure (from pushing the lower anvil to the upper anvil) and even small amount of torsional strain could cause the pores to collapse and stretch their internal surfaces together with the generation of UFG microstructures. Upon higher torsional strain, true sub-micron grains with clear grain boundaries start to form and the pores continuously elongate parallel to the shearing direction, fragmenting the pores and creating intimate atomic contact within the internal surface of the pores. Finally, at even higher torsional strain, internal friction within the internal pore surfaces creates strain gradients and high strain localisation, thus closing the pores. In this study, HPT has been proven as a very effective approach to reduce porosity via the strong metallic bonding that brings together and closes the pores as the result of the combination of hydrostatic pressure and torsional shear strain imposed.
Phase composition and dislocation density
XRD analysis (Fig. 4) shows that both Al and Si peaks are present in the AR-V and AR-H samples. A weak peak of the Mg2Si intermetallic is also detected at ~ 40°, which might precipitate from the diffusion of Mg and Si as the result of the repeated heating cycles experienced during SLM processing [79]. Such heating cycles resemble the heat treatment during the initial stages of age hardening of Al–Mg–Si alloys [80]. Upon HPT processing, broadening and shifting of peaks can be observed in both vertically and horizontally built samples (except for the (1 1 1) orientation), similar to other studies on HPT-processed metals [81, 82]. These are mainly caused by lattice defects, e.g. internal microstrain, dislocations, small crystallite sizes and additional grain boundaries, introduced via the HPT-imposed torsional strains [83].
XRD patterns for as-received and HPT-processed disks for samples built in the a vertical and b horizontal directions
The dislocation density ρ can be calculated from the lattice microstrain based on the following equation [84, 85]:
$$ {\rho}_{XRD}=\frac{2\sqrt{3}{\left\langle {\varepsilon}^2\right\rangle}^{1/2}}{D_cb} $$
where Dc is the average crystallite size, referred to as coherently scattered domains (CSD), and b is the Burgers vector (b = 0.286 nm for Al [55]). The values of ⟨ε²⟩^(1/2) and Dc are obtained through the Rietveld refinement method applied in the MAUD software, and the corresponding values of ρ calculated from Eq. 2 are shown in Table 3. It is revealed that the values of ρ for as-received samples built in both orientations are already of the order of 1 × 10¹⁴ m⁻², much higher than the 10⁹ or 10¹⁰ m⁻² typical of conventional cast or wrought metals [86]. Such high ρ values for as-built SLM samples are consistent with other studies on SLM AlSi10Mg [54] and pure iron [87], and are attributed to the numerous fine cellular structure colonies that store a large amount of dislocations, providing more sites to inhibit dislocation motion and leading to relatively higher hardness compared to cast or wrought metals [86]. Upon HPT processing through 1 and 10 revolutions, the dislocation densities further increase to ~ 3.5–5 × 10¹⁴ m⁻². On the other hand, the crystallite sizes decrease dramatically from ~ 280 nm to ~ 55–57 nm after 1 HPT revolution, before decreasing only slightly to ~ 51–52 nm after 10 HPT revolutions. The decrease in crystallite size is common in SPD-processed materials because the X-ray coherency is broken by the small misorientations within the actual grains as a result of the large shear strains applied during the process [83]. Therefore, the XRD analysis actually measures the crystallite sizes, rather than the actual grain sizes.
Table 3 Dislocation density parameters obtained through MAUD analysis of the XRD data
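For reference, the conversion in Eq. 2 from Rietveld-refined microstrain and crystallite size to dislocation density can be reproduced as below. The microstrain values shown are illustrative assumptions chosen only to land in the same order of magnitude as Table 3; the refined microstrains themselves are not reproduced here.

```python
import math

def rho_xrd(microstrain: float, d_c_nm: float, b_nm: float = 0.286) -> float:
    """Dislocation density (m^-2) from Eq. 2: rho = 2*sqrt(3)*<eps^2>^(1/2) / (D_c * b)."""
    return 2.0 * math.sqrt(3.0) * microstrain / ((d_c_nm * 1e-9) * (b_nm * 1e-9))

# Crystallite sizes follow Table 3; the microstrains are assumed example values.
print(f"As-received (D_c ~ 280 nm): rho ~ {rho_xrd(2.3e-3, 280):.1e} m^-2")  # ~1e14 m^-2
print(f"10 HPT rev. (D_c ~ 52 nm):  rho ~ {rho_xrd(2.1e-3, 52):.1e} m^-2")   # ~5e14 m^-2
```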
For HPT-processed samples, the total dislocation density was calculated as the sum of geometrically necessary dislocations (GNDs) and statistically stored dislocations (SSDs). This is due to the presence of a strain gradient throughout the disk, with higher strains imposed at the edge compared to the centre region of the HPT-processed disk as the result of radial dependency, in accordance with Eq. 1 [81, 88].
Microhardness measurements for as-received and HPT-processed samples
Figures 5a and b show that the as-received samples exhibit reasonable homogeneity in the distribution of HV values across the diameter of the disks in both build orientations. It is worth noting that the HV values for SLM AlSi10Mg from this study (130–150 HV) are higher than those of their traditionally manufactured counterparts, e.g. 95–105 HV in as-high-pressure die cast (HPDC) parts [89] and 64–70 HV in as-cast parts [90]. Higher hardness in SLM material is also reported in various studies and is commonly attributed to the finer microstructures attained as the result of the significantly higher cooling rates/rapid solidification achieved in SLM due to the short laser-material contact time, thereby preventing recrystallisation or coarsening of the final microstructure [1, 41, 44, 91].
Microhardness measurements for as-received and HPT-processed disks for samples built in the a vertical and b horizontal directions
However, the HV values for AR-V (average: ~ 145 HV) are slightly higher than those for AR-H (average: ~ 136 HV), indicating anisotropy in SLM AlSi10Mg between the two build orientations. Similar anisotropies are also observed in other AM materials, whose properties differ with respect to build direction [2, 68, 91, 92]. Several studies have identified possible causes for anisotropy in AM materials, including the level of defects, residual stress, local heat transfer conditions, scan strategy, grain orientation and, to a lesser extent, crystallographic texture [22, 42]. For example, Yadroitsev et al. [93] attributed the lower elastic modulus in IN 625 specimens built vertically (parallel to the z-axis) to the higher defect content due to a larger concentration of residual stresses compared to the horizontal ones (parallel to the x-y plane). Frazier [2] explained that lack-of-fusion pores are the reason for the lower ductility of Ti alloys in the z-direction. Carroll et al. [92] observed lower ductility in Ti alloys built horizontally as a result of the orientation of prior-β grains parallel to the build (z-)direction. Chlebus et al. [94] ascribed the anisotropy in fracture modes and tensile properties of SLM IN 718 to the different orientations of the columnar grains with respect to the loading directions. Accordingly, the lower hardness of the AR-H samples compared to AR-V could be attributed to the larger content of lack-of-fusion pores (process-induced porosity) at the melt pool boundaries (MPB) of AR-H (Fig. 2c, d). This may lead to grain structure heterogeneity and structural discontinuities, which could weaken the bonding of the surrounding microstructure, thereby affecting the macroscopic mechanical behaviour [42].
In comparison to the as-received samples, it is clear that the HV values in HPT-processed samples are significantly higher and increase both with distance from the centre to the edge of the disk and with the number of HPT revolutions. A relatively strong hardening, by a factor of ~ 2 for both sample orientations, was obtained after 1/4 revolution. It is also apparent that the majority of the hardness increment was achieved within the first 1/4 revolution, a common trend observed in many HPT-processed metals [95]. However, the increase in hardness is not homogeneous, with the central region exhibiting low HV values compared to the peripheral regions. This is expected because of the radial dependency of the HPT-induced torsional strain in accordance with Eq. 1. Additional hardening was achieved with increasing number of revolutions, and the highest HV values attained after 10 revolutions at the peripheral region were 218 HV and 210 HV for samples built vertically and horizontally, respectively. However, the hardness distribution remains inhomogeneous across the disks even after 10 revolutions for samples built in both orientations, indicating that hardness saturation has not been achieved even at the extreme torsional strain imposed at the highest number of revolutions.
Nevertheless, the HV values are slightly higher in vertically built samples than in horizontally built samples for all conditions, indicating that some level of anisotropy might still exist even after 10 revolutions of HPT processing. However, it is possible that this discrepancy is due to grain orientation rather than porosity, since it is already established that the pores initially present in the as-received samples were significantly reduced after HPT.
The variation of HV values with the equivalent strain εeq calculated from Eq. 1 is shown in Fig. 6a, b. It is apparent that the hardness varies monotonically with the imposed strain. However, microhardness saturation is still not achieved for samples built in either direction, even after 10 HPT revolutions. Other studies have also confirmed the difficulty of achieving complete homogeneity in some high-strength materials even after 20 revolutions [96,97,98]; this heterogeneity was attributed to the difficulty of attaining true sub-micron or nano-scale grains at or near the central region of the disks, even under extreme torsional strains.
Plot of HV against equivalent strain, εeq, for as-received (dotted lines) and HPT-processed disks for samples built in the a vertical and b horizontal directions
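Re-plotting the radial hardness data against strain, as in Fig. 6, is simply a change of abscissa through Eq. 1. A minimal, self-contained sketch is given below; the hardness numbers are synthetic placeholders and the 0.8 mm thickness is the same assumed value used earlier, not a measured quantity.

```python
import math
import numpy as np
import matplotlib.pyplot as plt

def eq_strain(n_rev, r_mm, h_mm=0.8):      # Eq. 1 with an assumed thickness of 0.8 mm
    return 2.0 * math.pi * n_rev * r_mm / (h_mm * math.sqrt(3.0))

r = np.arange(0.0, 4.5 + 1e-9, 0.3)        # indent positions from the disk centre (mm)
hv = np.random.default_rng(1).normal(200.0, 6.0, size=r.size)  # placeholder HV for N = 10
eps = np.array([eq_strain(10, ri) for ri in r])

plt.semilogx(np.clip(eps, 1e-2, None), hv, "o")
plt.xlabel("Equivalent strain, eps_eq")
plt.ylabel("Vickers microhardness (HV)")
plt.show()
```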
Colour-coded contour maps are generated based on the microhardness measurements from the rectilinear grid mapping pattern to provide visualization of the distribution of individual HV values for the as-received and HPT-processed disks. Figure 7a shows the HV mapping for as-received samples, and Fig. 7b–e for HPT-processed samples through 1 and 10 revolutions. The colour-coded legend on the right of Fig. 7a defines the range of HV values represented by the different colours in the contour maps (from 140 to 220 HV).
Microhardness distribution throughout the disk cross-section for a as-received samples, after 1 HPT revolution for b 1-V and c 1-H, and after 10 HPT revolutions for d 10-V and e 10-H
A significant evolution of HV values can be observed after 1 HPT revolution for both 1-V (Fig. 7b) and 1-H (Fig. 7c) disks compared to that of the as-received samples (Fig. 7a). This is indicated by the transition from the blue region of low HV values to the green–orange region of higher HV values. However, within the disks processed through 1 HPT revolution, Fig. 7b, c demonstrates that the sample built in the vertical orientation shows higher HV values (~ 190–205 HV) than that built in the horizontal orientation (~ 175–195 HV). This is indicated by the cooler colours in 1-H compared to 1-V, especially at the peripheral regions, with a larger area of lower HV values at the centre of the disk extending over a width of ~ 4 mm. The HV values further increase after 10 revolutions, as indicated by the transition from the green–orange to the green–red region in Fig. 7d, e. It is apparent that 10-V disks possess higher HV values (~ 190–220 HV) than 10-H disks (~ 190–210 HV), as illustrated by the hotter colours at the peripheral regions and the smaller area of cooler colours at the central region. Nevertheless, these contour maps further demonstrate that hardness saturation and microstructural homogeneity could still not be fully achieved, particularly at the central region, even though the area of low HV values shrinks with increasing torsional strain.
Microstructure observations for as-received samples
Figures 8a and b show low-magnification OM images of the as-received samples built vertically (AR-V) and horizontally (AR-H), respectively, revealing distinct melt pool morphologies for samples built in different orientations. The AR-V sample (Fig. 8a) exhibits alternating bi-directional melt pool patterns corresponding to the scan strategy used in this study, while the AR-H sample (Fig. 8b) illustrates a fish scale (semi-circle) morphology corresponding to the layer-by-layer laser interaction with the powder bed. Good inter- and intra-layer overlapping is obtained for both samples, indicating a good bonding within a single layer and between successive layers [99].
OM images of a AR-V and b AR-H samples, SEM images of c AR-V and d AR-H samples, and e SEM image showing typical microstructures of as-received SLM AlSi10Mg taken from AR-V
Figures 8c and d exhibit SEM images of AR-V and AR-H, respectively, while Fig. 8e shows a higher magnification SEM image of Fig. 8c, displaying the typical SLM AlSi10Mg microstructures much more clearly. They consist of a very fine cellular-dendritic α-Al matrix (size from ~ 0.5 to 1 μm) surrounded by a thin fibrous eutectic Si network (thickness < 0.2 μm). Such fine microstructures are attributed to the rapid solidification of the SLM process through repeated fast melting/cooling cycles, kinetically favouring the cellular solidification morphology and the extended solubility of Si in Al [22, 100]. Sub-micron Si particles that did not fuse together with the eutectic Si network can also be observed in the as-received samples.
The MPBs (dashed red lines in Fig. 8c, d) and the areas close to the MPBs have relatively coarser and more elongated microstructures compared to the central region of the melt pools. This can be ascribed to the locally different solidification rates throughout the melt pool, even though the solidification rates in the SLM process are generally high (10³–10⁸ K s⁻¹). In particular, the region around the MPBs and/or the overlapping region between MPBs experiences a longer laser–material interaction time due to the melting and re-melting of the nearby powder bed within a single layer (AR-V) and of successive powder layers (AR-H). Hence, these regions can remain at higher temperatures for longer periods, causing slower cooling rates and thus allowing the microstructures to coarsen. This phenomenon is common in SLM AlSi10Mg and has also been observed in other AM alloys [22, 41, 44].
In addition, a partially broken and coarsened eutectic Si network can also be observed at the MPBs (dashed red lines in Fig. 8c, d) due to the lower solidification rate in that region. Mg2Si precipitates were not found in the SEM observations in this study, despite the presence of weak XRD peaks for this intermetallic.
Microstructural evolution after HPT
Figure 9 displays representative OM images of HPT-processed disks through 1 and 10 revolutions for samples built in both orientations at the centre and edge (3 mm from centre) of the disks. These images show the evolution of melt pools as a result of HPT processing. The melt pools are observed to be slightly distorted and oriented at the centre after 1 revolution for 1-V (Fig. 9a) and 1-H (Fig. 9c) disks, compared to the shape and morphology of melt pools observed in AR-V (Fig. 8a) and AR-H (Fig. 8b). However, the melt pools at the edge of the disks become significantly sheared and extremely elongated for 1-V (Fig. 9b) and 1-H (Fig. 9d) disks, resulting in the loss of shape and morphology of the as-received melt pools.
OM images at the centre (a, c, e, g) and edge (b, d, f, h) of disks processed through 1 and 10 HPT revolutions for samples built on both vertical and horizontal orientations
These observations provide clear indications of the inhomogeneous strain distribution across the diameter of the disks due to the radial dependency of the shear strain imposed during HPT, in accordance with Eq. 1. Based on this equation, the edges experience higher amounts of torsional strain than the centre of the disks, and this is confirmed by the heavily deformed melt pools at the edges after one turn, corresponding to εeq of ~ 13.6 (1-V, Fig. 9b and 1-H, Fig. 9d), compared to those at the centre (1-V, Fig. 9a and 1-H, Fig. 9c). This is also in agreement with the higher HV values at the peripheral regions compared to the central region (Fig. 5a, b), thus affirming the use of microhardness measurements as an effective method to evaluate the extent of microstructural evolution in HPT-processed materials [101,102,103].
After 10 revolutions (εeq ≈ 136), the increased torsional strain further deformed the melt pools at the centre, such that they are extremely distorted and oriented, and their initial shape and morphology are no longer apparent although not completely eliminated, as shown in Fig. 9e for 10-V and Fig. 9g for 10-H. In addition, the shape and morphology of the melt pools at the edges are completely lost for samples built in both directions (10-V, Fig. 9f and 10-H, Fig. 9h).
SEM images at the centre (a, c, e, g) and edge (b, d, f, h) of disks processed through 1 and 10 HPT revolutions for samples built on both vertical and horizontal orientations
Figure 10 illustrates representative SEM images of HPT-processed disks through 1 and 10 revolutions for samples built in both orientations at the centre and edge (3 mm from the centre) of the disks. After HPT processing through 1 revolution, at the centre of the disks (1-V, Fig. 10a and 1-H, Fig. 10c), the cellular-dendritic α-Al matrix was compressed, while the eutectic Si network was refined and elongated. Some of the eutectic Si networks were also broken into fine Si particles. At the edge of the disks (1-V, Fig. 10b and 1-H, Fig. 10d), on the other hand, most of the eutectic Si networks are broken into fine Si particles that are dispersed more uniformly in the Al matrix. After 10 revolutions, Fig. 10e–h exhibits the complete breakage of the eutectic Si network into fine Si particles. At this stage, the size of the fine Si particles decreases significantly, while their volume fraction increases noticeably and the particles are distributed more homogeneously throughout the Al matrix than after 1 revolution. This is because of the continuous breakage of the eutectic Si network over 10 HPT revolutions, resulting in the formation of new fine Si particles.
The observation of the eutectic Si network breaking into fine Si particles with increased HPT-imposed torsional strain is consistent with other studies on the HPT processing of traditional cast Al-7Si alloy [58, 104], and can be attributed to two factors [104, 105]: (i) the increase in temperature due to friction generated during HPT straining, which may lead to nucleation and growth of fine Si particles, or (ii) the severe plastic deformation itself, which could cause fragmentation and redistribution of larger Si particles into smaller segments in the Al matrix.
To evaluate the extent of the microstructural evolution due to HPT, the ImageJ software was used to characterize the cellular-dendritic network observed via SEM. Values of the average cell diameter in the as-received and HPT-processed disks are listed in Table 4 for samples built in both directions. It can be seen that the average cell diameter for the as-received samples is ~ 0.75 μm, which then decreases to ~ 0.50 μm (centre) and ~ 0.37 μm (edge) after 1 HPT revolution for samples built in both orientations.
Table 4 Average cell diameter measured through ImageJ software
On the other hand, Fig. 11 shows the evolution of the circularity of the cellular-dendritic network in the as-received and HPT-processed disks for samples built in the horizontal orientation. Values closer to 1 indicate an increasingly circular shape, while values closer to 0 represent an increasingly elongated morphology. The circularity of the cells for the as-received sample ranges from 0.5 to 0.95 (average: 0.756 ± 0.07) and, after 1 HPT revolution, decreases to 0.25–0.75 (average 0.473 ± 0.10) at the centre and to 0.15–0.60 (average 0.351 ± 0.08) at the edge of the disks. The lower average circularity at the edge compared to the centre of the disks after 1 HPT revolution (by ~ 25%) further confirms the radial influence of the torsional strain imposed via HPT. The diameter and circularity of the cellular-dendritic structure could not be measured after 10 HPT revolutions because most of the eutectic Si networks were broken down into finer Si particles. Nevertheless, the reduced average diameter and circularity of the cells with increased HPT revolutions are attributed to the large shear strains imposed by the HPT-induced torsion, as also observed by Yusuf et al. [20] in their study of 316L SS fabricated by SLM and then processed by HPT.
Circularity of the cellular-dendritic structure for as-received and HPT-processed disks in horizontally built samples
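The circularity values above follow the standard ImageJ definition, circularity = 4πA/P², applied to each cell outline. The sketch below shows the calculation for a few assumed area/perimeter pairs chosen only to reproduce the average values quoted above; they are not the measured outlines.

```python
import math

def circularity(area_um2: float, perimeter_um: float) -> float:
    """ImageJ-style circularity: 4*pi*A/P^2 (1 = perfect circle, -> 0 when elongated)."""
    return 4.0 * math.pi * area_um2 / perimeter_um**2

# Assumed illustrative cell outlines (area in um^2, perimeter in um)
examples = {
    "as-received, near-circular cell": (0.44, 2.7),   # ~0.76
    "1 revolution, disk centre":       (0.20, 2.3),   # ~0.47
    "1 revolution, disk edge":         (0.11, 2.0),   # ~0.35
}
for name, (a, p) in examples.items():
    print(f"{name}: circularity = {circularity(a, p):.2f}")
```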
The evolution of the microstructure and microhardness of AlSi10Mg fabricated by SLM in both vertical and horizontal orientations and then processed by HPT through 1/4 to 10 revolutions was investigated through OM, SEM, microhardness measurements and XRD. The main conclusions can be drawn as follows:
Although both spherical (gas-induced) and non-spherical (process-induced) pores are present in vertically built samples, process-induced porosity is more apparent in horizontally built samples.
The microstructure of as-received SLM AlSi10Mg consists of a cellular-dendritic Al matrix surrounded by a fine, fibrous eutectic Si phase network for samples built in both build orientations.
The lower HV values in horizontally built samples are attributed to the larger fraction of process-induced pores at the fish-scale MPB, compared to vertically built samples.
HPT processing for 1/4 revolution significantly reduced the porosity by ~ 94% compared to the as-received samples.
HPT processing significantly increases the hardness of the alloy, but microstructural homogeneity was not achieved even after 10 HPT revolutions.
As the number of HPT revolutions increases, the cellular-dendritic Al matrix becomes increasingly refined and elongated, while the eutectic Si network is continuously broken into fine Si particles that are dispersed uniformly throughout the Al matrix.
Gu DD, Meiners W, Wissenbach K, Poprawe R (2012) Laser additive manufacturing of metallic components: materials, processes and mechanisms. Int Mater Rev 57:133–164. https://doi.org/10.1179/1743280411Y.0000000014
Frazier WE (2014) Metal additive manufacturing: a review. J Mater Eng Perform 23:1917–1928. https://doi.org/10.1007/s11665-014-0958-z
Buchbinder D, Meiners W, Wissenbach K, Poprawe R (2015) Selective laser melting of aluminum die-cast alloy—correlations between process parameters, solidification conditions, and resulting mechanical properties. J Laser Appl 27:S29205. https://doi.org/10.2351/1.4906389
Rao H, Giet S, Yang K et al (2016) The influence of processing parameters on aluminium alloy A357 manufactured by selective laser melting. Mater Des 109:334–346. https://doi.org/10.1016/j.matdes.2016.07.009
Ma M, Wang Z, Zeng X (2017) A comparison on metallurgical behaviors of 316L stainless steel by selective laser melting and laser cladding deposition. Mater Sci Eng A 685:265–273. https://doi.org/10.1016/j.msea.2016.12.112
Sun Z, Tan X, Tor SB, Yeong WY (2016) Selective laser melting of stainless steel 316L with low porosity and high build rates. Mater Des 104:197–204. https://doi.org/10.1016/j.matdes.2016.05.035
Wang D, Song C, Yang Y, Bai Y (2016) Investigation of crystal growth mechanism during selective laser melting and mechanical property characterization of 316L stainless steel parts. Mater Des 100:291–299. https://doi.org/10.1016/j.matdes.2016.03.111
Geiger F, Kunze K, Etter T (2016) Tailoring the texture of IN738LC processed by selective laser melting (SLM) by specific scanning strategies. Mater Sci Eng A 661:240–246. https://doi.org/10.1016/j.msea.2016.03.036
Sun SH, Hagihara K, Nakano T (2018) Effect of scanning strategy on texture formation in Ni-25 at.%Mo alloys fabricated by selective laser melting. Mater Des 140:307–316. https://doi.org/10.1016/j.matdes.2017.11.060
Kunze K, Etter T, Jürgen G, Shklover V (2014) Texture, anisotropy in microstructure and mechanical properties of IN738LC alloy processed by selective laser melting (SLM). Mater Sci Eng A 620. https://doi.org/10.1016/j.msea.2014.10.003
Biamino S, Penna A, Ackelid U et al (2011) Electron beam melting of Ti-48Al-2Cr-2Nb alloy: microstructure and mechanical properties investigation. Intermetallics 19:776–781. https://doi.org/10.1016/j.intermet.2010.11.017
Song B, Dong S, Liao H, Coddet C (2012) Process parameter selection for selective laser melting of Ti6Al4V based on temperature distribution simulation and experimental sintering. Int J Adv Manuf Technol 61:967–974. https://doi.org/10.1007/s00170-011-3776-6
Xu W, Brandt M, Sun S et al (2015) Additive manufacturing of strong and ductile Ti–6Al–4V by selective laser melting via in situ martensite decomposition. Acta Mater 85:74–84. https://doi.org/10.1016/j.actamat.2014.11.028
Leary M, Mazur M, Elambasseril J et al (2016) Selective laser melting (SLM) of AlSi12Mg lattice structures. Mater Des 98:344–357. https://doi.org/10.1016/j.matdes.2016.02.127
Manfredi D, Bidulský R (2017) Laser powder bed fusion of aluminum alloys. Acta Metall Slovaca 23:276. https://doi.org/10.12776/ams.v23i3.988
Li Y, Gu D (2014) Parametric analysis of thermal behavior during selective laser melting additive manufacturing of aluminum alloy powder. Mater Des 63:856–867. https://doi.org/10.1016/j.matdes.2014.07.006
Li XP, Wang XJ, Saunders M et al (2015) A selective laser melting and solution heat treatment refined Al-12Si alloy with a controllable ultrafine eutectic microstructure and 25% tensile ductility. Acta Mater 95:74–82. https://doi.org/10.1016/j.actamat.2015.05.017
Suryawanshi J, Prashanth KG, Scudino S et al (2016) Simultaneous enhancements of strength and toughness in an Al-12Si alloy synthesized using selective laser melting. Acta Mater 115:285–294. https://doi.org/10.1016/j.actamat.2016.06.009
Trelewicz JR, Halada GP, Donaldson OK, Manogharan G (2016) Microstructure and corrosion resistance of laser additively manufactured 316L stainless steel. Jom 68:850–859. https://doi.org/10.1007/s11837-016-1822-4
Mohd Yusuf S, Nie M, Chen Y et al (2018) Microstructure and corrosion performance of 316L stainless steel fabricated by selective laser melting and processed through high-pressure torsion. J Alloys Compd 763:360–375. https://doi.org/10.1016/j.jallcom.2018.05.284
Geenen K, Röttger A, Theisen W (2017) Corrosion behavior of 316L austenitic steel processed by selective laser melting, hot-isostatic pressing, and casting. Mater Corros 9999:1–12. https://doi.org/10.1002/maco.201609210
Trevisan F, Calignano F, Lorusso M et al (2017) On the selective laser melting (SLM) of the AlSi10Mg alloy: process, microstructure, and mechanical properties. Materials (Basel) 10. https://doi.org/10.3390/ma10010076
Herzog D, Seyda V, Wycisk E, Emmelmann C (2016) Additive manufacturing of metals. Acta Mater 117:371–392. https://doi.org/10.1016/j.actamat.2016.07.019
Gu D, Hagedorn Y-C, Meiners W et al (2012) Densification behavior, microstructure evolution, and wear performance of selective laser melting processed commercially pure titanium. Acta Mater 60:3849–3860. https://doi.org/10.1016/j.actamat.2012.04.006
Yusuf SM, Gao N (2017) Influence of energy density on metallurgy and properties in metal additive manufacturing. Mater Sci Technol 33:1269–1289. https://doi.org/10.1080/02670836.2017.1289444
Li Y, Chen K, Tamura N (2018) Mechanism of heat affected zone cracking in Ni-based superalloy DZ125L fabricated by laser 3D printing technique. Mater Des 150:171–181. https://doi.org/10.1016/j.matdes.2018.04.032
Popovich VA, Borisov EV, Popovich AA et al (2017) Impact of heat treatment on mechanical behaviour of Inconel 718 processed with tailored microstructure by selective laser melting. Mater Des 131:12–22. https://doi.org/10.1016/j.matdes.2017.05.065
Lavery NP, Cherry J, Mehmood S et al (2017) Effects of hot isostatic pressing on the elastic modulus and tensile properties of 316L parts made by powder bed laser fusion. Mater Sci Eng A 693:186–213. https://doi.org/10.1016/j.msea.2017.03.100
Azushima A, Kopp R, Korhonen A et al (2008) Severe plastic deformation (SPD) processes for metals. CIRP Ann Manuf Technol 57:716–735. https://doi.org/10.1016/j.cirp.2008.09.005
Edalati K, Horita Z (2016) A review on high-pressure torsion (HPT) from 1935 to 1988. Mater Sci Eng A 652:325–352. https://doi.org/10.1016/j.msea.2015.11.074
Abramova MM, Enikeev NA, Valiev RZ et al (2014) Grain boundary segregation induced strengthening of an ultrafine-grained austenitic stainless steel. Mater Lett 136:349–352. https://doi.org/10.1016/j.matlet.2014.07.188
Valiev RZ, Langdon TG (2006) Principles of equal-channel angular pressing as a processing tool for grain refinement. Prog Mater Sci 51:881–981. https://doi.org/10.1016/j.pmatsci.2006.02.003
Gao N, Wang CT, Wood RJK, Langdon TG (2012) Tribological properties of ultrafine-grained materials processed by severe plastic deformation. J Mater Sci 47:4779–4797. https://doi.org/10.1007/s10853-011-6231-z
Zheng ZJ, Gao Y, Gui Y, Zhu M (2012) Corrosion behaviour of nanocrystalline 304 stainless steel prepared by equal channel angular pressing. Corros Sci 54:60–67. https://doi.org/10.1016/j.corsci.2011.08.049
Sakai G, Horita Z, Langdon TG (2005) Grain refinement and superplasticity in an aluminum alloy processed by high-pressure torsion. Mater Sci Eng A 393:344–351. https://doi.org/10.1016/j.msea.2004.11.007
Furukawa M, Horita Z, Nemoto M, Langdon TG (2001) Review: processing of metals by equal-channel angular pressing. J Mater Sci 36:2835–2843. https://doi.org/10.1023/a:1017932417043
Qi Y, Kosinova A, Kilmametov AR et al (2018) Generation and healing of porosity in high purity copper by high-pressure torsion. Mater Charact 145:1–9. https://doi.org/10.1016/j.matchar.2018.08.023
Wilde G, Zehetbauer M, Wegner M et al (2011) Deformation induced percolating porosity in high pressure torsioned (HPT) copper. Mater Sci Forum 702–703:105–108. https://doi.org/10.4028/www.scientific.net/msf.702-703.105
Read N, Wang W, Essa K, Attallah MM (2015) Selective laser melting of AlSi10Mg alloy: process optimisation and mechanical properties development. Mater Des 65:417–424. https://doi.org/10.1016/j.matdes.2014.09.044
Li W, Li S, Liu J et al (2016) Effect of heat treatment on AlSi10Mg alloy fabricated by selective laser melting: microstructure evolution, mechanical properties and fracture mechanism. Mater Sci Eng A 663:116–125. https://doi.org/10.1016/j.msea.2016.03.088
Asgari H, Baxter C, Hosseinkhani K, Mohammadi M (2017) On microstructure and mechanical properties of additively manufactured AlSi10Mg_200C using recycled powder. Mater Sci Eng A 707:148–158. https://doi.org/10.1016/j.msea.2017.09.041
Tang M, Pistorius PC (2017) Anisotropic mechanical behavior of AlSi10Mg parts produced by selective laser melting. Jom 69:516–522. https://doi.org/10.1007/s11837-016-2230-5
Mao F, Yan G, Xuan Z et al (2015) Effect of trace La addition on the microstructures and mechanical properties of A356 (Al–7Si–0.35Mg) aluminum alloys. J Alloys Compd 650:896–906. https://doi.org/10.1016/j.jallcom.2015.06.266
Liu X, Zhao C, Zhou X et al (2019) Microstructure of selective laser melted AlSi10Mg alloy. Mater Des 168:1–9. https://doi.org/10.1016/j.matdes.2019.107677
McDonald SD, Nogita K, Dahle AK (2004) Eutectic nucleation in Al-Si alloys. Acta Mater 52:4273–4280. https://doi.org/10.1016/j.actamat.2004.05.043
Lu L, Nogita K, Dahle AK (2005) Combining Sr and Na additions in hypoeutectic Al-Si foundry alloys. Mater Sci Eng A 399:244–253. https://doi.org/10.1016/j.msea.2005.03.091
Karaköse E, Keskin M (2009) Effect of solidification rate on the microstructure and microhardness of a melt-spun Al-8Si-1Sb alloy. J Alloys Compd 479:230–236. https://doi.org/10.1016/j.jallcom.2009.01.006
Kempen K, Thijs L, Van Humbeeck J, Kruth J-P (2015) Processing AlSi10Mg by selective laser melting: parameter optimisation and material characterisation. Mater Sci Technol 31. https://doi.org/10.1179/1743284714Y.0000000702
Wei P, Wei Z, Chen Z et al (2017) The AlSi10Mg samples produced by selective laser melting: single track, densification, microstructure and mechanical behavior. Appl Surf Sci 408:38–50. https://doi.org/10.1016/j.apsusc.2017.02.215
Aboulkhair NT, Maskery I, Tuck C et al (2016) The microstructure and mechanical properties of selectively laser melted AlSi10Mg: the effect of a conventional T6-like heat treatment. Mater Sci Eng A 667:139–146. https://doi.org/10.1016/j.msea.2016.04.092
Chou R, Ghosh A, Chou SC et al (2017) Microstructure and mechanical properties of Al10SiMg fabricated by pulsed laser powder bed fusion. Mater Sci Eng A 689:53–62. https://doi.org/10.1016/j.msea.2017.02.023
Lam LP, Zhang DQ, Liu ZH, Chua CK (2015) Phase analysis and microstructure characterisation of AlSi10Mg parts produced by selective laser melting. Virtual Phys Prototyp 10:207–215. https://doi.org/10.1080/17452759.2015.1110868
Kempen K, Thijs L, Van Humbeeck J, Kruth JP (2012) Mechanical properties of AlSi10Mg produced by selective laser melting. Phys Procedia 39:439–446. https://doi.org/10.1016/j.phpro.2012.10.059
Hadadzadeh A, Baxter C, Amirkhiz BS, Mohammadi M (2018) Strengthening mechanisms in direct metal laser sintered AlSi10Mg: comparison between virgin and recycled powders. Addit Manuf 23:108–120. https://doi.org/10.1016/j.addma.2018.07.014
Chen B, Moon SK, Yao X et al (2017) Strength and strain hardening of a selective laser melted AlSi10Mg alloy. Scr Mater 141:45–49. https://doi.org/10.1016/j.scriptamat.2017.07.025
Li XP, Ji G, Chen Z et al (2017) Selective laser melting of nano-TiB2 decorated AlSi10Mg alloy with high fracture strength and ductility. Acta Mater. https://doi.org/10.1016/j.actamat.2017.02.062
Wang X, Nie M, Wang CT et al (2015) Microhardness and corrosion properties of hypoeutectic Al-7Si alloy processed by high-pressure torsion. Mater Des 83:193–202. https://doi.org/10.1016/j.matdes.2015.06.018
Mungole T, Nadammal N, Dawra K, Kumar P, Kawasaki M, Langdon TG (2013) Evolution of microhardness and microstructure in a cast Al-7% Si alloy during high-pressure torsion. J Mater Sci 48:4671–4680. https://doi.org/10.1007/s10853-012-7061-3
El Aal MIA, Kim HS (2014) Wear properties of high pressure torsion processed ultrafine grained Al-7%Si alloy. Mater Des 53:373–382. https://doi.org/10.1016/j.matdes.2013.07.045
S. Mohd Yusuf would like to thank the Faculty of Engineering and Physical Sciences, University of Southampton, UK for providing a studentship for his PhD study.
Shahir Mohd Yusuf, Mathias Hoegden & Nong Gao
Materials Research Group, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, UK
Correspondence to Nong Gao.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Yusuf, S.M., Hoegden, M. & Gao, N. Effect of sample orientation on the microstructure and microhardness of additively manufactured AlSi10Mg processed by high-pressure torsion. Int J Adv Manuf Technol (2020) doi:10.1007/s00170-019-04817-5
Keywords: Microstructure; AlSi10Mg; High-pressure torsion
Mathematical Framework for latentcor
Mingze Huang, Christian L. Müller, Irina Gaynanova
Latent Gaussian Copula Model for Mixed Data
latentcor utilizes the powerful semi-parametric latent Gaussian copula models to estimate latent correlations between mixed data types (continuous/binary/ternary/truncated or zero-inflated). Below we review the definitions for each type.
Definition of continuous model (Fan et al. 2017)
A random \(X\in\cal{R}^{p}\) satisfies the Gaussian copula (or nonparanormal) model if there exist monotonically increasing \(f=(f_{j})_{j=1}^{p}\) with \(Z_{j}=f_{j}(X_{j})\) satisfying \(Z\sim N_{p}(0, \Sigma)\), \(\sigma_{jj}=1\); we denote \(X\sim NPN(0, \Sigma, f)\).
X = gen_data(n = 6, types = "con")$X
#> [,1]
#> [1,] 1.7634726
#> [4,] -0.9003767
Definition of binary model (Fan et al. 2017)
A random \(X\in\cal{R}^{p}\) satisfies the binary latent Gaussian copula model if there exists \(W\sim NPN(0, \Sigma, f)\) such that \(X_{j}=I(W_{j}>c_{j})\), where \(I(\cdot)\) is the indicator function and \(c_{j}\) are constants.
X = gen_data(n = 6, types = "bin")$X
#> [,1]
#> [1,] 0
Definition of ternary model (Quan, Booth, and Wells 2018)
A random \(X\in\cal{R}^{p}\) satisfies the ternary latent Gaussian copula model if there exists \(W\sim NPN(0, \Sigma, f)\) such that \(X_{j}=I(W_{j}>c_{j})+I(W_{j}>c'_{j})\), where \(I(\cdot)\) is the indicator function and \(c_{j}<c'_{j}\) are constants.
X = gen_data(n = 6, types = "ter")$X
Definition of truncated or zero-inflated model (Yoon, Carroll, and Gaynanova 2020)
A random \(X\in\cal{R}^{p}\) satisfies the truncated latent Gaussian copula model if there exists \(W\sim NPN(0, \Sigma, f)\) such that \(X_{j}=I(W_{j}>c_{j})W_{j}\), where \(I(\cdot)\) is the indicator function and \(c_{j}\) are constants.
X = gen_data(n = 6, types = "tru")$X
#> [,1]
#> [1,] 1.0510283
Mixed latent Gaussian copula model
The mixed latent Gaussian copula model jointly models \(W=(W_{1}, W_{2}, W_{3}, W_{4})\sim NPN(0, \Sigma, f)\) such that \(X_{1j}=W_{1j}\), \(X_{2j}=I(W_{2j}>c_{2j})\), \(X_{3j}=I(W_{3j}>c_{3j})+I(W_{3j}>c'_{3j})\) and \(X_{4j}=I(W_{4j}>c_{4j})W_{4j}\).
set.seed("234820")
X = gen_data(n = 100, types = c("con", "bin", "ter", "tru"))$X
head(X)
#> [,1] [,2] [,3] [,4]
#> [1,] -0.5728663 0 1 0.0000000
#> [3,] 0.4600555 1 1 0.2634213
Moment-based estimation of \(\Sigma\) based on bridge functions
The estimation of latent correlation matrix \(\Sigma\) is achieved via the bridge function \(F\) which is defined such that \(E(\hat{\tau}_{jk})=F(\sigma_{jk})\), where \(\sigma_{jk}\) is the latent correlation between variables \(j\) and \(k\), and \(\hat{\tau}_{jk}\) is the corresponding sample Kendall's \(\tau\).
Kendall's \(\tau\) (\(\tau_{a}\))
Given observed \(\mathbf{x}_{j}, \mathbf{x}_{k}\in\cal{R}^{n}\),
\[ \hat{\tau}_{jk}=\hat{\tau}(\mathbf{x}_{j}, \mathbf{x}_{k})=\frac{2}{n(n-1)}\sum_{1\le i<i'\le n}sign(x_{ij}-x_{i'j})sign(x_{ik}-x_{i'k}), \] where \(n\) is the sample size.
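For illustration, here is a minimal base-R sketch of this estimator; kendall_tau_a is a hypothetical helper written only for this vignette (it implements the double sum directly, so it is \(O(n^{2})\) and intended for small n), not a latentcor function. For continuous data without ties it coincides with cor(x, y, method = "kendall").
kendall_tau_a <- function(x, y) {
  n <- length(x)
  s <- 0
  for (i in 1:(n - 1)) {
    for (ip in (i + 1):n) {
      s <- s + sign(x[i] - x[ip]) * sign(y[i] - y[ip])  # concordant minus discordant pairs
    }
  }
  2 * s / (n * (n - 1))
}
set.seed(1)
x <- rnorm(50); y <- x + rnorm(50)
kendall_tau_a(x, y)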
latentcor calculates pairwise Kendall's \(\widehat \tau\) as part of the estimation process:
estimate = latentcor(X, types = c("con", "bin", "ter", "tru"))
K = estimate$K
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 0.2557576 0.2456566 0.3331313
Using \(F\) and \(\widehat \tau_{jk}\), a moment-based estimator is \(\hat{\sigma}_{jk}=F^{-1}(\hat{\tau}_{jk})\) with the corresponding \(\hat{\Sigma}\) being consistent for \(\Sigma\) (Fan et al. 2017; Quan, Booth, and Wells 2018; Yoon, Carroll, and Gaynanova 2020).
The explicit form of the bridge function \(F\) has been derived for all combinations of continuous (C), binary (B), ternary (N) and truncated (T) variable types, and we summarize the corresponding references below. Each of these combinations is implemented in latentcor.
| | continuous | binary | ternary | zero-inflated (truncated) |
|---|---|---|---|---|
| continuous | Liu, Lafferty, and Wasserman (2009) | - | - | - |
| binary | Fan et al. (2017) | Fan et al. (2017) | - | - |
| ternary | Quan, Booth, and Wells (2018) | Quan, Booth, and Wells (2018) | Quan, Booth, and Wells (2018) | - |
| zero-inflated (truncated) | Yoon, Carroll, and Gaynanova (2020) | Yoon, Carroll, and Gaynanova (2020) | See Appendix | Yoon, Carroll, and Gaynanova (2020) |
Below we provide an explicit form of \(F\) for each combination.
Theorem (explicit form of bridge function) Let \(W_{1}\in\cal{R}^{p_{1}}\), \(W_{2}\in\cal{R}^{p_{2}}\), \(W_{3}\in\cal{R}^{p_{3}}\), \(W_{4}\in\cal{R}^{p_{4}}\) be such that \(W=(W_{1}, W_{2}, W_{3}, W_{4})\sim NPN(0, \Sigma, f)\) with \(p=p_{1}+p_{2}+p_{3}+p_{4}\). Let \(X=(X_{1}, X_{2}, X_{3}, X_{4})\in\cal{R}^{p}\) satisfy \(X_{j}=W_{j}\) for \(j=1,...,p_{1}\), \(X_{j}=I(W_{j}>c_{j})\) for \(j=p_{1}+1, ..., p_{1}+p_{2}\), \(X_{j}=I(W_{j}>c_{j})+I(W_{j}>c'_{j})\) for \(j=p_{1}+p_{2}+1, ..., p_{1}+p_{2}+p_{3}\) and \(X_{j}=I(W_{j}>c_{j})W_{j}\) for \(j=p_{1}+p_{2}+p_{3}+1, ..., p\) with \(\Delta_{j}=f_{j}(c_{j})\). The rank-based estimator of \(\Sigma\) based on the observed \(n\) realizations of \(X\) is the matrix \(\mathbf{\hat{R}}\) with \(\hat{r}_{jj}=1\), \(\hat{r}_{jk}=\hat{r}_{kj}=F^{-1}(\hat{\tau}_{jk})\) with block structure
\[ \mathbf{\hat{R}}=\begin{pmatrix} F_{CC}^{-1}(\hat{\tau}) & F_{CB}^{-1}(\hat{\tau}) & F_{CN}^{-1}(\hat{\tau}) & F_{CT}^{-1}(\hat{\tau})\\ F_{BC}^{-1}(\hat{\tau}) & F_{BB}^{-1}(\hat{\tau}) & F_{BN}^{-1}(\hat{\tau}) & F_{BT}^{-1}(\hat{\tau})\\ F_{NC}^{-1}(\hat{\tau}) & F_{NB}^{-1}(\hat{\tau}) & F_{NN}^{-1}(\hat{\tau}) & F_{NT}^{-1}(\hat{\tau})\\ F_{TC}^{-1}(\hat{\tau}) & F_{TB}^{-1}(\hat{\tau}) & F_{TN}^{-1}(\hat{\tau}) & F_{TT}^{-1}(\hat{\tau}) \end{pmatrix} \] \[ F(\cdot)=\begin{cases} CC:\ 2\sin^{-1}(r)/\pi \\ \\ BC: \ 4\Phi_{2}(\Delta_{j},0;r/\sqrt{2})-2\Phi(\Delta_{j}) \\ \\ BB: \ 2\{\Phi_{2}(\Delta_{j},\Delta_{k};r)-\Phi(\Delta_{j})\Phi(\Delta_{k})\} \\ \\ NC: \ 4\Phi_{2}(\Delta_{j}^{2},0;r/\sqrt{2})-2\Phi(\Delta_{j}^{2})+4\Phi_{3}(\Delta_{j}^{1},\Delta_{j}^{2},0;\Sigma_{3a}(r))-2\Phi(\Delta_{j}^{1})\Phi(\Delta_{j}^{2})\\ \\ NB: \ 2\Phi_{2}(\Delta_{j}^{2},\Delta_{k},r)\{1-\Phi(\Delta_{j}^{1})\}-2\Phi(\Delta_{j}^{2})\{\Phi(\Delta_{k})-\Phi_{2}(\Delta_{j}^{1},\Delta_{k},r)\} \\ \\ NN: \ 2\Phi_{2}(\Delta_{j}^{2},\Delta_{k}^{2};r)\Phi_{2}(-\Delta_{j}^{1},-\Delta_{k}^{1};r)-2\{\Phi(\Delta_{j}^{2})-\Phi_{2}(\Delta_{j}^{2},\Delta_{k}^{1};r)\}\{\Phi(\Delta_{k}^{2})-\Phi_{2}(\Delta_{j}^{1},\Delta_{k}^{2};r)\} \\ \\ TC: \ -2\Phi_{2}(-\Delta_{j},0;1/\sqrt{2})+4\Phi_{3}(-\Delta_{j},0,0;\Sigma_{3b}(r)) \\ \\ TB: \ 2\{1-\Phi(\Delta_{j})\}\Phi(\Delta_{k})-2\Phi_{3}(-\Delta_{j},\Delta_{k},0;\Sigma_{3c}(r))-2\Phi_{3}(-\Delta_{j},\Delta_{k},0;\Sigma_{3d}(r)) \\ \\ TN: \ -2\Phi(-\Delta_{k}^{1})\Phi(\Delta_{k}^{2}) + 2\Phi_{3}(-\Delta_{k}^{1},\Delta_{k}^{2},\Delta_{j};\Sigma_{3e}(r))+2\Phi_{4}(-\Delta_{k}^{1},\Delta_{k}^{2},-\Delta_{j},0;\Sigma_{4a}(r))+2\Phi_{4}(-\Delta_{k}^{1},\Delta_{k}^{2},-\Delta_{j},0;\Sigma_{4b}(r)) \\ \\ TT: \ -2\Phi_{4}(-\Delta_{j},-\Delta_{k},0,0;\Sigma_{4c}(r))+2\Phi_{4}(-\Delta_{j},-\Delta_{k},0,0;\Sigma_{4d}(r)) \\ \end{cases} \]
where \(\Delta_{j}=\Phi^{-1}(\pi_{0j})\), \(\Delta_{k}=\Phi^{-1}(\pi_{0k})\), \(\Delta_{j}^{1}=\Phi^{-1}(\pi_{0j})\), \(\Delta_{j}^{2}=\Phi^{-1}(\pi_{0j}+\pi_{1j})\), \(\Delta_{k}^{1}=\Phi^{-1}(\pi_{0k})\), \(\Delta_{k}^{2}=\Phi^{-1}(\pi_{0k}+\pi_{1k})\),
\[ \Sigma_{3a}(r)= \begin{pmatrix} 1 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{3b}(r)= \begin{pmatrix} 1 & \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} & 1 & r \\ \frac{r}{\sqrt{2}} & r & 1 \end{pmatrix}, \;\;\; \Sigma_{3c}(r)= \begin{pmatrix} 1 & -r & \frac{1}{\sqrt{2}} \\ -r & 1 & -\frac{r}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, \]
\[ \Sigma_{3d}(r)= \begin{pmatrix} 1 & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{3e}(r)= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}, \;\;\; \Sigma_{4a}(r)= \begin{pmatrix} 1 & 0 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -r & \frac{r}{\sqrt{2}} \\ 0 & -r & 1 & -\frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}, \]
\[ \Sigma_{4b}(r)= \begin{pmatrix} 1 & 0 & r & \frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & \frac{r}{\sqrt{2}} \\ r & 0 & 1 & \frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 1 \end{pmatrix}, \;\;\; \Sigma_{4c}(r)= \begin{pmatrix} 1 & 0 & \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} \\ 0 & 1 & -\frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & 1 & -r \\ -\frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & -r & 1 \end{pmatrix}\;\;\text{and}\;\; \Sigma_{4d}(r)= \begin{pmatrix} 1 & r & \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}} \\ r & 1 & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & \frac{r}{\sqrt{2}} & 1 & r \\ \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & r & 1 \end{pmatrix}. \]
Estimation methods
Given the form of the bridge function \(F\), obtaining the moment-based estimator \(\widehat \sigma_{jk}\) requires inversion of \(F\). latentcor implements two methods for calculating this inversion:
method = "original": numerical inversion of \(F\) (see the subsection describing the original method and the relevant parameter tol)
method = "approx": interpolation of a pre-computed inverse (see the subsection describing the approximation method and the relevant parameter ratio)
Both methods calculate inverse bridge function applied to each element of sample Kendall's \(\tau\) matrix. Because the calculation is performed point-wise (separately for each pair of variables), the resulting point-wise estimator of correlation matrix may not be positive semi-definite. latentcor performs projection of the pointwise-estimator to the space of positive semi-definite matrices, and allows for shrinkage towards identity matrix using the parameter nu (see Subsection describing adjustment of point-wise estimator and relevant parameter nu).
Original method (method = "original")
The original estimation approach relies on numerical inversion of \(F\) by solving a univariate optimization problem. Given the calculated \(\widehat \tau_{jk}\) (sample Kendall's \(\tau\) between variables \(j\) and \(k\)), the estimate of the latent correlation \(\widehat \sigma_{jk}\) is obtained by calling the optimize function to solve the following optimization problem: \[ \widehat r_{jk} = \arg\min_{r} \{F(r) - \widehat \tau_{jk}\}^2. \] The parameter tol controls the desired accuracy of the minimizer and is passed to optimize, with a default precision of 1e-8:
estimate = latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original", tol = 1e-8)
Algorithm for Original method
Input: \(F(r)=F(r, \mathbf{\Delta})\) - bridge function based on the type of variables \(j\), \(k\)
Step 1. Calculate \(\hat{\tau}_{jk}\) using (1).
estimate$K
Step 2. For binary/truncated variable \(j\), set \(\hat{\mathbf{\Delta}}_{j}=\hat{\Delta}_{j}=\Phi^{-1}(\pi_{0j})\) with \(\pi_{0j}=\sum_{i=1}^{n}\frac{I(x_{ij}=0)}{n}\). For ternary variable \(j\), set \(\hat{\mathbf{\Delta}}_{j}=(\hat{\Delta}_{j}^{1}, \hat{\Delta}_{j}^{2})\) where \(\hat{\Delta}_{j}^{1}=\Phi^{-1}(\pi_{0j})\) and \(\hat{\Delta}_{j}^{2}=\Phi^{-1}(\pi_{0j}+\pi_{1j})\) with \(\pi_{0j}=\sum_{i=1}^{n}\frac{I(x_{ij}=0)}{n}\) and \(\pi_{1j}=\sum_{i=1}^{n}\frac{I(x_{ij}=1)}{n}\).
estimate$zratios
#> [[1]]
#> [1] NA
#> [1] 0.5
#> [1] 0.3 0.8
Step 3. Compute \(F^{-1}(\hat{\tau}_{jk})\) as \(\hat{r}_{jk}=\arg\min_{r}\{F(r)-\hat{\tau}_{jk}\}^{2}\), solved via the optimize function in R with accuracy tol.
estimate$Rpointwise
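As a concrete illustration of this inversion step, the following toy re-implementation inverts the binary/continuous (BC) bridge \(F_{BC}(r)=4\Phi_{2}(\Delta_{j},0;r/\sqrt{2})-2\Phi(\Delta_{j})\) numerically; it uses mvtnorm::pmvnorm for \(\Phi_{2}\) and optimize for the minimization, and is a sketch written for this vignette rather than the code path used inside latentcor. (For the CC entry no numerical inversion is needed, since \(F_{CC}^{-1}(\tau)=\sin(\pi\tau/2)\) is available in closed form.)
library(mvtnorm)
Phi2 <- function(a, b, rho) {  # bivariate standard normal CDF Phi_2(a, b; rho)
  pmvnorm(upper = c(a, b), corr = matrix(c(1, rho, rho, 1), 2, 2))[1]
}
F_bc <- function(r, delta_j) 4 * Phi2(delta_j, 0, r / sqrt(2)) - 2 * pnorm(delta_j)
invert_bridge_bc <- function(tau, delta_j, tol = 1e-8) {
  optimize(function(r) (F_bc(r, delta_j) - tau)^2,
           interval = c(-0.99, 0.99), tol = tol)$minimum
}
invert_bridge_bc(tau = 0.25, delta_j = qnorm(0.5))  # proportion of zeros 0.5, tau = 0.25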
Approximation method (method = "approx")
A faster approximation method is based on multi-linear interpolation of pre-computed inverse bridge function on a fixed grid of points (Yoon, Müller, and Gaynanova 2021). This is possible as the inverse bridge function is an analytic function of at most 5 parameters:
Kendall's \(\tau\)
Proportion of zeros in the 1st variable
(Possibly) proportion of zeros and ones in the 1st variable
(Possibly) proportion of zeros in the 2nd variable
(Possibly) proportion of zeros and ones in the 2nd variable
In short, d-dimensional multi-linear interpolation uses a weighted average of \(2^{d}\) neighbors to approximate the function value at any point within the d-dimensional cube spanned by those neighbors; to perform the interpolation, latentcor takes advantage of the R package chebpol (Gaure 2019). This approximation method was first described in Yoon, Müller, and Gaynanova (2021) for the continuous/binary/truncated cases. In latentcor, we additionally implement the ternary case, and optimize the choice of grid as well as the interpolation boundary for faster computations with a smaller memory footprint.
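To make the interpolation idea concrete, the sketch below hand-rolls the d = 2 (bilinear) case on a toy grid; bilinear is an illustrative helper written for this vignette that mimics what a multilinear interpolant does, it is not the chebpol-based code used by latentcor.
bilinear <- function(x, y, xg, yg, f_vals) {
  # f_vals[i, j] stores f(xg[i], yg[j]); the query point (x, y) must lie inside the grid
  i <- findInterval(x, xg, all.inside = TRUE)
  j <- findInterval(y, yg, all.inside = TRUE)
  tx <- (x - xg[i]) / (xg[i + 1] - xg[i])  # relative position along x
  ty <- (y - yg[j]) / (yg[j + 1] - yg[j])  # relative position along y
  (1 - tx) * (1 - ty) * f_vals[i, j] + tx * (1 - ty) * f_vals[i + 1, j] +
    (1 - tx) * ty * f_vals[i, j + 1] + tx * ty * f_vals[i + 1, j + 1]
}
xg <- seq(0, 1, by = 0.1); yg <- seq(0, 1, by = 0.1)
f_vals <- outer(xg, yg, function(a, b) sin(pi * a) * b^2)  # pre-computed function values
bilinear(0.33, 0.74, xg, yg, f_vals)  # interpolated value
sin(pi * 0.33) * 0.74^2               # exact value, for comparison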
estimate = latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx")
Algorithm for Approximation method
Input: Let \(\check{g}=h(g)\), pre-computed values \(F^{-1}(h^{-1}(\check{g}))\) on a fixed grid \(\check{g}\in\check{\cal{G}}\) based on the type of variables \(j\) and \(k\). For binary/continuous case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j})\); for binary/binary case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}, \check{\Delta}_{k})\); for truncated/continuous case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j})\); for truncated/truncated case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}, \check{\Delta}_{k})\); for ternary/continuous case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}^{1}, \check{\Delta}_{j}^{2})\); for ternary/binary case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}^{1}, \check{\Delta}_{j}^{2}, \check{\Delta}_{k})\); for ternary/truncated case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}^{1}, \check{\Delta}_{j}^{2}, \check{\Delta}_{k})\); for ternay/ternary case, \(\check{g}=(\check{\tau}_{jk}, \check{\Delta}_{j}^{1}, \check{\Delta}_{j}^{2}, \check{\Delta}_{k}^{1}, \check{\Delta}_{k}^{2})\).
Step 1 and Step 2 same as Original method.
Step 3. If \(|\hat{\tau}_{jk}|\le \mbox{ratio}\times \bar{\tau}_{jk}(\cdot)\), apply interpolation; otherwise apply Original method.
To avoid interpolation in areas with high approximation errors close to the boundary, we use a hybrid scheme in Step 3. The parameter ratio controls the size of the region where the interpolation is performed (ratio = 0 means no interpolation, ratio = 1 means interpolation is always performed). For the derivation of the approximate bound for the BC, BB, TC, TB and TT cases see Yoon, Müller, and Gaynanova (2021). The derivation of the approximate bound for the NC, NB, NN and NT cases is in the Appendix.
\[ \bar{\tau}_{jk}(\cdot)= \begin{cases} 2\pi_{0j}(1-\pi_{0j}) & for \; BC \; case\\ 2\min(\pi_{0j},\pi_{0k})\{1-\max(\pi_{0j}, \pi_{0k})\} & for \; BB \; case\\ 2\{\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j})\} & for \; NC \; case\\ 2\min(\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j}),\pi_{0k}(1-\pi_{0k})) & for \; NB \; case\\ 2\min(\pi_{0j}(1-\pi_{0j})+\pi_{1j}(1-\pi_{0j}-\pi_{1j}), \\ \;\;\;\;\;\;\;\;\;\;\pi_{0k}(1-\pi_{0k})+\pi_{1k}(1-\pi_{0k}-\pi_{1k})) & for \; NN \; case\\ 1-(\pi_{0j})^{2} & for \; TC \; case\\ 2\max(\pi_{0k},1-\pi_{0k})\{1-\max(\pi_{0k},1-\pi_{0k},\pi_{0j})\} & for \; TB \; case\\ 1-\{\max(\pi_{0j},\pi_{0k},\pi_{1k},1-\pi_{0k}-\pi_{1k})\}^{2} & for \; TN \; case\\ 1-\{\max(\pi_{0j},\pi_{0k})\}^{2} & for \; TT \; case\\ \end{cases} \]
By default, latentcor uses ratio = 0.9, as this value was recommended in Yoon, Müller, and Gaynanova (2021) as having a good balance of accuracy and computational speed. This value, however, can be modified by the user.
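To make the role of ratio concrete, here is a small sketch of the hybrid decision for two of the cases above (BC and NC); tau_bar_bc, tau_bar_nc and use_interpolation are illustrative helpers written for this vignette, not exported latentcor functions.
tau_bar_bc <- function(pi0j) 2 * pi0j * (1 - pi0j)  # BC bound from the display above
tau_bar_nc <- function(pi0j, pi1j) 2 * (pi0j * (1 - pi0j) + pi1j * (1 - pi0j - pi1j))  # NC bound
use_interpolation <- function(tau_hat, tau_bar, ratio = 0.9) abs(tau_hat) <= ratio * tau_bar
use_interpolation(tau_hat = 0.30, tau_bar = tau_bar_bc(pi0j = 0.5))  # TRUE: interpolate
use_interpolation(tau_hat = 0.48, tau_bar = tau_bar_bc(pi0j = 0.5))  # FALSE: fall back to "original"
use_interpolation(tau_hat = 0.45, tau_bar = tau_bar_nc(pi0j = 0.3, pi1j = 0.5))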
latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.99)$R
latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.4)$R
latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R
The lower the ratio, the closer the approximation method is to the original method (with ratio = 0 being equivalent to method = "original"), but also the higher the computational cost.
microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.99)$R)
#> Unit: milliseconds
#> expr
#> latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.99)$R
#> min lq mean median uq max neval
#> 1.639 1.6983 1.822794 1.7386 1.8939 2.6955 100
microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.4)$R)
#> expr
#> latentcor(X, types = c("con", "bin", "ter", "tru"), method = "approx", ratio = 0.4)$R
#> min lq mean median uq max neval
#> 3.2896 3.4053 3.586645 3.50665 3.6126 7.0284 100
microbenchmark(latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R)
#> expr
#> latentcor(X, types = c("con", "bin", "ter", "tru"), method = "original")$R
#> min lq mean median uq max neval
#> 29.2739 29.4958 29.97854 29.6599 29.96185 39.6511 100
Rescaled Grid for Interpolation
Since \(|\hat{\tau}|\le \bar{\tau}\), the grid does not need to cover the whole domain \(\tau\in[-1, 1]\). To optimize memory associated with storing the grid, we rescale \(\tau\) as follows: \(\check{\tau}_{jk}=\tau_{jk}/\bar{\tau}_{jk}\in[-1, 1]\), where \(\bar{\tau}_{jk}\) is as defined above.
In addition, for a ternary variable \(j\), it always holds that \(\Delta_{j}^{2}>\Delta_{j}^{1}\) since \(\Delta_{j}^{1}=\Phi^{-1}(\pi_{0j})\) and \(\Delta_{j}^{2}=\Phi^{-1}(\pi_{0j}+\pi_{1j})\). Thus, the grid does not need to cover the area corresponding to \(\Delta_{j}^{2}\le\Delta_{j}^{1}\). We thus rescale as follows: \(\check{\Delta}_{j}^{1}=\Delta_{j}^{1}/\Delta_{j}^{2}\in[0, 1]\); \(\check{\Delta}_{j}^{2}=\Delta_{j}^{2}\in[0, 1]\).
Speed Comparison
To illustrate the speed improvement by method = "approx", we plot the run time scaling behavior of method = "approx" and method = "original" (setting types for gen_data by replicating c("con", "bin", "ter", "tru") multiple times) with increasing dimensions \(p = [20, 40, 100, 200, 400]\) at sample size \(n = 100\) using simulation data. Figure below summarizes the observed scaling in a log-log plot. For both methods we observe the expected \(O(p^2)\) scaling behavior with dimension p, i.e., a linear scaling in the log-log plot. However, method = "approx" is at least one order of magnitude faster than method = "original" independent of the dimension of the problem.
Adjustment of pointwise-estimator for positive-definiteness
Since the estimation is performed point-wise, the resulting matrix of estimated latent correlations is not guaranteed to be positive semi-definite. For example, this can be expected when the sample size is small (so that the estimation error for each pairwise correlation is larger):
X = gen_data(n = 6, types = c("con", "bin", "ter", "tru"))$X
out = latentcor(X, types = c("con", "bin", "ter", "tru"))
out$Rpointwise
#> [,1] [,2] [,3] [,4]
#> [1,] 1.0000000 -0.1477240 0.9990000 0.8548518
#> [2,] -0.1477240 1.0000000 0.3523666 -0.5030324
#> [3,] 0.9990000 0.3523666 1.0000000 0.9114307
eigen(out$Rpointwise)$values
#> [1] 2.85954424 1.29130852 0.09944544 -0.25029820
latentcor automatically corrects the pointwise estimator to be positive definite by making two adjustments. First, if Rpointwise has a smallest eigenvalue less than zero, latentcor projects this matrix to the nearest positive semi-definite matrix. The user is notified of this adjustment through a message (suppressed in the previous code chunk).
Second, latentcor shrinks the adjusted correlation matrix towards the identity matrix using the parameter \(\nu\) with a default value of 0.001 (nu = 0.001), so that the resulting R is strictly positive definite with its minimal eigenvalue greater than or equal to \(\nu\). That is, \[ R = (1 - \nu) \widetilde R + \nu I, \] where \(\widetilde R\) is the nearest positive semi-definite matrix to Rpointwise.
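The two adjustments can be sketched with Matrix::nearPD followed by a convex combination with the identity; this is an illustration of the formula above (adjust_pointwise is a helper written for this vignette), not necessarily the exact internal implementation of latentcor.
library(Matrix)
adjust_pointwise <- function(R_pointwise, nu = 0.001) {
  R_tilde <- as.matrix(nearPD(R_pointwise, corr = TRUE)$mat)  # nearest PSD correlation matrix
  (1 - nu) * R_tilde + nu * diag(nrow(R_tilde))               # shrink towards the identity
}
R_adj <- adjust_pointwise(out$Rpointwise)
min(eigen(R_adj)$values)  # now bounded away from zero (roughly by nu)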
out = latentcor(X, types = c("con", "bin", "ter", "tru"), nu = 0.001)
out$R
As a result, R and Rpointwise could be quite different when sample size \(n\) is small. When \(n\) is large and \(p\) is moderate, the difference is typically driven by parameter nu.
Derivation of bridge function \(F\) for ternary/truncated case
Without loss of generality, let \(j=1\) and \(k=2\). By the definition of Kendall's \(\tau\), \[ \tau_{12}=E(\hat{\tau}_{12})=E[\frac{2}{n(n-1)}\sum_{1\leq i\leq i' \leq n} sign\{(X_{i1}-X_{i'1})(X_{i2}-X_{i'2})\}]. \] Since \(X_{1}\) is ternary, \[\begin{align} &sign(X_{1}-X_{1}') \nonumber\\ =&[I(U_{1}>C_{11},U_{1}'\leq C_{11})+I(U_{1}>C_{12},U_{1}'\leq C_{12})-I(U_{1}>C_{12},U_{1}'\leq C_{11})] \nonumber\\ &-[I(U_{1}\leq C_{11}, U_{1}'>C_{11})+I(U_{1}\leq C_{12}, U_{1}'>C_{12})-I(U_{1}\leq C_{11}, U_{1}'>C_{12})] \nonumber\\ =&[I(U_{1}>C_{11})-I(U_{1}>C_{11},U_{1}'>C_{11})+I(U_{1}>C_{12})-I(U_{1}>C_{12},U_{1}'>C_{12}) \nonumber\\ &-I(U_{1}>C_{12})+I(U_{1}>C_{12},U_{1}'>C_{11})] \nonumber\\ &-[I(U_{1}'>C_{11})-I(U_{1}>C_{11},U_{1}'>C_{11})+I(U_{1}'>C_{12})-I(U_{1}>C_{12},U_{1}'>C_{12}) \nonumber\\ &-I(U_{1}'>C_{12})+I(U_{1}>C_{11},U_{1}'>C_{12})] \nonumber\\ =&I(U_{1}>C_{11})+I(U_{1}>C_{12},U_{1}'>C_{11})-I(U_{1}'>C_{11})-I(U_{1}>C_{11},U_{1}'>C_{12}) \nonumber\\ =&I(U_{1}>C_{11},U_{1}'\leq C_{12})-I(U_{1}'>C_{11},U_{1}\leq C_{12}). \end{align}\] Since \(X_{2}\) is truncated, \(C_{1}>0\) and \[\begin{align} sign(X_{2}-X_{2}')=&-I(X_{2}=0,X_{2}'>0)+I(X_{2}>0,X_{2}'=0) \nonumber\\ &+I(X_{2}>0,X_{2}'>0)sign(X_{2}-X_{2}') \nonumber\\ =&-I(X_{2}=0)+I(X_{2}'=0)+I(X_{2}>0,X_{2}'>0)sign(X_{2}-X_{2}'). \end{align}\] Since \(f\) is monotonically increasing, \(sign(X_{2}-X_{2}')=sign(Z_{2}-Z_{2}')\), \[\begin{align} \tau_{12}=&E[I(U_{1}>C_{11},U_{1}'\leq C_{12}) sign(X_{2}-X_{2}')] \nonumber\\ &-E[I(U_{1}'>C_{11},U_{1}\leq C_{12}) sign(X_{2}-X_{2}')] \nonumber\\ =&-E[I(U_{1}>C_{11},U_{1}'\leq C_{12}) I(X_{2}=0)] \nonumber\\ &+E[I(U_{1}>C_{11},U_{1}'\leq C_{12}) I(X_{2}'=0)] \nonumber\\ &+E[I(U_{1}>C_{11},U_{1}'\leq C_{12})I(X_{2}>0,X_{2}'>0)sign(Z_{2}-Z_{2}')] \nonumber\\ &+E[I(U_{1}'>C_{11},U_{1}\leq C_{12}) I(X_{2}=0)] \nonumber\\ &-E[I(U_{1}'>C_{11},U_{1}\leq C_{12}) I(X_{2}'=0)] \nonumber\\ &-E[I(U_{1}'>C_{11},U_{1}\leq C_{12})I(X_{2}>0,X_{2}'>0)sign(Z_{2}-Z_{2}')] \nonumber\\ =&-2E[I(U_{1}>C_{11},U_{1}'\leq C_{12}) I(X_{2}=0)] \nonumber\\ &+2E[I(U_{1}>C_{11},U_{1}'\leq C_{12}) I(X_{2}'=0)] \nonumber\\ &+E[I(U_{1}>C_{11},U_{1}'\leq C_{12})I(X_{2}>0,X_{2}'>0)sign(Z_{2}-Z_{2}')] \nonumber\\ &-E[I(U_{1}'>C_{11},U_{1}\leq C_{12})I(X_{2}>0,X_{2}'>0)sign(Z_{2}-Z_{2}')]. \end{align}\] From the definition of \(U\), let \(Z_{j}=f_{j}(U_{j})\) and \(\Delta_{j}=f_{j}(C_{j})\) for \(j=1,2\). Using \(sign(x)=2I(x>0)-1\), we obtain \[\begin{align} \tau_{12}=&-2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq \Delta_{12},Z_{2}\leq \Delta_{2})]+2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq \Delta_{12},Z_{2}'\leq \Delta_{2})] \nonumber\\ &+2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq \Delta_{12})I(Z_{2}>\Delta_{2},Z_{2}'>\Delta_{2},Z_{2}-Z_{2}'>0)] \nonumber\\ &-2E[I(Z_{1}'>\Delta_{11},Z_{1}\leq \Delta_{12})I(Z_{2}>\Delta_{2},Z_{2}'>\Delta_{2},Z_{2}-Z_{2}'>0)] \nonumber\\ =&-2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq \Delta_{12}, Z_{2}\leq \Delta_{2})]+2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq \Delta_{12}, Z_{2}'\leq \Delta_{2})] \nonumber\\ &+2E[I(Z_{1}>\Delta_{11},Z_{1}'\leq\Delta_{12},Z_{2}'>\Delta_{2},Z_{2}>Z_{2}')] \nonumber\\ &-2E[I(Z_{1}'>\Delta_{11},Z_{1}\leq\Delta_{12},Z_{2}'>\Delta_{2},Z_{2}>Z_{2}')]. 
\end{align}\] Since \(\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, -Z{1}\}\), \(\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, Z{1}'\}\) and \(\{\frac{Z_{2}'-Z_{2}}{\sqrt{2}}, -Z{2}'\}\) are standard bivariate normally distributed variables with correlation \(-\frac{1}{\sqrt{2}}\), \(r/\sqrt{2}\) and \(-\frac{r}{\sqrt{2}}\), respectively, by the definition of \(\Phi_3(\cdot,\cdot, \cdot;\cdot)\) and \(\Phi_4(\cdot,\cdot, \cdot,\cdot;\cdot)\) we have \[\begin{align} F_{NT}(r;\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k})= & -2\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k};\begin{pmatrix} 1 & 0 & -r \\ 0 & 1 & 0 \\ -r & 0 & 1 \end{pmatrix} \right\} \nonumber\\ &+2\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k};\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}\right\}\nonumber \\ & +2\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -r & \frac{r}{\sqrt{2}} \\ 0 & -r & 1 & -\frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\} \nonumber\\ &-2\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & r & -\frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & -\frac{r}{\sqrt{2}} \\ r & 0 & 1 & -\frac{1}{\sqrt{2}} \\ -\frac{r}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\}. \end{align}\] Using the facts that \[\begin{align} &\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & r & -\frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & -\frac{r}{\sqrt{2}} \\ r & 0 & 1 & -\frac{1}{\sqrt{2}} \\ -\frac{r}{\sqrt{2}} & -\frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\} \nonumber\\ &+\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & r & \frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & \frac{r}{\sqrt{2}} \\ r & 0 & 1 & \frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\} \nonumber\\ =&\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k};\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}\right\} \end{align}\] and \[\begin{align} &\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k};\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}\right\}+\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k};\begin{pmatrix} 1 & 0 & -r \\ 0 & 1 & 0 \\ -r & 0 & 1 \end{pmatrix} \right\} \nonumber\\ =&\Phi_{2}(-\Delta_{j}^{1},\Delta_{j}^{2};0) =\Phi(-\Delta_{j}^{1})\Phi(\Delta_{j}^{2}). \end{align}\] So that, \[\begin{align} F_{NT}(r;\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k})= & -2\Phi(-\Delta_{j}^{1})\Phi(\Delta_{j}^{2}) \nonumber\\ &+2\Phi_{3}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},\Delta_{k};\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & r \\ 0 & r & 1 \end{pmatrix}\right\}\nonumber \\ & +2\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & 0 & \frac{r}{\sqrt{2}} \\ 0 & 1 & -r & \frac{r}{\sqrt{2}} \\ 0 & -r & 1 & -\frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\} \nonumber\\ &+2\Phi_{4}\left\{-\Delta_{j}^{1},\Delta_{j}^{2},-\Delta_{k},0;\begin{pmatrix} 1 & 0 & r & \frac{r}{\sqrt{2}} \\ 0 & 1 & 0 & \frac{r}{\sqrt{2}} \\ r & 0 & 1 & \frac{1}{\sqrt{2}} \\ \frac{r}{\sqrt{2}} & \frac{r}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 1 \end{pmatrix}\right\}. \end{align}\]
It is easy to get the bridge function for truncated/ternary case by switching \(j\) and \(k\).
Derivation of approximate bound for the ternary/continuous case
Let \(n_{0x}=\sum_{i=1}^{n_x}I(x_{i}=0)\), \(n_{2x}=\sum_{i=1}^{n_x}I(x_{i}=2)\), \(\pi_{0x}=\frac{n_{0x}}{n_{x}}\) and \(\pi_{2x}=\frac{n_{2x}}{n_{x}}\), then \[\begin{align} |\tau(\mathbf{x})|\leq & \frac{n_{0x}(n-n_{0x})+n_{2x}(n-n_{0x}-n_{2x})}{\begin{pmatrix} n \\ 2 \end{pmatrix}} \nonumber\\ = & 2\{\frac{n_{0x}}{n-1}-(\frac{n_{0x}}{n})(\frac{n_{0x}}{n-1})+\frac{n_{2x}}{n-1}-(\frac{n_{2x}}{n})(\frac{n_{0x}}{n-1})-(\frac{n_{2x}}{n})(\frac{n_{2x}}{n-1})\} \nonumber\\ \approx & 2\{\frac{n_{0x}}{n}-(\frac{n_{0x}}{n})^2+\frac{n_{2x}}{n}-(\frac{n_{2x}}{n})(\frac{n_{0x}}{n})-(\frac{n_{2x}}{n})^2\} \nonumber\\ = & 2\{\pi_{0x}(1-\pi_{0x})+\pi_{2x}(1-\pi_{0x}-\pi_{2x})\} \end{align}\]
For ternary/binary and ternary/ternary cases, we combine the two individual bounds.
Derivation of approximate bound for the ternary/truncated case
Let \(\mathbf{x}\in\mathcal{R}^{n}\) and \(\mathbf{y}\in\mathcal{R}^{n}\) be the observed \(n\) realizations of ternary and truncated variables, respectively. Let \(n_{0x}=\sum_{i=0}^{n}I(x_{i}=0)\), \(\pi_{0x}=\frac{n_{0x}}{n}\), \(n_{1x}=\sum_{i=0}^{n}I(x_{i}=1)\), \(\pi_{1x}=\frac{n_{1x}}{n}\), \(n_{2x}=\sum_{i=0}^{n}I(x_{i}=2)\), \(\pi_{2x}=\frac{n_{2x}}{n}\), \(n_{0y}=\sum_{i=0}^{n}I(y_{i}=0)\), \(\pi_{0y}=\frac{n_{0y}}{n}\), \(n_{0x0y}=\sum_{i=0}^{n}I(x_{i}=0 \;\& \; y_{i}=0)\), \(n_{1x0y}=\sum_{i=0}^{n}I(x_{i}=1 \;\& \; y_{i}=0)\) and \(n_{2x0y}=\sum_{i=0}^{n}I(x_{i}=2 \;\& \; y_{i}=0)\) then \[\begin{align} |\tau(\mathbf{x}, \mathbf{y})|\leq & \frac{\begin{pmatrix}n \\ 2\end{pmatrix}-\begin{pmatrix}n_{0x} \\ 2\end{pmatrix}-\begin{pmatrix}n_{1x} \\ 2\end{pmatrix}-\begin{pmatrix} n_{2x} \\ 2 \end{pmatrix}-\begin{pmatrix}n_{0y} \\ 2\end{pmatrix}+\begin{pmatrix}n_{0x0y} \\ 2 \end{pmatrix}+\begin{pmatrix}n_{1x0y} \\ 2\end{pmatrix}+\begin{pmatrix}n_{2x0y} \\ 2\end{pmatrix}}{\begin{pmatrix}n \\ 2\end{pmatrix}} \nonumber \end{align}\] Since \(n_{0x0y}\leq\min(n_{0x},n_{0y})\), \(n_{1x0y}\leq\min(n_{1x},n_{0y})\) and \(n_{2x0y}\leq\min(n_{2x},n_{0y})\) we obtain \[\begin{align} |\tau(\mathbf{x}, \mathbf{y})|\leq & \frac{\begin{pmatrix}n \\ 2\end{pmatrix}-\begin{pmatrix}n_{0x} \\ 2\end{pmatrix}-\begin{pmatrix}n_{1x} \\ 2\end{pmatrix}-\begin{pmatrix} n_{2x} \\ 2 \end{pmatrix}-\begin{pmatrix}n_{0y} \\ 2\end{pmatrix}}{\begin{pmatrix}n \\ 2\end{pmatrix}} \nonumber\\ & + \frac{\begin{pmatrix}\min(n_{0x},n_{0y}) \\ 2 \end{pmatrix}+\begin{pmatrix}\min(n_{1x},n_{0y}) \\ 2\end{pmatrix}+\begin{pmatrix}\min(n_{2x},n_{0y}) \\ 2\end{pmatrix}}{\begin{pmatrix}n \\ 2\end{pmatrix}} \nonumber\\ \leq & \frac{\begin{pmatrix}n \\ 2\end{pmatrix}-\begin{pmatrix}\max(n_{0x},n_{1x},n_{2x},n_{0y}) \\ 2\end{pmatrix}}{\begin{pmatrix}n \\ 2\end{pmatrix}} \nonumber\\ \leq & 1-\frac{\max(n_{0x},n_{1x},n_{2x},n_{0y})(\max(n_{0x},n_{1x},n_{2x},n_{0y})-1)}{n(n-1)} \nonumber\\ \approx & 1-(\frac{\max(n_{0x},n_{1x},n_{2x},n_{0y})}{n})^{2} \nonumber\\ =& 1-\{\max(\pi_{0x},\pi_{1x},\pi_{2x},\pi_{0y})\}^{2} \nonumber\\ =& 1-\{\max(\pi_{0x},(1-\pi_{0x}-\pi_{2x}),\pi_{2x},\pi_{0y})\}^{2} \end{align}\]
It is easy to get the approximate bound for truncated/ternary case by switching \(\mathbf{x}\) and \(\mathbf{y}\).
Croux, Christophe, Peter Filzmoser, and Heinrich Fritz. 2013. "Robust Sparse Principal Component Analysis." Technometrics 55 (2): 202–14.
Fan, Jianqing, Han Liu, Yang Ning, and Hui Zou. 2017. "High Dimensional Semiparametric Latent Graphical Model for Mixed Data." Journal of the Royal Statistical Society. Series B: Statistical Methodology 79 (2): 405–21.
Filzmoser, Peter, Heinrich Fritz, and Klaudius Kalcher. 2021. pcaPP: Robust PCA by Projection Pursuit. https://CRAN.R-project.org/package=pcaPP.
Fox, John. 2019. Polycor: Polychoric and Polyserial Correlations. https://CRAN.R-project.org/package=polycor.
Gaure, Simen. 2019. Chebpol: Multivariate Interpolation. https://github.com/sgaure/chebpol.
Liu, Han, John Lafferty, and Larry Wasserman. 2009. "The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs." Journal of Machine Learning Research 10 (10).
Quan, Xiaoyun, James G Booth, and Martin T Wells. 2018. "Rank-Based Approach for Estimating Correlations in Mixed Ordinal Data." arXiv Preprint arXiv:1809.06255.
Yoon, Grace, Raymond J Carroll, and Irina Gaynanova. 2020. "Sparse Semiparametric Canonical Correlation Analysis for Data of Mixed Types." Biometrika 107 (3): 609–25.
Yoon, Grace, Christian L Müller, and Irina Gaynanova. 2021. "Fast Computation of Latent Correlations." Journal of Computational and Graphical Statistics, 1–8. | CommonCrawl |
High-harmonic generation in metallic titanium nitride
A. Korobenko1, S. Saha2, A. T. K. Godfrey1, M. Gertsvolf3, A. Yu. Naumov1, D. M. Villeneuve1, A. Boltasseva2, V. M. Shalaev2 & P. B. Corkum1
Nature Communications volume 12, Article number: 4981 (2021)
Subjects: High-harmonic generation; Nonlinear optics
High-harmonic generation is a cornerstone of nonlinear optics. It has been demonstrated in dielectrics, semiconductors, semi-metals, plasmas, and gases, but, until now, not in metals. Here we report high harmonics of 800-nm-wavelength light irradiating metallic titanium nitride film. Titanium nitride is a refractory metal known for its high melting temperature and large laser damage threshold. We show that it can withstand few-cycle light pulses with peak intensities as high as 13 TW/cm2, enabling high-harmonics generation up to photon energies of 11 eV. We measure the emitted vacuum ultraviolet radiation as a function of the crystal orientation with respect to the laser polarization and show that it is consistent with the anisotropic conduction band structure of titanium nitride. The generation of high harmonics from metals opens a link between solid and plasma harmonics. In addition, titanium nitride is a promising material for refractory plasmonic devices and could enable compact vacuum ultraviolet frequency combs.
When intense light irradiates a transparent material, harmonics are generated by the bound electrons or laser-generated free electrons1,2,3,4,5,6,7,8,9. The former is the realm of perturbative nonlinear optics while the latter are responsible for extreme nonlinear optics. Free electron related harmonics are primarily due to newly created free electrons that either recombine after a brief interval in the continuum (interband), or after creation, move non-harmonically on the complex bands of the material (intraband). Experiments indicate that, for near-infrared radiation, pre-existing free electrons are not a significant source10.
In contrast, when normally incident light irradiates a plasma, the high density of free electrons keeps the light out of the material by reflecting it. The phase of the reflected light from a dense plasma is such that it forms a standing wave with a node at the plasma surface. High harmonics from plasmas, observed in many experiments, arise from p-polarized light where electrons are extracted from the surface and the surface discontinuity plays a critical role. A metal, with its high density of electrons, shares many characteristics of plasmas, but the lattice, the resulting band structure and band filling, cannot be ignored.
In this paper, we experimentally study the damage threshold of the epitaxial films of the refractory metal, titanium nitride (TiN). We show that, although lower than expected based on the lattice melting, thermal transport, and light absorption in the material, the damage threshold is still high11,12,13, enabling us to observe high harmonics. We find harmonics of 800 nm light reaching 11 eV with brightness comparable to those from magnesium oxide (MgO), a high melting point dielectric, irradiated with the same intensity. Thus, metals can produce high harmonics. We propose that they will occur universally in hard-to-damage bulk metals irradiated with few-cycle pulses.
Because the motion of the conduction electrons is responsible for the plasma response in metals, we develop a simple model, considering the oscillation of the Fermi sea of the laser-driven electrons in a single conduction band of TiN. Extracting the band structure from density functional theory calculations, we use this model to qualitatively predict the angle dependence of the anharmonic motion as the laser polarization is rotated with respect to the lattice structure of the solid. The agreement between the prediction and experiment suggests that the average response of the electrons on the TiN conduction band is an important component of a complete theory.
Damage threshold
Figure 1 shows the layout of the optical setup. To determine the damage threshold, we block the laser beam and adjust its peak intensity with a wire grid polarizer pair. Once the power of the beam is established, it is unblocked, irradiating the film with 60,000 laser pulses. The sample, a 200 nm-thick TiN film epitaxially grown on an MgO substrate (see the "Methods" section for details on sample preparation), is then translated by 100 μm to a new spot, and the procedure is repeated with a different pulse intensity. After scanning a range of intensities, we removed the sample from the vacuum chamber and inspected it under an optical microscope (Fig. 2a) and an atomic force microscope (AFM) (Fig. 2b).
Fig. 1: Experimental setup.
A 2.3-cycle laser pulse (central wavelength 770 nm) was passed through two wire grid polarizers and a half-wave plate. It was focused with a focusing mirror onto the TiN sample inside a vacuum chamber. The sample was mounted on a motorized XY stage, allowing its translation without realigning the optics. The generated high-harmonics radiation (HHG) passed through a slit, diffracted from a curved VUV grating, and reached the imaging microchannel plate (MCP) detector. The observed VUV spectrum was imaged with a CCD camera.
Fig. 2: Damage threshold measurement.
a Optical microscope image of the irradiated spots on the TiN surface. Numbers 1 through 5 indicate the spots corresponding with the peak field intensities of 12, 13, 17, 21, and 24 TW/cm2 respectively. We observed modification starting from spot #2, and the film appeared stripped, with the underlying MgO exposed at spots #3, #4 and #5. b AFM image of spot #4 reveals a ~150 nm-deep crater, surrounded by a halo of swollen TiN material. The bottom of the crater shows a 40-times increase in surface roughness (17 nm RMS), compared to the unmodified region of the sample (0.4 nm RMS), also showing scattered chunks of material with a characteristic size of 100 nm. The two blue dashed-dotted lines are the contour lines of the independently measured incident beam profile, corresponding to the peak intensity of 13 and 15 TW/cm2. These contours set the thresholds for material modification and removal, respectively.
Comparing the images with the independently measured incident beam profile, we determined the intensity thresholds to be 13 TW/cm2 and 15 TW/cm2 for TiN modification and ablation, respectively. Damage in pristine MgO was observed at around 50 TW/cm2. Using the two-temperature model approach14,15 for photo-induced damage in metals and the TiN thermodynamic constants reported previously16, we estimated the heat deposition depth \({x}_{\rm R}=180\,{\rm nm}\). This corresponds to the thickness of the TiN layer in which the hot electrons rethermalize with the lattice. The melting temperature of 3,203 K in this layer is reached at an absorbed fluence of 0.23 J/cm2, which is more than an order of magnitude higher than the experimental threshold fluence of 0.021 J/cm2.
Surprisingly enough, even if we assume that the electrons thermalize with the lattice instantaneously, in which case the heat deposition length is determined by the (spectral-averaged) TiN absorption length \({x}_{\rm abs}=33\,{\rm nm}\), we still get an overestimated damage threshold fluence of 0.043 J/cm2. This suggests non-thermal damage, such as hot-electron blast force17,18. However, further study is required to confirm this hypothesis.
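For reference, the estimates above are based on the standard two-temperature description of laser-heated metals, in which the electron and lattice subsystems carry separate temperatures coupled through the electron-phonon coupling constant (the generic form is given below in our notation; the material parameters used in the actual estimate are those of ref. 16):
$$C_{e}(T_{e})\,\frac{\partial T_{e}}{\partial t}=\frac{\partial}{\partial z}\!\left(\kappa_{e}\,\frac{\partial T_{e}}{\partial z}\right)-G\,(T_{e}-T_{l})+S(z,t),\qquad C_{l}\,\frac{\partial T_{l}}{\partial t}=G\,(T_{e}-T_{l}),$$
where \(T_{e}\) and \(T_{l}\) are the electron and lattice temperatures, \(C_{e}\) and \(C_{l}\) the corresponding heat capacities, \(\kappa_{e}\) the electron thermal conductivity, \(G\) the electron-phonon coupling constant, and \(S(z,t)\) the absorbed laser source term.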
High harmonic generation
Although its damage threshold is lower than predicted by the two-temperature model, TiN was still able to withstand an order of magnitude higher incident energy than gold for a similar laser pulse19. In addition, its relatively low reflection coefficient of 85% allowed us to reach a high enough intensity inside the films to observe high harmonics.
The harmonic radiation was emitted in the specular direction to the impinging beam. We collected it in a vacuum ultraviolet (VUV) spectrometer (Fig. 1). With the laser polarization along the [100] crystal direction, we set the laser peak intensity to 12 TW/cm2 and recorded the resulting VUV spectrum, shown in Fig. 3 with an orange line. We calculate the spectral-averaged transmission of our 200 nm-thick film to be 10−4, eliminating the possible effect of underlying substrate. Harmonic orders HH5 and HH7 (8.4 and 11.8 eV photon energy, respectively) were observed at the intensities below the TiN damage threshold. They were similar in intensity to the reference harmonics from MgO (measured under the same conditions) (Fig. 3, blue line). In addition to HH5 and HH7, harmonic HH9 was also observed from MgO at the intensity range from 10 TW/cm2 to 15 TW/cm2.
Fig. 3: High harmonic spectra.
Both the TiN (orange line) and bare MgO substrate (blue line) spectra were taken at incident laser peak intensity of 12 TW/cm2.
Keeping the polarization direction fixed along the [100] crystal direction, we collected a set of spectra, varying the laser pulse attenuation with a wire grid polarizer. Figure 4 summarizes the intensity dependence of the integrated harmonic yield. HH5 and HH7 seem to follow the power laws \(I^{5}\) and \(I^{7}\) (dashed red and magenta lines in Fig. 4), respectively, as a function of the laser intensity \(I\). At the intensity of 13 TW/cm2, marked with the green arrow, the monotonic increase of the TiN harmonics gives way to a decrease as material modification occurs. At intensities greater than 15 TW/cm2, marked with the red arrow, the laser radiation ablates the TiN film, revealing the underlying substrate. As a result, the signal at this intensity is dominated by harmonics generated from the MgO under the thinned-out and stripped TiN film at the bottom of the damage crater, and the HH7 curve is following the seventh harmonic intensity scaling we observe in bare MgO (attenuated due to partial absorption in the leftover TiN). The same effect is not observed for HH5 since the latter is too weak in MgO in the studied intensity range to overtake the harmonics emitted by the remaining TiN.
Fig. 4: Intensity scaling of the harmonics.
Spectrally integrated intensity of HH5 (squares) and HH7 (triangles), measured as a function of input laser intensity at a constant polarization along the [100] crystallographic direction. Empty markers correspond to intensities above the damage threshold, emphasized by the green arrow. Dashed lines are the power laws \(I^{5}\) (red) and \(I^{7}\) (magenta). The dotted lines are the reference MgO harmonics measurements, scaled by a factor of 0.075. At the laser intensities of 15 TW/cm2, marked with the red arrow, and higher, when we observe ablation of TiN film, the HH7 intensity behaves similarly to the MgO, suggesting the latter to be the source of the signal above damage.
In semiconductors and dielectrics the main high harmonic emission mechanism is interband transitions in which coherent electron-hole pairs, produced and driven by a strong laser field, recombine releasing their energy in form of UV photons3. In many transparent crystals, including MgO, this recollision process dominates over a co-existing intraband mechanism, stemming from the motion of the electrons in non-parabolic conduction bands20. However, as the conduction band population increases (e.g., through optical pre-excitation), the role of the interband processes decreases10, as the creation of coherent electron-hole pairs is hindered by electrons occupying states near the conduction band minimum.
In contrast, the intraband processes should become more and more important as the free-carrier population increases. (In highly-doped semiconductors, electron-hole creation and recollision at impurity centers still appears to play an important role21,22, despite the high carrier concentration.) While the photo-carrier density in semiconductors is typically limited at one or a few tens of percent of the conduction band by non-thermal melting23, metals have much higher electron densities, hinting at the dominant role of the nonlinear conduction current in the HHG process. Analytical theory developed for such current in a 1D 1-band conductor in a tight-binding approximation24 predicts a power-law intensity scaling for harmonics above the cut-off harmonic number \({m}_{\mathrm max}\approx e{A}_{0}a/{\hslash} \sim 1\), consistent with the observed behavior in Fig. 4. Here e is the elementary charge and a is the lattice constant. Similarly, expanding the field-dependent energy of a 1D single-band conductor in a power series of the crystal momentum k, it can be shown that the mth spectral component of the induced current has a leading term proportional to \({E}_{0}^{m}\), where E0 is the laser electric field amplitude25. The intensity of the m-th harmonic would therefore scale as Im, where I is the driving laser intensity, for low enough I.
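Spelling out the final step of this argument: if the m-th Fourier component of the current scales as \(j_{m}\propto E_{0}^{m}\), then the emitted harmonic intensity scales as
$$I_{m}\propto |j_{m}|^{2}\propto E_{0}^{2m}=\left(E_{0}^{2}\right)^{m}\propto I^{m},$$
which is the power-law behavior observed in Fig. 4.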
Harmonics anisotropy
To gain insight into the origin of the TiN harmonics, we measured their angular dependence. We fixed the intensity and scanned the polarization angle relative to the crystal axes, rotating it with a half-wave plate in the (001) crystallographic plane. The results for input intensity of 11 TW/cm2 are shown in Fig. 5a. Both HH5 and HH7 showed similar anisotropic structure, with the preferable polarization direction along the [100] and symmetrically equivalent crystallographic directions. Comparing the angle dependence of TiN and MgO harmonics, also plotted in Fig. 5a with a dotted red line, identifies their distinctive origins.
Fig. 5: Harmonics anisotropy.
a HH5 (solid red) and HH7 (solid magenta) intensity, as a function of the laser polarization angle, at a fixed laser peak intensity of 11 TW/cm2. The dashed lines show calculation result. The modeled intensity was scaled up by 20%. For reference, we plot the angular scan of the HH5 intensity from MgO, measured at the same laser peak intensity, with a red dotted line. It demonstrates lower anisotropy, and peaks along [110] and symmetrically equivalent directions. b Highly anisotropic Fermi surface of the TiN conduction band. Gray lines represent the edges of the Brillouin zone of the FCC system.
We attribute the strong anisotropy of the harmonic yields to the anisotropic conduction band structure of the TiN, resulting in the angular dependence of the screening currents of the conduction electrons. This anisotropy is reflected in TiN's Fermi surface, shown in Fig. 5b. The band consists of 6 valleys, centered at X points of the Brillouin zone, elongated in the ΓX direction. This suggests a large difference in the electron dynamics, driven along ΓX and ΓK. However, due to the shape of the conduction band together with its high population, it is not immediately apparent why it would lead to a particular angular dependence plotted in Fig. 5a.
We solved the semiclassical equations of motion to predict the electronic response. We used Density Functional Theory (DFT) to retrieve the electronic bands of TiN. In a dielectric, electrons are mostly excited to the conduction band near a single k-point in the Brillouin zone, where the energy gap is the lowest. 1D calculations following the trajectories of the injected electrons are, therefore, often sufficient to describe high harmonics. For metals, on the other hand, where the electrons in the conduction band start their trajectories from everywhere in the Brillouin zone, full 3D calculations are necessary.
To calculate the harmonic spectra from the band energy \(\varepsilon_{\mathbf{k}}\), we use the Boltzmann equation which, in the absence of scattering and for a spatially uniform electric field of the laser pulse \(\mathbf{E}(t)\), has the solution \(f_{\mathbf{k}}(t)=f^{0}_{\mathbf{k}+e\mathbf{A}(t)/\hslash}\). Here, \(f_{\mathbf{k}}(t)\) is the time-dependent electron distribution function, \(\mathbf{k}\) is the electron crystal momentum, \(\mathbf{A}(t)=-\int_{-\infty}^{t}dt^{\prime}\,\mathbf{E}(t^{\prime})\) is the vector potential of the laser pulse, \(f^{0}_{\mathbf{k}}=\frac{1}{\exp\left(\frac{\varepsilon_{\mathbf{k}}-E_{F}}{k_{\rm B}T}\right)+1}\) is the Fermi-Dirac distribution, \(E_{F}\) is the Fermi energy, \(k_{\rm B}\) is the Boltzmann constant and \(T\) is the temperature. We then calculate the current density as:
$$\mathbf{j}(t)=-e\int_{\rm BZ}\frac{d^{3}\mathbf{k}}{4\pi^{3}}\,f_{\mathbf{k}}(t)\,\mathbf{v}_{\mathbf{k}},\qquad (1)$$
where \(\mathbf{v}_{\mathbf{k}}=\frac{1}{\hslash}\nabla_{\mathbf{k}}\varepsilon_{\mathbf{k}}\) is the electron velocity, \(\nabla_{\mathbf{k}}\) is the gradient operator in reciprocal space, and the integration is carried out over the Brillouin zone.
An intense, linearly polarized pulse was numerically propagated through the vacuum/TiN interface, using its measured optical constants (see Methods), to find A(t) inside. This pulse was then substituted into Eq. (1) to calculate j(t). We averaged the resulting current density to account for the intensity profile of the pulse. We then compared the squared amplitude of its Fourier transform with the experiment (Fig. 5a). In agreement with the experimental data, the calculations showed a four-fold structure, with a substantial increase of the harmonic yield along [100] and symmetrically equivalent directions.
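The essence of this calculation can be illustrated with a deliberately simplified one-dimensional tight-binding toy model; the sketch below (in R, natural units \(\hslash=e=a=1\), and a generic cosine band rather than the DFT band structure of TiN) is meant to show the intraband mechanism, not to reproduce our results.
eps <- function(k) -2 * cos(k)  # toy tight-binding band energy
vel <- function(k)  2 * sin(k)  # group velocity d(eps)/dk
f0  <- function(k, EF = -1, kT = 0.05) 1 / (exp((eps(k) - EF) / kT) + 1)  # Fermi-Dirac
k  <- seq(-pi, pi, length.out = 2049)[-1]  # Brillouin-zone grid
t  <- seq(-400, 400, length.out = 4096)    # time grid (atomic units)
w0 <- 0.057                                # carrier frequency, roughly 800 nm
E0 <- 0.02                                 # field amplitude
A  <- -(E0 / w0) * exp(-(t / 150)^2) * sin(w0 * t)  # vector potential of a few-cycle pulse
# Boltzmann solution f_k(t) = f0(k + A(t)); intraband current j(t) = -<f0(k + A) v(k)> over the BZ
j <- vapply(A, function(a) -mean(f0(k + a) * vel(k)), numeric(1))
win  <- 0.5 * (1 - cos(2 * pi * seq_along(t) / length(t)))      # Hann window
spec <- abs(fft(j * win))^2                                     # emitted spectrum (arb. units)
ord  <- (seq_along(spec) - 1) * (2 * pi / diff(range(t))) / w0  # harmonic-order axis
plot(ord[ord < 12], spec[ord < 12], type = "l", log = "y",
     xlab = "harmonic order", ylab = "intensity (arb. u.)")     # odd harmonics appear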
We found TiN to have a damage threshold an order of magnitude higher than that of gold, but with evidence of non-thermal damage. The high damage threshold allowed us to observe high harmonics directly from a TiN film, thereby extending the list of high-harmonic-generating solids to include metals. The observed spectrum stretched into the technologically important XUV region, reaching 11 eV. The next step would be to scale the irradiating intensity to the single-shot damage threshold and beyond.
The measured high harmonics are consistent with intraband harmonics created by conduction band electrons, although we cannot exclude a contribution from higher bands. The harmonic yield is comparable to that generated from the dielectric MgO by a pulse of the same intensity.
Our experiment opens several important technological possibilities. Since TiN is used to make plasmonic devices for on-chip, refractory, and high-power applications26,27,28,29,30,31,32, it will be possible to enhance VUV generation using the field enhancement available with nano-plasmonic antennas33,34,35. One potentially important application is to produce a compact and stable VUV frequency comb. At present the standard way of generating frequency combs is to increase the amplitude of a weak IR frequency comb field in a power-buildup enhancement cavity36,37,38, until its intensity is high enough to generate XUV harmonics in a rare gas. We propose to replace the buildup cavity with a TiN nano-plasmonic antenna array and the gas with a dielectric such as MgO39,40.
Another opportunity is to use TiN as an epsilon-near-zero (ENZ) material9 to locally enhance the electromagnetic field and the nonlinear response9,41,42. This would overcome the low damage threshold of commonly used transparent conducting oxides such as indium tin oxide (ITO). Since the ENZ wavelength of TiN is around 480 nm43 and can be adjusted13,44,45,46, TiN could pave the way to a drastically enhanced nonlinear response.
So far, in our experiments, we remained below the multi-shot modification threshold of TiN. Since the single-shot damage thresholds of TiN should be much higher, we will be able to test harmonic conversion efficiency at a much higher intensity by illuminating the sample with a single laser pulse and collecting the generated harmonics spectra. Furthermore, a single-cycle pulse will allow us to far exceed the single-shot damage threshold and still maintain the crystal structure of TiN. Inertially confined47 crystalline metals are an uncharted frontier where the many electrons of a metal can be used to efficiently transfer light from the infrared to the VUV.
At higher intensities, the high free-carrier concentration in TiN will allow us to study a continuous transition from solid-state high harmonic generation, already linked to gas harmonics, to plasma harmonics, widely studied by the plasma physics community.
Crystal preparation
A TiN film was deposited using a DC magnetron sputtering system (PVD Products) onto a 1 × 1 cm² MgO substrate heated to 800 °C. A 99.995% pure titanium target of 2-inch diameter and a DC power of 200 W were used. To ensure high purity of the grown films, the chamber was pumped down to 3 × 10⁻⁸ Torr before deposition and backfilled to 5 × 10⁻³ Torr with argon during the sputtering process. The throw length of 20 cm ensured a uniform thickness of the grown TiN layer across the substrate. After heating, the pressure increased to 1.2 × 10⁻⁷ Torr. An argon–nitrogen mixture was flowed into the chamber at a rate of 4 sccm/6 sccm. The deposition rate was 2.2 Å/min. The surface quality of the grown films was assessed with an atomic force microscope. The films are atomically smooth, with a root-mean-square roughness of 0.4 nm. Their optical properties were characterized via spectroscopic ellipsometry at 50° and 70° angles of incidence for wavelengths from 300 nm to 2000 nm and then fitted with a Drude–Lorentz model, with one Drude oscillator modeling the contribution of the free electrons and two Lorentz oscillators modeling the contribution of the bound electrons.
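For readers who want to reproduce such a fit, the dielectric-function model just described has the standard Drude–Lorentz form; the sketch below is a generic implementation (the oscillator parameters obtained for this film are not quoted in the text, so any values passed in are the user's own assumptions).

```python
import numpy as np

def drude_lorentz_permittivity(omega, eps_inf, omega_p, gamma_D, lorentz_terms):
    """Drude-Lorentz dielectric function epsilon(omega).

    omega         : angular frequencies (rad/s)
    eps_inf       : high-frequency background permittivity
    omega_p       : Drude plasma frequency (free-electron contribution)
    gamma_D       : Drude damping rate
    lorentz_terms : iterable of (f_j, omega_j, gamma_j) bound-electron oscillators
    """
    eps = eps_inf - omega_p**2 / (omega**2 + 1j * gamma_D * omega)
    for f_j, omega_j, gamma_j in lorentz_terms:
        eps += f_j * omega_j**2 / (omega_j**2 - omega**2 - 1j * gamma_j * omega)
    return eps
```

With one Drude term and two Lorentz terms, as stated above, a fit of this form to the ellipsometric data yields the optical constants used later for the pulse-propagation step.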
Optical setup
We spectrally broadened the 800 nm central wavelength, 1 kHz repetition rate, 1 mJ/pulse energy output of a Ti:Sa amplifier by passing it through an argon-filled hollow-core fiber. Pulses were then recompressed in a chirped-mirror compressor down to 6 fs FWHM duration, as measured with a dispersion scan technique48.
We focused the beam with a 500 mm focal length concave mirror inside a vacuum chamber onto the TiN sample (Fig. 1) at a nearly normal incidence angle of 1.5°. The harmonic radiation was emitted from the surface in the direction specular to the incident laser beam, passed through a 300 µm slit of a VUV spectrometer, was dispersed by a 300 grooves/mm laminar-type replica diffraction grating (Shimadzu), and was detected with an imaging MCP followed by a CCD camera outside the vacuum chamber. We used two wire-grid polarizers and a broadband half-wave plate placed outside the chamber to control the laser intensity and polarization. The beam profile at the focal spot was assessed with a CCD camera and found to have a waist radius of 70 µm.
Precise measurement of the peak field intensity is difficult for few-cycle pulses. The values reported in this work were calculated from the measured pulse power, beam profile and temporal characteristics of the pulse. The estimated error in the pulse intensity was 10%.
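As a rough cross-check of the quoted intensities, the sketch below combines the stated beam waist and pulse duration in the standard Gaussian-beam, Gaussian-pulse estimate; the on-target pulse energy is a hypothetical placeholder (the polarizer pair attenuates the 1 mJ amplifier output by an amount not stated here), so the printed number only illustrates the order of magnitude.

```python
import numpy as np

def peak_intensity(pulse_energy_J, fwhm_s, waist_m):
    """Peak on-axis intensity of a Gaussian pulse focused to a Gaussian spot.

    Peak power ~ 0.94 * E / tau_FWHM; peak intensity = 2 * P_peak / (pi * w0^2).
    Returned in W/cm^2.
    """
    peak_power = 0.94 * pulse_energy_J / fwhm_s
    return 2.0 * peak_power / (np.pi * waist_m**2) * 1e-4

# hypothetical 5 uJ on target, 6 fs FWHM, 70 um waist -> ~1e13 W/cm^2 (~10 TW/cm^2)
print(f"{peak_intensity(5e-6, 6e-15, 70e-6):.2e} W/cm^2")
```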
Band structure calculations
Band structure calculations were performed with the GPAW package49,50, employing a plane-wave basis and the PBE exchange-correlation functional, which was found to yield good results in previous DFT studies of TiN51. Having performed the calculations on a coarse 16 × 16 × 16 k-point grid, we used Wannier interpolation, via the wannier90 software52, to interpolate the band energy εk onto a denser 256 × 256 × 256 grid.
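A minimal ground-state calculation of this kind, assuming the standard ASE/GPAW interface, might look like the sketch below; the rocksalt lattice constant, plane-wave cutoff and Fermi smearing are nominal assumptions rather than values quoted in the text, and the subsequent Wannier-interpolation step with wannier90 is not shown.

```python
from ase.build import bulk
from gpaw import GPAW, PW, FermiDirac

# rocksalt TiN; the lattice constant is a nominal literature value (assumption)
tin = bulk('TiN', 'rocksalt', a=4.24)

calc = GPAW(mode=PW(600),                 # plane-wave basis
            xc='PBE',                     # PBE exchange-correlation functional
            kpts=(16, 16, 16),            # coarse k-point grid, as in the text
            occupations=FermiDirac(0.05),
            txt='tin_scf.txt')
tin.calc = calc
tin.get_potential_energy()                # self-consistent ground state
calc.write('tin_gs.gpw', mode='all')      # saved state for later band interpolation
```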
The resulting band structure had three energy branches crossing the Fermi level, consistent with previous studies51,53. Two of them had a minimum at the center of the Brillouin zone, Γ, contributing 0.08 × 10²⁸ and 0.13 × 10²⁸ m⁻³ to the conduction band electron density. The third branch, whose Fermi surface is shown in Fig. 5b, was highly anisotropic, with its minimum at the X point. With a corresponding electron density of 5.03 × 10²⁸ m⁻³, it dominated the high-harmonic generation.
The datasets generated and/or analyzed during the current study are available in the figshare repository, https://doi.org/10.6084/m9.figshare.c.5514561.v1.
Code availability
The code used for data analysis is available in the figshare repository, https://doi.org/10.6084/m9.figshare.c.5514561.v1.
Ghimire, S. et al. Observation of high-order harmonic generation in a bulk crystal. Nat. Phys. 7, 138–141 (2011).
Yoshikawa, N., Tamaya, T. & Tanaka, K. High-harmonic generation in graphene enhanced by elliptically polarized light excitation. Science 356, 736–738 (2017).
Vampa, G. et al. Linking high harmonics from gases and solids. Nature 522, 462–464 (2015).
Corkum, P. B. Plasma perspective on strong field multiphoton ionization. Phys. Rev. Lett. 71, 1994–1997 (1993).
Ferray, M. et al. Multiple-harmonic conversion of 1064 nm radiation in rare gases. J. Phys. B: At. Mol. Opt. Phys. 21, L31–L35 (1988).
Schubert, O. et al. Sub-cycle control of terahertz high-harmonic generation by dynamical Bloch oscillations. Nat. Photonics 8, 119–123 (2014).
Sivis, M. et al. Tailored semiconductors for high-harmonic optoelectronics. Science 357, 303–306 (2017).
Liu, H. et al. High-harmonic generation from an atomically thin semiconductor. Nat. Phys. 13, 262–265 (2017).
Yang, Y. et al. High-harmonic generation from an epsilon-near-zero material. Nat. Phys. 15, 1022–1026 (2019).
Wang, Z. et al. The roles of photo-carrier doping and driving wavelength in high harmonic generation from a semiconductor. Nat. Commun. 8, 1686 (2017).
Patsalas, P., Kalfagiannis, N. & Kassavetis, S. Optical properties and plasmonic performance of titanium nitride. Materials. 8, 3128–3154 (2015).
Guler, U., Boltasseva, A. & Shalaev, V. M. Refractory plasmonics. Science 344, 263–264 (2014).
Gui, L. et al. Nonlinear refractory plasmonics with titanium nitride nanoantennas. Nano Lett. 16, 5708–5713 (2016).
Anisimov, S. I., Kapeliovich, B. L. & Perel'man, T. L. Electron emission from metal surfaces exposed to ultrashort laser pulses. J. Exp. Theor. Phys. 39, 375 (1974).
Corkum, P. B., Brunel, F., Sherman, N. K. & Srinivasan-Rao, T. Thermal response of metals to ultrashort-pulse laser excitation. Phys. Rev. Lett. 61, 2886–2889 (1988).
Dal Forno, S. & Lischner, J. Electron-phonon coupling and hot electron thermalization in titanium nitride. Phys. Rev. Mater. 3, 115203 (2019).
Falkovsky, L. A. & Mishchenko, E. G. Electron-lattice kinetics of metals heated by ultrashort laser pulses. J. Exp. Theor. Phys. 88, 84–88 (1999).
Chen, J. K., Beraun, J. E., Grimes, L. E. & Tzou, D. Y. Modeling of femtosecond laser-induced non-equilibrium deformation in metal films. Int. J. Solids Struct. 39, 3199–3216 (2002).
Nagel, P. M. et al. Surface plasmon assisted electron acceleration in photoemission from gold nanopillars. Chem. Phys. 414, 106–111 (2013).
You, Y. S. et al. Laser waveform control of extreme ultraviolet high harmonics from solids. Opt. Lett. 42, 1816 (2017).
Huang, T. et al. High-order-harmonic generation of a doped semiconductor. Phys. Rev. A. 96, 043425 (2017).
Yu, C., Hansen, K. K. & Madsen, L. B. Enhanced high-order harmonic generation in donor-doped band-gap materials. Phys. Rev. A. 99, 013435 (2019).
Rousse, A. et al. Non-thermal melting in semiconductors measured at femtosecond resolution. Nature 410, 65–68 (2001).
Pronin, K. A., Bandrauk, A. D. & Ovchinnikov, A. A. Harmonic generation by a one-dimensional conductor: Exact results. Phys. Rev. B. 50, 3473–3476 (1994).
Lü, L.-J. & Bian, X.-B. Multielectron interference of intraband harmonics in solids. Phys. Rev. B. 100, 214312 (2019).
Chirumamilla, M. et al. Large-area ultrabroadband absorber for solar thermophotovoltaics based on 3D titanium nitride nanopillars. Adv. Opt. Mater. 5, 1700552 (2017).
Briggs, J. A. et al. Fully CMOS-compatible titanium nitride nanoantennas. Appl. Phys. Lett. 108, 051110 (2016).
Briggs, J. A. et al. Temperature-dependent optical properties of titanium nitride. Appl. Phys. Lett. 110, 101901 (2017).
Saha, S. et al. On-chip hybrid photonic-plasmonic waveguides with ultrathin titanium nitride films. ACS Photonics 5, 4423–4431 (2018).
Guler, U. et al. Local heating with lithographically fabricated plasmonic titanium nitride nanoparticles. Nano Lett. 13, 6078–6083 (2013).
Li, W. et al. Refractory plasmonics with titanium nitride: broadband metamaterial absorber. Adv. Mater. 26, 7959–7965 (2014).
Guo, W. P. et al. Titanium nitride epitaxial films as a plasmonic material platform: alternative to gold. ACS Photonics 6, 1848–1854 (2019).
Kim, S. et al. High-harmonic generation by resonant plasmon field enhancement. Nature 453, 757–760 (2008).
Vampa, G. et al. Plasmon-enhanced high-harmonic generation from silicon. Nat. Phys. 13, 659–662 (2017).
Sivis, M., Duwe, M., Abel, B. & Ropers, C. Extreme-ultraviolet light generation in plasmonic nanostructures. Nat. Phys. 9, 304–309 (2013).
Cingöz, A. et al. Direct frequency comb spectroscopy in the extreme ultraviolet. Nature 482, 68–71 (2012).
Jones, R. J., Moll, K. D., Thorpe, M. J. & Ye, J. Phase-coherent frequency combs in the vacuum ultraviolet via high-harmonic generation inside a femtosecond enhancement cavity. Phys. Rev. Lett. 94, 1–4 (2005).
Gohle, C. et al. A frequency comb in the extreme ultraviolet. Nature 436, 234–237 (2005).
Han, S. et al. High-harmonic generation by field enhanced femtosecond pulses in metal-sapphire nanostructure. Nat. Commun. 7, 13105 (2016).
Du, T.-Y., Guan, Z., Zhou, X.-X. & Bian, X.-B. Enhanced high-order harmonic generation from periodic potentials in inhomogeneous laser fields. Phys. Rev. A. 94, 023419 (2016).
Reshef, O., De Leon, I., Alam, M. Z. & Boyd, R. W. Nonlinear optical effects in epsilon-near-zero media. Nat. Rev. Mater. 4, 535–551 (2019).
Kinsey, N., DeVault, C., Boltasseva, A. & Shalaev, V. M. Near-zero-index materials for photonics. Nat. Rev. Mater. 4, 742–760 (2019).
Diroll, B.T., Saha, S., Shalaev, V. M., Boltasseva, A., Schaller, R. D. Broadband ultrafast dynamics of refractory metals: TiN and ZrN. Adv. Opt. Mater. 8, 2000652 (2020).
Wang, Y., Capretti, A. & Dal Negro, L. Wide tuning of the optical and structural properties of alternative plasmonic materials. Opt. Mater. Express 5, 2415 (2015).
Lu, Y. J. et al. Dynamically controlled Purcell enhancement of visible spontaneous emission in a gated plasmonic heterostructure. Nat. Commun. 8, 1–8 (2017).
Zgrabik, C. M. & Hu, E. L. Optimization of sputtered titanium nitride as a tunable metal for plasmonic applications. Opt. Mater. Express 5, 2786 (2015).
Strickland, D. T., Beaudoin, Y., Dietrich, P. & Corkum, P. B. Optical studies of inertially confined molecular iodine ions. Phys. Rev. Lett. 68, 2755–2758 (1992).
Miranda, M., Fordell, T., Arnold, C., L'Huillier, A. & Crespo, H. Simultaneous compression and characterization of ultrashort laser pulses using chirped mirrors and glass wedges. Opt. Express 20, 688–697 (2012).
Mortensen, J. J., Hansen, L. B. & Jacobsen, K. W. Real-space grid implementation of the projector augmented wave method. Phys. Rev. B 71, 1–11 (2005).
Enkovaara, J. et al. Electronic structure calculations with GPAW: a real-space implementation of the projector augmented-wave method. J. Phys. Condens. Matter 22, 253202 (2010). https://doi.org/10.1088/0953-8984/22/25/253202.
Marlo, M. & Milman, V. Density-functional study of bulk and surface properties of titanium nitride using different exchange-correlation functionals. Phys. Rev. B - Condens. Matter Mater. Phys. 62, 2899–2907 (2000).
Pizzi, G. et al. Wannier90 as a community code: new features and applications. J. Phys. Condens. Matter 32, 165902 (2020).
Haviland, D., Yang, X., Winzer, K., Noffke, J. & Eckardt, H. The de Haas-van Alphen effect and Fermi surface of TiN. J. Phys. C. Solid State Phys. 18, 2859–2869 (1985).
The work was funded by US Defense Threat Reduction Agency (DTRA) (HDTRA1-19-1-0026) and the University of Ottawa, NRC Joint Centre for Extreme Photonics; with contributions from the US Air Force Office of Scientific Research (AFOSR) FA9550-16-1-0109, FA9550-18-1-0002, FA9550-20-01-0124 and ONR grant N00014-20-1-2199; Canada Foundation for Innovation; Canada Research Chairs (CRC); and the Natural Sciences and Engineering Research Council of Canada (NSERC). We thank David Crane and Ryan Kroeker for their technical support, and are grateful for fruitful discussions with Andre Staudte, Giulio Vampa, Guilmot Ernotte and Marco Taucer.
Joint Attosecond Science Laboratory, National Research Council of Canada and University of Ottawa, Ottawa, ON, Canada
A. Korobenko, A. T. K. Godfrey, A. Yu. Naumov, D. M. Villeneuve & P. B. Corkum
Purdue University, School of Electrical & Computer Engineering and Birck Nanotechnology Center, West Lafayette, IN, USA
S. Saha, A. Boltasseva & V. M. Shalaev
National Research Council Canada, Ottawa, ON, Canada
M. Gertsvolf
S.S. synthesized the TiN films and characterized their linear properties. A.K. performed and analyzed the damage-threshold and HHG measurements and carried out the numerical calculations. A.T.K.G. conducted the AFM characterization. P.B.C. supervised and directed the project. A.K., S.S., A.T.K.G., M.G., A.Yu.N., D.M.V., A.B., V.M.S. and P.B.C. contributed to discussing the results and writing the manuscript.
Correspondence to A. Korobenko.
Peer review information Nature Communications thanks Xue-Bin Bian and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Peer Review File
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Korobenko, A., Saha, S., Godfrey, A.T.K. et al. High-harmonic generation in metallic titanium nitride. Nat Commun 12, 4981 (2021). https://doi.org/10.1038/s41467-021-25224-z
Hydrogen-poor Superluminous Supernovae with Late-time H alpha Emission: Three Events From the Intermediate Palomar Transient Factory
Yan, L, Lunnan, R, Perley, DA, Gal-Yam, A, Yaron, O, Roy, R, Quimby, R, Sollerman, J, Fremling, C, Leloudas, G, Cenko, SB, Vreeswijk, P, Graham, ML, Howell, DA, De Cia, A, Ofek, EO, Nugent, P, Kulkarni, SR, Hosseinzadeh, G, Masci, F et al, McCully, C, Rebbapragada, UD and Wozniak, P (2017) Hydrogen-poor Superluminous Supernovae with Late-time H alpha Emission: Three Events From the Intermediate Palomar Transient Factory. Astrophysical Journal, 848 (1). ISSN 0004-637X
Publisher URL: http://dx.doi.org/10.3847/1538-4357/aa8993
We present observations of two new hydrogen-poor superluminous supernovae (SLSN-I), iPTF15esb and iPTF16bad, showing late-time Hα emission with line luminosities of $(1\mbox{--}3)\times {10}^{41}$ erg s$^{-1}$ and velocity widths of (4000–6000) km s$^{-1}$. Including the previously published iPTF13ehe, this makes up a total of three such events to date. iPTF13ehe is one of the most luminous and the slowest evolving SLSNe-I, whereas the other two are less luminous and fast decliners. We interpret this as a result of the ejecta running into a neutral H-shell located at a radius of ~$10^{16}$ cm. This implies that violent mass loss must have occurred several decades before the supernova explosion. Such a short time interval suggests that eruptive mass loss could be common shortly before core collapse, and more importantly helium is unlikely to be completely stripped off the progenitor and could be present in the ejecta. It is a mystery why helium features are not detected, even though nonthermal energy sources, capable of ionizing He, may exist as suggested by the O ii absorption series in the early-time spectra. Our late-time spectra (+240 days) appear to have intrinsically lower [O i] 6300 Å luminosities than those of SN2015bn and SN2007bi, which is possibly an indication of less oxygen (<10 $M_\odot$). The blueshifted Hα emission relative to the hosts for all three events may be in tension with the binary model proposed for iPTF13ehe. Finally, iPTF15esb has a peculiar light curve (LC) with three peaks separated from one another by ~22 days. The LC undulation is stronger in bluer bands. One possible explanation is ejecta-circumstellar medium interaction.
0201 Astronomical And Space Sciences, 0305 Organic Chemistry, 0306 Physical Chemistry (Incl. Structural)
Q Science > QC Physics
American Astronomical Society; IOP Publishing
10.3847/1538-4357/aa8993
Journal of Statistical Distributions and Applications
Generalized spherical distributions
Contours: tessellating, integrating and simulating
An R package for modeling and simulating generalized spherical and related distributions
John P. Nolan1
Journal of Statistical Distributions and Applications (2016) 3:14
A flexible class of multivariate generalized spherical distributions with star-shaped level sets is developed. To work in dimension above two requires tools from computational geometry and multivariate numerical integration. An algorithm to approximately simulate from these star-shaped distributions is developed; it also works for simulating from more general tessellations. These techniques are implemented in the R package gensphere.
Star-shaped distributions
Mathematics Subject Classification (2000)
There is a need for tractable models for multivariate data with nonstandard dependence structures. Our motivation here was to be able to flexibly model distributions with star-shaped level sets. The R package gensphere has been developed that allows one to work with these classes of distributions: specifying flexible shapes for the level sets, computing densities, and simulating. A deliberate goal in this process is to have methods and programs that work in dimension d≥2, and this requires some methods from computational geometry. While the original intent focused on star-shaped regions, some of the tools developed here are useful for other problems, e.g. sampling from more general sets.
Fernández et al. (1995) proposed defining multivariate distributions for which the level sets are scaled versions of a contour \(\mathcal {C}\) (a simple closed curve/surface in \({\mathbb {R}}^{d}\)). We will specify a contour by a function \(c: {\mathbb {S}} \to [0,\infty)\):
$$\mathcal{C} = \{ c({\mathbf{s}}) ~ {\mathbf{s}} ~ : ~ {\mathbf{s}} \in {\mathbb{S}} \}. $$
Here \({\mathbb {S}} =\left \{ {\mathbf {s}} \in {\mathbb {R}}^{d} : |{\mathbf {s}}|=1 \right \}\) is the unit sphere in the Euclidean norm |·|, a (d−1)-dimensional surface. We assume throughout that c(s) is a piecewise continuous function, so measurability issues are automatically satisfied. Figure 1 shows a 2-dimensional example and Fig. 4 shows a 3-dimensional example of such contours.
Constructing a 2-dimensional contour. The top left plot shows a base of type 1, a circle of radius 1. The top right shows the base with one Gaussian bump of type 3 in direction \(\left (\sqrt {2}/2,\sqrt {2}/2\right)\), the bottom left shows the final contour with another Gaussian bump in direction (-1,0). The bottom right plot shows a sample of size n=1000 from this contour using the method described in Section 3
A motivating example for this work is to model fragment dispersion from an explosion. In such problems, the fragments disperse in three dimensions in patterns like those of Fig. 4. The ability to easily specify different contour functions by adding together multiple terms as in Section 2.1 is of practical importance for describing different types of explosive devices. The goal of this modeling is to design better body and vehicle armor to protect people.
Let g:[0,∞)→[0,∞) be a nonnegative function and define
$$ f({\mathbf{x}}) =\left\{ \begin{array}{ll} g \left(\frac {|{\mathbf{x}}|} {c({\mathbf{x}}/|{\mathbf{x}}|)} \right) & |{\mathbf{x}}| > 0 \\ g(0) & |{\mathbf{x}}|=0 \end{array} \right. $$
Under integrability conditions discussed below, this will give a probability density function on \({\mathbb {R}}^{d}\), and the level sets of such a distribution are scalar multiples of \(\mathcal {C}\). Such distributions are also called homothetic, see Balkema and Nolde (2010), Section 3.1 or Simon and Blume (1994), Section 20.4. We will call c(·) the contour function and g(·) the radial decay function of the distribution.
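To make the construction concrete, here is a minimal sketch (in Python rather than the paper's R package, and not the gensphere implementation) of evaluating the density in (1) for user-supplied contour and radial decay functions; it assumes g has already been normalized through $k_{\mathcal{C}}$ as discussed below.

```python
import numpy as np

def gensphere_density(x, c, g):
    """Generalized spherical density f(x) from Eq. (1).

    x : point in R^d (1D array)
    c : contour function on the unit sphere, c(s) > 0
    g : radial decay function g(r) >= 0, assumed normalized via k_C
    """
    r = np.linalg.norm(x)
    if r == 0.0:
        return g(0.0)
    s = x / r                  # direction on the unit sphere
    return g(r / c(s))         # level sets are scalar multiples of the contour


# example: c = 1 (unit circle) and g(r) = exp(-r)/(2*pi) gives an isotropic
# density in d = 2; this normalization follows from (2)-(3) with k_C = 1/(2*pi)
f0 = gensphere_density(np.array([0.3, -0.4]), lambda s: 1.0, lambda r: np.exp(-r) / (2 * np.pi))
```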
Our approach differs from Fernández et al. (1995) where they start with a function \(v : {\mathbb {R}}^{d} \to [0,\infty)\) that is homogeneous: v(a x)=|a|v(x). Such functions are called gauge functions or Minkowski functionals, and are well studied in convex analysis and functional analysis. The relationship between their v function and our contour function is v(x)=|x|/c(x/|x|). If c(s)=1, then \(\mathcal {C}\) is the unit sphere and v(x)=|x|, so the resulting classes of distributions are the spherical/isotropic distributions. If v(·) is convex, then v(·) is a norm on \({\mathbb {R}}^{d}\) and \(\mathcal {C}\) is the unit sphere in that norm, hence the name v-spherical distributions. When v(·) is not convex, e.g. the ℓ p quasi-norm with p<1, v(x) does not give a norm, so \(\mathcal {C}\) is not strictly speaking a unit sphere, but we will still call the resulting distributions v-spherical.
The purpose of this paper is to describe a method of defining a flexible class of generalized spherical distributions in any dimension d≥2, and to describe an R package gensphere that implements this method. The package gives the ability to
Define a flexible set of contours
Carefully tessellate a contour
Sample from a tessellation
Use a contour and a radial function g(·) to define a generalized spherical distribution
Compute the density f(·) given by (1)
Approximately simulate from a distribution with density f(·)
The third step above also provides a way to simulate from paths and surfaces unrelated to generalized spherical laws, giving new classes of probability distributions on paths and surfaces.
Other references on generalized spherical laws are Arnold et al. (2008), Kamiya et al. (2008), Rattihalli and Basugade (2009), Rattihalli and Patil (2010), and Balkema and Nolde (2010). These papers develop the idea of generalized spherical distributions, but do not provide general purpose software for working with these distributions and do not cover techniques for working with higher dimensional models. Richter (2014) gives a rigorous investigation of p-generalized elliptically contoured distributions, with a detailed analysis of the surface measure and a polar disintegration of the laws.
For (1) to be a proper density, it is required that (see equations (4) and (5) of Fernández et al. (1995))
$$ k_{\mathcal{C}}^{-1} := \int_{{\mathbb{S}}} c^{d}({\mathbf{s}}) d{\mathbf{s}} \in (0, \infty) $$
$$ \int_{0}^{\infty} r^{d-1} g(r) dr = k_{\mathcal{C}}. $$
We will assume c(·) is continuous on \({\mathbb {S}}\) and that c(s)≤c 0. This guarantees (2) is finite, though evaluating it may be difficult, especially when d>2. Section 3 discusses an approach to this problem that improves the accuracy of this computation for the types of contours considered here. Given any univariate probability density h(·) on the positive axis, the function \(g(r)=k_{\mathcal {C}} r^{1-d} h(r)\) is a valid radial decay function. This is the approach used in the rest of this paper and in the associated package.
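The following is a crude Monte Carlo alternative (again a Python sketch, not the package's adaptive-cubature approach described in Section 3) for estimating $k_{\mathcal{C}}$ from (2) and building a radial decay function $g$ from a chosen density $h$; for spiky contours the adaptive method used by gensphere will be far more accurate.

```python
import numpy as np
from math import gamma, pi

def estimate_kC(c, d, n=200_000, seed=None):
    """Monte Carlo estimate of k_C, where k_C^{-1} = int_S c^d(s) ds (Eq. (2)).

    Uniform directions are obtained by normalizing Gaussian vectors; the
    integral is |S^{d-1}| * E[c(S)^d] with |S^{d-1}| = 2 pi^{d/2} / Gamma(d/2).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, d))
    s = z / np.linalg.norm(z, axis=1, keepdims=True)
    surface_area = 2 * pi ** (d / 2) / gamma(d / 2)
    integral = surface_area * np.mean([c(si) ** d for si in s])
    return 1.0 / integral

def radial_decay(h, kC, d):
    """g(r) = k_C * r^(1-d) * h(r) for a univariate density h on (0, infinity)."""
    return lambda r: kC * r ** (1 - d) * h(r)
```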
To simulate values for a generalized spherical random vector, we are interested in a stochastic representation of the form
$$ {\mathbf{X}} {\stackrel{d}{=}} R {\mathbf{Z}}. $$
Choosing Z uniformly distributed (proportional to surface area) on the contour does not work in general. Richter (2014) shows this works in special circumstances, e.g. if the contour \(\mathcal {C}\) is an ℓ 2 ball, ℓ 1 ball, or ℓ ∞ ball. In Section 3 we develop a way to approximately simulate a wider class of distributions by using a piecewise linear approach: approximate the contour \(\mathcal {C}\) by a simplicial tessellation and use (4) on each piece.
2.1 Specification of a contour function
For modeling purposes, we want a flexible family of functions that can be used in a variety of problems.
To be able to include the distributions discussed by the authors cited above, we allow contour functions of the form
$$c({\mathbf{s}}) = \sum\limits_{j=1}^{N_{1}} c_{j} r_{j}({\mathbf{s}}) + {\frac {1} { \sum_{j=1}^{N_{2}} c^{*}_{j} r^{*}_{j}({\mathbf{s}}) }}, $$
where c j >0, \(c^{*}_{j} >0\), and r j (·) and r ∗(·) are one of the cases discussed below. N 1 and N 2 are non-negative integers telling how many terms of each type are used.
r(s)=1, which makes \(\mathcal {C}\) the Euclidean ball. Any isotropic/radially symmetric distribution can be modeled by using just this term in a contour function and the appropriate radial decay function.
r(s)=c(s|μ,θ) is a cone with peak 1 at center \(\boldsymbol {\mu } \in {\mathbb {S}}\) and height 0 at the base given by the circle \(\{{\mathbf {x}} \in {\mathbb {S}} : \boldsymbol {\mu } \cdot {\mathbf {x}} = \cos \theta \}\). It is assumed that |θ|≤π/2.
r(s)=c(s|μ,σ)= exp(−t(s)2/(2σ 2)) is a Gaussian bump centered at location \(\boldsymbol {\mu } \in {\mathbb {S}}\) and "standard deviation" σ>0. Here t(s) is the distance between μ and the projection of \({\mathbf {s}} \in {\mathbb {S}}\) linearly onto the plane tangent to \({\mathbb {S}}\) at μ.
\(r^{*}({\mathbf {s}}) = \vert \vert {\mathbf {s}} \vert \vert _{\ell ^{p}({\mathbb {R}}^{d})}\), p>0.
\(r^{*}({\mathbf {s}}) = \vert \vert A {\mathbf {s}} \vert \vert _{\ell ^{p}({\mathbb {R}}^{m})}\), p>0, A an (m×d) matrix. This allows a generalized p-norm. If A is d×d and orthogonal, then the resulting contour will be a rotation of the standard unit ball in ℓ p . If A is d×d and not orthogonal, then the contour will be sheared. If m>d, it will give the ℓ p norm on \({\mathbb {R}}^{m}\) of A s.
r ∗(s)=(s ⊤ A s)1/2, where A is a positive definite (d×d) matrix. Then the level curves of the distribution are ellipses. Any elliptically contoured distribution can be modeled by using just this term in a contour function and the appropriate radial decay function.
Sums of the first three types allow us to describe star-shaped contours, see Fig. 1. Inverses of sums of the last three types allow us to consider contours that are familiar unit balls, or generalized unit balls, or sums of such shapes. Specifying a radial decay function g(·) defines a density f(x) by (1) as in Fig. 2. An implementation of this construction is given in the R package gensphere. The R statements used in this example are given in the Appendix.
On the left is a sample of size n=1000 from the generalized spherical distribution based on the contour in Fig. 1 and a Γ(2,1) radial term. On the right, is a surface plot of the corresponding bivariate density f(x,y)
It is relatively easy to add new types of terms to this list if other contours are of interest. However this set of basic shapes can model a wide range of shapes, including contours supported on a cone. Figure 3 shows nine examples. The top row shows ℓ p balls with p=1/2, p=1, and p=5. The middle row starts with a contour made up of an ℓ p ball with a p=0.3 and a copy of that rotated by π/4, the rotation done by using a generalized ℓ p norm with A a rotation matrix. The next two plots show generalized ℓ p balls with A=(1,1;1,−4;1,3;5,−3) and p=1/2 (middle) and p=1.1 (right). The last row shows contours supported on a cone. The left plot is the sum of three Gaussian bumps of type 3, each centered at (cosθ, sinθ), θ=π/4,π/2,3π/2 and σ=0.3. The middle plot has two type 2 cones, at angles −π/6 and −π/3 with σ=0.4. The last graph also has two cones, centered at π/6 and π/3, with σ=0.25. Any of the contours that have a corner or cusp on a ray will generate a density surface with a ridge along that ray. A more complicated three dimensional example with 11 terms in the definition of c(·) is given in Fig. 4: an elliptical base of type 6 and 10 cones of type 2.
A selection of contours made from the different types of terms. See the text for a description
A 3D star-shaped region with one term of type 6 and 10 terms of type 2. The top plot shows the contour, the middle shows a sample of size 2500 from the contour, the bottom shows a sample of size 10000 from the generalized spherical distribution given by this contour and a Γ(3,1) radial term R
2.2 Choice of R
In general, g(r) can be any nonnegative integrable function. The radial decay of R determines the decay of f(·) on \({\mathbb {R}}^{d}\). In most applications one wants 0<g(0)<∞ and g(r) decreasing for r>0, but other possibilities may be of interest. If g(0)=0, the density surface given by (1) will have a "well" at the origin; if g(0)=+∞, then the density blows up at the origin. If g(·) oscillates, then the density surface will have radial "waves" emanating out from the origin. If R has bounded support, then X will have bounded support.
The gamma distributions give a family of distributions that can be used to get generalized spherical distributions with light tails. If a Γ(d,1) law is used for R, then $h(r)=\Gamma(d)^{-1}r^{d-1}\exp(-r)$, so $g(r)=k_{\mathcal{C}}r^{1-d}h(r)=(k_{\mathcal{C}}/\Gamma(d))\exp(-r)$, which is finite at the origin and monotonically decreasing. If one wants heavy tails for X, then some possibilities for R are Fréchet, Pareto and multivariate stable amplitude. (The latter is defined in Nolan (2013) by R=|Z|, where Z is radially symmetric/isotropic α-stable in d dimensions. Numerical methods to calculate the density h(r) of R and simple ways to simulate it are given in the reference.)
Figure 5 shows the effect that the choice of R has. In all cases, the base contour is the unit ball in ℓ 1, a diamond shape. At the upper left, R is a uniform r.v. on (0, 1). In this case, g(0)=+∞ and the density has a spike at the origin and bounded support on the diamond. At the top right, R∼Γ(2,1), so g(0)=1 and the distribution has unbounded support with light tails. At the lower left, R is the α=1 stable amplitude in d=2 dimensions; here g(0) is finite and the distribution has heavy tails. The bottom right plot is with R∼Γ(5,1), so g(0)=0 and the distribution has a well at the origin and unbounded support with light tails.
Density surface for generalized spherical distributions with the same diamond shape contour and different radial term R as specified in the text
A large part of the technical complexity of working with generalized spherical laws is in representing the contours and evaluating the norming constant \(k_{\mathcal {C}}\) in (2) and simulating from the contour \(\mathcal {C}\). The gensphere package uses two other recent R packages for these problems: SphericalCubature Nolan (2015b) and mvmesh Nolan (2015a).
SphericalCubature numerically integrates a function on a d-dimensional sphere. Given a tessellation of the sphere in \({\mathbb {R}}^{d}\), it uses adaptive integration to integrate over the (d−1)-dimensional surface to evaluate \(k_{\mathcal {C}}\). If the integrand function is smooth and the tessellation is reasonable, then the numerical integration is accurate in modest dimensions, say d=2,3,4,5,6. However, when the integrand function has abrupt changes, numerical techniques can miss parts of the integral. This is even a problem in dimension 2, where the integration is a one dimensional problem. One way to deal with this is to work with tessellations that focus on the places where the integrand is not smooth. In complete generality, this is hard to do. However, in evaluating integral (2) for one of the contours described above, we have an implicit description of where the contour changes abruptly.
The mvmesh package is used to define multivariate meshes, e.g. a collection of vertices and grouping information that specify a list of simplices that approximate a contour. The first place where mvmesh is used in gensphere is to give a grid on the sphere \({\mathbb {S}}\) in d-dimensions, e.g. the top left plot in Fig. 1. mvmesh has a function UnitSphere that computes an approximately equal surface area approximation to a hypersphere in dimension d. It takes a parameter k to say how many recursive subdivisions are used in each octant; increasing this value will give a finer tessellation of the sphere. Then this tessellation is refined by adding points to the sphere centered on the places where the contour has bumps, e.g. the cone and Gaussian bumps (type 2 and 3). Then the new points are combined with the original tessellation of the sphere to get a refined tessellation of the sphere that includes these key points.
It is at this point that the SphericalCubature package is used to evaluate the integral (2). This is difficult to accurately evaluate in dimension greater than three if the contour is not smooth. In addition to the estimate of the integral, we use an option in the adaptive integration routine to return the partition used in the multivariate cubature, along with the estimated integral over each simplex. The reasoning is that the integration routine is subdividing regions where the integrand is changing quickly to get a better estimate of the integrand. This subdivision should make the tessellation more closely approximate the contour. We now have the final tessellation of the unit sphere, an estimate of the integral (2) over each of the simplices, and an estimate of the norming constant, e.g. sum of these just mentioned values.
Now the tessellation of the contour is defined by deforming the tessellation of the sphere to the contour: each partition point \({\mathbf {s}} \in {\mathbb {S}}\) gets mapped to c(s)s on the contour. The grouping information from the spherical tessellation is inherited by the contour tessellation. This tessellation is returned as an S3 object of class "mvmesh". This object contains the vertices, the grouping information, and a list of all the simplices S 1,S 2,…,S k in the tessellation. One advantage of this is that the plot method from the mvmesh package can plot the contours in 2 and 3 dimensions. This process of refining the tessellation has two purposes: (a) get a more accurate estimate of the norming constant by focusing the numerical integration routine on regions where the integrand changes rapidly and (b) get a more accurate tessellation of the contour. Each step of this process can add more simplices, with the goal of capturing key features of the contour. For example, the contour in Fig. 4 started with 512 simplices in the tessellation of the sphere in \({\mathbb {R}}^{3}\) with k=3, adding the points on the cones brought the number up to 888 simplices, and after the adaptive cubature routine subdivision there were 2284 simplices.
Exact simulation from a surface is a challenging problem and general methods are difficult to apply for complicated contours like our star-shaped regions. We now describe an approximate method based on the above tessellation. Recall that the above process gives us a list of simplices S 1,…,S m and associated weights w 1,…,w m , with w j an estimate of the surface area of the contour approximated by simplex S j .
The simulation routine to sample from the tessellation is straightforward (a code sketch follows the steps below):
Select an index j∈{1,…,m} with probability proportional to $w_{j}$.
Simulate a point u that is uniformly distributed on the unit simplex in d dimensions. This is standard: simulate u from a Dirichlet distribution with parameter α=(1,1,…,1), e.g. let $E_{1},\ldots,E_{d}$ be i.i.d. standard exponential random variates and set $\mathbf{u}=(E_{1},\ldots,E_{d})/\left(\sum_{i=1}^{d}E_{i}\right)$.
Map the point u to the simplex $S_{j}$ using the coordinates of u as barycentric coordinates: $\mathbf{Z}=\mathbf{u}^{\top}S_{j}$.
Simulate R from the radial distribution with density h(r).
Return the value X=R Z.
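A minimal NumPy transcription of these five steps is sketched below (the actual implementation lives in the R package gensphere; the array shapes and the radial sampler are the user's assumptions):

```python
import numpy as np

def sample_tessellation(simplices, weights, radial_sampler, n, seed=None):
    """Approximate simulation of X = R Z from a tessellated contour.

    simplices      : (m, d, d) array -- each surface simplex as d vertices in R^d
    weights        : length-m array, w_j ~ surface measure carried by simplex S_j
    radial_sampler : function(n) returning n draws of R with density h(r)
    """
    rng = np.random.default_rng(seed)
    m, d, _ = simplices.shape
    p = np.asarray(weights, float)
    p = p / p.sum()

    idx = rng.choice(m, size=n, p=p)                # step 1: pick simplex j ~ w_j
    E = rng.standard_exponential((n, d))
    u = E / E.sum(axis=1, keepdims=True)            # step 2: Dirichlet(1,...,1) point
    Z = np.einsum('ni,nij->nj', u, simplices[idx])  # step 3: barycentric map u^T S_j
    R = radial_sampler(n)                           # step 4: radial term R ~ h
    return R[:, None] * Z                           # step 5: X = R Z

# e.g. a Gamma(2,1) radial term, as used for Fig. 2:
# X = sample_tessellation(S, w, lambda k: np.random.default_rng().gamma(2.0, 1.0, k), 1000)
```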
This method works in any dimension and the first three steps are adaptable to a wide variety of shapes, more than just the contours described above. This gives a way to define distributions on paths and surfaces. Figure 6 illustrates some examples with different shapes and weights. In all cases the points Z are sampled from the approximating simplex faces; to work well the tessellation should be fine enough to closely approximate the shape of the surface of interest. This is controlled by the parameter k described above. The trefoil knot in the upper left plot is approximated by 101 line segments; for simulation, a line segment is sampled uniformly (w j =1/101) and then a point is picked randomly along that segment. In the second plot, the letters JSDA are constructed out of straight line segments, then embedded in \({\mathbb {R}}^{3}\). A line segment is selected with weight proportional to the lengths of the line segments making up the letters, and then a point is sampled uniformly along that segment. The bottom left plot subdivides the unit simplex x 1+x 2+x 3=1, x 1≥0, x 2≥0, x 3≥0 into 100 triangles of equal area (a k=10 edge subdivision) and weights are assigned to each triangle with weights proportional to w j = average of the density \(\exp \left (-20 |{\mathbf {x}}-\left (\frac 1 3, \frac 1 3,\frac 1 3\right) |^{2} \right)\) at the vertices of simplex j. The last plot shows a hollow tube approximated by 160 rectangles (5 subdivisions along the axis and 32 subdivisions around the cylinder) with rectangles sampled uniformly and points sampled uniformly from that rectangle.
Approximate simulation from general sets; details are given in the text. At top left is a trefoil knot with points sampled uniformly from the path. Top right has the letters JSDA constructed from line segments, then embedded in \({\mathbb {R}}^{3}\). Points are then sampled uniformly according to lengths of the line segments. Bottom left has points sampled on the unit simplex according to a density \(\exp \left (-20 |{\mathbf {x}}-\left (\frac 1 3, \frac 1 3,\frac 1 3\right) |^{2}\right)\). The last plot shows points sampled uniformly from a hollow tube
The subdivision process, including the numerical cubature, is the slowest part of the procedure. It is done in the R function cfunc.finish, which finishes the definition of a contour by performing the above calculations and saving the results in an object of class "contour.function". For example, the 3-dimensional contour in Fig. 4 took about half an hour1 to construct.
In contrast, once the tessellation is produced, density calculations and simulations are quite fast: to evaluate a density at 10,000 points takes less than a second and to simulate 100,000 random vectors takes less than a second for this example.
In principle, the methods described here work in any dimension; in practice the numerical challenges, particularly evaluating the integral in (2) and the time needed to work limit us as the dimension increases. At the current time, these methods are useful for low dimension d=2, 3, or 4.
1 Times are for an Intel i5-4460 CPU at 3.20 GHz.
Here are the R statements used to produce Figs. 1 and 2 from the R package gensphere.
Additional file 1 contains the R commands to generate the other figures in this paper.
The author is grateful to the referees and associate editor who provided valuable suggestions on improving the paper and additional references.
Supported by contract W911NF-12-1-0385 from the Army Research Office.
I confirm that I have read SpringerOpen's guidance on competing interests and have no competing interests in the manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Additional file 1 R commands to generate Figs. 3, 4, 5 and 6. (R 11.2 kb)
Department of Mathematics and Statistics, American University, Washington, DC, USA
Arnold, BC, Castillo, E, Sarabia, JM: Multivariate distributions defined in terms of contours. J. Stat. Plan. Inf. 138, 4158–4171 (2008).
Balkema, G, Nolde, N: Asymptotic independence for unimodal densities. Adv. Appl. Prob. 42, 411–432 (2010).
Fernández, C, Osiewalski, J, Steel, MFJ: Modeling and inference with v-spherical distributions. J. Amer. Stat. Assoc. 90, 1331–1340 (1995).
Kamiya, H, Takemura, A, Kuriki, S: Star-shaped distributions and their generalizations. J. Stat. Plan. Inf. 138, 3429–3447 (2008).
Nolan, JP: Multivariate elliptically contoured stable distributions: theory and estimation. Comp. Stat. 28, 2067–2089 (2013).
Nolan, JP: mvmesh: Multivariate Meshes and Histograms in Arbitrary Dimensions. R package version 1.1, on CRAN (2015a). https://CRAN.R-project.org/package=mvmesh. Accessed 16 May 2016.
Nolan, JP: SphericalCubature: Numerical Integration over Spheres and Balls in n-Dimensions. R package version 1.1, on CRAN (2015b). https://CRAN.R-project.org/package=SphericalCubature. Accessed 24 July 2016.
Rattihalli, RN, Basugade, AB: Generation of densities using contour transformations. J. Indian Stat. Assoc. 47, 63–90 (2009).
Rattihalli, RN, Patil, PY: Generalized v-spherical densities. Comm. Stat. Theory Methods 39, 3568–3583 (2010).
Richter, WD: Geometric disintegration and star-shaped distributions. J. Stat. Distrib. Appl. 1, 20 (2014). doi:10.1186/s40488-014-0020-6.
Simon, C, Blume, L: Mathematics for Economists. Norton, New York (1994).
Applications of Calculus to the physical world
Gradient as a measure or rate
Related rates (dy/dx=dy/dt x dt/dx)
Exponential Growth and Decay
Newton's law of cooling
Displacement, velocity and acceleration (mixed functions)
Velocity and acceleration as functions of x
Level 8 - NCEA Level 3
In this chapter, we use the relation
$\frac{\mathrm{d}y}{\mathrm{d}x}=\frac{\mathrm{d}y}{\mathrm{d}u}\times\frac{\mathrm{d}u}{\mathrm{d}x}$
This is the chain rule in the notation of Leibniz. We use the chain rule to differentiate functions that are a function of a function. In particular, given a function $y=f\left(u(x)\right)$, its derivative is given by the expression above.
It may be that we know how to express a variable $y$ as a function of $u$. Thus, we can differentiate $y$ with respect to $u$ to obtain $\frac{\mathrm{d}y}{\mathrm{d}u}$. We may also have an expression for $u$ as a function of $x$ and therefore we can find $\frac{\mathrm{d}u}{\mathrm{d}x}$.
Putting these together, we obtain $\frac{\mathrm{d}y}{\mathrm{d}x}$ as the product of the two derivatives. That is, we find the rate of change of $y$ with respect to $x$.
It can happen that we know how to write the derivatives $\frac{\mathrm{d}y}{\mathrm{d}t}$ and $\frac{\mathrm{d}x}{\mathrm{d}t}$ but we require $\frac{\mathrm{d}y}{\mathrm{d}x}$. To make this fit into the framework of the chain rule as described above, we need an additional fact:
If we have the rate of change of $y$ with respect to $x$, then the rate of change of $x$ with respect to $y$ is its reciprocal.
In the present example, we have $\frac{\mathrm{d}x}{\mathrm{d}t}$ and we need $\frac{\mathrm{d}t}{\mathrm{d}x}$ in order to use the chain rule. So, we use the fact that $\frac{\mathrm{d}t}{\mathrm{d}x}=\frac{1}{\frac{\mathrm{d}x}{\mathrm{d}t}}$.
A continuous supply of ink is seeping onto a porous plane surface so that a circular spot forms and grows over time. The area of the spot is increasing at the rate of $0.5\text{ cm}^2$ per second. However, the radius of the spot is increasing at a rate that reduces as the spot grows. Find an expression for the time-rate of change of the radius and deduce the rate of change when the radius is $6\text{ cm}$.
The first thing to do in problems of this kind is to define some variables so that the problem can be expressed algebraically.
Let $A$ be the area of the spot, let $r$ be its radius and let $t$ be the elapsed time. The goal is to find $\frac{\mathrm{d}r}{\mathrm{d}t}$ when $r=6$.
We are given that $\frac{\mathrm{d}A}{\mathrm{d}t}=0.5$, and we can make use of the fact that $A=\pi r^2$ and hence, $\frac{\mathrm{d}A}{\mathrm{d}r}=2\pi r$.
The three derivatives combine to form $\frac{\mathrm{d}A}{\mathrm{d}r}\cdot\frac{\mathrm{d}r}{\mathrm{d}t}=\frac{\mathrm{d}A}{\mathrm{d}t}$. That is, $2\pi r\times\frac{\mathrm{d}r}{\mathrm{d}t}=0.5$. On rearranging, this is $\frac{\mathrm{d}r}{\mathrm{d}t}=\frac{0.5}{2\pi r}=\frac{1}{4\pi r}$.
Therefore, when $r=6$, we have $\frac{\mathrm{d}r}{\mathrm{d}t}=\frac{1}{24\pi}\approx0.01\text{ cm/s}$.
The fuel supply to a certain rocket engine is regulated in such a way that the rocket travels with a constant acceleration of $20\text{ m/s}^2$ for $90$ seconds after launch. What is the rate of change of velocity with respect to displacement when the velocity reaches $1000\text{ m/s}$?
Let $a$ be the acceleration, $v$ the velocity, $s$ the displacement, and let $t$ be the elapsed time. We are asked to find $\frac{\mathrm{d}v}{\mathrm{d}s}$ when $v=1000$.
Acceleration is the time rate of change of velocity. In symbols, $a=\frac{\mathrm{d}v}{\mathrm{d}t}$. But, according to the chain rule, this is $\frac{\mathrm{d}v}{\mathrm{d}s}\cdot\frac{\mathrm{d}s}{\mathrm{d}t}$. We recall that velocity is the time rate of change of displacement, $v=\frac{\mathrm{d}s}{\mathrm{d}t}$. It follows that an alternative characterisation of acceleration is $a=v\frac{\mathrm{d}v}{\mathrm{d}s}$.
Therefore, $a=20=\frac{\mathrm{d}v}{\mathrm{d}s}\times1000$ and so, $\frac{\mathrm{d}v}{\mathrm{d}s}$ at $v=1000$ is $\frac{1}{50}\ \text{s}^{-1}$. This means that for a brief time the velocity increases by $\frac{1}{50}\ \text{m/s}$ in the space of one metre.
Worked Examples
The volume $V$ of oxygen in a scuba diver's oxygen cylinder is given by $V=\frac{22}{P}$, where $P$ is the pressure inside the tank.
Find the rate of change of $V$ with respect to $P$.
During a dive, the pressure $P$ inside the cylinder increases at $0.5$ units per second. Find the rate of change of the volume of oxygen when $P=2$.
Let $t$ represent time in seconds.
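A sketch of the working (assuming the given relationship holds throughout the dive): differentiating gives $\frac{\mathrm{d}V}{\mathrm{d}P}=-\frac{22}{P^2}$, so by the chain rule $\frac{\mathrm{d}V}{\mathrm{d}t}=\frac{\mathrm{d}V}{\mathrm{d}P}\times\frac{\mathrm{d}P}{\mathrm{d}t}=-\frac{22}{P^2}\times0.5=-\frac{11}{P^2}$. At $P=2$ this is $-\frac{11}{4}=-2.75$ volume units per second; the volume of oxygen is decreasing as the pressure rises.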
A spherical hot air balloon, whose volume and radius at time $t$ are $V$ m$^3$ and $r$ m respectively, is filled with air at a rate of $4$ m$^3$/min.
At what rate is the radius of the balloon increasing when the radius is $2$ m?
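A sketch of the working (assuming the balloon remains a perfect sphere, so $V=\frac{4}{3}\pi r^3$): since $\frac{\mathrm{d}V}{\mathrm{d}r}=4\pi r^2$ and $\frac{\mathrm{d}V}{\mathrm{d}t}=4$, the chain rule gives $\frac{\mathrm{d}r}{\mathrm{d}t}=\frac{\mathrm{d}V}{\mathrm{d}t}\div\frac{\mathrm{d}V}{\mathrm{d}r}=\frac{4}{4\pi r^2}=\frac{1}{\pi r^2}$. At $r=2$ the radius is therefore increasing at $\frac{1}{4\pi}\approx0.08$ m/min.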
A point moves along the curve $y=5x^3$ in such a way that the $x$-coordinate of the point increases by $\frac{1}{5}$ units per second.
Let $t$ be the time at which the point reaches $\left(x,y\right)$.
Find the rate at which the $y$-coordinate is changing with respect to time when $x=9$.
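A sketch of the working: $\frac{\mathrm{d}y}{\mathrm{d}x}=15x^2$, so $\frac{\mathrm{d}y}{\mathrm{d}t}=\frac{\mathrm{d}y}{\mathrm{d}x}\times\frac{\mathrm{d}x}{\mathrm{d}t}=15x^2\times\frac{1}{5}=3x^2$. At $x=9$ the $y$-coordinate is increasing at $3\times9^2=243$ units per second.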
M8-11
Choose and apply a variety of differentiation, integration, and antidifferentiation techniques to functions and relations, using both analytical and numerical methods
Apply differentiation methods in solving problems
Metal Processing Plant
Yulia works for a metal processing plant in Ekaterinburg. This plant processes ores mined in the Ural mountains, extracting precious metals such as chalcopyrite, platinum and gold from the ores. Every month the plant receives $n$ shipments of unprocessed ore. Yulia needs to partition these shipments into two groups based on their similarity. Then, each group is sent to one of two ore processing buildings of the plant.
To perform this partitioning, Yulia first calculates a numeric distance $d(i, j)$ for each pair of shipments $1 \le i \le n$ and $1 \le j \le n$, where the smaller the distance, the more similar the shipments $i$ and $j$ are. For a subset $S \subseteq \{ 1, \ldots , n\} $ of shipments, she then defines the disparity $D$ of $S$ as the maximum distance between a pair of shipments in the subset, that is,
\[ D(S) = \max _{i, j \in S} d(i, j). \]
Yulia then partitions the shipments into two subsets $A$ and $B$ in such a way that the sum of their disparities $D(A) + D(B)$ is minimized. Your task is to help her find this partitioning.
The input consists of a single test case. The first line contains an integer $n$ ($1 \le n \le 200$) indicating the number of shipments. The following $n - 1$ lines contain the distances $d(i,j)$. The $i^{th}$ of these lines contains $n - i$ integers and the $j^{th}$ integer of that line gives the value of $d(i, i+j)$. The distances are symmetric, so $d(j, i) = d(i, j)$, and the distance of a shipment to itself is $0$. All distances are integers between $0$ and $10^9$ (inclusive).
Display the minimum possible sum of disparities for partitioning the shipments into two groups.
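For illustration (this is a helper sketch, not a full solution to the problem), the triangular input format can be parsed into a symmetric matrix and the objective $D(A)+D(B)$ evaluated for any candidate partition as follows; finding the partition that minimizes this sum still requires a search over candidate disparities, which is the actual task.

```python
import sys

def read_distances():
    """Parse the triangular listing of d(i, i+j) into a full symmetric matrix."""
    data = sys.stdin.read().split()
    pos = 0
    n = int(data[pos]); pos += 1
    d = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = int(data[pos]); pos += 1
    return n, d

def disparity_sum(d, A, B):
    """D(A) + D(B) for a candidate partition (A, B) of {0, ..., n-1}."""
    def disparity(S):
        return max((d[i][j] for i in S for j in S if i < j), default=0)
    return disparity(A) + disparity(B)
```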
Problem ID: metal
CPU Time limit: 2 seconds
November 2013, 12(6): 2839-2872. doi: 10.3934/cpaa.2013.12.2839
Eigenvalues, bifurcation and one-sign solutions for the periodic $p$-Laplacian
Guowei Dai1, Ruyun Ma1 and Haiyan Wang2
Department of Mathematics, Northwest Normal University, Lanzhou 730070, China
Division of Mathematical and Natural Sciences, Arizona State University, Phoenix, AZ 85069-7100
Received December 2012 Revised February 2013 Published May 2013
In this paper, we establish a unilateral global bifurcation result for a class of quasilinear periodic boundary problems with a sign-changing weight. By the Ljusternik-Schnirelmann theory, we first study the spectrum of the periodic $p$-Laplacian with the sign-changing weight. In particular, we show that there exist two simple, isolated, principal eigenvalues $\lambda_0^+$ and $\lambda_0^-$. Furthermore, under some natural hypotheses on perturbation function, we show that $(\lambda_0^\nu,0)$ is a bifurcation point of the above problems and there are two distinct unbounded sub-continua $C_\nu^{+}$ and $C_\nu^{-}$, consisting of the continuum $C_\nu$ emanating from $(\lambda_0^\nu, 0)$, where $\nu\in\{+,-\}$. As an application of the above result, we study the existence of one-sign solutions for a class of quasilinear periodic boundary problems with the sign-changing weight. Moreover, the uniqueness of one-sign solutions and the dependence of solutions on the parameter $\lambda$ are also studied.
Keywords: Eigenvalues, Unilateral global bifurcation, Periodic $p$-Laplacian, One-sign solutions.
Mathematics Subject Classification: Primary: 34B18, 34C23; Secondary: 34D23, 34L0.
Citation: Guowei Dai, Ruyun Ma, Haiyan Wang. Eigenvalues, bifurcation and one-sign solutions for the periodic $p$-Laplacian. Communications on Pure & Applied Analysis, 2013, 12 (6) : 2839-2872. doi: 10.3934/cpaa.2013.12.2839
Guowei Dai. Bifurcation and one-sign solutions of the $p$-Laplacian involving a nonlinearity with zeros. Discrete & Continuous Dynamical Systems - A, 2016, 36 (10) : 5323-5345. doi: 10.3934/dcds.2016034
Guowei Dai, Ruyun Ma. Unilateral global bifurcation for $p$-Laplacian with non-$p-$1-linearization nonlinearity. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 99-116. doi: 10.3934/dcds.2015.35.99
Marta García-Huidobro, Raul Manásevich, J. R. Ward. Vector p-Laplacian like operators, pseudo-eigenvalues, and bifurcation. Discrete & Continuous Dynamical Systems - A, 2007, 19 (2) : 299-321. doi: 10.3934/dcds.2007.19.299
Michael Filippakis, Alexandru Kristály, Nikolaos S. Papageorgiou. Existence of five nonzero solutions with exact sign for a $p$-Laplacian equation. Discrete & Continuous Dynamical Systems - A, 2009, 24 (2) : 405-440. doi: 10.3934/dcds.2009.24.405
Shao-Yuan Huang. Global bifurcation and exact multiplicity of positive solutions for the one-dimensional Minkowski-curvature problem with sign-changing nonlinearity. Communications on Pure & Applied Analysis, 2019, 18 (6) : 3267-3284. doi: 10.3934/cpaa.2019147
Leszek Gasiński, Nikolaos S. Papageorgiou. Three nontrivial solutions for periodic problems with the $p$-Laplacian and a $p$-superlinear nonlinearity. Communications on Pure & Applied Analysis, 2009, 8 (4) : 1421-1437. doi: 10.3934/cpaa.2009.8.1421
Leandro M. Del Pezzo, Julio D. Rossi. Eigenvalues for a nonlocal pseudo $p-$Laplacian. Discrete & Continuous Dynamical Systems - A, 2016, 36 (12) : 6737-6765. doi: 10.3934/dcds.2016093
Lorenzo Brasco, Enea Parini, Marco Squassina. Stability of variational eigenvalues for the fractional $p-$Laplacian. Discrete & Continuous Dynamical Systems - A, 2016, 36 (4) : 1813-1845. doi: 10.3934/dcds.2016.36.1813
Samir Adly, Daniel Goeleven, Dumitru Motreanu. Periodic and homoclinic solutions for a class of unilateral problems. Discrete & Continuous Dynamical Systems - A, 1997, 3 (4) : 579-590. doi: 10.3934/dcds.1997.3.579
Wenguo Shen. Unilateral global interval bifurcation for Kirchhoff type problems and its applications. Communications on Pure & Applied Analysis, 2018, 17 (1) : 21-37. doi: 10.3934/cpaa.2018002
Marek Galewski, Renata Wieteska. Multiple periodic solutions to a discrete $p^{(k)}$ - Laplacian problem. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2535-2547. doi: 10.3934/dcdsb.2014.19.2535
Yuxiang Zhang, Shiwang Ma. Some existence results on periodic and subharmonic solutions of ordinary $P$-Laplacian systems. Discrete & Continuous Dynamical Systems - B, 2009, 12 (1) : 251-260. doi: 10.3934/dcdsb.2009.12.251
Shanming Ji, Yutian Li, Rui Huang, Xuejing Yin. Singular periodic solutions for the p-Laplacian in a punctured domain. Communications on Pure & Applied Analysis, 2017, 16 (2) : 373-392. doi: 10.3934/cpaa.2017019
Wenbin Liu, Zhaosheng Feng. Periodic solutions for $p$-Laplacian systems of Liénard-type. Communications on Pure & Applied Analysis, 2011, 10 (5) : 1393-1400. doi: 10.3934/cpaa.2011.10.1393
Adam Lipowski, Bogdan Przeradzki, Katarzyna Szymańska-Dębowska. Periodic solutions to differential equations with a generalized p-Laplacian. Discrete & Continuous Dynamical Systems - B, 2014, 19 (8) : 2593-2601. doi: 10.3934/dcdsb.2014.19.2593
Shanming Ji, Jingxue Yin, Yutian Li. Positive periodic solutions of the weighted $p$-Laplacian with nonlinear sources. Discrete & Continuous Dynamical Systems - A, 2018, 38 (5) : 2411-2439. doi: 10.3934/dcds.2018100
Po-Chun Huang, Shin-Hwa Wang, Tzung-Shin Yeh. Classification of bifurcation diagrams of a $P$-Laplacian nonpositone problem. Communications on Pure & Applied Analysis, 2013, 12 (5) : 2297-2318. doi: 10.3934/cpaa.2013.12.2297
Francisco Odair de Paiva, Humberto Ramos Quoirin. Resonance and nonresonance for p-Laplacian problems with weighted eigenvalues conditions. Discrete & Continuous Dynamical Systems - A, 2009, 25 (4) : 1219-1227. doi: 10.3934/dcds.2009.25.1219
K. D. Chu, D. D. Hai. Positive solutions for the one-dimensional singular superlinear $ p $-Laplacian problem. Communications on Pure & Applied Analysis, 2020, 19 (1) : 241-252. doi: 10.3934/cpaa.2020013
Giuseppina Barletta, Roberto Livrea, Nikolaos S. Papageorgiou. A nonlinear eigenvalue problem for the periodic scalar $p$-Laplacian. Communications on Pure & Applied Analysis, 2014, 13 (3) : 1075-1086. doi: 10.3934/cpaa.2014.13.1075
April 2017, 14(2): 437-453. doi: 10.3934/mbe.2017027
Detecting phase transitions in collective behavior using manifold's curvature
Kelum Gajamannage and Erik M. Bollt
Department of Mathematics, Clarkson University, Potsdam, NY-13699, USA
*Corresponding author
Received September 23, 2015 Revised July 19, 2016 Published October 2016
Fund Project: The authors were supported by the NSF grant CMMI-1129859. Erik M. Bollt was supported by the Army Research Office grant W911NF-12-1-276 and Office of Naval Research grant N00014-15-2093.
If a given behavior of a multi-agent system restricts the phase variable to an invariant manifold, then we define a phase transition as a change of physical characteristics such as speed, coordination, and structure. Such a phase transition splits the underlying manifold into two sub-manifolds of distinct dimensionality around the singularity where the transition physically occurs. Here, we propose a method for detecting phase transitions and splitting the manifold into phase-transition-free sub-manifolds. We first exploit a relationship between the curvature of a curve and the singular value ratio of points sampled from it, and then extend this assertion to higher dimensions using the shape operator. Second, we show that the same phase transition can also be approximated by singular value ratios computed locally over the data in a neighborhood on the manifold. We validate the Phase Transition Detection (PTD) method using one particle simulation and three real-world examples.
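As a rough illustration of the second idea (this is not the authors' code; the window size, the number of reported transitions, and the synthetic data below are illustrative assumptions), the locally computed singular value ratio can be estimated with a sliding window, and frames where that ratio changes abruptly flagged as candidate phase transitions:

    # Minimal sketch: local singular value ratios over a sliding window,
    # with abrupt changes in the ratio flagged as candidate phase transitions.
    import numpy as np

    def local_sv_ratios(frames, window=10):
        """frames: array of shape (n_frames, n_features); returns the ratio
        of the smallest to the largest singular value in each window."""
        ratios = []
        for i in range(len(frames) - window):
            block = frames[i:i + window]
            block = block - block.mean(axis=0)           # center the window
            s = np.linalg.svd(block, compute_uv=False)   # singular values, descending
            ratios.append(s[-1] / s[0])                  # small ratio -> nearly flat (low-dimensional) patch
        return np.array(ratios)

    def candidate_transitions(ratios, top=3):
        """Indices with the largest frame-to-frame change in the ratio."""
        changes = np.abs(np.diff(ratios))
        return np.argsort(changes)[-top:][::-1]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic data: 200 "frames" that switch from nearly one-dimensional
        # motion to genuinely two-dimensional motion halfway through.
        t = np.linspace(0, 1, 100)
        phase1 = np.c_[t, 0.01 * rng.standard_normal(100)]
        phase2 = rng.standard_normal((100, 2))
        data = np.vstack([phase1, phase2])
        print("candidate transition frames:", candidate_transitions(local_sv_ratios(data)))

Windows that straddle a change in the local dimensionality of the data produce the largest jumps in the ratio, which is what the frame numbers reported above pick out.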
Keywords: Phase transition, manifold, collective behavior, dimensionality reduction, curvature.
Mathematics Subject Classification: Primary: 53C15, 53C21; Secondary: 58D15.
Citation: Kelum Gajamannage, Erik M. Bollt. Detecting phase transitions in collective behavior using manifold's curvature. Mathematical Biosciences & Engineering, 2017, 14 (2) : 437-453. doi: 10.3934/mbe.2017027
Birds flying away, shutterstock. Available from: https://www.shutterstock.com/video/clip-3003274-stock-footage-birds-flying-away.html?src=search/Yg-XYej1Po2F0VO3yykclw:1:19/gg. Google Scholar
Data set of detection of unusual crowd activity available at robotics and vision laboratory, Department of Computer Science and Engineering, University of Minnesota. Available from: http://mha.cs.umn.edu/proj_events.shtml. Google Scholar
Data set of pet2009 at Computational Vision Group, University of Reading, 2009. Available from: http://ftp.pets.reading.ac.uk/pub/. Google Scholar
N. Abaid, E. Bollt and M. Porfiri, Topological analysis of complexity in multiagent systems, Physical Review E, 85 (2012), 041907. doi: 10.1103/PhysRevE.85.041907. Google Scholar
A. Gray, Modern Differential Geometry of Curves and Surfaces with Mathematica, CRC Press, 1998. Google Scholar
I. R. de Almeida and C. R. Jung, Change detection in human crowds, in Graphics, Patterns and Images (SIBGRAPI), 2013 26th SIBGRAPI-Conference on, IEEE, (2013), 63-69. doi: 10.1109/SIBGRAPI.2013.18. Google Scholar
E. L. Andrade, S. Blunsden and R. B. Fisher, Hidden markov models for optical flow analysis in crowds, in Pattern Recognition, 2006. ICPR 2006. 18th International Conference on, IEEE, 1(2006), 460-463. doi: 10.1109/ICPR.2006.621. Google Scholar
M. Ballerini, N. Cabibbo, R. Candelier, A. Cavagna, E. Cisbani, I. Giardina, A. Orlandi, G. Parisi, A. Procaccini and M. Viale, Empirical investigation of starling flocks: A benchmark study in collective animal behavior, Animal Behaviour, 76 (2008), 201-215. doi: 10.1016/j.anbehav.2008.02.004. Google Scholar
C. Becco, N. Vandewalle, J. Delcourt and P. Poncin, Experimental evidences of a structural and dynamical transition in fish school, Physica A: Statistical Mechanics and its Applications, 367 (2006), 487-493. doi: 10.1016/j.physa.2005.11.041. Google Scholar
M. Beekman, D. J. T. Sumpter and F. L. W. Ratnieks, Phase transition between disordered and ordered foraging in pharaoh's ants, Proceedings of the National Academy of Sciences, 98 (2001), 9703-9706. doi: 10.1073/pnas.161285298. Google Scholar
A. C. Bovik, Handbook of Image and Video Processing, Academic press, 2010. Google Scholar
R. Bracewell, Fourier Analysis and Imaging, Springer Science & Business Media, 2010. doi: 10.1007/978-1-4419-8963-5. Google Scholar
I. D. Couzin, Collective cognition in animal groups, Trends in cognitive sciences, 13 (2009), 36-43. doi: 10.1016/j.tics.2008.10.002. Google Scholar
I. D. Couzin, J. Krause, N. R. Franks and S. A. Levin, Effective leadership and decision-making in animal groups on the move, Nature, 433 (2005), 513-516. doi: 10.1038/nature03236. Google Scholar
A. Deutsch, Principles of biological pattern formation: Swarming and aggregation viewed as self organization phenomena, Journal of Biosciences, 24 (1999), 115-120. doi: 10.1007/BF02941115. Google Scholar
J. H. Friedman, J. L. Bentley and R. A. Finkel, An algorithm for finding best matches in logarithmic expected time, ACM Transactions on Mathematical Software, 3 (1977), 209-226. doi: 10.1145/355744.355745. Google Scholar
K. Gajamannage, S. Butailb, M. Porfirib and E. M. Bollt, Model reduction of collective motion by principal manifolds, Physica D: Nonlinear Phenomena, 291 (2015), 62-73. doi: 10.1016/j.physd.2014.09.009. Google Scholar
K. Gajamannage, S. Butailb, M. Porfirib and E. M. Bollt, Identifying manifolds underlying group motion in Vicsek agents, The European Physical Journal Special Topics, 224 (2015), 3245-3256. doi: 10.1140/epjst/e2015-50088-2. Google Scholar
J. J. Gerbrands, On the relationships between SVD, KLT and PCA, Pattern recognition, 14 (1981), 375-381. doi: 10.1016/0031-3203(81)90082-0. Google Scholar
R. Gerlai, High-throughput behavioral screens: The first step towards finding genes involved in vertebrate brain function using zebra fish, Molecules, 15 (2010), 2609-2622. doi: 10.3390/molecules15042609. Google Scholar
G. H. Golub and C. Reinsch, Singular value decomposition and least squares solutions, Numerische Mathematik, 14 (1970), 403-420. doi: 10.1007/BF02163027. Google Scholar
D. Helbing, J. Keltsch and P. Molnar, Modelling the evolution of human trail systems, Nature, 388 (1997), 47-50. Google Scholar
J. M. Lee, Riemannian Manifolds: An Introduction to Curvature, volume 176, Springer, 1997. doi: 10.1007/b98852. Google Scholar
J. M. Lee, Introduction to Smooth Manifolds, Graduate Texts in Mathematics, 218. Springer-Verlag, New York, 2003. doi: 10.1007/978-0-387-21752-9. Google Scholar
R. Mehran, A. Oyama and M. Shah, Abnormal crowd behavior detection using social force model, in Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, IEEE, (2009), 935-942. doi: 10.1109/CVPR.2009.5206641. Google Scholar
M. M. Millonas, Swarms, Phase Transitions, and Collective Intelligence, Technical report, Los Alamos National Lab., New Mexico, USA, 1992. Google Scholar
S. R. Musse and D. Thalmann, A model of human crowd behavior: Group inter-relationship and collision detection analysis, in Computer Animation and Simulation, Springer, (1997), 39-51. doi: 10.1007/978-3-7091-6874-5_3. Google Scholar
M. Nagy, Z. Ákos, D. Biro and T. Vicsek, Hierarchical group dynamics in pigeon flocks, Nature, 464 (2010), 890-893. doi: 10.1038/nature08891. Google Scholar
B. O'Neill, Elementary Differential Geometry, Academic Press, New York, 1966. Google Scholar
T. Papenbrock and T. H. Seligman, Invariant manifolds and collective motion in many-body systems, AIP Conf. Proc. , 597(2001), p301, arXiv: nlin/0206035. doi: 10.1063/1.1427476. Google Scholar
B. L. Partridge, The structure and function of fish schools, Scientific American, 246 (1982), 114-123. doi: 10.1038/scientificamerican0682-114. Google Scholar
W. Rappel, A. Nicol, A. Sarkissian, H. Levine and W. F. Loomis, Self-organized vortex state in two-dimensional dictyostelium dynamics, Physical Review Letters, 83 (1999), p1247. Google Scholar
E. M. Rauch, M. M. Millonas and D. R. Chialvo, Pattern formation and functionality in swarm models, Physics Letters A, 207 (1995), 185-193. doi: 10.1016/0375-9601(95)00624-C. Google Scholar
V. Y. Rovenskii, Topics in Extrinsic Geometry of Codimension-one Foliations, Springer, 2011. doi: 10.1007/978-1-4419-9908-5. Google Scholar
S. T. Roweis and L. K. Saul, Nonlinear dimensionality reduction by locally linear embedding, Science, 290 (2000), 2323-2326. doi: 10.1126/science.290.5500.2323. Google Scholar
R. V. Solé, S. C. Manrubia, B. Luque, J. Delgado and J. Bascompte, Phase transitions and complex systems: Simple, nonlinear models capture complex systems at the edge of chaos, Complexity, 1 (1996), 13-26. Google Scholar
D. Somasundaram, Differential Geometry: A First Course, Alpha Science Int'l Ltd., 2005. Google Scholar
D. Sumpter, J. Buhl, D. Biro and I. Couzin, Information transfer in moving animal groups, Theory in Biosciences, 127 (2008), 177-186. doi: 10.1007/s12064-008-0040-1. Google Scholar
J. B. Tenenbaum, V. De Silva and J. C. Langford, A global geometric framework for nonlinear dimensionality reduction, Science, 290 (2000), 2319-2323. doi: 10.1126/science.290.5500.2319. Google Scholar
E. Toffin, D. D. Paolo, A. Campo, C. Detrain and J. Deneubourg, Shape transition during nest digging in ants, Proceedings of the National Academy of Sciences, 106 (2009), 18616-18620. doi: 10.1073/pnas.0902685106. Google Scholar
C. M. Topaz and A. L. Bertozzi, Swarming patterns in a two-dimensional kinematic model for biological groups, SIAM Journal on Applied Mathematics, 65 (2004), 152-174. doi: 10.1137/S0036139903437424. Google Scholar
T. Vicsek, A. Cziró, E. Ben-Jacob, I. Cohen and O. Shochet, Novel type of phase transition in a system of self-driven particles, Physical Review Letters, 75 (1995), 1226-1229. doi: 10.1103/PhysRevLett.75.1226. Google Scholar
E. Witten, Phase transitions in m-theory and f-theory, Nuclear Physics B, 471 (1996), 195-216. doi: 10.1016/0550-3213(96)00212-X. Google Scholar
P. N. Yianilos, Data structures and algorithms for nearest neighbor search in general metric spaces, in Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, (1993), 311-321. Google Scholar
T. Zhao and R. Nevatia, Tracking multiple humans in crowded environment, in Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, IEEE, (2004), Ⅱ-406. doi: 10.1109/CVPR.2004.1315192. Google Scholar
Figure 1. An abrupt phase change of the crowd behavior where a walking crowd suddenly starts running [2]. (a) The first phase of the motion (walking) is embedded onto the blue colored manifold, while the second phase (running) is embedded onto the red colored manifold. Two snapshots showing walking and running at time steps $t_1$ and $t_2$ are embedded onto points $\boldsymbol{p}^{(t_1)}$ and $\boldsymbol{p}^{(t_2)}$, respectively, in the corresponding manifolds. The locus of singularities ($\mathcal{L}$) is represented by orange color. (b) The scaled residual variance versus the dimensionality, that gives the dimensionality of the underlying manifold by an elbow, is obtained by running Isomap upon frames in each phase with 6 nearest neighbors. Embedding dimensionalities of sub-manifolds representing walking (blue circle) and running (red square) of the crowd are three and four, respectively
Figure 2. (a) Superimposing a neighborhood of the curve $C$ at point $\boldsymbol{p}$ with an arc $\boldsymbol{p}_1\boldsymbol{p}\boldsymbol{p}_2$ which subtends a small angle of $2T$ at the origin of a translating circle. (b) Zoomed and rotated circular sector $\boldsymbol{p}_1\boldsymbol{p}_2\boldsymbol{O}$ such that the blue arrow is horizontal
Figure 3. Local distribution of data around the point $\boldsymbol{p}$ on a two dimensional manifold ($\mathcal{M}^2$). Principal sections $\Pi^{(1)}_{\boldsymbol{p}}$ and $\Pi^{(2)}_{\boldsymbol{p}}$ are created by using the shape operator at $\boldsymbol{p}$, and curves $C^{(1)}$ and $C^{(2)}$ are produced by intersecting $\Pi^{(1)}_{\boldsymbol{p}}$ and $\Pi^{(2)}_{\boldsymbol{p}}$ with $\mathcal{M}^2$, respectively
Figure 4. (a) A three dimensional sombrero-hat of 2000 points consisting of two sub-manifolds (blue and green) and the locus of singularities (red) is intersected with the plane $\{\beta_1\hat{\boldsymbol{i}}+\beta_2\hat{\boldsymbol{k}} \vert \beta_1, \beta_2 \in \mathbb{R}\}$ to produce (b) a curve in $\mathbb{R}^3$. (c) Isomap residual plots, which show embedding dimensionalities by elbows, reveal that the dimensionalities of the two sub-manifolds are two while the dimensionality of the locus is three
Figure 5. Detecting phase transitions in a particle swarm simulated using the Vicsek model with alternating noise levels. (a) The distribution of $(\sigma_4/\sigma_1)_{n}$ versus frame numbers. Therein, the range of frames for each sub-manifold is represented by a left-right arrow, and each phase transition is represented by a red circle along with its frame number. (b) The plot of 20 largest phase changes including frame numbers of three phase transitions. (c) Isomap residual variance versus dimensionality of each sub-manifold
Figure 6. Detecting a phase transition between phases of walking and running in a human crowd [3]. (a) The distribution of $(\sigma_3/\sigma_1)_{n}$ versus frame numbers. Therein, while the snapshots show instances of the crowd in each phase, left-right arrows and the red circle represent ranges of frames in each sub-manifold and the frame at the phase transition, respectively. (b) The plot of 20 largest phase changes representing the phase transition in red along with its frame number
Figure 7. Detecting a transition in a bird flock between the phases of sitting and flying [1]. (a) The distribution of $(\sigma_3/\sigma_1)_{n}$ shows ranges of frames representing two sub-manifolds by left-right arrows and instances of the flock's phases by snapshots. (b) The plot of 20 largest phase changes. The frame at the phase transition is represented by red in Figures (a) and (b)
Figure 8. Detecting phase transitions in a fish school. (a) The distribution of $(\sigma_6/\sigma_1)_{n}$ with left-right arrows showing ranges of frames in sub-manifolds and red dots showing frames at phase transitions. (b) Snapshots of the school before (left) and after (right) each phase transition. (c) The plot of 20 largest phase changes consisting of four phase transitions marked in red with their frame numbers
Figure 9. Two dimensional saddle surface $\mathcal{M}^2$, described by the Equation (28) for $x_1, x_2 \in \mathbb{U} [-2,2]$
Roman Czapla, Vladimir V. Mityushev. A criterion of collective behavior of bacteria. Mathematical Biosciences & Engineering, 2017, 14 (1) : 277-287. doi: 10.3934/mbe.2017018
Michael Blank. Emergence of collective behavior in dynamical networks. Discrete & Continuous Dynamical Systems - B, 2013, 18 (2) : 313-329. doi: 10.3934/dcdsb.2013.18.313
Matteo Novaga, Enrico Valdinoci. The geometry of mesoscopic phase transition interfaces. Discrete & Continuous Dynamical Systems, 2007, 19 (4) : 777-798. doi: 10.3934/dcds.2007.19.777
Alain Miranville. Some mathematical models in phase transition. Discrete & Continuous Dynamical Systems - S, 2014, 7 (2) : 271-306. doi: 10.3934/dcdss.2014.7.271
Laurent Boudin, Francesco Salvarani. The quasi-invariant limit for a kinetic model of sociological collective behavior. Kinetic & Related Models, 2009, 2 (3) : 433-449. doi: 10.3934/krm.2009.2.433
Parker Childs, James P. Keener. Slow manifold reduction of a stochastic chemical reaction: Exploring Keizer's paradox. Discrete & Continuous Dynamical Systems - B, 2012, 17 (6) : 1775-1794. doi: 10.3934/dcdsb.2012.17.1775
Jun Yang. Coexistence phenomenon of concentration and transition of an inhomogeneous phase transition model on surfaces. Discrete & Continuous Dynamical Systems, 2011, 30 (3) : 965-994. doi: 10.3934/dcds.2011.30.965
Mauro Garavello, Benedetto Piccoli. Coupling of microscopic and phase transition models at boundary. Networks & Heterogeneous Media, 2013, 8 (3) : 649-661. doi: 10.3934/nhm.2013.8.649
Emanuela Caliceti, Sandro Graffi. An existence criterion for the $\mathcal{PT}$-symmetric phase transition. Discrete & Continuous Dynamical Systems - B, 2014, 19 (7) : 1955-1967. doi: 10.3934/dcdsb.2014.19.1955
Pavel Krejčí, Jürgen Sprekels. Long time behaviour of a singular phase transition model. Discrete & Continuous Dynamical Systems, 2006, 15 (4) : 1119-1135. doi: 10.3934/dcds.2006.15.1119
I-Liang Chern, Chun-Hsiung Hsia. Dynamic phase transition for binary systems in cylindrical geometry. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 173-188. doi: 10.3934/dcdsb.2011.16.173
Mauro Garavello. Boundary value problem for a phase transition model. Networks & Heterogeneous Media, 2016, 11 (1) : 89-105. doi: 10.3934/nhm.2016.11.89
Mauro Garavello, Francesca Marcellini. The Riemann Problem at a Junction for a Phase Transition Traffic Model. Discrete & Continuous Dynamical Systems, 2017, 37 (10) : 5191-5209. doi: 10.3934/dcds.2017225
Maya Briani, Benedetto Piccoli. Fluvial to torrential phase transition in open canals. Networks & Heterogeneous Media, 2018, 13 (4) : 663-690. doi: 10.3934/nhm.2018030
Pierluigi Colli, Antonio Segatti. Uniform attractors for a phase transition model coupling momentum balance and phase dynamics. Discrete & Continuous Dynamical Systems, 2008, 22 (4) : 909-932. doi: 10.3934/dcds.2008.22.909
A.V. Borisov, A.A. Kilin, I.S. Mamaev. Reduction and chaotic behavior of point vortices on a plane and a sphere. Conference Publications, 2005, 2005 (Special) : 100-109. doi: 10.3934/proc.2005.2005.100
Hayato Chiba, Georgi S. Medvedev. The mean field analysis of the Kuramoto model on graphs Ⅱ. Asymptotic stability of the incoherent state, center manifold reduction, and bifurcations. Discrete & Continuous Dynamical Systems, 2019, 39 (7) : 3897-3921. doi: 10.3934/dcds.2019157
Claudio Giorgi. Phase-field models for transition phenomena in materials with hysteresis. Discrete & Continuous Dynamical Systems - S, 2015, 8 (4) : 693-722. doi: 10.3934/dcdss.2015.8.693
Francesca Marcellini. Existence of solutions to a boundary value problem for a phase transition traffic model. Networks & Heterogeneous Media, 2017, 12 (2) : 259-275. doi: 10.3934/nhm.2017011
Raffaele Esposito, Yan Guo, Rossana Marra. Stability of a Vlasov-Boltzmann binary mixture at the phase transition on an interval. Kinetic & Related Models, 2013, 6 (4) : 761-787. doi: 10.3934/krm.2013.6.761
Asperger's By Proxy
Copyright © 2005-2022, Paul Lutus
This article evaluates modern psychological theory and practice. It explains how opportunistic psychological diagnoses are created and destroyed, fueled by public credulousness and the absence of scientific discipline within psychology. A case history is included to show the consequences of these trends and practices.
1 Science
1.2 Britannica: Falsifiability
1.3 Explanation versus Description
1.4 Legal Precedents
1.4.1 Daubert Standard
1.4.2 McLean v. Arkansas Board of Education
1.5 Royal Society
1.6 Skepticism
1.7 Science and Pseudoscience
1.8 Role of Theory
1.9 Dried Gourd Science
1.10 Summary
1.10.1 Isaac Newton
1.11 Magical Thinking
2 Psychology
2.1 Theoretical Unification
2.2 Psychology's Critics
2.2.1 Sigmund Freud
2.2.2 Karl Popper
2.2.3 Sigmund Koch
2.2.4 Richard P. Feynman
2.2.5 Ronald F. Levant
2.2.6 Thomas R. Insel
2.3 Historical Highlights
2.3.1 Drapetomania
2.3.2 Lobotomy
2.3.3 Homosexuality
2.3.4 Refrigerator Mother
2.3.5 Recovered Memory Therapy
2.3.6 Asperger Syndrome
2.3.7 Not Otherwise Specified
2.3.8 Cognitive-Behavioral Therapy
2.4 Analysis
2.4.1 Mental Illness Defined
2.4.2 Evolution of Diagnoses
2.4.3 Asperger Syndrome Diagnosis Benefits
2.4.4 Abandonment of Asperger Syndrome
2.5 Psychiatry
2.6 Neuroscience
2.7 Objective Diagnosis
3 Case History
3.2 First Meeting
3.3 Starting Out
3.4 Factitious Disorder/Munchausen Syndrome by Proxy
3.5 The Family Outing
3.6 Hell Hath no Fury
3.7 The M-Word
3.8 The Plea
3.9 Dangerous Lies
3.10 Double Down
3.10.1 Joan's Game Plan
3.10.2 Legal Advice
3.11 Visiting the Grown-Up's Table
3.12 Update
4.1 Why Asperger's is Gone
4.2 Mind versus Brain
4.3 Newtonian Gravitation
4.3.1 Relevance to Science
4.3.2 Comparison with Psychology
1 Isaac Newton in thought
2 The "Family Outing" site
Before discussing psychology's relationship with science, we must first define science.
The central goal of science is to understand the natural world. To meet this goal, science crafts explanations — "theories" — that are compared to nature, and if the comparison fails, the theory must be discarded. This requirement to compare theories to nature is called falsifiability1, and falsifiability is the cornerstone on which science is built.
The falsifiability criterion forges an essential link between scientific fields, scientific theories, and nature:
A field of study that has no empirically testable, falsifiable theories is unscientific.
A theory that cannot be compared to nature is unscientific.
A theory that fails comparison with nature must be revised or abandoned.
The online Encyclopedia Britannica entry for the term falsifiability2 says that falsifiability is "... a standard of evaluation of putatively scientific theories, according to which a theory is genuinely scientific only if it is possible in principle to establish that it is false." The entry then offers counterexamples: "According to [Karl] Popper[3], some disciplines that have claimed scientific validity — e.g., astrology, metaphysics, Marxism, and psychoanalysis — are not empirical sciences, because their subject matter cannot be falsified in this manner."
A scientific theory that makes general statements based on specific observations, that predicts phenomena not yet observed, is said to explain some aspect of reality. Another class of theory, one that merely describes reality without offering an explanation, isn't scientific on the ground that one cannot falsify general principles that haven't been articulated or predictions that haven't been made. One can only contradict the original observation, but contradictions aren't falsifications because a contradiction can itself be contradicted in turn, ad infinitum, with no chance for resolution or contribution to the corpus of human knowledge. Here's an example:
If I say, "The night sky is filled with tiny points of light," I've offered a description. Another observer might contradict my description, for example by emerging from his cave on an overcast night and not seeing points of light, but as explained above, the contradicting observation can itself be contradicted on the next clear night, without any chance for resolution. So, apart from being shallow, inconclusive and trivial, this process is not science.
If I say, "Those points of light are distant thermonuclear furnaces like our sun," I've offered an explanation, one that makes predictions about phenomena not yet observed and that's falsifiable by empirical test. On the basis of this explanation we might build a small-scale star (a fusion reactor) to see if our experiment shows any similarity to the spectra and behavior of stars. This deep explanation represents a theoretical claim that's linked to other areas of human knowledge, predicts phenomena not yet observed and is conclusively falsifiable by comparison with reality (our fusion reactor might fail to imitate the stars). It's science.
Because of science's important role in modern society, and because of the many science pretenders at large, it has come to pass that, in the interest of justice, the legal system has defined science as it relates to expert testimony. As one such example, in Daubert v. Merrell Dow Pharmaceuticals, Inc.4, the U.S. Supreme Court produced an influential ruling now known as the Daubert standard5. At risk of oversimplification, Daubert says that scientific expert testimony must derive from scientific methodology, using a list of requirements that closely resembles the definition of science provided above, including the phrase "Empirical testing: whether the theory or technique is falsifiable, refutable, and/or testable."
An earlier legal ruling6, whose purpose is to keep religion out of public school science classrooms, defined science this way:
It is guided by natural law;
It has to be explanatory by reference to natural law;
It is testable against the empirical world;
Its conclusions are tentative, i.e. are not necessarily the final word; and
It is falsifiable1.
Apart from improving society's understanding of what constitutes science and distinguishing religion from science, this ruling's existence reveals how a scientific standing confers validation to ideas. This makes that standing a prized possession, to the degree that the legal system must sometimes step in and draw a line in the sand.
Science's focus on empirical evidence means there's no role for authority in science, contrary to appearances, and this has been true across the history of science. The Royal Society7, the oldest scientific institution in the world (founded in 1660 CE), chose as their motto Nullius in Verba or "Take nobody's word for it"8. The society explains their motto this way:
It is an expression of the determination of Fellows to withstand the domination of authority and to verify all statements by an appeal to facts determined by experiment.9
This addresses one of the more pervasive public misunderstandings of science — that it relies on authority and expertise. This is quite false — as shown in the above quotation, science explicitly rejects authority.
About this issue Richard P. Feynman10 said, "Science is the organized skepticism in the reliability of expert opinion."
Science's attitude toward authority and expertise can be summarized by saying that the greatest amount of scientific eminence is trumped by the smallest amount of scientific evidence.
An important corollary to science's focus on empirical evidence is an attitude of skepticism toward untested claims. This skeptical outlook is formally recognized in the null hypothesis11, the idea that there's no relationship between a cause and an effect until empirical evidence supports it. The null hypothesis is a cornerstone of scientific experimental design — properly designed studies presume there's no relationship between two phenomena under study, and require that evidence contradict this default assumption.
An example may show the importance of the null hypothesis and of skepticism:
To the claim "Bigfoot exists," a scientist, guided by the null hypothesis, assumes the claim has no merit until empirical evidence supports it.
To the same claim, a pseudoscientist12 assumes the opposite — that the claim is true until Bigfoot can be proven not to exist.
But proving Bigfoot's nonexistence would require a search of the entire universe, an impossible burden of evidence and a requirement for proof of a negative, which in the general case is a logical error named argument from ignorance13.
To summarize this point, to a scientist, Bigfoot's existence hinges solely on empirical evidence, while to a pseudoscientist, Bigfoot exists because it hasn't been proven not to exist. And because no one can possibly prove Bigfoot's nonexistence, the pseudoscientist is secure in his belief.
To summarize the above sections, scientific fields are defined by theories — theories that:
Explain some aspect of nature, of reality.
Are based on empirical observations.
Survive sincere efforts at falsification.
The most reliable and robust theories express their principles in mathematical equations.
Properties of a scientific theory include an intellectual framework that makes general statements derived from specific observations, as well as the ability to predict empirical phenomena not yet observed — for example, Charles Darwin's14 theory of natural selection anticipated much of modern biology with a handful of empirically testable principles. Most important of all, a scientific theory must be open to unambiguous falsification by way of empirical tests, or in other words, by a meaningful comparison with reality.
Readers unfamiliar with science may question whether the above requirements are too strict — aren't we defining science too rigidly? Might we sometimes throw away useful observations and theories by applying overly strict rules? To answer, let me offer my cure for the common cold.
In my cure I shake dried gourds over the cold sufferer until his symptoms abate. The cure might take three days, maybe a week, but it always works. It's 100% effective, it's repeatable with different subjects and different gourds, it can be replicated in different laboratories, it's empirical, it might have been falsified but wasn't, so where's my Nobel Prize?
Here's another question — what's wrong with this cure? Here's a short list:
The cure's description lacks essential elements: skepticism and critical thinking. A skeptical thinker might wonder whether the treatment has anything to do with the outcome, and how we might find out.
It's only a description, not an explanation (a theory). Science requires theories, generalizations that explain observations and predict phenomena not yet observed.
Because I don't try to explain my miracle cure (i.e. by crafting a theory), I'm relying on a shallow observation of its apparent effectiveness without wondering whether I'm overlooking other possible reasons for the experiment's outcome.
The null hypothesis11, an essential element in modern scientific discipline, is missing from this experiment. If it were present, I would be obliged to make the default assumption that there's no connection between the treatment and the outcome until persuasive evidence suggests otherwise.
About the class of pseudoscience described here, Richard P. Feynman10 said, "The first principle is that you must not fool yourself — and you are the easiest person to fool."15
This pseudoscience example is meant to show how an absence of theory, skepticism and critical thinking can lead to perfect nonsense masquerading as science.
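To make the missing step concrete, here is a minimal sketch (with made-up recovery times) of the test a skeptic guided by the null hypothesis would run: compare a treated group against an untreated control, and credit the gourds only if the observed difference is too large to be explained by chance.

    # Minimal sketch of a null-hypothesis test of the "dried gourd cure",
    # using invented recovery times. Colds resolve on their own, so both
    # groups are drawn from the same distribution.
    import random

    random.seed(1)
    treated = [random.gauss(6.0, 1.5) for _ in range(30)]   # shaken gourds
    control = [random.gauss(6.0, 1.5) for _ in range(30)]   # no treatment

    def mean(xs):
        return sum(xs) / len(xs)

    observed = mean(treated) - mean(control)

    # Permutation test: under the null hypothesis the group labels are
    # arbitrary, so reshuffle them and count how often a difference at
    # least as large as the observed one appears by chance alone.
    pooled = treated + control
    trials, hits = 10_000, 0
    for _ in range(trials):
        random.shuffle(pooled)
        if abs(mean(pooled[:30]) - mean(pooled[30:])) >= abs(observed):
            hits += 1

    print(f"observed difference: {observed:+.2f} days, p = {hits / trials:.3f}")

Because both groups come from the same distribution, the observed difference is almost always well within what chance alone produces, the null hypothesis stands, and the gourds get no credit for a recovery that was going to happen anyway.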
Based on these points it's possible to broadly say what science is and is not. If a field produces explanations, if it shapes theories that connect seemingly unrelated observations, makes unambiguous theoretical claims preferably in the form of mathematical equations, accurately predicts phenomena not yet observed, and can be falsified by empirical observation, it's science. If a field only offers descriptions, descriptions that can be trivially and inconclusively contradicted by other descriptions, without ever rising to the level of theory shaping and empirical tests, it's not science.
Figure 1: Isaac Newton in thought
For a picture of science, imagine Isaac Newton16, who observed a falling apple, then looked at the moon and saw a connection between the motions of the apple and the moon, then wrote a mathematical equation17 that explained the falling apple, the orbit of the moon, and the motion of every other massive object in the universe. That's science.
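The equation in question is the familiar law of universal gravitation: the attractive force between any two masses $m_1$ and $m_2$ separated by a distance $r$ is

$$ F = G\,\frac{m_1 m_2}{r^2}, $$

where $G$ is the gravitational constant. One short, falsifiable formula accounts for the apple, the moon and the planets alike, and any careful measurement that disagreed with it would count against it.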
Let me connect the above points to two modern social problems — magical thinking and phony victimization.
In medieval times, what we now call pseudoscience ruled supreme — claims that couldn't be proven false were true. This idea, now called "magical thinking"18, was the foundation for religious and authoritarian rule.
In modern times and since about 1600 C.E., science is the new evidentiary standard, which means claims that cannot be proven true are false. This standard applies to both intellectual activities and law (innocent until proven guilty19).
I ask my readers to remember this distinction between magical thinking (true until proven false) and scientific thinking (false until proven true) for examples that follow in which people seek victim status by describing imaginary crimes.
Human psychology is defined as the study of the mind and behavior20. This represents a serious obstacle to meaningful science, because the mind is not a thing, it's an idea, and consequently it cannot be a source of empirical evidence. This, in turn, means psychology must get along without falsifiable theories — or, for that matter, any empirically testable theories at all.
In an informal ranking of sciences based on scientific substance, psychology lies between biology21 and astrology22. Biology has empirically testable, falsifiable theories like evolution23 and natural selection24, so it's firmly based in science. Astrology has testable theories, but they've been proven false, so astrology might be described as a failed science — unless someone takes it seriously, in which case it's a pseudoscience. Because psychology has no testable empirical theories it cannot be described as a science, but because of its many avid practitioners and followers it has acquired a wholly undeserved scientific reputation.
Another way to say this is that, because it cannot craft or test scientific theories, psychology is forced to operate at the dried-gourd level of science — it thinks its methods work, but it cannot test this assumption using reliable science.
Readers may object that all of psychology shouldn't be judged by the dismal state of clinical psychology and psychiatry, that using a single word to describe them all is misleading. But when applied to a science, a single word is sufficient. If an airplane disintegrates in flight, an investigation might discover that its designers ignored theories from aeronautics' parent field (i.e. physics), or that the accident reveals a new physical principle that theoretical physics needs to accept (like the role played by metal fatigue25 in the in-flight failure of the deHavilland Comet aircraft26).
Because physics is a science, its empirical, falsifiable theories unify the field and its applications. Physical theory and practice are joined — a new theoretical finding has immediate effect on practice, and unexpected results arising in practice have an effect on theory.
In the same way, because biology is a science (because scientific biological principles unify theory and practice), new theoretical findings like epigenetics27 inform all of biology as well as coordinating theory and practice.
In medicine, clinics cannot apply treatments that haven't been tested for efficacy and safety or that don't conform to medical and biological theories. This is possible because modern medicine is an application of biological science.
The above principles would apply to psychology except for the fact that the mind cannot be a source for reliable empirical evidence, so psychologists cannot create scientific theories about it and the unifying effect of theory is absent. This theory vacuum explains why there are so many small divisions within the field — the American Psychological Association (APA)34 lists 54 divisions — that's more divisions than an ice cream store has flavors.
Among the psychologists who have analyzed their own field, the views expressed in this article are by no means out of the ordinary. Starting with Sigmund Freud and extending to the present, many critics have made the same points in different ways. Here's a representative sample:
In his 1895 unpublished work "Entwurf einer Psychologie" (draft of a psychology), later translated to English as "Project for a Scientific Psychology", Sigmund Freud28 reluctantly came to the conclusion that the chasm separating the mind from physical reality could not be bridged, and therefore that psychology could not become scientific. About this effort Freud later said, "Why I cannot fit it together [the organic and the psychological] I have not even begun to fathom."29
Aware of the negative implications of this work for his field and his personal standing, Freud directed that the book not be published during his lifetime, so its release was delayed until 1950.
Originally trained as a psychologist and earning a Ph.D., philosopher of science Karl Popper3 eventually came to the conclusion that psychology cannot be science for lack of a grounding in empirical evidence and falsifiability. About this change in outlook Popper said, "I began to feel more and more dissatisfied with these three theories — the Marxist theory of history, psychoanalysis, and individual psychology; and I began to feel dubious about their claims to scientific status."30 Popper went on to identify the defect these fields have in common — their theories can't be falsified.
A notable psychology critic and philosopher of science, psychologist Sigmund Koch31 was selected to edit a major work titled "Psychology: A Study of a Science" (Koch, 1959-63)32, which became a six-volume series. About this work, Koch came to these conclusions:
The hope of a psychological science became indistinguishable from the fact of psychological science. The entire subsequent history of psychology can be seen as a ritualistic endeavor to emulate the forms of science in order to sustain the delusion that it already is a science.
The truth is that psychological statements which describe human behavior or which report results from tested research can be scientific. However, when there is a move from describing human behavior to explaining it there is also a move from science to opinion31.
Well-known for his irreverence and wit, Nobel Prizewinner Richard P. Feynman10 often criticized psychology for its scientific pretensions. In a now-famous address entitled "Cargo Cult Science", Feynman said:
I think the educational and psychological studies I mentioned are examples of what I would like to call Cargo Cult Science. In the South Seas there is a Cargo Cult of people. During the war they saw airplanes land with lots of good materials, and they want the same thing to happen now. So they've arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennas—he's the controller—and they wait for the airplanes to land. They're doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn't work. No airplanes land. So I call these things Cargo Cult Science, because they follow all the apparent precepts and forms of scientific investigation, but they're missing something essential, because the planes don't land.33
While president of the American Psychological Association34, Ronald F. Levant35 began an initiative to move clinical psychology toward an evidence-based practice model and away from its reliance on anecdote and narrative. It seems psychologists weren't ready for this change — about their response, Levant said:
Some APA members have asked me why I have chosen to sponsor an APA Presidential Initiative on Evidence-Based Practice (EBP) in Psychology, expressing fears that the results might be used against psychologists by managed-care companies and malpractice lawyers.
To respond, I would start by drawing attention to the larger societal context in which we live. The EBP movement in U.S. society is truly a juggernaut, racing to achieve accountability in medicine, psychology, education, public policy and even architecture. The zeitgeist is to require professionals to base their practice to whatever extent possible on evidence. Thus, psychology needs to define EBP in psychology or it will be defined for us. We cannot afford to sit on the sidelines.36
Levant's critics were right — modern psychological practice is entirely unscientific and an initiative such as he proposed would only have focused public attention on that fact, with significant legal exposure and little compensating advantage. So Levant's initiative failed.
While director of the National Institute of Mental Health (NIMH)37 (2002-2015), Thomas Insel advocated for a shift toward science-based mental health treatments. About the version of the Diagnostic and Statistical Manual of Mental Disorders (DSM)44 that had just been released (version 5), Insel said:
The goal of this new manual, as with all previous editions, is to provide a common language for describing psychopathology. While DSM has been described as a "Bible" for the field, it is, at best, a dictionary, creating a set of labels and defining each. The strength of each of the editions of DSM has been "reliability" – each edition has ensured that clinicians use the same terms in the same ways. The weakness is its lack of validity.
Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever. Indeed, symptom-based diagnosis, once common in other areas of medicine, has been largely replaced in the past half century as we have understood that symptoms alone rarely indicate the best choice of treatment. Patients with mental disorders deserve better.38
Over time, as serious problems with the new DSM version became more apparent, the NIMH ruled that its categories would no longer be accepted as the basis for scientific research proposals,39 for the reason that the DSM has no scientific content.
Insel's predecessor at the NIMH (Steven E. Hyman40) and his successor (Joshua A. Gordon41) both also advocate for a transition to science in psychology.
Because psychology can't craft or test empirical theories, a review of its history shows an aimless drift from one fad to another, each abandoned after either inspiring public outrage or proving itself to have no practical value. Here are some highlights:
Before the U.S. Civil War psychologists invented Drapetomania42, a mental illness diagnosis that presumed to explain why slaves ran away from their masters. Drapetomania was used to justify the racist policies of the era and force free men and women back into the hands of their "owners." There was no corresponding mental illness to explain why slave owners believed it was moral to own a human being, but the slave owners, not the slaves, paid the psychologists. Unlike the other examples in this list, psychologists now accept that Drapetomania was pure pseudoscience.
Outcome: abandoned.
In the 1930s psychologists invented a simple procedure that greatly improved the behavior of mental patients. Before the procedure, patients might rant and yell for hours, making life miserable for everyone. After the procedure, patients became much more docile and manageable. The procedure involved inserting an icepick into the patient's prefrontal cortex and moving it around, slicing through brain tissues. This produced a dramatic improvement in behavior, but as a side effect the patient lost any resemblance to a human being. Called "Lobotomy"43, the procedure reached its peak popularity in the 1950s, was eventually applied to 40,000 people, but has since been abandoned. The Wikipedia Lobotomy article43 includes this quote: "The purpose of the operation was to reduce the symptoms of mental disorder, and it was recognized that this was accomplished at the expense of a person's personality and intellect."
In the mid-20th century homosexuality was formally identified as a mental illness and various treatments were devised including chemical castration. Since then two things have changed: the public has begun to accept homosexuality, and even psychologists realized their "treatments" weren't working. Eventually homosexuality was removed from the DSM44, psychology's standard diagnostic manual, but this hasn't prevented some psychologists from offering ineffective and harmful Conversion Therapy45 treatments. Because of its potential for harm this therapy has been declared illegal in many regions46.
Over the decades some organic ailments have been misidentified as mental illnesses amenable to psychological treatments, among which were the various forms of autism. At the height of psychology's popularity, autism was widely blamed on "refrigerator mothers"47, emotional cripples unable to bond with their children. Fortunately for many innocent and caring parents this fad didn't last and autism was eventually identified as an organic, not mental, ailment.
In the 1990s a fad psychological treatment called Recovered Memory Therapy48 (hereafter RMT) became popular. In this therapy, psychology clients "remembered" being victims of horrible crimes that were supposedly suppressed from the conscious mind. Recovered memory therapy seemed to bring hidden traumatic memories into conscious recall, but the role of fantasy and invention — in both therapist and client — seems not to have been adequately guarded against. The result was that many people were accused of imaginary crimes.
The apparent goal of RMT was to confer victim status to people who, for one reason or another, couldn't function in modern times — people who demanded sympathy and money for imagined wrongs. But to work as intended, RMT relied on a pseudoscientific standard of evidence — claims were assumed true until proven false. Unfortunately for the phony victims, this collided with today's scientific and legal standard in which claims are assumed false until proven true (innocent until proven guilty).
The legal system required some time to awaken, but before too many lives were destroyed, it caught on. About the time virgins began reporting imaginary rapes49, the courts realized they were being played, the wrongly accused were released, the phony victims got no more attention and the therapy lost its popularity.
Even though it's been abandoned, Asperger Syndrome50, also known as "Asperger's", is regarded by many as the perfect mental illness diagnosis. With a minimum of acting ability nearly anyone could get the diagnosis; it produced sympathy, special education funds and attention; and a number of important historical figures (Isaac Newton, Thomas Jefferson, Albert Einstein and Bill Gates among others) were assigned the diagnosis. These factors made Asperger's the first genuinely attractive mental illness; it resulted in an epidemic of phony diagnoses and nearly bankrupted some school districts that were obliged to provide special education funds for the victims of this cruel ailment.
Asperger's was popular with overcontrolling parents, who would assign it to their above-average children in order to shame them into acting more "normal." But it was also popular with youngsters — after all, wouldn't you like to have the same mental illness as Albert Einstein or Bill Gates?
In response to public outrage, and to limit further damage to psychology, Asperger's was removed from the standard diagnostic manual (the DSM44), but because psychologists aren't obliged to honor the DSM's contents, Asperger therapy, like Conversion Therapy45 and others, might reappear as public tastes change.
Until recently the DSM44 contained a catch-all "diagnosis" of Not Otherwise Specified (NOS)51. Psychologists applied it to people who couldn't be easily assigned another diagnosis. Its apparent purpose was to avoid ever having to tell someone, "There's nothing wrong with you — go home and enjoy your life."
Imagine an actual medical doctor telling his patient, "You have a bad case of Not Otherwise Specified. Take two aspirin and call me in the morning."
In the most recent DSM, version 5 (2013)52, examples of "Not Otherwise Specified" have been either dropped or renamed "Not Elsewhere Classified (NEC)".
Outcome: abandoned/renamed.
Cognitive-Behavioral Therapy53 (hereafter CBT) is a widely practiced therapeutic method in psychiatry and clinical psychology. In spite of its questionable evidentiary basis it's been a mainstay of psychological practice for many decades. Many therapists are confident that CBT is effective and distinct from other therapies, in spite of the many studies that contradict this belief. In a recent meta-analysis54, CBT and other therapeutic methods were carefully compared but no statistically significant difference was detected between them. In another study55 CBT was broken down into its component parts to see which were most effective. This study showed a similar result — the separately applied components produced nearly identical clinical responses, and more important, the responses appeared before any of the tested components should have been able to distinguish themselves.
Faced with these outcomes, a skeptical scientist would suggest that these therapies represent examples of the Placebo Effect56, where any plausible faux therapy might produce the same result, but psychologists seem unwilling to consider this possibility.
Outcome: still widely practiced.
These are only a few highlights in psychology's history, examples that show a pattern of opportunism, lack of discipline and disregard for the null hypothesis11 that a more thorough reading of psychology's history only confirms.
When psychology has been set aside in favor of neuroscience, when it's become a historical footnote with no living proponents (true now for alchemy), historians will write a more complete and detailed history than appears here. Those historians will have the advantage of seeing present-day psychology through the lens of neuroscience's future achievements — they will know which "mental illnesses" turned out to be physical illnesses with mental symptoms, which were pure invention, and they will know there are no true mental illnesses as that term should be defined:
A true mental illness would be one that exists only in the mind, not the brain or the body, and can be unambiguously and objectively diagnosed, treated and cured by mental health practitioners in such a way that (as with cancer and heart disease) all competent practitioners concur with the original diagnosis, the selected treatment, and the outcome.
Writers in that future time will have an advantage we do not — the existence of reliable science, based on theory and observation, covering the topics presently studied by psychologists. Why is that important? Well, when a psychologist says there's an ailment called Asperger Syndrome, evidence-based critics can't say there's no such ailment. Because there's no theoretical support or reliable evidence, the psychologist can't claim the ailment is real (although many do), but for the same reason critics can't say Asperger's is not real. This is the burden faced by people who struggle against pseudoscience — in some cases, the pseudoscience is so far divorced from reality that there's no science to counter the nonsense. This is certainly true for psychology — apart from not being a science itself, the field is disconnected from legitimate scientific fields that might either lend weight to its conclusions or support evidence-based criticism.
Having said that, on surveying the fads that punctuate the history of psychology, it becomes apparent that they represent a learning process. Identifying homosexuality as a mental illness was a low point even for psychology, and it was quickly undermined both by biological studies and changing public attitudes. The "Refrigerator Mother" idea, apart from having no supporting evidence, victimized a large fraction of the population with no apparent purpose. Recovered Memory Therapy was far worse, in both its scale and effects — using narratives that in some cases were completely absurd (virgins accusing family members of rape), it victimized both the accusers and the accused.
But psychologists learn from their mistakes, and Asperger Syndrome proves it — it was a remarkable pseudoscientific achievement. There was never such an appealing diagnosis, and there may never be again:
Because an Asperger Syndrome diagnosis relies on self-reporting1, any bright person who wanted the diagnosis could acquire it by either having a personality that naturally exhibits the symptoms associated with Asperger's57, or by being coached in certain behaviors popularly associated with the condition.
If they chose, those receiving the diagnosis could abandon any responsibility for personal advancement in school or work — after all, they're officially mentally ill, therefore they're victims of fickle nature and they deserve our sympathy and support.
Those receiving the diagnosis joined the company of many famous and admirable people, living and dead, whom opportunistic psychologists also "diagnosed" with Asperger's2 — a list including Isaac Newton, Thomas Jefferson, Albert Einstein and Bill Gates59. This roster of spectacularly successful Aspies3 gives mental illness a whole new meaning.
Those receiving the diagnosis became eligible for thousands of dollars in special education funds, which school districts were compelled to provide regardless of specific case-by-case circumstances60.
Because Asperger's was included in the Autism spectrum, the family of one receiving the diagnosis became eligible for Social Security disability payments that continued until the "victim" became an adult — after which (s)he became eligible for similar disability payments intended for adults61.
When analyzing a controversial social issue, one normally presents a two-column list showing both advantages and drawbacks, but with respect to Asperger Syndrome, there are only advantages — unless you're a taxpayer, or you have a measure of personal integrity, or you want your children to succeed at an activity apart from playing the system, or you possess self-respect and want your children to acquire that trait, or you are a scientist and expect society to be guided by reason and evidence.
As psychological diagnoses go, Asperger Syndrome was spectacularly successful, but it became a victim of its own success — too many people acquired the diagnosis, the burden on taxpayers became too great, and the diagnosis deprived too many children of a sense of personal responsibility and purpose. Eventually public outrage over these outcomes caused the diagnosis to be discredited and removed from DSM-562 (psychology's "bible").
This doesn't mean Asperger Syndrome has been declared false. That's not how psychology works — because of an absence of science and reliable evidence, old diagnoses tend to be abandoned in place, not refuted. A prominent psychologist and professor, one of those who voted Asperger Syndrome out of the DSM63, acknowledged this in an interview64, saying about Asperger's, "We don't want to say that no one can ever use this word ... It's not an evidence-based term. It may be something people would like to use to describe how they see themselves fitting into the spectrum."
On that basis, and acknowledging the interviewee's professional status4, we can infer that Asperger Syndrome's status as a mental illness isn't based on evidence or science, that it's caused a lot of public controversy, and that professionals now discourage the diagnosis; but if people would like to say they have it, let them. Professor Lord might as well have added, "And why not? It's all make-believe anyway."
For contrast, imagine a medical doctor saying, "You don't have cancer, but if you would like to say you have it, no problem." Reality doesn't work that way66, but psychology does.
Those in psychology who proclaim a scientific standing for their field are misleading the public, but psychiatry67, by crafting a deceptive association between psychology and medicine, is particularly deplorable. Psychiatry (both training and practice) seems designed to misleadingly suggest that the mind can be treated using reliable, evidence-based medical methods. But this is false — psychiatry is not a legitimate field of medicine, and contrary to all appearances, there are no mind doctors. Psychiatrists are psychologists who have acquired a medical degree.
Over time psychology will be entirely replaced by neuroscience68, the scientific study of the brain and nervous system. By studying tangible physical things neuroscience has the enormous advantage over psychology that it can produce reliable empirical evidence and falsifiable theories. It has the drawback that the human brain is very complex, such that a deep understanding of its workings may be decades away.
Once reliable brain models have been developed, they, in partnership with advanced brain-scanning methods, should make objective, non-invasive clinical diagnosis possible for the first time. I imagine this conversation in a future neurological clinic:
Patient: "Let me tell you what I think is wrong with me."
Clinician: "Please don't — we'll locate the problem with these instruments. Like a blood test or an X-ray, they can produce an objective diagnosis without relying on what you think is wrong with you. In fact, your self-report would only confuse the process. Remember psychology?"
To close this section, we can compare psychology to science by saying that, if the null hypothesis11 were to be enforced in psychology, if empirical evidence and falsifiable1 theories were required, the field would collapse.
I include this section for two reasons:
It shows how psychologists can be maneuvered into supporting/enabling the activities of sociopaths and psychopaths.
It shows the harm psychology can create in the lives of individuals and families.
This first-person narrative provides a real-life account of psychology's terrible effect on the people it's meant to help, by enabling bad actors and by burdening children with bogus diagnoses and treatments.
I'm the first person in this narrative. In my adventure-filled life I've been in danger any number of times — an armed standoff with pirates in the Indian Ocean during my solo world sail69, many grizzly bear close encounters during Alaska expeditions70, and a few close calls during my years as a stunt pilot. But my most dangerous personal experience resulted from naively accepting a housewife's plea that I befriend her intelligent son. In retrospect I would prefer to meet a grizzly bear in a dark wood — even if the bear tore me to pieces, at least I would understand his behavior.
Although this account is true, for legal reasons all names and some events are changed.
Among other things71 I'm a successful computer programmer, author of some well-known programs72. This means parents, usually mothers, sometimes contact me to ask for advice and guidance for their children. Over decades I've become less enthusiastic and more guarded about this sort of cold contact — parents tend to have unrealistic assessments of their children's intellectual abilities and, to be frank, some women have motives apart from enriching the lives of their children.
The woman I will call Joan (all names are changed) contacted me and asked me to befriend her son, whom she described as very bright, misunderstood and isolated. I was immediately skeptical — what kind of mother calls a perfect stranger and, with no preliminaries, encourages him to befriend her son? And how often is a son accurately described by his mother? I declined Joan's request and closed the contact.
This only increased Joan's fervor. For the next few months she contacted me repeatedly, by telephone and email, and I declined repeatedly. I would have demanded that she stop contacting me but we had a mutual acquaintance I didn't want to offend.
Seven months later, on the occasion of a public appearance, Joan showed up and presented her son Jim (all names changed), who turned out to be very bright after all, but entirely isolated. We immediately began discussing some pretty advanced topics — logic, mathematics, computer programming.
In retrospect I should have noticed some weird aspects of the situation — how was such a bright, personable kid so completely isolated? Bright kids his own age, interested in technical activities, would have been a much better choice than me, and in a normal family he would already have such companions.
In a conversation about the outdoors Joan said something I found strange. She said, "I don't like the desert." I have a hard time imagining someone not liking the desert, but I didn't understand what she meant until much later.
Jim and I began to enjoy each other's company, for a number of reasons including the fact that until we became friends Jim had never been treated with understanding and respect. During this phase it came out that Joan was fully immersed in psychology — therapists were wise guides in the trackless wilderness of adult life, psychology explained reality, that sort of nonsense.
But Joan didn't just read the trash pop-psychology books that lined her shelves. When an issue came up that Joan couldn't decipher, she would call a therapist and get a ruling5. On one particularly stressful occasion she called two therapists, then announced the outcome as though a scientific discovery had been made. It never occurred to her that because she paid the therapists and telegraphed her preferences through various unsubtle mechanisms, the outcome was entirely predictable.
Some of my readers may anticipate the next revelation — Joan acquired bogus mental illness diagnoses for each of her children, forced them into therapy and spent much time discussing psychological ailments and therapeutic methods. Jim got an Asperger Syndrome diagnosis — a diagnosis, now abandoned73, that could be applied to any bright kid. As bright as he was, Jim didn't see through the psychology charade, consequently he saw himself as defective, in need of mental correction. This resulted partly from his loyalty to his mother, partly from inexperience with basic life issues.
For Joan, psychology wasn't about understanding or self-improvement, it was about authority and control, and in an earlier era religion would have served the same purpose6. By acquiring pseudo-medical diagnoses for her children, Joan created a control strategy unavailable to parents who expect to interact with their children through reason and mutual respect. If Joan saw a behavior she didn't like, it was a symptom of mental illness. Her children didn't have the life experience required to see through her machinations and over time they became her emotional hostages.
My friendship with Jim changed all that. He knew I respected him, admired his intellectual ability, and this began to undermine Joan's authority-based control scheme. John, Jim's father, saw a change in Jim but didn't fully grasp its implications, saying in an email, "During this one year of your interaction, [Jim] has grown up from a child to a teenager, and I credit you with part of his positive outlook on life today."74 By contrast, as she saw her faux authority wither away, Joan began a slow burn.
About this time Joan wrote me, saying, "It's nice that my son has a friend who understands the words he uses."75 Well, in fact it was nice — but over time, less so for Joan.
I've already described how Joan acquired diagnoses for each of her children and consulted with therapists about pedestrian issues, but as time passed I began to see more peculiar behaviors. One was the enthusiasm with which Joan described her children's psychological diagnoses and symptoms; another was her recitations of their "medical plans". I resisted explaining to Joan that psychology isn't a medical field and doesn't have medical plans.
Even more oddly, when her children drifted away from the aberrant behavior she expected (behavior consistent with their diagnoses) and instead behaved neurotypically, Joan became anxious, as though something had gone wrong. In other words Joan exhibited the opposite of normal parental behavior — for some dark personal reason and oblivious to how she looked to outsiders, she pushed her children toward abnormal behavior.
About this time, in a conversation with a doctor and without revealing any identities, I described Joan's behavior. The doctor promptly said, "That's Munchausen76, and it's dangerous." I decided to look into this. According to the Wikipedia entry:
Factitious disorder imposed on another (FDIA), also known as Munchausen syndrome by proxy (MSbP)7, is a condition where a caregiver creates the appearance of health problems in another person, typically their child. This may include injuring the child or altering test samples. They then present the person as being sick or injured. This occurs without a specific benefit to the caregiver. Permanent injury or death of the child may occur.
The cause is unknown. The primary motive may be to gain attention.76
In old-style MSP, a mother8 would use poison to induce symptoms in her children and gain attention. In modern MSP, psychological diagnoses often stand in for poison: "In factitious disorder imposed on another, a caregiver makes a dependent person appear mentally or physically ill in order to gain attention."76
It won't surprise my readers to hear that psychologists have a diagnosis for this behavior — the title of this section — but in spite of how dangerous it is ("MSP isn't just a condition; it's child abuse and it's a crime."77), the diagnosis is rarely assigned to anyone. Medical practitioners have no problem identifying MSP (as was true in this case), but psychologists are reluctant to make it a formal diagnosis. The reason? In modern times most therapists are women, and an even higher percentage of therapy clients are women. If word got out that therapists were willing to diagnose MSP, clinical psychology would collapse. Another reason is that, once the diagnosis is made, many jurisdictions require a police report because of the danger.
And worse, mental health practitioners are sometimes maneuvered into assisting MSP perpetrators: "... unique to this form of abuse is the role that health care providers play by actively, albeit unintentionally, enabling the abuse."76 This means the mental health business to some extent relies on "diagnosing" factitious disorders in their clients' children.
At this point readers might wonder, given how much weirdness I was seeing, why didn't I just withdraw? Easily answered — everyone could see Jim was benefiting from my friendship and I didn't want to abandon him, allow him to fall under Joan's spell once again. But in retrospect, I underestimated how dangerous Joan would become once Jim began to doubt his mother's world view.
John, Jim's father, was as dysfunctional as Joan but in a different way (Joan was a simpleminded loon, but John had a history of violent behavior). One day John proposed a family outing that included climbing a hill. By that time I had begun to doubt everything about these people, so I visited the site in advance and discovered a nearly vertical cliff, a technical climbing site equipped with a safety line (see Figure 2).
Figure 2: The "Family Outing" site
This was the first serious disagreement. I showed Joan and John a picture of the cliff and argued that it wasn't remotely appropriate for a family outing — the climb's advanced physical demands would place the children in danger. I wasn't just speculating — I have a lifetime of outdoor experience and some knowledge of technical rock climbing78 — the parents should have listened to me. The ascent was too steep for normal hiking and required climbers to grip a safety line while climbing.
Joan and John refused to reconsider the outing, for reasons I couldn't fathom at the time — the outing was to go forward. I went along, not because I had changed my mind about the danger, but because I knew I would be the only person with the skills required to rescue someone on a steep slope.
My plan was to stay below the children for the entire ascent and descent. During the descent, as I dreaded, Jim lost his grip on the safety line and fell. I had been maneuvering to stay below the children and happened to be perfectly positioned to grab Jim out of the air as he sailed past.
Those with little life experience might think, "Wow! Thanks for rescuing my son!" would be an appropriate response, but this isn't what happened — in fact Joan was barely able to conceal her resentment. After some thought I was forced to the conclusion that my rescuing Jim thwarted Joan's twisted plan to create an injured or permanently handicapped child, one who would never escape her orbit.
One more thing — to rescue Jim, I had to touch him, and Joan noticed. Why do I mention this seemingly unimportant detail? Read on, pilgrim.
This section will only make sense if I explain that in Joan's world, there was no visible daylight between "I want" and "I deserve." There's a common psychological term for this that I'll resist using.
Some time after the cliff rescue (which was never mentioned again) it came out that John was having an affair, which precipitated in Joan what can only be described as a full break from planet Earth. Joan's grip on reality had never been that secure, and this revelation caused her anchor to come loose. But even while drifting in a parallel universe, Joan continued to scheme.
In her twisted mind Joan had begun to see me as a replacement for her unfaithful mate, and the news about the affair only served to thrust this fantasy into the foreground. Moving beyond her increasingly affectionate emails, Joan chose a moment sitting next to her husband to announce she loved me, in a manner and tone of voice that could only sanction an equally overwrought reply. Instead I replied, "Thank you, that's very nice."
My goal was to clearly say I wanted to be friends with Jim and everything else was background noise I would happily do without. Expressed another way and for me, the only interesting thing that issued from Joan was her son. I could satisfy Jim's voracious intellectual curiosity9, validate his identity as a person, and see the world through his eyes. Any one of these would have justified our investment in time, but together they produced friendship.
I wasn't unaware of the risk in clearly expressing myself — as a single man I've been in any number of hell-hath-no-fury79 episodes with women who expected to be able to change my marital status. But I had failed to take into account the possibility that Joan was a psychopath.
Made furious by my rejection and oblivious to any other issues, Joan decided to drive me away in such a way that Jim wouldn't be able to figure out why I had withdrawn. In this plan Joan managed to underestimate both Jim and myself.
At that point Joan's imagination began writing checks her intellect couldn't cash. In an email she encouraged me to remain friends with her son as long as I liked, then expressed her belief that a child sitting on an adult's lap constituted molestation in and of itself.
When I read Joan's claim equating lap-sitting and molestation, I saw at once that she intended to apply magical thinking18 to acquire for herself the coveted status of victim, using her son as a proxy. Joan had missed out on the Recovered Memory Therapy2.3.5 era, where this kind of fantasy temporarily made its way into courtrooms, and it seems she wanted to resurrect that unfortunate practice.
I took a deep breath. Entirely out of touch with reality, Joan had introduced the M-word into our written communications and, even though it was part of an absurd claim and bore no relevance to my friendship with her son, I had no choice but to withdraw.
In the weeks after my departure Joan put on a show of trying to get me to resume my friendship with her son, saying things like, "Your continued presence in his life is more than welcome."80 But that wasn't going to happen — I finally realized how dangerous she was.
On realizing I would no longer be visiting, Jim called me on the telephone and asked for a resumption of our friendship. His plea was perfectly logical: there was nothing inappropriate in our friendship, we both benefited from it, therefore it should continue.
By then I knew what was at stake. If I withdrew, Jim would have to sit alone in his room, doubt his sanity and personal value, and resent my having abandoned a valuable friendship. If I didn't withdraw, Joan had clearly telegraphed that she would make a false and dangerous accusation of wrongdoing with a child. But instead of revealing these details, I explained to Jim that Joan's life was ruled by belief, not evidence, and some of her beliefs were dangerous10.
To make my point I quoted Joan's lap-sitting-as-molestation belief and explained what the word "molestation" means to an adult — how dangerous it is when spoken in malice by an irresponsible person.
Until then my interactions with Jim had been informed by a high regard for truth and candor, mutual respect, and logic. But this conversation represented a sudden descent into the messiness and uncertainty of adult life, and Jim began to panic. He said there was no way his mother would do something as terrible as lying about a crime; therefore I was acting irrationally and throwing away our friendship for no reason. I realized that, because of his limited life experience, I wouldn't be able to explain the situation to him, so I said something I hated hearing when I was Jim's age: "When you're older, you'll understand." I made it clear that I valued our friendship, but Joan was dangerous.
After a long, tense pause, Jim and I switched to a more pleasant topic — we discussed the Riemann Zeta Function81 for a few minutes, then signed off just as though we would be speaking the following day (when in truth, it would be five years before our next contact).
On hearing of our phone conversation and still hoping to get me to reverse my decision, Joan emailed an objection, saying, "You've said I have told [Jim] you have intentions about him that will/can harm him [...]. That's your own imagination, not my position."82 Thinking this message might prove particularly useful, I added it to the Joan archive.
I imagined that would end things, which meant I still didn't understand Joan.
It soon dawned on Joan that, despite her campaign of narcissistic fury and righteous entitlement, I wasn't coming back. So she moved to the last phase of her game plan (see Joan's Game Plan) — she swore out a civil court petition, making the exact accusation I had expected based on her M-word email11. That was the day I realized Joan was a textbook psychopath — malicious, indifferent to anything but her immediate twisted needs, profoundly dysfunctional, and unable to imagine the consequences of her actions.
When I received her petition I wondered whether Joan knew the difference between criminal and civil law — that people who make criminal accusations in civil court only make themselves look opportunistic and ignorant. And she clearly didn't realize I had archived her emails.
Joan's primary error was to assume that, because she discarded emails once read, others did the same. That was naive — on witnessing her behavior, a rational correspondent would archive every word she put in writing. In this case, those emails proved she was mentally unbalanced and a transparent liar.
It was a short hearing — I testified that I possessed a complete archive of Joan's emails and those written messages flatly contradicted her statements under oath. I had prepared some examples, but on hearing about the email archive, Joan fell silent and offered no defense. The Court realized she was lying and ruled accordingly. Elapsed time ninety seconds.
Joan hoped to keep Jim in the dark about the hearing, but by applying a bit of computer expertise he thwarted her childish scheme and got hold of the petition. On seeing the written record of Joan's treachery and betrayal, Jim's perception of his mother changed immediately and permanently. His idealism now in tatters, he resolved never to speak to Joan again. He realized I had been exactly right about Joan and that my withdrawing was the only choice. This was Jim's first step into adulthood.
After the hearing I ordered a criminal background check and discovered Joan had accused other men in much the same way — she had a history of lying under oath about sex crimes12. On reading the report I felt perfectly stupid for not ordering an advance background check and for trusting someone so obviously untrustworthy.
Joan's criminal background report showed a perverse, escalating pattern of lies that I should have uncovered before getting entangled in her web. Over time Joan got tired of pedestrian lies, such that — not unlike an addict's accelerating hunger for drugs — she needed bigger lies, about bigger crimes, in more courtrooms.
That background revealed a predatory game plan that I immediately recognized:
Persuade the victim to meet her child — "How could you pass up a chance to meet my brilliant son?"
Become furious at rejection of amorous overtures, swear revenge.
Try to elicit compromising responses to leading inquiries — "[Jim] now has pubic hair, haven't you noticed?" ("Don't know, don't care, haven't you noticed?")
Darkly hint that something unsavory is taking place and reject any defense — "People who defend themselves only look guilty!"
Deceitfully delay the victim's departure — "... just your imagination, not my position."
After the victim departs, swear out a civil court petition that accuses him of something horrible — "... preying on my mentally retarded son."
The entire game plan was present in this case; the quotations above are real but edited for length.
I would like to have gotten Joan's dangerous behavior into the local public record so my neighbors would be forewarned/forearmed, but I'm not one of those people who think instituting legal actions is a good idea. Also, just thinking about Joan provoked a wave of disgust. So I let it pass.
But Joan volunteered for the next step in her public exposure. Six months later, still furious at me for seeing through her and rejecting her romantic overtures, Joan swore out a new civil court petition that tried to blame me for destroying her relationship with her son. It appears to have escaped Joan's attention that Jim's furious reaction to her original accusation13 proved she had lied. That connection was beyond Joan's reasoning abilities, but not the Court's, and more important, Joan hadn't the slightest idea how her behavior looked to others.
Joan also objected to an earlier version of this all-names-changed article, which details her toxic behavior to a wide Internet audience. But her argument was a courtroom classic, known to all legal professionals — in order to object to this fictional-names article, Joan would have to argue that it is about her, while arguing that it isn't about her. She clearly hadn't given that any deep thought, which in her life was a recurring theme.
It seems Joan's past victims had been reluctant to publicly expose her false testimony under oath, which although understandable, only encouraged her to choose another victim and escalate her lies. I understood these issues when I wrote this article, and I published it as a matter of principle. My point? When Joan lied about me, she chose the wrong victim.
In my prepared courtroom remarks I explained that Joan spent months trying and failing to get me together with her son, then brought him to a place she knew I would be. After Jim and I became friends she drove me away by trying to move things in a romantic direction, then insisted that I resume my friendship with her son. When I refused she accused me of something vile and false, under oath, in a claim flatly contradicted by her prior written words. I added that Joan had a history of lying under oath about sex crimes, and she was severely dysfunctional.
Joan had been served with my remarks in advance and could have offered any defense she cared to, but it seems she got some sage legal advice from an unknown source. Apparently Joan was warned not to object to being described as severely dysfunctional. The reason? If she objected to that description Joan would make it material to the proceeding, and on that basis I was ready to present expert testimony that she was a Munchausen by Proxy76 perpetrator, a danger to her children, and she would likely lose custody (Child Protective Services was investigating Joan at the time). If she instead stood mute, her dysfunctions would become a stipulation84 (an issue on which both sides agree) and she would no longer be able to enter into legally binding contracts. Both were bad outcomes, but the second less so.
And to think — Joan could instead have hired an attorney who would have reviewed the facts, then warned her what would happen if she filed another petition.
In a repeat of the first hearing, after my testimony Joan fell mute (thus conceding the truth of all my claims) and the Court once again realized she was lying. The judge congratulated me on my presentation, dismissed her petition and gaveled the hearing to a close14.
As I left the courtroom I remembered that Joan once said, "I don't like the desert." I finally understood what she meant — someone told her you can't sue nature.
To a degree I had never seen before, Joan didn't live in reality, and rarely visited. In her plastic-teacup world she paid therapists to accept anything she cared to say, like the virgin rape victims in Recovered Memory Therapy2.3.5 — "Yes, dear, whatever you say, dear." But in her courtroom appearances, including those revealed by the criminal background check, grown-ups found no connection between Joan's words and reality.
In the two hearings I witnessed, the Court listened to her claims, then waited for Joan to back up her words with evidence. But after briefly speaking Joan stood smug and content, expecting her fantasies to stand on their own, as they did with therapists. The core problem was that Joan didn't do reality, she did psychology, and therapy means paying someone to be on your side.
The courts always ruled against her, but to Joan, courtroom appearances represented a victory for the sort of intellectual mediocrity psychologists dispense to people trapped in perpetual infancy. Do you have a child intelligent enough to move out of your understanding and control? No problem — we'll thwart his personal development by branding him mentally ill, then force him into therapy — therapy often paid for with public funds. Does your son's face light up when a friend visits? And do they touch each other15? Maybe it's molestation — it can't possibly mean mathematics is interesting, or that mutual respect is its own reward.
I'm a seasoned world traveler69, but despite my wide experience Joan was the vilest organism I've ever encountered — predatory and parasitic in equal measure. She was a textbook psychopath — she would repeatedly engage in the most destructive behavior with her children and with people in the community around her, then complain about how unfairly she was treated by her victims.
One could say that Joan, a pseudoperson with no reliable principles, found herself attracted to psychology, a pseudoscience with no reliable theories. They were made for each other.
I imagine a scene in Joan's elementary school. Each girl is asked to describe what she wants to be when she grows up. Joan's classmates express their ambition to be astronauts, explorers, scientists. Now it's Joan's turn — she stands up and says, "I'm going to tell horrible lies about men and make them give me money ... wait, why are you all looking at me that way?"
In spite of Joan's fervent wishes and valiant efforts, Jim caught on and left her house. As an adult he contacted me and we resumed our friendship.
Jim now writes computer programs, something I've done in the past72. We share an enthusiasm for logic and mathematics85. Where we differ is that Jim expects psychology to produce something worthwhile. I think in time he'll grow out of that.
I might have helped a little, but as a child Jim rescued himself from Joan's toxic fantasy world by learning logic and mathematics, which among other things represents a source of certainty in an uncertain world. In psychology, you're right because you pay a therapist to agree with you, but in mathematics, you're right because you've turned a conjecture86 into a theorem87 (a true statement). The difference? In psychology, as in religion, to stay right you have to pay more money. In mathematics, a useful theorem speaks truth to the ages.
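To make the conjecture-versus-theorem distinction concrete, here is a minimal sketch in Lean 4, a proof assistant chosen purely for illustration (nothing above says Jim used one). A claim earns the keyword `theorem` only when an accepted proof accompanies it.

```lean
-- A statement about natural numbers. Without the proof term below it would be
-- only a conjecture; with it, the proof checker accepts it as a theorem.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Delete the proof and the checker rejects the file; there is no way to pay it to agree with you.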
Jim has more or less patched up his relationship with his mother. He can visit her house and be civil, but he doesn't have unrealistic expectations.
As for Joan, she's been told that, despite her obvious mental defects, if she makes another false accusation under oath, she'll be prosecuted and/or committed — she must curb her dangerous impulses or she'll be taken off the streets.
As for myself, when people call and ask me to meet their children, I tell them to read this article, then I hang up.
One of my reasons for writing this article was to satisfy an obvious unmet need. When I met Joan there was no available account of how many felony crimes a housewife can get away with if she happens also to be a psychopath. Now that this article exists, future victims of future Joans have no excuse.
My other reason, consistent with a legal requirement for articles describing living persons, is that no one will know who Joan is, but everyone will know what she is.
In a positive development, earlier versions of this article, as well as public outrage, contributed to the abandonment of the Asperger Syndrome diagnosis. This outcome shouldn't surprise anyone, because in time all mental illness diagnoses are abandoned (see Historical Highlights). Imagine this exchange:
Q: How many mental illness diagnoses are eventually abandoned?
A: All of them.
Q: Wait ... what about schizophrenia?
A: Schizophrenia isn't a mental illness, it's a physical illness with mental symptoms. This is shown by the fact that it's linked to genetics and runs in families.88
A trained actor can be given a list of symptoms of a mind dysfunction such as Asperger Syndrome or Posttraumatic Stress Disorder (PTSD)89, rehearse for a while, then visit a psychologist, acquire the diagnosis and begin collecting disability payments. This is true because most psychological diagnoses rely entirely on self-reporting90, on what a person says about himself; consequently they're very unreliable. (Those who doubt this should read about the Rosenhan Experiment91.)
It's been suggested that some of the more popular and lucrative diagnoses (Asperger Syndrome92 in particular) are given out based on credible acting performances rather than objective dysfunctions — but this can only be suggested, not established scientifically, because there are no laboratory tests that can objectively confirm or deny the presence of these conditions, or even their existence.
Our trained actor can be given a list of symptoms of a brain dysfunction such as a stroke or a tumor, rehearse for a while, then present himself at a medical clinic, but the outcome will not be the same — an actor cannot pretend to have a medical dysfunction, because medicine doesn't rely on self-reporting, it relies on science.
That's the difference between mind and brain, and between psychology and science.
In a famous and probably fanciful anecdote, Isaac Newton16 was said to be sitting under an apple tree when an apple hit him on the head and caused him to begin thinking about gravitation. By reflecting on the motions of small nearby masses and large distant ones, Newton arrived at what we now call Newtonian gravitation93, a relatively simple theory that describes the forces and motions of all massive objects, based on this equation94:
$$F = G \frac{m_1 m_2}{r^2}$$
$F$ = force, newtons.
$G$ = the Gravitational Constant95.
$m_1$, $m_2$ = masses of objects 1 and 2, kilograms.
$r$ = distance between $m_1$ and $m_2$, meters.
This is a very important equation in physics and, even though there is a more accurate gravitational treatment in modern relativity theory, this simple form is still used for problems involving velocities much less than that of light, and is central to the problem of calculating spacecraft trajectories.
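To make the formula concrete, here is a minimal Python sketch that evaluates it for the Earth-Moon pair; the language choice and the rounded textbook constants are mine, for illustration only.

```python
# A minimal sketch of Newton's law of universal gravitation, F = G * m1 * m2 / r^2,
# evaluated for the Earth-Moon pair. All constants are rounded textbook values.

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24       # mass of the Earth, kg
M_MOON = 7.348e22        # mass of the Moon, kg
R_EARTH_MOON = 3.844e8   # mean Earth-Moon distance, m

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Return the attractive force in newtons between two point masses."""
    return G * m1 * m2 / r**2

force = gravitational_force(M_EARTH, M_MOON, R_EARTH_MOON)
print(f"Earth-Moon gravitational force: {force:.3e} N")  # roughly 2.0e20 N
```

Swapping in any other pair of masses and a distance gives a prediction that can be checked against observation, which is the property the next paragraphs rely on.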
Because this equation applies to all masses, it's been subject to rigorous empirical testing, and because it's expected to apply to all masses in the same way, it has resulted in new scientific discoveries like Dark Matter96 and Dark Energy97, examples where applying Newton's equation produced unexpected outcomes.
Newton's equation is an example of science at its best. It uses mathematics to make a clear, objective theoretical statement that can be empirically tested, compared to nature, and if this comparison were to fail, the theory would be regarded as falsified. Newton's theory applies to all masses in the universe, and it's easy to test in such a way that differently equipped observers are forced into agreement about its meaning and its effect on nature.
In science, a theory's credibility relies only on the degree to which it agrees with nature, with reality. A theory might be discarded, falsified, if a key theoretical test fails — if nature disagrees. Agreement between scientists is relatively easy, because all scientists can examine the same evidence, and science is steered, not by eminence, but by evidence.
In psychology, an idea's credibility relies on the number of votes cast by psychologists. An idea might be abandoned (without ever being falsified) in the same way — a count of votes among psychologists. Asperger Syndrome92 acquired gravitas and acceptance by psychologists casting votes. It was abandoned the same way — after much public controversy it lost out in a vote of experts. Apart from panels of voting experts, agreement between psychologists is nonexistent, because psychology is steered by opinion and fads, not by evidence, and certainly not by science.
See appendix Mind versus Brain for more about self-reporting.
Psychologists compiled this list in spite of the so-called "Goldwater rule" (section 7.3 of the APA code of ethics58), which declares it unethical to diagnose people not personally interviewed.
"Aspie" is a popular slang expression referring to one having the Asperger Syndrome diagnosis.
Catherine Lord65, Professor of Psychology in Psychiatry and founding Director of the Center for Autism and the Developing Brain.
No, really — a therapist. The profession with the lowest status and highest unemployment rate of any line of work for which a college degree is required.
People attracted to psychology tend to be too smart for religion but not smart enough for science.
Some sources use the abbreviation MSbP and some use MSP; I've chosen the shorter of the two.
95% of MSP perpetrators are women.
Only because of his age and inexperience, certainly not as the adult he became.
I didn't reveal the MSP diagnosis, knowing that could only make things worse.
This kind of behavior is typical of MSP perpetrators83.
These earlier events took place at a time when Jim was too young to understand what was happening.
In retrospect it's tough to decide which false accusation made Jim angrier — that I was exploiting him, or that he was retarded.
In civil court, lying under oath is rarely prosecuted. Instead, judges rule against the liars.
A fact proven by the cliff rescue.
1 Falsifiability — with respect to a theory, the falsifiability criterion means the theory can in principle be proven false in empirical tests.
2 Criterion of falsifiability (Britannica) — a definition of science that requires falsifiability.
3 Karl Popper (Britannica) — philosopher of science.
4 Daubert v. Merrell Dow Pharmaceuticals, Inc. — an influential U.S. Supreme Court ruling that changed the standards for scientific testimony.
5 Daubert standard — the prevailing U.S. standard for scientific expert testimony.
6 McLean v. Arkansas Board of Education — an influential legal ruling that relies on a definition of science.
7 Royal Society, the oldest scientific organization in the world, dating to 1660 CE.
8 Nullius in verba — motto of the Royal Society.
9 Royal Society : History — includes an explanation of the Society's motto.
10 Richard P. Feynman — scientist and philosopher of science.
11 Null hypothesis — the scientific precept that there is no relationship between a cause and an effect until empirical evidence supports it.
12 Pseudoscience — claims, beliefs and/or theories that assert scientific validity but that fail one or more of science's requirements.
13 Argument from ignorance — a logical error having to do with proof of a negative.
14 Charles Darwin — scientist, co-originator of the theory of natural selection.
15 Cargo Cult Science — Richard P. Feynman's 1974 Caltech commencement address.
16 Isaac Newton — very influential scientist.
17 Newton's law of universal gravitation — the gravitational theory that predated relativity.
18 Magical thinking — in psychiatry, an irrational belief that thinking about something makes it true.
19 Presumption of innocence — the modern legal standard of evidence that resembles science.
20 Psychology — study of mind and behavior.
21 Biology — the study of life and living organisms.
22 Astrology — a pseudoscience that claims to divine information about human affairs and terrestrial events by studying the movements and relative positions of celestial objects.
23 Evolution — the theory that species evolve by means of natural selection.
24 Natural selection — a theory describing the mechanism by which evolution works.
25 Material Fatigue — the weakening of a material caused by repeatedly applied loads.
26 de Havilland Comet — a notorious example of metal fatigue leading to an in-flight failure.
27 Epigenetics — the study of heritable phenotype changes that do not involve alterations in the DNA sequence.
28 Sigmund Freud — very influential founder of psychoanalysis.
29 Metapsychology — a study of psychological theory as opposed to psychology itself.
30 Karl Popper on The Line Between Science and Pseudoscience — A summary of Popper's evolving outlook on the definition of science.
31 Sigmund Koch — Psychology's Antihero — an influential psychology critic and philosopher of science.
32 Psychology: A Study of a Science — editor: Sigmund Koch
33 Cargo Cult Science — a now-famous address by Richard P. Feynman about psychological and other kinds of pseudoscience.
34 American Psychological Association — a professional psychological organization.
35 Ronald F. Levant — past president of the American Psychological Association, critic of psychology's unscientific practice.
36 Evidence-based practice in psychology — Levant, 2005.
37 National Institute of Mental Health — the primary U.S. government agency with responsibility for mental health issues.
38 Transforming Diagnosis — Thomas R. Insel, former NIMH director.
39 NIMH funding to shift away from DSM categories — an important change in the NIMH's attitude toward science.
40 Steven E. Hyman — NIMH director (1996-2001).
41 Joshua A. Gordon — current (at time of writing) NIMH director.
42 Drapetomania — a faux mental illness diagnosis that presumed to explain why slaves ran away from their masters.
43 Lobotomy — a now-infamous invasive procedure meant to treat mental disorders.
44 DSM — the Diagnostic and Statistical Manual of Mental Disorders, psychology's "Bible".
45 Conversion Therapy — a discredited and dangerous clinical practice meant to change a person's sexual orientation.
46 House Democrats seek to ban gay conversion therapy nationwide
47 Refrigerator mother — a phony psychological diagnosis.
48 Recovered Memory Therapy — a bogus therapy that claimed to uncover what were often fantasy memories.
49 Family Airing Its Trauma to Aid Others — an account of the Beth Rutherford virgin-rape episode.
50 Asperger Syndrome — an abandoned pseudoscientific diagnosis.
51 Not Otherwise Specified (NOS) — a mental illness "diagnosis" that was applied when no more specific description seemed appropriate. Its apparent purpose was to avoid having to tell people they weren't mentally ill.
52 DSM-5 — the current DSM version at the time of writing.
53 Cognitive-Behavioral Therapy — a very popular form of talk therapy.
54 Cognitive-behavioral therapy versus other therapies: redux. — a meta-analysis that finds no difference between CBT and other therapies.
55 A component analysis of cognitive-behavioral treatment for depression — this study found no difference between separately applied elements of CBT.
56 Placebo — a discussion of the Placebo Effect, in which an ineffective agent produces measurable results.
57 Diagnosis of Asperger syndrome — a description of Asperger Syndrome and its associated symptoms.
58 Goldwater rule — an informal name given to Section 7.3 of the American Psychiatric Association's (APA) code of ethics, which declares it unethical to diagnose people not personally interviewed.
59 The Benefits of Asperger's Syndrome — a listing of famous people given the Asperger Syndrome label.
60 Rising autism numbers a challenge for public schools
61 Asperger's Syndrome and obtaining social security disability benefits
62 DSM-V And How It Affects The Diagnosis Of Asperger's Disorder
63 Diagnostic and Statistical Manual of Mental Disorders — psychology's "bible".
64 A Powerful Identity, a Vanishing Diagnosis — about the abandonment of Asperger Syndrome.
65 Catherine Lord, Ph.D. — Professor of Psychology in Psychiatry and founding Director of the Center for Autism and the Developing Brain.
66 Auburn Woman Sentenced to Prison for Fraud — an egregious account of diagnosis fakery.
67 Psychiatry — a specialty that tries to build a bridge between psychology and medicine.
68 Neuroscience — the scientific study of the nervous system.
69 Confessions of a Long-Distance Sailor — an account of my four-year solo sail around the world.
70 The Day of Seven Bears — a rather dangerous encounter with seven Alaska brown bears all in close proximity.
71 Biographical Note — my brief C.V..
72 Apple Writer — my best-known program, an early word processor.
74 Personal written communication, 12.15.2004.
75 Personal written communication.
76 Factitious disorder imposed on another, also known as Munchausen Syndrome by Proxy (MSP).
77 Munchausen's Syndrome by Proxy : "MSP isn't just a condition; it's child abuse and it's a crime."
78 Climbing — one of my rock climbing articles.
79 Hell Hath No Fury — an abbreviation of the famous line "Hell hath no fury like a woman scorned," by William Congreve (1670-1729), an English playwright and poet.
81 Riemann zeta function — a fascinating topic, but a bit technical.
83 MSP : Warning Signs — a list of the behaviors typical of MSP perpetrators.
84 Stipulation — an agreement between parties in a legal dispute.
85 Introduction to Calculus — my tutorial for beginners.
86 Conjecture — in mathematics, a proposition for which no proof or disproof has yet been found.
87 Theorem — in mathematics, a statement that is formally proven based on other statements, theorems and axioms.
88 The Role of Genetics in the Etiology of Schizophrenia : National Library of Medicine
89 Posttraumatic stress disorder (PTSD) — a dysfunction said to result from traumatic experiences.
90 Self-report inventory — a summary of the problems associated with self-reporting in psychology.
91 Rosenhan Experiment — a now-famous experiment in which a group of mental health professionals gained admittance to a mental hospital by faking symptoms.
93 Newton's law of universal gravitation — a fundamental part of pre-relativistic physics.
94 Newton's Equation : Modern Form
95 Gravitational constant — an important physical constant.
96 Dark matter — a theory describing the anomalous motions of galaxies.
97 Dark Energy — a theory describing an anomaly in the expansion of the universe.
Physics and Astronomy (26)
Materials Research (15)
Proceedings of the Nutrition Society (18)
MRS Online Proceedings Library Archive (12)
The Journal of Laryngology & Otology (5)
The Canadian Entomologist (4)
Journal of Fluid Mechanics (2)
Parasitology (2)
Bird Conservation International (1)
British Actuarial Journal (1)
International Journal of Astrobiology (1)
Journal of the Australian Mathematical Society (1)
Journal of the Marine Biological Association of the United Kingdom (1)
Mathematika (1)
Proceedings of the Prehistoric Society of East Anglia (1)
Nestle Foundation - enLINK (20)
Entomological Society of Canada TCE ESC (4)
Malaysian Society of Otorhinolaryngologists Head and Neck Surgeons (3)
The Royal College of Psychiatrists (3)
The Australian Society of Otolaryngology Head and Neck Surgery (2)
test society (2)
Australian Mathematical Society Inc (1)
BLI Birdlife International (1)
European Psychiatric Association (1)
Institute and Faculty of Actuaries (1)
MBA Online Only Members (1)
Nutrition Society (1)
Weed Science Society of America (1)
Associations between brain morphology and outcome in schizophrenia in a general population sample
E. Jääskeläinen, P. Juola, J. Kurtti, M. Haapea, M. Kyllönen, J. Miettunen, P. Tanskanen, G.K. Murray, S. Huhtaniska, A. Barnes, J. Veijola, M. Isohanni
Journal: European Psychiatry / Volume 29 / Issue 7 / September 2014
To analyse associations between brain morphology and longitudinal and cross-sectional measures of outcomes in schizophrenia in a general population sample.
The sample was the Northern Finland 1966 Birth Cohort. In 1999–2001, structural brain MRI and measures of clinical and functional outcomes were analysed for 54 individuals with schizophrenia around the age of 34. Sex, total grey matter, duration of illness and the use of antipsychotic medication were used as covariates.
After controlling for multiple covariates, increased density of the left limbic area was associated with fewer hospitalisations, and increased total white matter volume with being in remission. Higher density of left frontal grey matter was associated with not being on a disability pension, and higher density of the left frontal lobe and left limbic area were related to better functioning. Higher density of the left limbic area was associated with better longitudinal course of illness.
This study, based on unselected general population data, long follow-up and an extensive database, confirms findings of previous studies, that morphological abnormalities in several brain structures are associated with outcome. The difference in brain morphology in patients with good and poor outcomes may reflect separable aetiologies and developmental trajectories in schizophrenia.
Impact of space weather on climate and habitability of terrestrial-type exoplanets
V. S. Airapetian, R. Barnes, O. Cohen, G. A. Collinson, W. C. Danchi, C. F. Dong, A. D. Del Genio, K. France, K. Garcia-Sage, A. Glocer, N. Gopalswamy, J. L. Grenfell, G. Gronoff, M. Güdel, K. Herbst, W. G. Henning, C. H. Jackman, M. Jin, C. P. Johnstone, L. Kaltenegger, C. D. Kay, K. Kobayashi, W. Kuang, G. Li, B. J. Lynch, T. Lüftinger, J. G. Luhmann, H. Maehara, M. G. Mlynczak, Y. Notsu, R. A. Osten, R. M. Ramirez, S. Rugheimer, M. Scheucher, J. E. Schlieder, K. Shibata, C. Sousa-Silva, V. Stamenković, R. J. Strangeway, A. V. Usmanov, P. Vergados, O. P. Verkhoglyadova, A. A. Vidotto, M. Voytek, M. J. Way, G. P. Zank, Y. Yamashiki
Journal: International Journal of Astrobiology / Volume 19 / Issue 2 / April 2020
The search for life in the Universe is a fundamental problem of astrobiology and modern science. The current progress in the detection of terrestrial-type exoplanets has opened a new avenue in the characterization of exoplanetary atmospheres and in the search for biosignatures of life with the upcoming ground-based and space missions. To specify the conditions favourable for the origin, development and sustainment of life as we know it in other worlds, we need to understand the nature of global (astrospheric), and local (atmospheric and surface) environments of exoplanets in the habitable zones (HZs) around G-K-M dwarf stars including our young Sun. Global environment is formed by propagated disturbances from the planet-hosting stars in the form of stellar flares, coronal mass ejections, energetic particles and winds collectively known as astrospheric space weather. Its characterization will help in understanding how an exoplanetary ecosystem interacts with its host star, as well as in the specification of the physical, chemical and biochemical conditions that can create favourable and/or detrimental conditions for planetary climate and habitability along with evolution of planetary internal dynamics over geological timescales. A key linkage of (astro)physical, chemical and geological processes can only be understood in the framework of interdisciplinary studies with the incorporation of progress in heliophysics, astrophysics, planetary and Earth sciences. The assessment of the impacts of host stars on the climate and habitability of terrestrial (exo)planets will significantly expand the current definition of the HZ to the biogenic zone and provide new observational strategies for searching for signatures of life. The major goal of this paper is to describe and discuss the current status and recent progress in this interdisciplinary field in light of presentations and discussions during the NASA Nexus for Exoplanetary System Science funded workshop 'Exoplanetary Space Weather, Climate and Habitability' and to provide a new roadmap for the future development of the emerging field of exoplanetary science and astrobiology.
Development and clinimetric assessment of a nurse-administered screening tool for movement disorders in psychosis
Bettina Balint, Helen Killaspy, Louise Marston, Thomas Barnes, Anna Latorre, Eileen Joyce, Caroline S. Clarke, Rosa De Micco, Mark J. Edwards, Roberto Erro, Thomas Foltynie, Rachael M. Hunter, Fiona Nolan, Anette Schrag, Nick Freemantle, Yvonne Foreshaw, Nicholas Green, Kailash P. Bhatia, Davide Martino
Journal: BJPsych Open / Volume 4 / Issue 5 / September 2018
Movement disorders associated with exposure to antipsychotic drugs are common and stigmatising but underdiagnosed.
To develop and evaluate a new clinical procedure, the ScanMove instrument, for the screening of antipsychotic-associated movement disorders for use by mental health nurses.
Item selection and content validity assessment for the ScanMove instrument were conducted by a panel of neurologists, psychiatrists and a mental health nurse, who operationalised a 31-item screening procedure. Interrater reliability was measured on ratings for 30 patients with psychosis from ten mental health nurses evaluating video recordings of the procedure. Criterion and concurrent validity were tested comparing the ScanMove instrument-based rating of 13 mental health nurses for 635 community patients from mental health services with diagnostic judgement of a movement disorder neurologist based on the ScanMove instrument and a reference procedure comprising a selection of commonly used rating scales.
Interreliability analysis showed no systematic difference between raters in their prediction of any antipsychotic-associated movement disorders category. On criterion validity testing, the ScanMove instrument showed good sensitivity for parkinsonism (90%) and hyperkinesia (89%), but not for akathisia (38%), whereas specificity was low for parkinsonism and hyperkinesia, and moderate for akathisia.
The ScanMove instrument demonstrated good feasibility and interrater reliability, and acceptable sensitivity as a mental health nurse-administered screening tool for parkinsonism and hyperkinesia.
Coordinated Microanalysis of Phosphates in High-Titanium Lunar Basalts
J. J. Barnes, M. S. Thompson, F. M. McCubbin, J. Y. Howe, Z. Rahman, S. Messenger, T. Zega
International outbreak of multiple Salmonella serotype infections linked to sprouted chia seed powder – USA and Canada, 2013–2014
R. R. HARVEY, K. E. HEIMAN MARSHALL, L. BURNWORTH, M. HAMEL, J. TATARYN, J. CUTLER, K. MEGHNATH, A. WELLMAN, K. IRVIN, L. ISAAC, K. CHAU, A. LOCAS, J. KOHL, P. A. HUTH, D. NICHOLAS, E. TRAPHAGEN, K. SOTO, L. MANK, K. HOLMES-TALBOT, M. NEEDHAM, A. BARNES, B. ADCOCK, L. HONISH, L. CHUI, M. TAYLOR, C. GAULIN, S. BEKAL, B. WARSHAWSKY, L. HOBBS, L. R. TSCHETTER, A. SURIN, S. LANCE, M. E. WISE, I. WILLIAMS, L. GIERALTOWSKI
Journal: Epidemiology & Infection / Volume 145 / Issue 8 / June 2017
Published online by Cambridge University Press: 20 March 2017, pp. 1535-1544
Salmonella is a leading cause of bacterial foodborne illness. We report the collaborative investigative efforts of US and Canadian public health officials during the 2013–2014 international outbreak of multiple Salmonella serotype infections linked to sprouted chia seed powder. The investigation included open-ended interviews of ill persons, traceback, product testing, facility inspections, and trace forward. Ninety-four persons infected with outbreak strains from 16 states and four provinces were identified; 21% were hospitalized and none died. Fifty-four (96%) of 56 persons who consumed chia seed powder reported 13 different brands that traced back to a single Canadian firm, distributed by four US and eight Canadian companies. Laboratory testing yielded outbreak strains from leftover and intact product. Contaminated product was recalled. Although chia seed powder is a novel outbreak vehicle, sprouted seeds are recognized as an important cause of foodborne illness; firms should follow available guidance to reduce the risk of bacterial contamination during sprouting.
Multistate outbreak of Listeria monocytogenes infections linked to whole apples used in commercially produced, prepackaged caramel apples: United States, 2014–2015
K. M. ANGELO, A. R. CONRAD, A. SAUPE, H. DRAGOO, N. WEST, A. SORENSON, A. BARNES, M. DOYLE, J. BEAL, K. A. JACKSON, S. STROIKA, C. TARR, Z. KUCEROVA, S. LANCE, L. H. GOULD, M. WISE, B. R. JACKSON
Journal: Epidemiology & Infection / Volume 145 / Issue 5 / April 2017
Whole apples have not been previously implicated in outbreaks of foodborne bacterial illness. We investigated a nationwide listeriosis outbreak associated with caramel apples. We defined an outbreak-associated case as an infection with one or both of two outbreak strains of Listeria monocytogenes highly related by whole-genome multilocus sequence typing (wgMLST) from 1 October 2014 to 1 February 2015. Single-interviewer open-ended interviews identified the source. Outbreak-associated cases were compared with non-outbreak-associated cases and traceback and environmental investigations were performed. We identified 35 outbreak-associated cases in 12 states; 34 (97%) were hospitalized and seven (20%) died. Outbreak-associated ill persons were more likely to have eaten commercially produced, prepackaged caramel apples (odds ratio 326·7, 95% confidence interval 32·2–3314). Environmental samples from the grower's packing facility and distribution-chain whole apples yielded isolates highly related to outbreak isolates by wgMLST. This outbreak highlights the importance of minimizing produce contamination with L. monocytogenes. Investigators should perform single-interviewer open-ended interviews when a food is not readily identified.
Collaborative visual analytics of radio surveys in the Big Data era
Dany Vohl, Christopher J. Fluke, Amr H. Hassan, David G. Barnes, Virginia A. Kilborn
Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S325 / October 2016
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform – allowing the research process to continue wherever you are.
A Comparison of Classifiers for Solar Energetic Events
Graham Barnes, Nicole Schanche, K. D. Leka, Ashna Aggarwal, Kathy Reeves
We compare the results of using a Random Forest Classifier with the results of using Nonparametric Discriminant Analysis to classify whether a filament channel (in the case of a filament eruption) or an active region (in the case of a flare) is about to produce an event. A large number of descriptors are considered in each case, but it is found that only a small number are needed in order to get most of the improvement in performance over always predicting the majority class. There is little difference in performance between the two classifiers, and neither results in substantial improvements over simply predicting the majority class.
M. Ashcroft, R. Austin, K. Barnes, D. MacDonald, S. Makin, S. Morgan, R. Taylor, P. Scolley
Journal: British Actuarial Journal / Volume 21 / Issue 2 / July 2016
Expert judgement has been used since the actuarial profession was founded. In the past, there has often been a lack of transparency regarding the use of expert judgement, even though those judgements could have a very significant impact on the outputs of calculations and the decisions made by organisations. The lack of transparency has a number of dimensions, including the nature of the underlying judgements, as well as the process used to derive those judgements. This paper aims to provide a practical framework regarding expert judgement processes, and how those processes may be validated. It includes a worked example illustrating how the process could be used for setting a particular assumption. It concludes with some suggested tools for use within expert judgement. Although primarily focussed on the insurance sector, the proposed process framework could be applied more widely without the need for significant changes.
Effect of vitamin D3 supplementation on serum 25-hydroxyvitamin D status among adolescents aged 14–18 years: a dose-response, randomised placebo-controlled trial
T. Smith, L. Tripkovic, C. T. Damsgaard, C. Mølgaard, S. Wilson-Barnes, K. Dowling, Á. Hennessey, K. Cashman, M. Kiely, S. Lanham-New, K. Hart
Journal: Proceedings of the Nutrition Society / Volume 75 / Issue OCE3 / 2016
Published online by Cambridge University Press: 24 November 2016, E117
By Rony A. Adam, Gloria Bachmann, Nichole M. Barker, Randall B. Barnes, John Bennett, Inbar Ben-Shachar, Jonathan S. Berek, Sarah L. Berga, Monica W. Best, Eric J. Bieber, Frank M. Biro, Shan Biscette, Anita K. Blanchard, Candace Brown, Ronald T. Burkman, Joseph Buscema, John E. Buster, Michael Byas-Smith, Sandra Ann Carson, Judy C. Chang, Annie N. Y. Cheung, Mindy S. Christianson, Karishma Circelli, Daniel L. Clarke-Pearson, Larry J. Copeland, Bryan D. Cowan, Navneet Dhillon, Michael P. Diamond, Conception Diaz-Arrastia, Nicole M. Donnellan, Michael L. Eisenberg, Eric Eisenhauer, Sebastian Faro, J. Stuart Ferriss, Lisa C. Flowers, Susan J. Freeman, Leda Gattoc, Claudine Marie Gayle, Timothy M. Geiger, Jennifer S. Gell, Alan N. Gordon, Victoria L. Green, Jon K. Hathaway, Enrique Hernandez, S. Paige Hertweck, Randall S. Hines, Ira R. Horowitz, Fred M. Howard, William W. Hurd, Fidan Israfilbayli, Denise J. Jamieson, Carolyn R. Jaslow, Erika B. Johnston-MacAnanny, Rohna M. Kearney, Namita Khanna, Caroline C. King, Jeremy A. King, Ira J. Kodner, Tamara Kolev, Athena P. Kourtis, S. Robert Kovac, Ertug Kovanci, William H. Kutteh, Eduardo Lara-Torre, Pallavi Latthe, Herschel W. Lawson, Ronald L. Levine, Frank W. Ling, Larry I. Lipshultz, Steven D. McCarus, Robert McLellan, Shruti Malik, Suketu M. Mansuria, Mohamed K. Mehasseb, Pamela J. Murray, Saloney Nazeer, Farr R. Nezhat, Hextan Y. S. Ngan, Gina M. Northington, Peggy A. Norton, Ruth M. O'Regan, Kristiina Parviainen, Resad P. Pasic, Tanja Pejovic, K. Ulrich Petry, Nancy A. Phillips, Ashish Pradhan, Elizabeth E. Puscheck, Suneetha Rachaneni, Devon M. Ramaeker, David B. Redwine, Robert L. Reid, Carla P. Roberts, Walter Romano, Peter G. Rose, Robert L. Rosenfield, Shon P. Rowan, Mack T. Ruffin, Janice M. Rymer, Evis Sala, Ritu Salani, Joseph S. Sanfilippo, Mahmood I. Shafi, Roger P. Smith, Meredith L. Snook, Thomas E. Snyder, Mary D. Stephenson, Thomas G. Stovall, Richard L. Sweet, Philip M. Toozs-Hobson, Togas Tulandi, Elizabeth R. Unger, Denise S. Uyar, Marion S. Verp, Rahi Victory, Tamara J. Vokes, Michelle J. Washington, Katharine O'Connell White, Paul E. Wise, Frank M. Wittmaack, Miya P. Yamamoto, Christine Yu, Howard A. Zacur
Edited by Eric J. Bieber, Joseph S. Sanfilippo, University of Pittsburgh, Ira R. Horowitz, Emory University, Atlanta, Mahmood I. Shafi
Book: Clinical Gynecology
Print publication: 23 April 2015, pp viii-xiv
The Murchison Widefield Array Correlator
Murchison Widefield Array
S. M. Ord, B. Crosse, D. Emrich, D. Pallot, R. B. Wayth, M. A. Clark, S. E. Tremblay, W. Arcus, D. Barnes, M. Bell, G. Bernardi, N. D. R. Bhat, J. D. Bowman, F. Briggs, J. D. Bunton, R. J. Cappallo, B. E. Corey, A. A. Deshpande, L. deSouza, A. Ewell-Wice, L. Feng, R. Goeke, L. J. Greenhill, B. J. Hazelton, D. Herne, J. N. Hewitt, L. Hindson, N. Hurley-Walker, D. Jacobs, M. Johnston-Hollitt, D. L. Kaplan, J. C. Kasper, B. B. Kincaid, R. Koenig, E. Kratzenberg, N. Kudryavtseva, E. Lenc, C. J. Lonsdale, M. J. Lynch, B. McKinley, S. R. McWhirter, D. A. Mitchell, M. F. Morales, E. Morgan, D. Oberoi, A. Offringa, J. Pathikulangara, B. Pindor, T. Prabu, P. Procopio, R. A. Remillard, J. Riding, A. E. E. Rogers, A. Roshi, J. E. Salah, R. J. Sault, N. Udaya Shankar, K. S. Srivani, J. Stevens, R. Subrahmanyan, S. J. Tingay, M. Waterson, R. L. Webster, A. R. Whitney, A. Williams, C. L. Williams, J. S. B. Wyithe
The Murchison Widefield Array is a Square Kilometre Array Precursor. The telescope is located at the Murchison Radio-astronomy Observatory in Western Australia. The MWA consists of 4 096 dipoles arranged into 128 dual polarisation aperture arrays forming a connected element interferometer that cross-correlates signals from all 256 inputs. A hybrid approach to the correlation task is employed, with some processing stages being performed by bespoke hardware, based on Field Programmable Gate Arrays, and others by Graphics Processing Units housed in general purpose rack mounted servers. The correlation capability required is approximately 8 tera floating point operations per second. The MWA has commenced operations and the correlator is generating 8.3 TB day⁻¹ of correlation products, which are subsequently transferred 700 km from the MRO to Perth (WA) in real time for storage and offline processing. In this paper, we outline the correlator design, signal path, and processing elements and present the data format for the internal and external interfaces.
Effectiveness of a tier 3 weight management programme for children and young people in a deprived area
R. P. G. Hayhoe, M. Haddow, J. Shareef, S. Barnes, A. A. Welch
Published online by Cambridge University Press: 15 April 2015, E14
The relationship between body mass index and vitamin D status in children attending a paediatric tier 3 weight management programme
R.P.G. Hayhoe, M. Haddow, J. Shareef, S. Barnes, A.A. Welch
By Michael H. Allen, Leora Amira, Victoria Arango, David W. Ayer, Helene Bach, Christopher R. Bailey, Ross J. Baldessarini, Kelsey Ball, Alan L. Berman, Marian E. Betz, Emily A. Biggs, R. Warwick Blood, Kathleen T. Brady, David A. Brent, Jeffrey A. Bridge, Gregory K. Brown, Anat Brunstein Klomek, A. Jacqueline Buchanan, Michelle J. Chandley, Tim Coffey, Jessica Coker, Yeates Conwell, Scott J. Crow, Collin L. Davidson, Yogesh Dwivedi, Stacey Espaillat, Jan Fawcett, Steven J. Garlow, Robert D. Gibbons, Catherine R. Glenn, Deborah Goebert, Erica Goldstein, Tina R. Goldstein, Madelyn S. Gould, Kelly L. Green, Alison M. Greene, Philip D. Harvey, Robert M. A. Hirschfeld, Donna Holland Barnes, Andres M. Kanner, Gary J. Kennedy, Stephen H. Koslow, Benoit Labonté, Alison M. Lake, William B. Lawson, Steve Leifman, Adam Lesser, Timothy W. Lineberry, Amanda L. McMillan, Herbert Y. Meltzer, Michael Craig Miller, Michael J. Miller, James A. Naifeh, Katharine J. Nelson, Charles B. Nemeroff, Alexander Neumeister, Matthew K. Nock, Jennifer H. Olson-Madden, Gregory A. Ordway, Michael W. Otto, Ghanshyam N. Pandey, Giampaolo Perna, Jane Pirkis, Kelly Posner, Anne Rohs, Pedro Ruiz, Molly Ryan, Alan F. Schatzberg, S. Charles Schulz, M. Katherine Shear, Morton M. Silverman, April R. Smith, Marcus Sokolowski, Barbara Stanley, Zachary N. Stowe, Sarah A. Struthers, Leonardo Tondo, Gustavo Turecki, Robert J. Ursano, Kimberly Van Orden, Anne C. Ward, Danuta Wasserman, Jerzy Wasserman, Melinda K. Westlund, Tracy K. Witte, Kseniya Yershova, Alexandra Zagoloff, Sidney Zisook
Edited by Stephen H. Koslow, University of Miami, Pedro Ruiz, University of Miami, Charles B. Nemeroff, University of Miami
Book: A Concise Guide to Understanding Suicide
MALT90: The Millimetre Astronomy Legacy Team 90 GHz Survey
J. M. Jackson, J. M. Rathborne, J. B. Foster, J. S. Whitaker, P. Sanhueza, C. Claysmith, J. L. Mascoop, M. Wienen, S. L. Breen, F. Herpin, A. Duarte-Cabral, T. Csengeri, S. N. Longmore, Y. Contreras, B. Indermuehle, P. J. Barnes, A. J. Walsh, M. R. Cunningham, K. J. Brooks, T. R. Britton, M. A. Voronkov, J. S. Urquhart, J. Alves, C. H. Jordan, T. Hill, S. Hoq, S. C. Finn, I. Bains, S. Bontemps, L. Bronfman, J. L. Caswell, L. Deharveng, S. P. Ellingsen, G. A. Fuller, G. Garay, J. A. Green, L. Hindson, P. A. Jones, C. Lenfestey, N. Lo, V. Lowe, D. Mardones, K. M. Menten, V. Minier, L. K. Morgan, F. Motte, E. Muller, N. Peretto, C. R. Purcell, P. Schilke, Schneider-N. Bontemps, F. Schuller, A. Titmarsh, F. Wyrowski, A. Zavagno
The Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey aims to characterise the physical and chemical evolution of high-mass star-forming clumps. Exploiting the unique broad frequency range and on-the-fly mapping capabilities of the Australia Telescope National Facility Mopra 22 m single-dish telescope, MALT90 has obtained 3′ × 3′ maps towards ~2 000 dense molecular clumps identified in the ATLASGAL 870 μm Galactic plane survey. The clumps were selected to host the early stages of high-mass star formation and to span the complete range in their evolutionary states (from prestellar, to protostellar, and on to H II regions and photodissociation regions). Because MALT90 mapped 16 lines simultaneously with excellent spatial (38 arcsec) and spectral (0.11 km s⁻¹) resolution, the data reveal a wealth of information about the clumps' morphologies, chemistry, and kinematics. In this paper we outline the survey strategy, observing mode, data reduction procedure, and highlight some early science results. All MALT90 raw and processed data products are available to the community. With its unprecedented large sample of clumps, MALT90 is the largest survey of its type ever conducted and an excellent resource for identifying interesting candidates for high-resolution studies with ALMA.
By Ioannis P. Androulakis, Djillali Annane, Gérard Audibert, Lisa L. Barnes, Paolo Bartolomeo, Walter S. Bartynski, David A. Bennett, Nicolas Bruder, Nathan E. Brummel, Steve E. Calvano, Alain Cariou, F. Chretien, Jan Claassen, Colm Cunningham, Souhayl Dahmani, Robert Dantzer, Dimitry S. Davydow, Sanjay V. Desai, E. Wesley Ely, Frédéric Faugeras, Karen J. Ferguson, Brandon Foreman, Sadanand M. Gaikwad, Rebecca F. Gottesman, Maura A. Grega, Richard D. Griffiths, Marion Griton, Stefan D. Gurney, Hebah M. Hefzy, Michael T. Heneka, Dustin M. Hipp, Ramona O. Hopkins, Christopher G. Hughes, James C. Jackson, Christina Jones, Peter W. Kaplan, Keith W. Kelley, Raymond C. Koehler, Matthew A. Koenig, Jan Pieter Konsman, Felix Kork, John P. Kress, Stephen F. Lowry, Alawi Luetz, David Luis, Alasdair M. J. MacLullich, Guy M. McKhann, Jean Mantz, Panteleimon D. Mavroudis, Mervyn Maze, Bruno Mégarbane, Lionel Naccache, Dale M. Needham, Pratik P. Pandharipande, Jean-Francois Payen, V. Hugh Perry, Margaret Pisani, C. Rauturier, Benjamin Rohaut, Jennifer Ryan, Robert D. Sanders, Jeremy D. Scheff, Frederic Sedel, Ola A. Selnes, Tarek Sharshar, Martin Siegemund, Yoanna Skrobik, Jamie W. Sleigh, Romain Sonneville, Claudia D. Spies, Luzius A. Steiner, Robert D. Stevens, Raoul Sutter, Fabio Silvio Taccone, Richard E. Temes, Willem A. van Gool, Christel C. Vanbesien, F. Verdonk, Odile Viltart, Julia Wendon, Catherine N. Widmann, Robert S. Wilson
Edited by Robert D. Stevens, Tarek Sharshar, E. Wesley Ely, Vanderbilt University, Tennessee
Book: Brain Disorders in Critical Illness
Print publication: 19 September 2013, pp viii-xii
Characterisation of the MALT90 Survey and the Mopra Telescope at 90 GHz
J. B. Foster, J. M. Rathborne, P. Sanhueza, C. Claysmith, J. S. Whitaker, J. M. Jackson, J. L. Mascoop, M. Wienen, S. L. Breen, F. Herpin, A. Duarte-Cabral, T. Csengeri, Y. Contreras, B. Indermuehle, P. J. Barnes, A. J. Walsh, M. R. Cunningham, T. R. Britton, M. A. Voronkov, J. S. Urquhart, J. Alves, C. H. Jordan, T. Hill, S. Hoq, K. J. Brooks, S. N. Longmore
Published online by Cambridge University Press: 10 July 2013, e038
We characterise the Millimetre Astronomy Legacy Team 90 GHz Survey (MALT90) and the Mopra telescope at 90 GHz. We combine repeated position-switched observations of the source G300.968+01.145 with a map of the same source in order to estimate the pointing reliability of the position-switched observations and, by extension, the MALT90 survey; we estimate our pointing uncertainty to be 8 arcsec. We model the two strongest sources of systematic gain variability as functions of elevation and time-of-day and quantify the remaining absolute flux uncertainty. Corrections based on these two variables reduce the scatter in repeated observations from 12%–25% down to 10%–17%. We find no evidence for intrinsic source variability in G300.968+01.145. For certain applications, the corrections described herein will be integral for improving the absolute flux calibration of MALT90 maps and other observations using the Mopra telescope at 90 GHz.
By Janine B. Adams, Kirsten B. Barnes, Guy C. Bate, Greg A. Botha, Meyrick B. Bowker, Sarah J. Bownes, Nicola K. Carrasco, Clinton P. Chrystal, Robynne A. Chrystal, Xander Combrink, Allan D. Connell, Digby P. Cyrus, Colleen T. Downs, William N. Ellery, Anthony T. Forbes, Nicolette T. Forbes, Caroline Fox, Nuette Gordon, Michael C. Grenfell, Suzanne E. Grenfell, Sylvi Haldorsen, Marc S. Humphries, Hendrik L. Jerling, Bruce E. Kelbe, C. Fiona MacKay, Christopher M. Maine, Andrew Z. Maro, Andrew A. Mather, Nelson A. F. Miranda, David G. Muir, Holly A. Nel, Sibulele Nondoda, Renzo Perissinotto, Deena Pillay, Naomi Porat, Roger N. Porter, Sean N. Porter, Justin J. Pringle, Ursula M. Scharler, Derek D. Stretch, Ricky H. Taylor, Jane Turpie, Jonathan K. Warner, Alan K. Whitfield
Edited by Renzo Perissinotto, University of KwaZulu-Natal, South Africa, Derek D. Stretch, University of KwaZulu-Natal, South Africa, Ricky H. Taylor
Book: Ecology and Conservation of Estuarine Ecosystems
Print publication: 16 May 2013, pp xiii-xvi
Science with the Murchison Widefield Array
Judd D. Bowman, Iver Cairns, David L. Kaplan, Tara Murphy, Divya Oberoi, Lister Staveley-Smith, Wayne Arcus, David G. Barnes, Gianni Bernardi, Frank H. Briggs, Shea Brown, John D. Bunton, Adam J. Burgasser, Roger J. Cappallo, Shami Chatterjee, Brian E. Corey, Anthea Coster, Avinash Deshpande, Ludi deSouza, David Emrich, Philip Erickson, Robert F. Goeke, B. M. Gaensler, Lincoln J. Greenhill, Lisa Harvey-Smith, Bryna J. Hazelton, David Herne, Jacqueline N. Hewitt, Melanie Johnston-Hollitt, Justin C. Kasper, Barton B. Kincaid, Ronald Koenig, Eric Kratzenberg, Colin J. Lonsdale, Mervyn J. Lynch, Lynn D. Matthews, S. Russell McWhirter, Daniel A. Mitchell, Miguel F. Morales, Edward H. Morgan, Stephen M. Ord, Joseph Pathikulangara, Thiagaraj Prabu, Ronald A. Remillard, Timothy Robishaw, Alan E. E. Rogers, Anish A. Roshi, Joseph E. Salah, Robert J. Sault, N. Udaya Shankar, K. S. Srivani, Jamie B. Stevens, Ravi Subrahmanyan, Steven J. Tingay, Randall B. Wayth, Mark Waterson, Rachel L. Webster, Alan R. Whitney, Andrew J. Williams, Christopher L. Williams, J. Stuart B. Wyithe
Published online by Cambridge University Press: 16 April 2013, e031
Significant new opportunities for astrophysics and cosmology have been identified at low radio frequencies. The Murchison Widefield Array is the first telescope in the southern hemisphere designed specifically to explore the low-frequency astronomical sky between 80 and 300 MHz with arcminute angular resolution and high survey efficiency. The telescope will enable new advances along four key science themes, including searching for redshifted 21-cm emission from the EoR in the early Universe; Galactic and extragalactic all-sky southern hemisphere surveys; time-domain astrophysics; and solar, heliospheric, and ionospheric science and space weather. The Murchison Widefield Array is located in Western Australia at the site of the planned Square Kilometre Array (SKA) low-band telescope and is the only low-frequency SKA precursor facility. In this paper, we review the performance properties of the Murchison Widefield Array and describe its primary scientific objectives. | CommonCrawl |
APPLICATION OF EIGENVALUES AND EIGENVECTORS IN STATISTICS
Practical Uses for Eigenvalues Physics Forums. Eigenvalues and Eigenvectors. The eigenvalues of a given n x n matrix are the n numbers which summarize the essential properties of that matrix. Statistics; Math;, Home > Linear Algebra > Understanding matrices intuitively, part 2, eigenvalues and eigenvectors Understanding matrices intuitively, part 2, eigenvalues and.
Understanding Eigenvectors and Eigenvalues Visually Alyssa
What are the applications of Eigenvalue and eigenvector in. Functions & Statistics; Pre In this video lesson we will learn about Eigenvalues and Eigenvectors. in how car designers analyze eigenvalues in order to damp, Chapter 8 Eigenvalues basic theory of eigenvalues and eigenvectors, broad range of modern applications, including statistics,.
The Eigen-Decomposition: Eigenvalues and Eigenvectors Eigenvectors and eigenvalues are also referred A type of matrices used very often in statistics are 17/06/2013 · introduction to Eigenvalues and Eigenvectors Statistics; Add translations A physical example of application of eigenvalues and eigenvectors
Statistics 5101 (Geyer, Spring 2016) Eigenvalues and Eigenvectors. Diagonal elements of D in the spectral decomposition are called eigenvalues of M. Group Comparison of Eigenvalues and Eigenvectors of Diffusion Tensors Voxelwise application of the test statistics leads to a Eigenvalues and Eigenvectors of
Linear Algebra, Theory and Applications was written by Dr 7.1 Eigenvalues And Eigenvectors Of A Matrix 13.7 An Application To Statistics Eigenvalues and eigenvectors give rise to many closely in the equation above is understood to be the vector obtained by application of the in statistics.
Eigenvectors: Properties, Application & Example. we solved a system of linear differential equations using eigenvalues and eigenvectors. Statistics Engineering geology application in civil engineering? Remember that for the eigenvalues (k) and eigenvectors (v) of a matrix (M) the Statistics; Steam Engines;
Statistics Notes Eigenvalues in a Nutshell 1 A real Eigenvalues and eigenvectors come in conjugate pairs. 9. 15/05/2009 · I am trying to get some intuition for Eigenvalues/Eigenvectors. One real-life application appears to be a representation of resonance. What are some...
The following sections provide links to our complete lessons on all Linear Algebra Applications of Linear Systems and Linear Eigenvalues and Eigenvectors. In particular we will consider the computation of the eigenvalues and eigenvectors of a symmetric going to have p eigenvalues, of Statistics Online Programs
1/10/2014 · Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege Linear Algebra, Theory and Applications was written by Dr 7.1 Eigenvalues And Eigenvectors Of A Matrix 13.7 An Application To Statistics
The following sections provide links to our complete lessons on all Linear Algebra Applications of Linear Systems and Linear Eigenvalues and Eigenvectors. 1/10/2014 · Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege
Chapter 8 Eigenvalues IITK. The Eigen-Decomposition: Eigenvalues and Eigenvectors Eigenvectors and eigenvalues are also referred A type of matrices used very often in statistics are, Eigenvectors of repeated eigenvalues. Hello friends, today it's all about the eigenvectors of repeated eigenvalues. Have a look!! Eigenvectors of repeated.
Eigenvalues and Eigenvectors of Asymmetric Matrices
application of eigen value n eigen vector Eigenvalues. 3 Eigenvalues, Eigenvectors, in most fields of applied mathematics including statistics, image The eigenvalues and eigenvectors of a matrix play a role, Eigenvalues and Eigenvectors: theoretical interest and wide-ranging application. and statistics have focused considerable attention on "eigenvalues.
The Eigen-Decomposition Eigenvalues and Eigenvectors
4 Detailed Examples on How to Find Eigenvectors. Eigenvectors of repeated eigenvalues. Hello friends, today it's all about the eigenvectors of repeated eigenvalues. Have a look!! Eigenvectors of repeated https://en.wikipedia.org/wiki/Eigenfunction Overview. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen-is adopted from the German word eigen for.
Eigenvalues and Eigenvectors of Asymmetric Matrices. If is a square but asymmetric real matrix the eigenvector-eigenvalue situation becomes quite different from the 17/06/2013 · introduction to Eigenvalues and Eigenvectors Statistics; Add translations A physical example of application of eigenvalues and eigenvectors
3 Eigenvalues, Eigenvectors, in most fields of applied mathematics including statistics, image The eigenvalues and eigenvectors of a matrix play a role Eigenvectors of repeated eigenvalues. Hello friends, today it's all about the eigenvectors of repeated eigenvalues. Have a look!! Eigenvectors of repeated
Overview. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen-is adopted from the German word eigen for Eigenvalues and Eigenvectors are important in the study of covariance matrix structure in statistics. Some of the examples are as follows: The Principal Component
Eigenvalues and Eigenvectors and their ApplicationsBy What are the application of eigenvectors and eigenvalues ? Application of Eigenvalues and Eigenvectors Eigenvalues and Eigenvectors are important in the study of covariance matrix structure in statistics. Some of the examples are as follows: The Principal Component
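Where the text above points at principal components: the eigenvectors of a data set's covariance matrix give the principal directions, and the eigenvalues give the variance along them. A minimal NumPy sketch of that idea (the data below are made up purely for illustration):

```python
import numpy as np

# Toy data: 200 correlated 2-D observations (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=200)
data = np.column_stack([x, 0.6 * x + 0.3 * rng.normal(size=200)])

cov = np.cov(data, rowvar=False)          # 2x2 sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # covariance is symmetric, so eigh is appropriate

# Sort from largest to smallest eigenvalue: the first column of eigvecs
# is then the first principal direction.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()       # fraction of total variance per component
print(eigvals, explained)
```

The eigenvalue sorting is only a convention, but it is what makes "the first principal component" a well-defined phrase.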
Interpreting Eigenvalues of Here is a quote from "what is the application of eigenvalues in statistics?": "If you calculate the eigenvectors of a covariance Overview. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen-is adopted from the German word eigen for
Chapter 8 Eigenvalues basic theory of eigenvalues and eigenvectors, broad range of modern applications, including statistics, Linear Algebra, Theory and Applications was written by Dr 7.1 Eigenvalues And Eigenvectors Of A Matrix 13.7 An Application To Statistics
24/09/2013 · Statistics; Add translations A physical example of application of eigenvalues and eigenvectors - Duration: Eigenvalues and eigenvectors made easy Real life examples for eigenvalues It seems like you want to understand an application of eigenvectors and eigenvalues that is simpler than the definition
Engineering geology application in civil engineering? Remember that for the eigenvalues (k) and eigenvectors (v) of a matrix (M) the Statistics; Steam Engines; 1/10/2014В В· Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege
introduction to Eigenvalues and Eigenvectors YouTube
statistics Interpreting Eigenvalues of Co-variance. Eigenvalues and eigenvectors prominently More examples of real-world applications of eigenvectors and eigenvalues can be linear algebra matrices R statistics., Relative Eigenvalues and Eigenvectors Less important in statistics are minimization properties: Relative Eigenvectors and Eigenvalues Application to one-way.
Eigenvalues and eigenvectors an overview ScienceDirect. Principal component analysis Depending on the field of application, The eigenvalues and eigenvectors are ordered and paired., Group Comparison of Eigenvalues and Eigenvectors of Diffusion Tensors Voxelwise application of the test statistics leads to a Eigenvalues and Eigenvectors of.
Eigenvectors of repeated eigenvalues. Hello friends, today it's all about the eigenvectors of repeated eigenvalues. Have a look!! Eigenvectors of repeated Eigenvectors: Properties, Application & Example. we solved a system of linear differential equations using eigenvalues and eigenvectors. Statistics
Using Eigenvectors to Find Steady State Population Flows. The way to see that is by examining A's eigenvalues and eigenvectors. Statistics; Data Science; Eigenvectors and eigenvalues are used in many engineering problems and have applications in object recognition, edge detection in diffusion MRI images, moments of
Eigenvalues and Eigenvectors many applications in the physical sciences. 2 = 3 are the eigenvalues of A. Eigenvectors v = 2 6 4 v 1 v 2 v 3 3 7 Chapter 8 Eigenvalues basic theory of eigenvalues and eigenvectors, broad range of modern applications, including statistics,
Using Eigenvectors to Find Steady State Population Flows. The way to see that is by examining A's eigenvalues and eigenvectors. Statistics; Data Science; Eigenvalues and Eigenvectors Applications of Eigenvalues and Eigenvectors Radboud University Nijmegen Matrix Calculations: Eigenvalues and Eigenvectors statistics
Eigenvalues and Eigenvectors and their ApplicationsBy What are the application of eigenvectors and eigenvalues ? Application of Eigenvalues and Eigenvectors Statistics 910, #7 1 Eigenvectors and Eigenvalues of Stationary Processes Overview 1. Toeplitz matrices 2. Szeg o's theorem 3. Circulant matrices
Home > Linear Algebra > Understanding matrices intuitively, part 2, eigenvalues and eigenvectors Understanding matrices intuitively, part 2, eigenvalues and Home > Linear Algebra > Understanding matrices intuitively, part 2, eigenvalues and eigenvectors Understanding matrices intuitively, part 2, eigenvalues and
Statistics & Probability Letters 3 (1985) 95-96 North-Holland EIGENVALUE-EIGENVECTOR ANALYSIS FOR A CLASS OF PATrERNED CORRELATION MATRICES WITH AN APPLICATION: A In particular we will consider the computation of the eigenvalues and eigenvectors of a symmetric going to have p eigenvalues, of Statistics Online Programs
Eigenvalues and eigenvectors give rise to many closely in the equation above is understood to be the vector obtained by application of the in statistics. Eigenvalues and Eigenvectors and their ApplicationsBy What are the application of eigenvectors and eigenvalues ? Application of Eigenvalues and Eigenvectors
Eigenvalues and Eigenvectors many applications in the physical sciences. 2 = 3 are the eigenvalues of A. Eigenvectors v = 2 6 4 v 1 v 2 v 3 3 7 Eigenvalues and the application of it in statistics, etc.), which is why and the amount by which they're scaled are its eigenvalues. In this case, the
In today's pattern recognition class my professor talked about PCA, eigenvectors & eigenvalues. I got the mathematics of it. If I'm asked to find eigenvalues etc. Home > Linear Algebra > Understanding matrices intuitively, part 2, eigenvalues and eigenvectors Understanding matrices intuitively, part 2, eigenvalues and
In today's pattern recognition class my professor talked about PCA, eigenvectors & eigenvalues. I got the mathematics of it. If I'm asked to find eigenvalues etc. Eigenvalues and eigenvectors prominently More examples of real-world applications of eigenvectors and eigenvalues can be linear algebra matrices R statistics.
Chapter 8 Eigenvalues basic theory of eigenvalues and eigenvectors, broad range of modern applications, including statistics, In today's pattern recognition class my professor talked about PCA, eigenvectors & eigenvalues. I got the mathematics of it. If I'm asked to find eigenvalues etc.
24/09/2013В В· Statistics; Add translations A physical example of application of eigenvalues and eigenvectors - Duration: Eigenvalues and eigenvectors made easy Functions & Statistics; Pre In this video lesson we will learn about Eigenvalues and Eigenvectors. in how car designers analyze eigenvalues in order to damp
4.5 Eigenvalues and Eigenvectors STAT 505 - Statistics
Eigenvectors & Eigenvalues Stack Exchange. 1/10/2014В В· Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege, Linear Algebra, Theory and Applications was written by Dr 7.1 Eigenvalues And Eigenvectors Of A Matrix 13.7 An Application To Statistics.
What is Linear Algebra? (A quick introduction) Calcworkshop. Eigenvalues and Eigenvectors and their Applications. By Dr. P.K.Sharma Sr. Lecturer in Mathematics D.A.V. College Jalandhar. Email Id: [email protected], The following sections provide links to our complete lessons on all Linear Algebra Applications of Linear Systems and Linear Eigenvalues and Eigenvectors..
Eigenvectors Properties Application & Example Study.com
An $\ell_{\infty}$ Eigenvector Perturbation Bound and Its. What is known about the distribution of eigenvectors of the conditional distribution of eigenvectors of random statistics of the eigenvalues is https://en.wikipedia.org/wiki/Talk%3AEigenvalue%2C_eigenvector_and_eigenspace What is the importance of eigenvalues/eigenvectors? do you have a specific application in mind? Matrices by themselves are just arrays of numbers,.
Statistics 5101 (Geyer, Spring 2016) Eigenvalues and Eigenvectors. Diagonal elements of D in the spectral decomposition are called eigenvalues of M. 1/10/2014 · Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege
What is known about the distribution of eigenvectors of the conditional distribution of eigenvectors of random statistics of the eigenvalues is 24/09/2013 · Statistics; Add translations A physical example of application of eigenvalues and eigenvectors - Duration: Eigenvalues and eigenvectors made easy
Statistics 5101 (Geyer, Spring 2016) Eigenvalues and Eigenvectors. Diagonal elements of D in the spectral decomposition are called eigenvalues of M. Relative Eigenvalues and Eigenvectors Less important in statistics are minimization properties: Relative Eigenvectors and Eigenvalues Application to one-way
In today's pattern recognition class my professor talked about PCA, eigenvectors & eigenvalues. I got the mathematics of it. If I'm asked to find eigenvalues etc. Eigenvalues and Eigenvectors are important in the study of covariance matrix structure in statistics. Some of the examples are as follows: The Principal Component
Statistics 910, #7 1 Eigenvectors and Eigenvalues of Stationary Processes Overview 1. Toeplitz matrices 2. Szegő's theorem 3. Circulant matrices What is the importance of eigenvalues/eigenvectors? do you have a specific application in mind? Matrices by themselves are just arrays of numbers,
Application of Derivatives Example #4 find all Eigenvalues and Eigenvectors for Defintion and Theorem about Diagonalization and Distinct Eigenvalues Eigenvalues, Eigenvectors and Their Uses Eigenvalues and eigenvectors have widespread practical application in multivariate statistics. In
Overview. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen-is adopted from the German word eigen for Eigenvalues and Eigenvectors: theoretical interest and wide-ranging application. and statistics have focused considerable attention on "eigenvalues
Statistics 5101 (Geyer, Spring 2016) Eigenvalues and Eigenvectors. Diagonal elements of D in the spectral decomposition are called eigenvalues of M. Eigenvalues and Eigenvectors and their Applications. By Dr. P.K.Sharma Sr. Lecturer in Mathematics D.A.V. College Jalandhar. Email Id: [email protected]
24/09/2013 · Statistics; Add translations A physical example of application of eigenvalues and eigenvectors - Duration: Eigenvalues and eigenvectors made easy The following sections provide links to our complete lessons on all Linear Algebra Applications of Linear Systems and Linear Eigenvalues and Eigenvectors.
1/10/2014 · Learn a physical example of application of eigenvalues and eigenvectors. For more videos and resources on this topic, please visit http://ma.mathforcollege Overview. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations. The prefix eigen- is adopted from the German word eigen for
This slide covers really greately applications regarding eigenvalues. Eigenvalues in a nutshell Eigenvalues in a A real Eigenvalues and eigenvectors come in The Eigen-Decomposition: Eigenvalues and Eigenvectors Eigenvectors and eigenvalues are also referred A type of matrices used very often in statistics are
Eigenvalues and Eigenvectors and their ApplicationsBy What are the application of eigenvectors and eigenvalues ? Application of Eigenvalues and Eigenvectors Engineering geology application in civil engineering? Remember that for the eigenvalues (k) and eigenvectors (v) of a matrix (M) the Statistics; Steam Engines;
24/09/2013 · Statistics; Add translations A physical example of application of eigenvalues and eigenvectors - Duration: Eigenvalues and eigenvectors made easy Using Eigenvectors to Find Steady State Population Flows. The way to see that is by examining A's eigenvalues and eigenvectors. Statistics; Data Science;
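To make the steady-state remark above concrete: for a column-stochastic transition matrix, the steady-state population distribution is the eigenvector belonging to the eigenvalue 1. A small sketch under that assumption (the transition matrix entries below are invented for illustration):

```python
import numpy as np

# Hypothetical 3-region migration matrix: column j gives where region j's
# population goes each year (columns sum to 1, i.e. column-stochastic).
A = np.array([[0.90, 0.05, 0.10],
              [0.07, 0.90, 0.05],
              [0.03, 0.05, 0.85]])

eigvals, eigvecs = np.linalg.eig(A)

# The steady state is the eigenvector for the eigenvalue closest to 1,
# rescaled so its entries sum to 1 (a population distribution).
k = np.argmin(np.abs(eigvals - 1.0))
steady = np.real(eigvecs[:, k])
steady /= steady.sum()
print(steady)                      # np.allclose(A @ steady, steady) holds
```

Repeatedly applying A to any starting distribution converges to this vector, which is why the dominant eigenvector describes the long-run flow.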
Eigenvalues and Eigenvectors many applications in the physical sciences. 2 = 3 are the eigenvalues of A. Eigenvectors v = 2 6 4 v 1 v 2 v 3 3 7 Describes how to use Schur's decomposition to find all the real eigenvalues and eigenvectors in Excel even for non-symmetric matrices.
Interpreting Eigenvalues of Here is a quote from "what is the application of eigenvalues in statistics?": "If you calculate the eigenvectors of a covariance Eigenvalues and Eigenvectors: theoretical interest and wide-ranging application. and statistics have focused considerable attention on "eigenvalues
The following sections provide links to our complete lessons on all Linear Algebra Applications of Linear Systems and Linear Eigenvalues and Eigenvectors. Eigenvalues and Eigenvectors: attention on "eigenvalues" and "eigenvectors"-their applications and their To application of eigen value n eigen vector.
Relocation to Mars
One billion years into the future the Sun has swollen in size, and it is no longer possible to live on Earth due to the heat. Mankind has relocated to Mars, where the temperature is now more favorable than on Earth. My question is: how much time does this buy us before we can't live on Mars any longer because the Sun is dying? The next step, I guess, will be to relocate to another solar system?
the-sun
Peter U
It would be easier to shield Earth from radiation IMO. – Mithoron Feb 10 '15 at 20:59
As it gets older, the core of the Sun starts to fill with Helium ash. This increases the average mass per particle and hence the core temperature must increase to maintain the pressure. This increases the nuclear reaction rate and the Sun becomes more luminous, at almost a constant surface temperature.
The habitable zone is controlled not only by the luminosity of the star, but also by the atmosphere of the planet. It's doubtful Mars would ever become "habitable" in that sense (without our intervention), but what I will assume is that you want the equilibrium temperature to be warmer than 263K, but say cooler than 303K (i.e. between -10 and 30 Celsius).
The details of finding the blackbody equilibrium temperature can be found here. The formula we need is $$ T = \left(\frac {L_{\odot} (1-a)}{16\pi \sigma D^2}\right)^{1/4},$$ where $L_{\odot}$ is the luminosity of the Sun at any time, $D$ is the distance to the planet, $a$ is the albedo and $\sigma$ is the Stefan-Boltzmann constant. $T$ is in Kelvin.
I will assume that the average albedo of Mars is 0.25 (though it varies considerably with wavelength, depends on icecap coverage etc.) and that $D=2.27\times10^{11}\ \mathrm{m}$. We can then rearrange the formula above to give the luminosity of the Sun for a given equilibrium temperature. $$L_{\odot} = 21.3\ \pi \sigma D^2 T^4$$ This means that for $T>263\ \mathrm{K}$ we need $L_{\odot}>9.35\times 10^{26}\ \mathrm{W}$, but for $T<303\ \mathrm{K}$ we require $L_{\odot}<1.65\times10^{27}\ \mathrm{W}$. That is, the Sun's luminosity should be somewhere between 2.44 and 4.30 times its current luminosity.
The next step is to look at a stellar evolutionary model for a star like the Sun. You can generate one here. I find that the Sun will have a luminosity in this range from ages 8.9 billion (at the cool end) to 10.0 billion years (at the hot end).
Obviously you can play with the numbers (upper and lower temperature bound, albedo) to get different answers (the big assumption was to just use the equilibrium temperature, but an atmosphere could warm things up a bit), but you should be able to follow this prescription using whatever numbers you wish.
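For anyone who wants to play with the numbers, here is a minimal numerical sketch of the prescription above (assuming the same albedo, Mars-Sun distance and temperature bounds as in the answer; change them to get different ranges):

```python
import math

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN_NOW = 3.828e26      # present solar luminosity, W
D_MARS = 2.27e11          # Sun-Mars distance, m (value assumed in the text)
ALBEDO = 0.25             # Mars albedo assumed in the text

def luminosity_for_temperature(T, D=D_MARS, a=ALBEDO):
    """Solar luminosity giving blackbody equilibrium temperature T (K) at distance D."""
    return 16 * math.pi * SIGMA * D**2 * T**4 / (1 - a)

L_cool = luminosity_for_temperature(263)   # ~9.4e26 W
L_hot = luminosity_for_temperature(303)    # ~1.65e27 W
print(L_cool / L_SUN_NOW, L_hot / L_SUN_NOW)   # roughly 2.4x and 4.3x present luminosity
```

Feeding these luminosity ratios into a solar evolution track then gives the age window quoted above.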
Incidentally, at the "cool end" of the Mars calculation - the Earth's equilibrium temperature would be 315K (42 Celsius).
ProfRob
The Sun is not dying when the Earth supposedly becomes uninhabitable; it is just getting hotter as it evolves on the main sequence (core hydrogen burning; the Sun has something like 5+ billion years before reaching the red giant stage of its evolution). It will continue to get more luminous, and at a guess, if Mars could be made habitable, it would remain so for millions, possibly another billion, years. But then, do you really think humanity (or rather our descendant civilizations), if it survives for another billion years, will need a planetary surface to survive?
Conrad Turner
Yes, I am aware of this also, that it would be simpler for us just to move into an artificial habitat in space, but don't you think that it would be nicer to live on a planet? – Peter U Feb 10 '15 at 20:49
One billion years into the future and the Sun has swollen in size and it is now not possible to live on Earth due to the heat.
The problem isn't so much the Sun swelling in size. The key problems are that the Sun's luminosity increases over time and that the Earth is covered with oceans. The Earth will become uninhabitable long before the Sun turns into a red giant. Ever increasing luminosity will result in an increased surface temperature, which in turn will result in increased atmospheric water vapor. Right now this isn't a problem because increased atmospheric water vapor means more clouds, which increases albedo, which counters the effect of increased luminosity.
There is another effect to consider, however. Water vapor is an extremely powerful greenhouse gas. At some point, increased surface temperature will increase atmospheric water vapor to the point where the increased greenhouse effect increases the surface temperature, which increases vaporization even more, and so on. A loop! And a loop of the very worst kind, a positive feedback loop.
When this will happen is subject to debate, perhaps as short as 500 million years (Goldblatt 2013), perhaps a billion years (Kasting 1988), or perhaps as long as 1.5 to 2 billion years (Leconte 2013) in the future.
Mankind has relocated to Mars where the temperature is more favorable now than on Earth.
Speculating what humanity will do or become 500 million years from now is a bit much. It's not science; nobody will live to see whether there speculations turn out to be true. But it is a bit fun. Why would we relocate to Mars? Why not relocate Earth instead (Korycansky 2001)? We might eventually have to worry about Mars getting in the way. In that case, simply move Mars, too!
Goldblatt, Colin, et al (2013), "Low simulated radiation limit for runaway greenhouse climates," Nature Geoscience, 6.8:661-667.
Kasting, James F. (1988), "Runaway and moist greenhouse atmospheres and the evolution of Earth and Venus," Icarus, 74.3:472-494.
Korycansky, D. G., Gregory Laughlin, and Fred C. Adams (2001), "Astronomical engineering: a strategy for modifying planetary orbits," Astrophysics and Space Science, 275.4:349-366.
Leconte, Jérémy, et al. (2013), "Increased insolation threshold for runaway greenhouse processes on Earth-like planets," Nature, 504.7479:268-271.
David Hammen
Yes, this would be so much better to just knock Mars out of the way and move the Earth into its orbit farther from the Sun. How would this be done though? – Peter U Feb 10 '15 at 20:51
@PeterU - Read the referenced article. The answer to your question is there. – David Hammen Feb 10 '15 at 21:26
If you postulate that something that began as human is still around in 1,000,000 years, I would also guess (going well off the topic of astronomy, though not of exploration) that "people" will be safely, basically immortal in virtual worlds and will have sought out the safest place to do their thinking and playing. Perhaps they will have headed outside the galaxy, or out of the plane of the galaxy, to wherever the least junk is flying about.
C. Towne Springer
+1 I doubt the outside of the galaxy though. Two things to consider here. One is the strong radiation from the center of the galaxy, which is shielded by the stars in the plane of rotation, so the plane is the safest place. The other is entropy. Self-evolving systems can exist only by dissipating concentrated energy from a low-entropy source. This is the reason life evolved on the Earth as the right balance between the Sun and space. The same principle on a larger scale would keep the super-intelligent creatures of the future somewhere inside the galaxy(s) in the best entropy balance. – Victor Storm Feb 23 '18 at 0:44
How do I solve such questions on paramagnetism and ferromagnetism?
Match the type of magnetism given in Group I with the material given in Group II:
$$ \begin{array}{ll|ll} \hline & \textbf{Group I} & & \textbf{Group II} \\ \hline \text{P} & \text{Ferromagnetic} & 1 & \text{Nickel oxide} \\ \text{Q} & \text{Ferrimagnetic} & 2 & \text{Sodium} \\ \text{R} & \text{Antiferromagnetic} & 3 & \text{Magnetite} \\ \text{S} & \text{Paramagnetic} & 4 & \text{Cobalt} \\ \hline \end{array} $$
A) P-4, Q-3, R-1, S-2
B) P-4, Q-1, R-3, S-2
C) P-1, Q-2, R-4, S-3
D) P-3, Q-2, R-1, S-4
It was taken from a mock test sheet for GATE in India. With regard to whether a material is ferromagnetic or paramagnetic, all that I know is to check the magnetic domains and decide. However, with such compounds I have been facing extreme difficulties. Is this something that is purely experimental and that I simply need to memorize? Or is there a certain intrinsic property of the elements which can help me to decide? If yes, please explain how to solve such questions.
inorganic-chemistry physical-chemistry magnetism
$\begingroup$ Ferromagnetism is not something you can deduce from first principles. $\endgroup$
$\begingroup$ Basically, you have to memorize which is which. $\endgroup$
I believe GATE is a university entrance exam in India, so they will not expect you to solve an extremely complicated equation to predict ferromagnetism or to memorize an infinite list of substances. Sadly, there is some element of rote memorization still lingering like a pest in the educational testing system.
The good news for you is that relatively few compounds and metals show ferromagnetism or ferrimagnetism. Find out the list of ferromagnetic metals from the web; you might end up with very few elements and oxides, such as iron compounds and iron-oxide-containing materials. Later you can attempt the multiple choice questions by the process of elimination. For example, in the given choices, choose the answers which show that magnetite is ferrimagnetic.
Lo and behold, there is only ONE choice which matches with Q-3!
M. Farooq
Historically, the term ferromagnetism was used for any material that could exhibit spontaneous magnetization: a net magnetic moment in the absence of an external magnetic field (Wikipedia). However, in 1948, Louis Néel showed that there are two levels of magnetic alignment that result in this behavior (Ref.1):
One is ferromagnetism in the strict sense, where all the magnetic moments are aligned. A few examples of this type are $\ce{Fe}$, $\ce{Co}$, and $\ce{Ni}$.
The other is ferrimagnetism, where some magnetic moments point in the opposite direction but have a smaller contribution, so there is still a spontaneous magnetization. The oldest known magnetic substance, magnetite ($\ce{Fe3O4}$), which contains both iron(II) and iron(III) oxides, is a well-known ferrimagnet ($\ce{Fe3O4}$ was originally classified as a ferromagnet before Néel's discovery). The other cubic ferrites composed of iron oxides with other elements such as $\ce{Mg}$ (e.g., $\ce{MgOFe2O3}$), $\ce{Cu}$ (e.g., $\ce{CuOFe2O3}$), and $\ce{Ni}$ (e.g., $\ce{NiOFe2O3}$) are also examples of ferrimagnets.
Néel had also discovered a third level of magnetism called antiferromagnetism: in the special case where the opposing magnetic moments balance completely, the alignment is known as antiferromagnetism, and antiferromagnets do not have a spontaneous magnetization. Well-known antiferromagnets are the common iron oxide hematite ($\ce{Fe2O3}$), transition metal oxides such as nickel oxide ($\ce{NiO}$), and alloys such as iron manganese ($\ce{FeMn}$).
All three of these types of magnetism have a critical temperature, above which the material becomes paramagnetic. For ferromagnetism and ferrimagnetism it is called the Curie temperature, while for antiferromagnetism it is the Néel temperature.
Thus, M. Farooq gave an excellent way to answer your question for novices. Most known ferromagnets are metals, and metal oxides are usually ferrimagnets, except for $\ce{NiO}$, which is a known antiferromagnet. You have two metals and two oxides in the right-hand column, so now it is easy to figure out.
M. Louis Néel, "Propriétés magnétiques des ferrites ; ferrimagnétisme et antiferromagnétisme (Magnetic properties of ferrites: ferrimagnetism and antiferromagnetism)," Annales de Physique (Paris) 1948, 12(3), 137-198 (https://doi.org/10.1051/anphys/194812030137).
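If you want the process of elimination spelled out mechanically, here is a minimal Python sketch; the classifications are hard-coded from the discussion above and the options are copied from the question, so it is only a bookkeeping aid, not chemistry.

# Classifications taken from the answer above.
known = {"Nickel oxide": "antiferromagnetic", "Sodium": "paramagnetic",
         "Magnetite": "ferrimagnetic", "Cobalt": "ferromagnetic"}
group2 = {1: "Nickel oxide", 2: "Sodium", 3: "Magnetite", 4: "Cobalt"}
types = {"P": "ferromagnetic", "Q": "ferrimagnetic",
         "R": "antiferromagnetic", "S": "paramagnetic"}

options = {"A": {"P": 4, "Q": 3, "R": 1, "S": 2},
           "B": {"P": 4, "Q": 1, "R": 3, "S": 2},
           "C": {"P": 1, "Q": 2, "R": 4, "S": 3},
           "D": {"P": 3, "Q": 2, "R": 1, "S": 4}}

for name, assignment in options.items():
    # An option survives only if every pairing matches the known classification.
    ok = all(known[group2[num]] == types[letter]
             for letter, num in assignment.items())
    print(name, "consistent" if ok else "ruled out")   # only option A comes out consistent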
Mathew Mahindaratne
Simultaneous Uniformization Theorem October 23, 2010
Tags: geometry, topology
The other day, in the graduate student talks, Subhojoy was talking about the Simultaneous Uniformization Theorem. It was a nice treat because I used to be really into geometric topology (or at least as much as an undergrad with way too little background could.)
The big reveal is

$$QF(S) \simeq T(S) \times T(\bar{S}),$$

but most of the talk, naturally, goes into defining what those letters mean.
The Riemann Mapping Theorem says that any simply connected open subset of the complex plane, other than the plane itself, is conformally equivalent to the disc $\mathbb{D}$. Conformal maps are angle-preserving; in practice, they're holomorphic functions with derivative everywhere nonzero. Conformal maps take small round circles to approximately round circles.
A Riemann Surface is a topological surface with conformal structure: a collection of charts to the complex plane such that the transition maps are conformal.
The first version of the Uniformization Theorem says that any simply-connected Riemann surface is conformally equivalent to the Riemann sphere, the complex plane, or the unit disc (exactly one of the three; they are not conformally equivalent to each other).
The second, more general version of the Uniformization Theorem says that any Riemann surface of genus $g \geq 2$ is conformally equivalent to $\mathbb{H}/\Gamma$, where $\mathbb{H}$ is the hyperbolic plane and $\Gamma$ is a discrete subgroup of $\mathrm{PSL}(2,\mathbb{R})$.
To understand this better, we should observe more about the universal cover of a Riemann surface. This is, of course, simply connected. Its deck transformations are conformal automorphisms of the disc. But it can be proven that conformal automorphisms of the disc are precisely the Mobius transformations preserving the disc, i.e. functions of the form

$$z \mapsto e^{i\theta}\,\frac{z-a}{1-\bar{a}z}, \qquad |a| < 1.$$

This implies that the automorphism group of $\mathbb{D}$ is $\mathrm{PSL}(2,\mathbb{R})$.

Now observe that there's a model of the hyperbolic plane on the disc, by assigning the metric

$$ds^2 = \frac{4\,|dz|^2}{(1-|z|^2)^2}.$$
And, if you were to check, it would turn out that conformal transformations on the disc preserve this metric.
So it begins to make sense; Riemann surfaces are conformally equivalent to their universal covering space, modulo some group of relations, a subgroup of the group of deck transformations of the universal cover.
Discrete subgroups $\Gamma \subset \mathrm{PSL}(2,\mathbb{R})$ are called Fuchsian groups — these define which Riemann surface we're talking about, up to a conformal transformation.

Now we can define Fuchsian space as

$$\mathcal{F}(S) = \{\text{discrete, faithful representations } \pi_1(S) \to \mathrm{PSL}(2,\mathbb{R})\}.$$

It's a set of maps from the fundamental group of a surface to $\mathrm{PSL}(2,\mathbb{R})$.
And we can define Teichmuller space as the space of marked conformal structures on the surface S.
This is less enormously huge than you might think, because we consider these up to an equivalence relation. If $\sigma_1$ and $\sigma_2$ are conformal structures, and there exists a conformal map $f : (S, \sigma_1) \to (S, \sigma_2)$ isotopic to the identity, then we consider $\sigma_1$ and $\sigma_2$ equivalent structures.

In fact, Teichmuller space is not that enormously huge: for a closed surface of genus $g \geq 2$ it is homeomorphic to $\mathbb{R}^{6g-6}$. It turns out that Teichmuller space is completely determined by what happens to the boundary circles in a pair of pants decomposition of the surface.
Here's a picture of a pair of pants (aka a three-punctured sphere):
Here's a picture of a decomposition of a Riemann surface into pairs of pants:
(Here's a nice article demonstrating the fact. It's actually not as hard as it looks.)
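To put a number on the pair-of-pants remark (a standard count, not something from the talk itself): a closed surface of genus $g \geq 2$ is cut into $2g-2$ pairs of pants by $3g-3$ disjoint curves, and each curve carries a length and a twist parameter, which recovers the dimension of Teichmuller space:

$$\dim T(S) = \underbrace{(3g-3)}_{\text{lengths}} + \underbrace{(3g-3)}_{\text{twists}} = 6g-6.$$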
Now we generalize to Quasi-Fuchsian spaces. For this, we'll be working with hyperbolic 3-space instead of 2-space. The (orientation-preserving) isometries of hyperbolic 3-space happen to be $\mathrm{PSL}(2,\mathbb{C})$.

Instead of a Poincare Disc Model, we have a ball model; again, $\mathrm{PSL}(2,\mathbb{C})$ acts by Mobius transformations, functions of the form $z \mapsto \frac{az+b}{cz+d}$ on the boundary sphere.

A quasiconformal function takes, infinitesimally, circles to ellipses. It's like a conformal map, but with a certain amount of distortion. The Beltrami coefficient defines how much distortion:

$$\mu(z) = \frac{\partial f/\partial \bar{z}}{\partial f/\partial z}.$$

Quasi-Fuchsian space, QF(S), is the set of all quasiconformal deformations of a Fuchsian representation. In other words, this is the space of all representations to $\mathrm{PSL}(2,\mathbb{C})$ preserving a topological circle on the boundary sphere.
Now, the Simultaneous Uniformization Theorem can be stated:
the Quasi-Fuchsian space of a surface is isomorphic to the product of two Teichmuller spaces of the surface.
One application of this theorem is to hyperbolic 3-manifolds.
If $M$ is a hyperbolic 3-manifold, and if $\pi_1(M) \cong \pi_1(S)$ for a closed surface $S$, then $M \simeq S \times \mathbb{R}$.
In other words, we can think of a hyperbolic three-manifold as the real line, with a Riemann surface at each point — you can only sort of visualize this, as it's not embeddable in 3-space.
The Simultaneous Uniformization Theorem implies that there is a hyperbolic metric on this 3-manifold for any choice of conformal structure at infinity.
This contrasts with the Mostow Rigidity Theorem, which states that a closed 3-manifold has at most one hyperbolic structure.
Together, these statements imply that any hyperbolic metric on $S \times \mathbb{R}$ is determined uniquely by the choice of conformal structures at infinity.
Persistent homology (and geometry?) June 13, 2010
Tags: Gunnar Carlsson, persistent homology, topology
I read this AMS article by Robert Ghrist about persistent homology and I am intrigued.
This is a method for using topological invariants to describe data. In particular, we want to define the homology of a data set representing a point cloud in a high-dimensional Euclidean space. A topologist would replace the point cloud with a simplicial complex — one of the easiest to compute is the Rips Complex, whose k-simplices correspond to unordered (k+1)-tuples of points within pairwise Euclidean distance $\epsilon$.

However, the resulting simplicial complex depends on the choice of $\epsilon$. A very small $\epsilon$ leaves the complex a discrete set, while a very large $\epsilon$ results in the complex being one big simplex. As $\epsilon$ moves, holes in the simplicial complex are born, grow, and die; the picture over time provides a description of the data's topology.

The "persistence complex" is a sequence of chain complexes $C_*^{(i)}$ together with chain maps $C_*^{(i)} \to C_*^{(i+1)}$, which are inclusions. This is motivated by an increasing sequence of $\epsilon$s and the inclusions from one complex to the next. (The precision here goes from fine to coarse.)

Ghrist introduces the notion of a "barcode" — each "bar" is a generator of the homology group, and the length of the bar is the range of values of $\epsilon$ for which this particular element is a generator of the homology group. A barcode is the persistence analogue of a Betti number.
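To see bars being born and dying concretely, here is a small self-contained Python sketch of my own (not from Ghrist's article) that computes only the degree-zero barcode: for $H_0$, persistence is just the merging of connected components as $\epsilon$ grows, which a union-find over the sorted pairwise distances gives directly. Packages such as Ripser or GUDHI handle the higher-degree bars.

import numpy as np

def h0_barcode(points):
    # H_0 bars (birth, death) for a Euclidean point cloud: every component is
    # born at epsilon = 0, and a bar dies when its component merges into another.
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Sorted pairwise distances: the epsilons at which merges can happen.
    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    bars = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri            # one component dies at this epsilon
            bars.append((0.0, eps))
    bars.append((0.0, float("inf")))   # the component that never dies
    return bars

pts = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5]])
print(h0_barcode(pts))   # two short bars, one long bar, one infinite bar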
Now, what I always wondered here is what this has to do with geometry. Consider a finger-shaped projection sticking out of a 2-dimensional surface. At different resolutions, the projection can appear to break off into an island. (This is a practical problem for protein visualization.) This would be an example of a feature that could be captured with persistence homology; but it could also be explained directly by noticing that the tip of the projection is a region of high curvature. Could other persistence homology features be explained by geometrical properties?
This paper by Gunnar Carlsson et al. seems to provide an answer. The authors define a tangent complex of a space, which is the closure of the set of all tangents to points in the space. Then, they define the filtered tangent complex, which is the set of tangent vectors for which the osculating circle has a radius larger than some $r$. We have an inclusion between filtered tangent complexes of different $r$s. (For curves, there is only one osculating circle; for surfaces, there is one in each direction, so the tangent space is defined based on the maximum between them.)
Then we look at the homology groups of the filtered tangent spaces. This provides a barcode. Such barcodes can, for instance, distinguish a bottle from a glass. (The relationship of the tangent-space barcode to the Rips-complex barcode remains mysterious to me.)
WORD PROBLEM- HELP
Thread starter zamora_adriana
zamora_adriana
The bus fare in a city is $2.00. People who use the bus have the option of purchasing a coupon book for $27.00. With the coupon book, the fare is reduced to $1.00. Determine the number of times in a month the bus must be used so that the total monthly cost without the coupon book is the same as the total monthly cost with the coupon book?
mmm4444bot
I'll get you started by organizing for you a chart of what's going on.
\(\displaystyle \begin{array}{|c|c|c|}\hline \text{Trip Number} & \text{Regular Fares} & \text{Book + Discounted Fares}\\ \hline 1 & 2.00 & 28.00\\ \hline 2 & 4.00 & 29.00\\ \hline 3 & 6.00 & 30.00\\ \hline 4 & 8.00 & 31.00\\ \hline 5 & 10.00 & 32.00\\ \hline 6 & 12.00 & 33.00\\ \hline 7 & 14.00 & 34.00\\ \hline \vdots & \vdots & \vdots\\ \hline \end{array}\)
The two "fare" columns represent running totals of the money paid out under each plan after 1 trip, 2 trips, 3 trips, et cetera.
Does this make sense, so far?
After taking 1 bus trip, a person with no discount has paid $2, while a person who bought a coupon book has paid a total of $28 ($27 for the book, and $1 for the discounted ride).
After taking 2 bus trips, a person with no discount has paid a total of $4, while a person entitled to discounts has paid a total of $29 ($27 for the book, and $1 each for the two discounted rides).
The exercise asks "how many trips"?
So, pick a variable to represent this unknown amount.
Let n = the number of trips
Now we can write an expression, using the variable n, to represent the total dollar amount spent (after taking n trips) for a person with no coupon book.
We can also write an expression to represent the total dollar amount spent (after taking n trips) for a person with a coupon book.
It's easy to find the number of trips where both totals are equal: set the two expressions equal (to form an equation), and then solve the equation for n.
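(If you later want to check your algebra, here is a tiny Python sketch of the same reasoning; the 2, 1 and 27 are the two fares and the book price from the problem, and the scan simply looks for the first trip count where the two running totals agree.)

def no_book(n):       # total cost of n trips at the full $2 fare
    return 2 * n

def with_book(n):     # $27 book up front, then $1 per trip
    return 27 + 1 * n

n = next(k for k in range(1, 200) if no_book(k) == with_book(k))
print(n, no_book(n), with_book(n))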
Both columns represent arithmetic sequences. (Maybe, that's what your class is studying; I don't know because you gave us zero information about what you're thinking so far.)
Are these hints enough information for you to be able to continue?
Let us know, if you need more help. Please show whatever work you can, or explain why you're stuck.
Cheers ~ Mark
Continuous Uniform Distribution Calculator With Examples
Continuous Uniform Distribution Example
The continuous uniform distribution is the simplest probability distribution where all the values belonging to its support have the same probability density. It is also known as rectangular distribution.
This tutorial will help you understand how to solve the numerical examples based on continuous uniform distribution.
Continuous Uniform Distribution Calculator
Use this calculator to find the probability density and cumulative probabilities for continuous Uniform distribution with parameter $a$ and $b$.
How to find Continuous Uniform Distribution Probabilities?
Step 1 – Enter the minimum value $a$
Step 2 – Enter the maximum value $b$
Step 3 – Enter the value of $x$
Step 4 – Click on "Calculate" button to get Continuous Uniform distribution probabilities
Step 5 – Gives the output probability at $x$ for Continuous Uniform distribution
Step 6 – Gives the output cumulative probabilities for Continuous Uniform distribution
Definition of Uniform Distribution
A continuous random variable $X$ is said to have a Uniform distribution (or rectangular distribution) with parameters $\alpha$ and $\beta$ if its p.d.f. is given by
$$ \begin{align*} f(x)&= \begin{cases} \frac{1}{\beta - \alpha}, & \alpha \leq x\leq \beta \\ 0, & Otherwise. \end{cases} \end{align*} $$
Notation: $X\sim U(\alpha, \beta)$.
Distribution Function
The distribution function of uniform distribution $U(\alpha,\beta)$ is
$$ \begin{align*} F(x)&= \begin{cases} 0, & x<\alpha\\ \frac{x-\alpha}{\beta - \alpha}, & \alpha \leq x\leq \beta \\ 1, & x>\beta \end{cases} \end{align*} $$
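If you would rather check these formulas in software than with the calculator above, the same quantities are available from scipy (a small sketch; note that scipy parametrises the uniform distribution by loc $=\alpha$ and scale $=\beta-\alpha$ rather than by the two endpoints):

from scipy.stats import uniform

a, b = 2, 10                       # endpoints alpha and beta
X = uniform(loc=a, scale=b - a)    # U(2, 10)

print(X.pdf(5))                    # density 1/(b - a) = 0.125 anywhere inside [2, 10]
print(X.cdf(7))                    # P(X <= 7) = (7 - 2)/(10 - 2) = 0.625
print(X.mean(), X.var())           # (a + b)/2 = 6 and (b - a)^2/12 = 16/3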
Continuous Uniform Distribution Example 1
The waiting time at a bus stop is uniformly distributed between 1 and 12 minute.
a. What is the probability density function?
b. What is the probability that the rider waits 8 minutes or less?
c. What is the expected waiting time?
d. What is standard deviation of waiting time?
Let $X$ denote the waiting time at a bus stop. The waiting time at a bus stop is uniformly distributed between 1 and 12 minutes. That is $X\sim U(1,12)$.
(a) The probability density function of $X$ is
$$ \begin{aligned} f(x) & = \frac{1}{12-1},\; 1\leq x \leq 12\\ & = \frac{1}{11},\; 1\leq x \leq 12. \end{aligned} $$
(b) The probability that the rider waits 8 minutes or less is
$$ \begin{aligned} P(X\leq 8) & = \int_1^8 f(x) \; dx\\ & = \frac{1}{11}\int_1^8 \; dx\\ & = \frac{1}{11} \big[x \big]_1^8\\ &= \frac{1}{11}\big[ 8-1\big]\\ &= \frac{7}{11}\\ &= 0.6364. \end{aligned} $$
(c) The expected wait time is $E(X) =\dfrac{\alpha+\beta}{2} =\dfrac{1+12}{2} =6.5$ minutes.
(d) The variance of waiting time is $V(X) =\dfrac{(\beta-\alpha)^2}{12} =\dfrac{(12-1)^2}{12} =10.08$, so the standard deviation of waiting time is $sd(X)=\sqrt{10.08} = 3.18$ minutes.
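A quick numerical check of these answers (assuming scipy is available; the parameters are the ones from this example):

from scipy.stats import uniform

X = uniform(loc=1, scale=11)       # waiting time U(1, 12)
print(X.cdf(8))                    # 0.6363..., i.e. P(wait <= 8 minutes)
print(X.mean())                    # 6.5 minutes
print(X.var(), X.std())            # 10.083... and its square root 3.17...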
Assume the weight of a randomly chosen American passenger car is a uniformly distributed random variable ranging from 2,500 pounds to 4,500 pounds.
a. What is the mean and standard deviation of weight of a randomly chosen vehicle?
b. What is the probability that a vehicle will weigh less than 3,000 pounds?
c. What is the probability that a vehicle will weigh more than 3,900 pounds?
d. What is the probability that a vehicle will weigh between 3,000 and 3,800 pounds?
Let the random variable $X$ denote the weight of randomly chosen American passenger car. It is given that $X\sim U(2500, 4500)$. That is $\alpha=2500$ and $\beta=4500$
The probability density function of $X$ is
$$ \begin{aligned} f(x)&=\frac{1}{4500- 2500},\quad2500 \leq x\leq 4500\\ &=\frac{1}{2000},\quad 2500 \leq x\leq 4500 \end{aligned} $$
and the distribution function of $X$ is
$$ \begin{aligned} F(x)&=\frac{x-2500}{4500- 2500},\quad 2500 \leq x\leq 4500\\ &=\frac{x-2500}{2000},\quad 2500 \leq x\leq 4500. \end{aligned} $$
a. The mean weight of a randomly chosen vehicle is
$$ \begin{aligned} E(X) &=\dfrac{\alpha+\beta}{2}\\ &=\dfrac{2500+4500}{2} =3500 \end{aligned} $$
The standard deviation of weight of randomly chosen vehicle is
$$ \begin{aligned} sd(X) &= \sqrt{V(X)}\\ &=\sqrt{\dfrac{(\beta-\alpha)^2}{12}}\\ &=\sqrt{\dfrac{(4500-2500)^2}{12}}\\ &=577.35 \end{aligned} $$
b. The probability that a vehicle will weigh less than $3000$ pounds is
$$ \begin{aligned} P(X < 3000) &=F(3000)\\ &=\dfrac{3000 - 2500}{2000}\\ &=\dfrac{500}{2000}\\ &=0.25 \end{aligned} $$
c. The probability that a vehicle will weigh more than $3900$ pounds is
$$ \begin{aligned} P(X > 3900) &=1-P(X\leq 3900)\\ &=1-F(3900)\\ &=1-\dfrac{3900 - 2500}{2000}\\ &=1-\dfrac{1400}{2000}\\ &=1-0.7\\ &=0.3\\ \end{aligned} $$
d. The probability that a vehicle will weight between $3000$ and $3800$ pounds is
$$ \begin{aligned} P(3000 < X < 3800) &= F(3800) - F(3000)\\ &=\frac{3800-2500}{2000}- \frac{3000-2500}{2000}\\ &= \frac{1300}{2000}-\frac{500}{2000}\\ &= 0.65-0.25\\ &= 0.4. \end{aligned} $$
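These probabilities can also be sanity-checked by simulation; below is a minimal numpy sketch (the sample size and seed are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(2500, 4500, size=1_000_000)     # simulated vehicle weights in pounds

print((w < 3000).mean())                        # close to 0.25
print((w > 3900).mean())                        # close to 0.30
print(((w > 3000) & (w < 3800)).mean())         # close to 0.40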
Assume that voltages in a circuit follows a continuous uniform distribution between 6 volts and 12 volts.
a. What is the mean and variance of voltage in a circuit?
b. What is the distribution function of voltage in a circuit?
c. If a voltage is randomly selected, find the probability that the given voltage is less than 11 volts.
d. If a voltage is randomly selected, find the probability that the given voltage is more than 9 volts.
e. If a voltage is randomly selected, find the probability that the given voltage is between 9 volts and 11 volt.
Let the random variable $X$ denote the voltage in a circuit. It is given that $X\sim U(6, 12)$. That is $\alpha=6$ and $\beta=12$
$$ \begin{aligned} f(x)&=\frac{1}{12- 6},\quad6 \leq x\leq 12\\ &=\frac{1}{6},\quad 6 \leq x\leq 12 \end{aligned} $$
a. The mean voltage in a circuit is
$$ \begin{aligned} E(X) &=\dfrac{\alpha+\beta}{2}\\ &=\dfrac{6+12}{2}\\ &=9 \end{aligned} $$
The variance of voltage in a circuit is $V(X) =\dfrac{(\beta-\alpha)^2}{12} =\dfrac{(12-6)^2}{12} =3$, so the standard deviation is

$$ \begin{aligned} sd(X) &= \sqrt{V(X)}\\ &=\sqrt{\dfrac{(\beta-\alpha)^2}{12}}\\ &=\sqrt{\dfrac{(12-6)^2}{12}}\\ &=1.73 \end{aligned} $$
b. The distribution function of $X$ is
$$ \begin{aligned} F(x)&=\frac{x-6}{12- 6},\quad 6 \leq x\leq 12\\ &=\frac{x-6}{6},\quad 6 \leq x\leq 12. \end{aligned} $$
c. The probability that the given voltage is less than $11$ volts is
$$ \begin{aligned} P(X < 11) &=F(11)\\ &=\dfrac{11 - 6}{6}\\ &=\dfrac{5}{6}\\ &=0.8333 \end{aligned} $$
d. The probability that the given voltage is more than $9$ volts is
$$ \begin{aligned} P(X > 9) &=1-P(X\leq 9)\\ &=1-F(9)\\ &=1-\dfrac{9 - 6}{6}\\ &=1-\dfrac{3}{6}\\ &=1-0.5\\ &=0.5\\ \end{aligned} $$
e. The probability that the voltage is between $9$ and $11$ volts is
$$ \begin{aligned} P(9 < X < 11) &= F(11) - F(9)\\ &=\frac{11-6}{6}- \frac{9-6}{6}\\ &= \frac{5}{6}-\frac{3}{6}\\ &= 0.8333-0.5\\ &= 0.3333. \end{aligned} $$
The daily amount of coffee, in liters, dispensed by a machine located in an airport lobby is a random variable $X$ having a continuous uniform distribution with $A = 7$ and $B = 10$. Find the probability that on a given day the amount of coffee dispensed by this machine will be
a. at most 8.8 liters;
b. more than 7.4 liters but less than 9.5 liters;
c. at least 8.5 liters.
Let the random variable $X$ represent the daily amount of coffee dispensed by a machine. It is given that $X\sim U(7, 10)$. That is $\alpha=7$ and $\beta=10$
The distribution function of $X$ is

$$ \begin{aligned} F(x)&=\frac{x-7}{10- 7},\quad 7 \leq x\leq 10\\ &=\frac{x-7}{3},\quad 7 \leq x\leq 10. \end{aligned} $$
a. The probability that on a given day the amount of coffee dispensed by the machine will be at most $8.8$ liters is
$$ \begin{aligned} P(X < 8.8) &=F(8.8)\\ &=\dfrac{8.8 - 7}{3}\\ &=\dfrac{1.8}{3}\\ &=0.6 \end{aligned} $$
b. Let us find the probability that on a given day the amount of coffee dispensed by the machine will be more than $7.4$ liters but less than $9.5$ liters.
$$ \begin{aligned} P(7.4 < X < 9.5) &= F(9.5) - F(7.4)\\ &=\frac{9.5-7}{3}- \frac{7.4-7}{3}\\ &= \frac{2.5}{3}-\frac{0.4}{3}\\ &= 0.8333-0.1333\\ &= 0.7. \end{aligned} $$
c. Let us determine the probability that on a given day the amount of coffee dispensed by the machine will be at least $8.5$ liters.
$$ \begin{aligned} P(X > 8.5) &=1-P(X\leq 8.5)\\ &=1-F(8.5)\\ &=1-\dfrac{8.5 - 7}{3}\\ &=1-\dfrac{1.5}{3}\\ &=1-0.5\\ &=0.5\\ \end{aligned} $$
A bus arrives every 10 minutes at a bus stop. It is assumed that the waiting time for a particular individual is a random variable with a continuous uniform distribution.
a. What is the probability that the individual waits more than 7 minutes?
b. What is the probability that the individual waits between 2 and 7 minutes?
Let the random variable $X$ represent the waiting time for a particular individual. It is given that $X\sim U(0, 10)$. That is $\alpha=0$ and $\beta=10$
$$ \begin{aligned} f(x)&=\frac{1}{10- 0},\quad0 \leq x\leq 10\\ &=\frac{1}{10},\quad 0 \leq x\leq 10 \end{aligned} $$
$$ \begin{aligned} F(x)&=\frac{x-0}{10- 0},\quad 0 \leq x\leq 10\\ &=\frac{x}{10},\quad 0 \leq x\leq 10. \end{aligned} $$
a. Let us determine the probability that an individual waits more than $7$ minutes.
$$ \begin{aligned} P(X > 7) &=1-P(X\leq 7)\\ &=1-F(7)\\ &=1-\dfrac{7 - 0}{10}\\ &=1-\dfrac{7}{10}\\ &=1-0.7\\ &=0.3\\ \end{aligned} $$
b. Let us find the probability that an individual waits between $2$ and $7$ minutes.
$$ \begin{aligned} P(2 \leq X \leq 7) &= F(7) - F(2)\\ &=\frac{7-0}{10}- \frac{2-0}{10}\\ &= \frac{7}{10}-\frac{2}{10}\\ &= 0.7-0.2\\ &= 0.5. \end{aligned} $$
In this tutorial, you learned how to calculate the mean, variance and probabilities of the Continuous Uniform distribution, and how to solve numerical problems based on it.
For a step-by-step tutorial on the Continuous Uniform distribution, refer to the link Continuous Uniform Distribution. That tutorial will help you understand the Continuous Uniform distribution, and you will learn how to derive its mean, variance, moment generating function and other properties.
To learn more about other probability distributions, please refer to the following tutorial:
Let me know in the comments if you have any questions on Continuous Uniform Distribution Calculator with Examples and your thought on this article.
new-tag A new tag zsigmondy was created.
Q: Generalisation of IMO 1990/P3:For which $b $ do there exist infinitely many positive integers $n$ such that $n^2$ divides $b^n+1$?
For which positive integers $b > 2$ do there exist infinitely many positive integers $n$ such that $n^2$ divides $b^n+1$? It was from my LTE/ Zsigmondy handout. By taking examples, it looks like for $b= 2^k-1 , 2$ it's not true . Here's my progress: I got $b=4,5,6,8,9$ works ( $2,3,7$ doesn't ...
number-theory elementary-number-theory contest-math zsigmondy
Zsigmondy's theorem
In number theory, Zsigmondy's theorem, named after Karl Zsigmondy, states that if a > b > 0 are coprime integers, then for any integer n ≥ 1, there is a prime number p (called a primitive prime divisor) that divides a^n − b^n and does not divide a^k − b^k for any positive integer k < n, with the following exceptions: n = 1, a − b = 1, so that a^1 − b^1 = 1, which has no prime divisors; n = 2, a + b a power of two, so that any odd prime factor of a^2 − b^2 = (a + b)(a^1 − b^1) must be contained in a^1 − b^1, which is also even; and n = 6, a = 2, b = 1, where a^6 − b^6 = 63 = 3^2 × 7 = (a^2 − b^2)^2 (a^3 − b^3). This generalizes Bang...
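A brute-force illustration of the statement (my own sketch, not part of the excerpt): for fixed coprime $a > b > 0$ it lists, for each $n$, the primes dividing $a^n - b^n$ that divide no earlier $a^k - b^k$, so the exceptional cases $n = 1$ and $(a, b, n) = (2, 1, 6)$ show up as empty sets.

from sympy import primefactors

def primitive_prime_divisors(a, b, n_max):
    seen = set()                         # primes dividing some earlier a**k - b**k
    out = {}
    for n in range(1, n_max + 1):
        ps = set(primefactors(a**n - b**n))
        out[n] = ps - seen               # primitive prime divisors at this n
        seen |= ps
    return out

for n, ps in primitive_prime_divisors(2, 1, 8).items():
    print(n, sorted(ps))                 # empty at n = 1 and n = 6, as the theorem predicts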
There is another question tagged zsigmondy, but I suppose it is simply mistagged.
Q: Using Excel to keep track of maintenance services done on machines
Assume that you buy a caterpillar, and for every 10 hours of work, you need to apply lubricant. For every 100, you need to replace a certain part. For 500 hours, you need to perform any given adjustment. For 1000 hours, you need to have it checked by a technician. The formula/s I need should take...
zsigmondy
new-tag A new tag hodgkin-huxley-equation was created.
Q: Solving Hodgkin and Huxley equation
I'm trying to solve Hodgkin and Huxley equation given by, $$\frac{d n}{d t}=\alpha_{n}(1-n)-\beta_{n} n$$ using the boundary condition $$n_{0}=\frac{\alpha_{n_{0}}}{\alpha_{n_{0}}+\beta_{n_{0}}}$$. The solution of this differential equation should be $$\begin{aligned} &n=n_{\infty}-\left(n_{\inft...
ordinary-differential-equations initial-value-problems hodgkin-huxley-equation
Hodgkin–Huxley model
The Hodgkin–Huxley model, or conductance-based model, is a mathematical model that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical characteristics of excitable cells such as neurons and cardiac myocytes. It is a continuous-time dynamical system. Alan Hodgkin and Andrew Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon. They received the 1963 Nobel Prize in Physiology or Medicine for this...
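For the gating-variable equation in the question above, note that with $\alpha_n$ and $\beta_n$ held constant (i.e. at a fixed voltage) the ODE is linear and relaxes exponentially to $n_\infty = \alpha_n/(\alpha_n + \beta_n)$ with time constant $\tau = 1/(\alpha_n + \beta_n)$. A small Python sketch comparing a forward-Euler integration against that closed form (the rate values here are arbitrary placeholders, not the Hodgkin–Huxley fits):

import numpy as np

alpha, beta = 0.5, 0.125                # placeholder rate constants, units 1/ms
n_inf, tau = alpha / (alpha + beta), 1.0 / (alpha + beta)
n0 = 0.05                               # arbitrary initial gating value

dt, T = 0.01, 20.0
t = np.arange(0.0, T, dt)
n = np.empty_like(t)
n[0] = n0
for k in range(1, len(t)):              # forward Euler for dn/dt = alpha*(1 - n) - beta*n
    n[k] = n[k - 1] + dt * (alpha * (1 - n[k - 1]) - beta * n[k - 1])

exact = n_inf - (n_inf - n0) * np.exp(-t / tau)
print(np.max(np.abs(n - exact)))        # small discretisation error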
When should a tag be added
What are some slogans that express mathematical tricks?
Asked 10 years, 1 month ago
Many "tricks" that we use to solve mathematical problems don't correspond nicely to theorems or lemmas, or anything close to that rigorous. Instead they take the form of analogies, or general methods of proof more specialized than "induction" or "reductio ad absurdum" but applicable to a number of problems. These can often be summed up in a "slogan" of a couple of sentences or less, that's not fully precise but still manages to convey information. What are some of your favorite such tricks, expressed as slogans?
(Note: By "slogan" I don't necessarily mean that it has to be a well-known statement, like Hadamard's "the shortest path..." quote. Just that it's fairly short and reasonably catchy.)
Justifying blather: Yes, I'm aware of the Tricki, but I still think this is a useful question for the following reasons:
Right now, MO is considerably more active than the Tricki, which still posts new articles occasionally but not at anything like the rate at which people contribute to MO.
Perhaps causally related to (1), writing a Tricki article requires a fairly solid investment of time and effort. The point of slogans is that they can be communicated without much of either. If you want, you can think of this question as "Possible titles for Tricki articles," although that's by no means its only or even main purpose.
soft-question big-picture big-list
$\begingroup$ I realize this question's borderline (and I wasn't sure whether I should ask it), but I don't think it's any less MO-appropriate than "Fundamental examples" or the mathematical joke thread. People who downvoted, care to explain why you disagree? $\endgroup$ – Harrison Brown Dec 14 '09 at 15:31
$\begingroup$ I wasn't the downvoter, but I think some people are getting a little annoyed at the number of these "produce a ginormous list of answers" soft questions. You're entirely right that there are worse offenders on the site, but I think the issue is in part the density of them, not any particular ones. $\endgroup$ – Ben Webster♦ Dec 14 '09 at 16:07
$\begingroup$ @Ben: That makes sense, although a better solution might be to ignore the "soft-question" tag. Or make a new tag for "ginormous list" questions. But this would be more appropriate on meta, so I'll start a topic there. $\endgroup$ – Harrison Brown Dec 14 '09 at 16:11
$\begingroup$ Ben: I really like the ginormous list of answer questions. It's nice to have some big picture responses from a variety of mathematicians. Peter: If you don't like soft answer questions, you don't have to read them. There's even a box on the front page for ignoring tags you don't like. $\endgroup$ – Tom LaGatta Jan 13 '10 at 9:51
$\begingroup$ @Tom: soft-answer is not the same as big-list; the meta discussion that came from this thread actually inspired the "big-list" tag, for exactly the reason that it can be ignored. But if you have more to say, feel free to contribute on meta.MO. $\endgroup$ – Harrison Brown Jan 13 '10 at 11:01
The analyst's toolbox consists of three things:
The Cauchy-Schwarz inequality
Integration by parts
Changing the order of integration/summation
(I'm not saying I believe that; it's just a very common saying.)
Darsh Ranjan
$\begingroup$ You forgot 4. Adding and subtracting something. $\endgroup$ – Mariano Suárez-Álvarez Jan 13 '10 at 7:37
$\begingroup$ Would you allow me the principle that if the average of some numbers is at least C then one of the numbers must be at least C? $\endgroup$ – gowers Sep 23 '10 at 21:42
$\begingroup$ An analyst once told me: "When in doubt, integrate by parts." $\endgroup$ – Micah Milinovich Sep 23 '10 at 22:42
$\begingroup$ I've also heard the following cited as a tool: if $a \leq b + \epsilon$ for all $\epsilon > 0$, then $a \leq b$. $\endgroup$ – S. Carnahan♦ Sep 24 '10 at 3:02
$\begingroup$ When Peter Lax went to receive the national medal of science, he was asked by the other recipients about his merits. His answer was (apocryph) I integrated by parts. $\endgroup$ – Denis Serre Apr 7 '11 at 9:33
If something does not hold, make it true! Examples:
- Sobolev spaces (not necessarily differentiable functions satisfy differential equations)
- distribution theory (think of identities involving the delta "function")
- not converging? take the closure of your vector space (analysis) or compactify your space (geometry)
Orbicular
$\begingroup$ Related to that, "Look in a bigger bag." For example, to find an integer solution, first find a rational or even complex solution then try to show it's an integer. Or find a solution in a Sobolev space then prove the solution is actually a classical solution. $\endgroup$ – John D. Cook Dec 14 '09 at 21:15
$\begingroup$ Another famous example of this is the notion of stacks: quotients of schemes by group actions do not necessarily exist? We make them exist by sheer brute force! (And it's funny that the definition of a stack also mimics the definition of a distribution somehow -- in both situations, the idea is to forget the object itself and only remember how it acts on some class of "test objects".) $\endgroup$ – Dan Petersen Dec 15 '09 at 8:19
If you have to chose some auxiliary object and that object is not unique, it's better to make all choices simultaneously.
I think there are many examples of this, but for me it first hit home when I learned about crystalline cohomology. There you want to lift varieties in positive characteristic to characteristic zero. Locally there are many nonisomorphic lifts, and rather than picking one, it's better to work with the category of all of them. I've absorbed this lesson pretty fully, to the point where I don't need to remind myself of it, but at first it seemed revolutionary.
JBorger
I forget who this is attributed to, but someone said something like "A technique is a trick used twice."
Norman Lewis Perlmutter
Devissage is a useful tool when proving something holds for a general class of objects, at least in algebraic geometry, like all schemes/stacks/morphisms.
shenghao
$\begingroup$ What is devissage? $\endgroup$ – Ilya Grigoriev Dec 15 '09 at 18:13
$\begingroup$ I don't know if there is a formal definition. For me it roughly means that, when proving something in general, one reduces step by step to special cases. Some people say that it's an important feature in Grothendieck's style proofs. Here's an "example". Suppose we want to prove something for any morphism $f:X\to Y$ of schemes of finite type over a field k. One can check if this property is local in the sense that, if it's true for all fibers of $f,$ then it's true for $f.$ If it is local then we reduce to the case where Y is a point Spec k. (too long to fit in one comment; to be continued) $\endgroup$ – shenghao Dec 15 '09 at 20:37
$\begingroup$ Let U be any open subset of X with complement Z. Suppose that if the property holds for both U and Z, then it holds for X. Then we may shrink X to any open subset and use noetherian induction, so we can assume X is affine of equidimension d. There exists a map $X\to X'$ with fibers of dimension <=1, and dim X'=d-1, so by induction on d we reduce to two cases: X is a curve, or X is a point. Finally we prove these special cases. $\endgroup$ – shenghao Dec 15 '09 at 20:39
The best way to solve a problem is to define it out of existence.
Typical example: Weil constructed abelian varieties over finite fields, and at first he did not know if these were varieties because it was not clear that they were projective. Weil defined this problem out of existence by changing the definition of variety and inventing abstract varieties.
Richard Borcherds
Try to replace a structure on an object with a map to a classifying object.
E.g., replace a cohomology class of a space with a map to an Eilenberg-MacLane space. Replace a vector/general bundle on a manifold with a map to the Grassmannian/other classifying space.
There must also be plenty of examples outside algebraic topology, though this technique seems to be most popular there...
Ilya Grigoriev
$\begingroup$ Along these lines, replace a subset of a finite set by an element of a vector space over GF(2). $\endgroup$ – Harrison Brown Dec 15 '09 at 2:50
Weil's "three columns": Number fields over $\mathbb{Q}$ behave like function fields of curves over finite fields which are related to the field of algebraic functions over $\mathbb{C}$. (This is far removed from my comfort zone, so please fix it if I'm off the mark.)
You must exchange the order of summation in order to prove any identity involving multiple sums.
Nick Salter
$\begingroup$ And the same for integrals. My advisor used to express this as: "Theorem: Any time you see two integrals, you should try to interchange them." $\endgroup$ – Nate Eldredge May 8 '17 at 17:15
"Think homologically, prove cohomologically!" definitely sounds like a slogan. One argument for this is that homology has a nice explanation in terms of geometry, think singular simplices or cells, so you can think about a space in terms of its cellular homology. When proving things you might want to have more structure around, like a product, and this is where cohomology comes in.
Maxime Bourrigan
There are two interesting tricks in K-theory / operator algebras / homotopy theory - one attached to an amusing slogan and the other with an amusing name - that I think fit the bill.
The first is "uniqueness is a relative form of existence", due apparently to Shmuel Weinberger. This slogan seems to appear frequently in operator theory. Take, for example the problem of proving that K-theory commutes with direct limits (say, of C* algebras $A_1 \subseteq A_2 \subseteq \ldots \subseteq A$). There are two components to the proof: surjectivity (the "existence" part) which amounts to showing that every element of $K_0(A)$ lies in the image of some $K_0(A_j) \to K_0(A)$, and injectivity (the "uniqueness" part) which involves proving that if two elements of $K_0(A_j)$ are equivalent in $K_0(A)$ then they are equivalent in $K_0(A_j)$. Once you have proven existence you can verify uniqueness by joining representatives of your chosen $K_0(A_j)$ classes by a homotopy in the space of generators for $K_0(A)$ and then use your existence argument to lift to a homotopy in $A_j$. In other words, prove uniqueness by applying your existence argument to a pair.
The second is the (in)famous "Eilenberg Swindle" which seems to come up everywhere. I first encountered it in K-theory, but I think the canonical example is the argument which proves that the $n$-sphere is prime with respect to connected sum (which I will denote +). Suppose that $M$ and $N$ are manifolds such that $M + N = S^n$. We have that $(M + N) + (M + N) + (M + N) + \ldots$ is homeomorphic to $\mathbb{R}^n$ (it is a cylinder with the left opening glued shut), and similarly so is $(N + M) + (N + M) + \ldots$. Since $M + (N + M) + \ldots = (M + N) + (M + N) + \ldots$, we have shown that $M + \mathbb{R}^n = \mathbb{R}^n$ which forces $M$ to be homeomorphic to $S^n$.
Paul Siegel
$\begingroup$ This is definitely older than Shmuel, but i guess you are referring specifically to the quote. The main point is that uniqueness in homotopy theory is just saying everything you want to construct is connected by a homotopy. So you need that homotopy to exist, but you have an existence result that you can usually apply to construct your homotopy. Also think of the example provided by whitney's embedding theorem and how it shows any two embeddings are homotopic. $\endgroup$ – Sean Tilson Sep 26 '10 at 6:02
"If you count something two different ways, you get the same result." This is related to the trick of changing the order of integration (or summation) discussed above, but discrete and more general.
This method is used all the time in combinatorics. I think it has also been phrased differently, but I don't remember the exact phrasing.
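A standard two-line instance (my example, not the answerer's): count the incident vertex-edge pairs of a finite graph once by vertices and once by edges, and you get the handshake lemma,

$$\sum_{v \in V} \deg(v) \;=\; \#\{(v, e) : v \in e\} \;=\; 2\,|E|.$$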
"It is easy to prove existence when there is only one, or when there are many"
If there is only one object with a certain property, you can sometimes use it to define it. For example, in geometric situations, you can sometimes define it locally and glue the patches since uniqueness guarantees compatibility on overlaps. It suggests that you should try proving uniqueness before proving existence, and if uniqueness fails, maybe you should add constraints (thus, paradoxically, adding constraints can help in proving existence). On the other hand, sometimes it is easier to prove that there are many than to point out one specific example (transcendental numbers, continuous nowhere differentiable functions,...). Therefore, you may want to seek the right notion of "many" in your universe (cardinality, measure, "topological bigness" like the Baire property,...) and try to prove that actually there are "few" objects that don't have the required property.
comment: This relates to the answer saying that when you can't avoid making a choice, make all of them simultaneously. This happens when there are more than one, but not many...
KotelKanim
Look at flabbier objects. This seems to be especially useful in complex algebraic geometry. Hard to prove something for varieties? See if there's a version that's true for schemes. Or maybe Kahler manifolds. Or worse: stacks. Vector bundles giving you trouble? Try coherent sheaves. Try quasi-coherent sheaves. In fact, try complexes of them. This is really just a special case of "Generalize the question as far as you can" but in this specific case, it's rather clarifying, here are some examples in algebraic geometry:
It's hard to say anything about fundamental groups of complex projective varieties that isn't also true about compact Kahler manifolds. Perhaps the proof should focus on using the Kahler structure, when you're working on these.
Want to parameterize subvarieties of a projective variety? Tough, it doesn't work. SubSCHEMES, however, gives the Hilbert Scheme.
Proving things about ideals is often easier to do with modules in general
Charles Siegel
$\begingroup$ can you elaborate on 3? $\endgroup$ – Ho Chung Siu Jan 13 '10 at 17:04
$\begingroup$ Well, that's pretty much one of the main themes of commutative algebra books: that looking at ideals is the wrong point of view, really they're just submodules of the free module on one generator, and many (most, perhaps) results about ideals are actually true about modules, and how to prove them is easier to see when you realize just what the right way to look at things is. $\endgroup$ – Charles Siegel Jan 13 '10 at 17:10
If you want to show that a graph has few edges, prove that not too many vertices can have large degree.
(The complementary statement is the main trick in the solution to this MO question, by way of example. It's also used in the proof of the Stanley-Wilf conjecture.)
$\begingroup$ is there a way of enlarging this to other fields? or rather, how should it be phrased? $\endgroup$ – Sean Tilson Sep 26 '10 at 6:04
$\begingroup$ Something like "If the average is small, and you have controlled the large outliers, then everything must be small" but that's not very snappy. $\endgroup$ – Matthew Daws Oct 1 '10 at 14:04
One of the slogans in T. W. Körner's book Fourier Analysis that is definitely in the harmonic analyst's toolbox: The function $f*g$ has the good properties both of f and g.
An example of its use is in approximating functions by trigonometric polynomials: convolving the function with any trigonometric polynomial gives you a trigonometric polynomial, and if you pick the polynomial carefully the resulting function will have similar properties to the original one.
$\begingroup$ With one exception: support in physical space. If $f$ has compact support and $g$ does not, the convolution does not have compact support. Of course, another way of looking at it is that having compact support is such an unstable property that it is actually not good in harmonic analysis. $\endgroup$ – Willie Wong Oct 1 '10 at 15:06
I'm not sure if this is a bit too general, but it is a slogan/heuristic that I find very useful and that I think most people will be able to come up with plenty of examples of:
"Extremalities always arise from symmetry."
Christian Bjartli
$\begingroup$ I would personally change 'always' to 'often'. Life for geometric analysts will be somewhat easier if this were actually a theorem. In the theoretical physics literature this is related to the so-called Coleman's Principle. However, it was shown by Kapitanski and Ladyzhenskaya that unless one makes additional assumptions, this principle is generally incorrect. ams.org/mathscinet-getitem?mr=711846 $\endgroup$ – Willie Wong Oct 1 '10 at 15:14
"When in doubt, differentiate." I've heard this attributed to Chern.
Timothy Chow
Pick a random example.
If you add lots of small and reasonably independent things together then the result will be highly concentrated about its mean.
gowers
A perfect example is the list of twelve heuristics on page 1 of L. Larson, "Problem solving through problems": http://books.google.com/books?id=qFNZIUQ_MYUC&lpg=PP1&dq=larson%20problem%20solving&pg=PA1#v=onepage&q&f=false
Jacobi's famous quote that "one must always invert." He had elliptic integrals in mind.
Michael Renardy
"Yes and No are the smallest possible answers yet they need the most thinking to be done " !!
adityaguharoy
$0 = \infty$
here is a math slogan that describes the way
$\begingroup$ I'm sorry, I thought you said what are some math slogans which describe life. $\endgroup$ – user10290 Jun 25 '17 at 7:00
I don't know if it fits here, but there is the saying: "Nothing exists if it means something not that of a minimum or a maximum" (I think I read this was by Euler).
$\begingroup$ Had Euler been drinking at the time? [and what mathematical trick does this express?] $\endgroup$ – Gerry Myerson May 8 '17 at 22:50
$\begingroup$ I have no idea what you are trying to say. $\endgroup$ – Gerry Myerson May 9 '17 at 12:12
$\begingroup$ Just trying to realize why he may have said so (if he actually did) $\endgroup$ – adityaguharoy May 9 '17 at 12:22
Nature of the many-body excitations in a quantum wire: theory and experiment
Alex Tsyplyatyev
Andy Schofield
Y. Jin
M. Moreno
W. K. Tan
A. S. Anirban
C. J. B. Ford
J. P. Griffiths
I. Farrer
G. A. C. Jones
D. A. Ritchie
The natural excitations of an interacting one-dimensional system at low energy are hydrodynamic modes of Luttinger liquid, protected by the Lorentz invariance of the linear dispersion. We show that beyond low energies, where quadratic dispersion reduces the symmetry to Galilean, the main character of the many-body excitations changes into a hierarchy: calculations of dynamic correlation functions for fermions (without spin) show that the spectral weights of the excitations are proportional to powers of $\mathcal{R}^{2}/L^{2}$, where $\mathcal{R}$ is a length-scale related to interactions and $L$ is the system length. Thus only small numbers of excitations carry the principal spectral power in representative regions on the energy-momentum planes. We have analysed the spectral function in detail and have shown that the first-level (strongest) excitations form a mode with parabolic dispersion, like that of a renormalised single particle. The second-level excitations produce a singular power-law line shape to the first-level mode and multiple power-laws at the spectral edge. We have illustrated crossover to Luttinger liquid at low energy by calculating the local density of state through all energy scales: from linear to non-linear, and to above the chemical potential energies. In order to test this model, we have carried out experiments to measure momentum-resolved tunnelling of electrons (fermions with spin) from/to a wire formed within a GaAs heterostructure. We observe well-resolved spin-charge separation at low energy with appreciable interaction strength and only a parabolic dispersion of the first-level mode at higher energies. We find structure resembling the second-level excitations, which dies away rapidly at high momentum in line with the theoretical predictions here.
23 pages, 10 figures, 2 tables
cond-mat.str-el
1.85 MB, PDF document
Nonlinear spectra of spinons and holons in short GaAs quantum wires
Alex Tsyplyatyev & Andy Schofield, 15 Sep 2016, In : Nature Communications. 7, 8 p., 12784.
Hierarchy of modes in an interacting one-dimensional system
Alex Tsyplyatyev, Andy Schofield, 11 May 2015, In : Physical Review Letters. 114, 5 p., 196401 .
Spectral-edge mode in interacting one-dimensional systems
Alex Tsyplyatyev & Andy Schofield, 31 Jul 2014, In : Physical Review B. 90, 1, 9 p., 014309.
Luttinger parameters of interacting fermions in one dimension at high energies
Alex Tsyplyatyev & Andy Schofield, 26 Sep 2013, In : Physical Review B. 88, 11, 6 p., 115142.
Momentum-dependent power law measured in an interacting quantum wire beyond the Luttinger limit
Andy Schofield, 27 Jun 2019, In : Nature Communications. 10, 1, 2821.
Local well-posedness in the critical Besov space and persistence properties for a three-component Camassa-Holm system with N-peakon solutions
Wei Luo 1, and Zhaoyang Yin 2,
Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275, China
Department of Mathematics, Zhongshan University, Guangzhou, 510275
Received July 2015 Revised November 2015 Published May 2016
In this paper we mainly investigate the Cauchy problem of a three-component Camassa-Holm system. By using Littlewood-Paley theory and transport equations theory, we establish the local well-posedness of the system in the critical Besov space. Moreover, we obtain some weighted $L^p$ estimates of strong solutions to the system. By taking suitable weighted functions, we can get the persistence properties of strong solutions on exponential, algebraic and logarithmic decay rates, respectively.
Keywords: A three-component Camassa-Holm system, critical Besov space, local well-posedness, persistence properties.
Mathematics Subject Classification: Primary: 35Q53; Secondary: 35A01, 35B44, 35B6.
Citation: Wei Luo, Zhaoyang Yin. Local well-posedness in the critical Besov space and persistence properties for a three-component Camassa-Holm system with N-peakon solutions. Discrete & Continuous Dynamical Systems, 2016, 36 (9) : 5047-5066. doi: 10.3934/dcds.2016019
Kai Yan, Zhaoyang Yin. Well-posedness for a modified two-component Camassa-Holm system in critical spaces. Discrete & Continuous Dynamical Systems, 2013, 33 (4) : 1699-1712. doi: 10.3934/dcds.2013.33.1699
Yongsheng Mi, Chunlai Mu. On a three-Component Camassa-Holm equation with peakons. Kinetic & Related Models, 2014, 7 (2) : 305-339. doi: 10.3934/krm.2014.7.305
Jae Min Lee, Stephen C. Preston. Local well-posedness of the Camassa-Holm equation on the real line. Discrete & Continuous Dynamical Systems, 2017, 37 (6) : 3285-3299. doi: 10.3934/dcds.2017139
Lei Zhang, Bin Liu. Well-posedness, blow-up criteria and gevrey regularity for a rotation-two-component camassa-holm system. Discrete & Continuous Dynamical Systems, 2018, 38 (5) : 2655-2685. doi: 10.3934/dcds.2018112
Qiaoyi Hu, Zhijun Qiao. Persistence properties and unique continuation for a dispersionless two-component Camassa-Holm system with peakon and weak kink solutions. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2613-2625. doi: 10.3934/dcds.2016.36.2613
Joachim Escher, Olaf Lechtenfeld, Zhaoyang Yin. Well-posedness and blow-up phenomena for the 2-component Camassa-Holm equation. Discrete & Continuous Dynamical Systems, 2007, 19 (3) : 493-513. doi: 10.3934/dcds.2007.19.493
Jihong Zhao, Ting Zhang, Qiao Liu. Global well-posedness for the dissipative system modeling electro-hydrodynamics with large vertical velocity component in critical Besov space. Discrete & Continuous Dynamical Systems, 2015, 35 (1) : 555-582. doi: 10.3934/dcds.2015.35.555
Xi Tu, Zhaoyang Yin. Local well-posedness and blow-up phenomena for a generalized Camassa-Holm equation with peakon solutions. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2781-2801. doi: 10.3934/dcds.2016.36.2781
Ying Fu, Changzheng Qu, Yichen Ma. Well-posedness and blow-up phenomena for the interacting system of the Camassa-Holm and Degasperis-Procesi equations. Discrete & Continuous Dynamical Systems, 2010, 27 (3) : 1025-1035. doi: 10.3934/dcds.2010.27.1025
Yongsheng Mi, Boling Guo, Chunlai Mu. Persistence properties for the generalized Camassa-Holm equation. Discrete & Continuous Dynamical Systems - B, 2020, 25 (5) : 1623-1630. doi: 10.3934/dcdsb.2019243
Xinglong Wu. On the Cauchy problem of a three-component Camassa--Holm equations. Discrete & Continuous Dynamical Systems, 2016, 36 (5) : 2827-2854. doi: 10.3934/dcds.2016.36.2827
Hongmei Cao, Hao-Guang Li, Chao-Jiang Xu, Jiang Xu. Well-posedness of Cauchy problem for Landau equation in critical Besov space. Kinetic & Related Models, 2019, 12 (4) : 829-884. doi: 10.3934/krm.2019032
Shaojie Yang, Tianzhou Xu. Symmetry analysis, persistence properties and unique continuation for the cross-coupled Camassa-Holm system. Discrete & Continuous Dynamical Systems, 2018, 38 (1) : 329-341. doi: 10.3934/dcds.2018016
Zhaoyang Yin. Well-posedness and blow-up phenomena for the periodic generalized Camassa-Holm equation. Communications on Pure & Applied Analysis, 2004, 3 (3) : 501-508. doi: 10.3934/cpaa.2004.3.501
Jinlu Li, Zhaoyang Yin. Well-posedness and blow-up phenomena for a generalized Camassa-Holm equation. Discrete & Continuous Dynamical Systems, 2016, 36 (10) : 5493-5508. doi: 10.3934/dcds.2016042
Chenghua Wang, Rong Zeng, Shouming Zhou, Bin Wang, Chunlai Mu. Continuity for the rotation-two-component Camassa-Holm system. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6633-6652. doi: 10.3934/dcdsb.2019160
Zeng Zhang, Zhaoyang Yin. On the Cauchy problem for a four-component Camassa-Holm type system. Discrete & Continuous Dynamical Systems, 2015, 35 (10) : 5153-5169. doi: 10.3934/dcds.2015.35.5153
Yongsheng Mi, Boling Guo, Chunlai Mu. On an $N$-Component Camassa-Holm equation with peakons. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1575-1601. doi: 10.3934/dcds.2017065
Zhichun Zhai. Well-posedness for two types of generalized Keller-Segel system of chemotaxis in critical Besov spaces. Communications on Pure & Applied Analysis, 2011, 10 (1) : 287-308. doi: 10.3934/cpaa.2011.10.287
Xinglong Wu, Boling Guo. Persistence properties and infinite propagation for the modified 2-component Camassa--Holm equation. Discrete & Continuous Dynamical Systems, 2013, 33 (7) : 3211-3223. doi: 10.3934/dcds.2013.33.3211
Wei Luo Zhaoyang Yin | CommonCrawl |
A $C^{0,1}$-functional Itô's formula and its applications in mathematical finance
Bruno Bouchard,Grégoire Loeper,Xiaolu Tan
Using Dupire's notion of vertical derivative, we provide a functional (path-dependent) extension of the Itô formula of Gozzi and Russo (2006) that applies to $C^{0,1}$-functions of continuous weak Dirichlet processes. It is motivated and illustrated by its applications to the hedging or superhedging problems of path-dependent options in mathematical finance, in particular in the case of model uncertainty.
A Reinforcement Learning Based Encoder-Decoder Framework for Learning Stock Trading Rules
Mehran Taghian,Ahmad Asadi,Reza Safabakhsh
A wide variety of deep reinforcement learning (DRL) models have recently been proposed to learn profitable investment strategies. The rules learned by these models outperform previous strategies, especially in high-frequency trading environments. However, it has been shown that the quality of the features extracted from a long sequence of raw instrument prices greatly affects the performance of the trading rules learned by these models. Employing a neural encoder-decoder structure to extract informative features from complex input time series has proved very effective in other popular tasks, such as neural machine translation and video captioning, in which models face a similar problem. The encoder-decoder framework extracts highly informative features from a long sequence of prices while learning how to generate outputs based on the extracted features. In this paper, a novel end-to-end model based on the neural encoder-decoder framework combined with DRL is proposed to learn single-instrument trading strategies from a long sequence of raw prices of the instrument. The proposed model consists of an encoder, a neural structure responsible for learning informative features from the input sequence, and a decoder, a DRL model responsible for learning profitable strategies based on the features extracted by the encoder. The parameters of the encoder and the decoder are learned jointly, which enables the encoder to extract features fitted to the task of the decoder DRL agent. In addition, the effects of different encoder structures and various forms of the input sequences on the performance of the learned strategies are investigated. Experimental results show that the proposed model outperforms other state-of-the-art models in highly dynamic environments.
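As a rough illustration of the architecture described above (not the authors' implementation), the following PyTorch sketch pairs a recurrent encoder over a raw price window with a small decoder head that scores discrete trading actions; the class name, layer sizes, and the three-action space are assumptions made for the example.

```python
import torch
import torch.nn as nn

class EncoderDecoderTrader(nn.Module):
    """Hypothetical sketch: a GRU encoder over a raw price window feeding a
    decoder head that scores three actions (long, neutral, short)."""

    def __init__(self, n_features: int = 1, hidden: int = 64, n_actions: int = 3):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),      # Q-value (or logit) per action
        )

    def forward(self, prices: torch.Tensor) -> torch.Tensor:
        # prices: (batch, window_length, n_features)
        _, h = self.encoder(prices)            # h: (1, batch, hidden)
        return self.decoder(h.squeeze(0))      # (batch, n_actions)

# toy forward pass on a random batch of 8 price windows of length 128
scores = EncoderDecoderTrader()(torch.randn(8, 128, 1))
greedy_action = scores.argmax(dim=1)
```

In a full pipeline the decoder head would be trained with a DRL objective (e.g., Q-learning) so that the encoder's parameters are updated jointly with the policy, which is the joint-learning point the abstract emphasizes.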
AdaVol: An Adaptive Recursive Volatility Prediction Method
Nicklas Werge,Olivier Wintenberger
Quasi-Maximum Likelihood (QML) procedures are theoretically appealing and widely used for statistical inference. While there are extensive references on QML estimation in batch settings, it has attracted little attention in streaming settings until recently. An investigation of the convergence properties of the QML procedure in a general conditionally heteroscedastic time series model is conducted, and the classical batch optimization routines are extended to the framework of streaming and large-scale problems. An adaptive recursive estimation routine for GARCH models named AdaVol is presented. The AdaVol procedure relies on stochastic approximations combined with the technique of Variance Targeting Estimation (VTE). This recursive method has computationally efficient properties, while VTE alleviates some convergence difficulties encountered by the usual QML estimation due to a lack of convexity. Empirical results demonstrate a favorable trade-off between AdaVol's stability and its ability to adapt to time-varying estimates for real-life data.
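For orientation, here is a minimal sketch of the variance-targeting idea in a GARCH(1,1) recursion, written as a simple filter rather than the streaming QML estimator the paper develops; the fixed (alpha, beta) values and the use of a static sample-variance target are assumptions of the example (a streaming method would update them online).

```python
import numpy as np

def vte_garch_filter(returns, alpha=0.08, beta=0.90):
    """Sketch only: GARCH(1,1) variance recursion with variance targeting,
    i.e. omega is tied to a long-run variance target so that only
    (alpha, beta) remain to be estimated or updated recursively."""
    r = np.asarray(returns, dtype=float)
    target = r.var()                          # long-run variance target (VTE)
    omega = target * (1.0 - alpha - beta)
    sigma2 = np.empty_like(r)
    sigma2[0] = target
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)                    # conditional volatility path

vol_path = vte_garch_filter(0.01 * np.random.standard_normal(1_000))
```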
Analyzing the response to TV serials retelecast during COVID19 lockdown in India
Sandeep Ranjan
TV serials are a popular source of entertainment. The ongoing COVID-19 lockdown has a high probability of degrading the public's mental health. The Government of India started the retelecast of yesteryears' popular TV serials on the public broadcaster Doordarshan from 28th March 2020 to 31st July 2020. Tweets corresponding to the Doordarshan hashtag were mined to create a dataset. The experiment aims to analyze the public's response to the retelecast of TV serials by calculating the sentiment score of the tweet dataset. The dataset's mean sentiment score of 0.65 and the high share (64.58%) of positive tweets signify acceptance of Doordarshan's retelecast decision. The sentiment analysis result also reflects the positive state of mind of the public.
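The abstract does not state which sentiment scorer was used, so the snippet below is only a plausible reconstruction of such a pipeline using NLTK's VADER analyzer; the example tweets and the "positive if the compound score is above zero" rule are assumptions.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)       # one-off lexicon download

tweets = [                                       # made-up example tweets
    "Loved watching the old serials on Doordarshan again!",
    "Nothing else to do during lockdown, at least DD is back.",
]

sia = SentimentIntensityAnalyzer()
scores = [sia.polarity_scores(t)["compound"] for t in tweets]   # each in [-1, 1]

mean_score = sum(scores) / len(scores)
positive_share = sum(s > 0 for s in scores) / len(scores)
print(mean_score, positive_share)
```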
Are Managers Susceptible to the Ostrich Effect?
Bernard, Darren,Cade, Nicole L.,Connors, Elizabeth
Using data from an information provider in the cannabis industry, we observe that managers of retail dispensaries appear to suffer from the "ostrich effect" -- the selective acquisition of news based on an expectation of the likely hedonic response (e.g., avoiding bad news to avoid psychological discomfort). Managers are more likely to acquire store and product performance information as its expected valence (i.e., its "goodness" versus "badness") increases and revisit this information more as its actual valence increases. These relations are attenuated when managers can more easily attribute the performance to external factors, suggesting managers intuitively acquire good news they can take credit for and avoid bad news they must internalize. Managers' information acquisition decisions also appear to have real effects: future product stock-outs are greater when managers avoid the information. Our results suggest that hedonic effects of information influence key information acquisition choices of managers.
Asset Prices When Investors Ignore Discount Rate Dynamics
Wang, Renxuan
Commonly used valuation methods assume the discount rate will remain constant, yet studies have demonstrated that discount rates are dynamic. In this paper, I propose and test a "Constant Discount Rate" hypothesis: some investors ignore the volatility of discount rates when forming return expectations. In theory, such a heuristic should cause investors to develop positive biases in their expectations of returns, and their biases should be stronger for stocks with higher potential cash flow growth and/or uncertainty, which leads them to buy more of these stocks. At the same time, risk-averse arbitrageurs know these stocks are more sensitive to aggregate discount rate shocks, so mispricing persists. To test this hypothesis, I measure mispricing at the firm level and find that overvalued stocks exhibit large and long-lasting negative abnormal returns, even among stocks in the S&P 500. A tradable mispricing factor explains the CAPM alphas of 12 leading anomalies (9 out of the 11 in Stambaugh and Yuan (2017)), including investment, profitability, beta, idiosyncratic volatility and cash flow duration. The empirical relationship between the cross-section of fundamental characteristics and investors' subjective beliefs is also consistent with the hypothesis.
Bayesian Consensus: Consensus Estimates from Miscalibrated Instruments under Heteroscedastic Noise
Chirag Nagpal,Robert E. Tillman,Prashant Reddy,Manuela Veloso
We consider the problem of aggregating predictions or measurements from a set of human forecasters, models, sensors or other instruments which may be subject to bias or miscalibration and random heteroscedastic noise. We propose a Bayesian consensus estimator that adjusts for miscalibration and noise and show that this estimator is unbiased and asymptotically more efficient than naive alternatives. We further propose a Hierarchical Bayesian Model that leverages our proposed estimator and apply it to two real-world forecasting challenges that require consensus estimates from error-prone individual estimates: forecasting influenza-like illness (ILI) weekly percentages and forecasting annual earnings of public companies. We demonstrate that our approach is effective at mitigating bias and error and results in more accurate forecasts than existing consensus models.
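The estimator below is not the paper's hierarchical model; it is only the textbook precision-weighted aggregation after a bias correction, which conveys the basic intuition of adjusting for miscalibration and heteroscedastic noise. The bias and noise-variance inputs are assumed known here, whereas the paper learns them from data.

```python
import numpy as np

def debiased_precision_weighted(forecasts, biases, noise_vars):
    """Sketch: subtract each instrument's (assumed known) bias, then combine
    with inverse-variance weights -- the classical efficient aggregation
    under independent heteroscedastic Gaussian noise."""
    f = np.asarray(forecasts, dtype=float) - np.asarray(biases, dtype=float)
    w = 1.0 / np.asarray(noise_vars, dtype=float)
    return float(np.sum(w * f) / np.sum(w))

estimate = debiased_precision_weighted(
    forecasts=[2.1, 1.4, 1.9], biases=[0.3, -0.2, 0.0], noise_vars=[0.5, 0.1, 0.2]
)
```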
Challenging Practical Features of Bitcoin by the Main Altcoins
Andrew Spurr,Marcel Ausloos
We study the fundamental differences that separate Litecoin, Bitcoin Gold, Bitcoin Cash, Ethereum, and Zcash from Bitcoin, and analyze how these features are appreciated by the market, to ultimately make an inference as to how future successful cryptocurrencies may behave. We use Google Trends data, as well as price, volume and market capitalization data sourced from coinmarketcap.com, to support this analysis. We find that Litecoin's shorter block times offer benefits in commerce, but drawbacks in the mining process through orphaned blocks. Zcash holds a niche use for anonymous transactions, benefitting areas of the world lacking in economic freedom. Bitcoin Cash suffers from centralization in the mining process, while the greater decentralization of Bitcoin Gold has generally left it to stagnate. Ether's greater functionality poses the greatest threat to Bitcoin's dominance in the market. A coin that incorporates several of these features can be technically better than Bitcoin, but the first-to-market advantage of Bitcoin should keep its dominant position in the market.
Closed form optimal exercise boundary of the American put option
Yerkin Kitapbayev
We present three models of the stock price with time-dependent interest rate, dividend yield, and volatility, respectively, that allow for explicit forms of the optimal exercise boundary of the finite-maturity American put option. The optimal exercise boundary satisfies a nonlinear integral equation of Volterra type. We choose the time-dependent parameters of the model so that the integral equation for the exercise boundary can be solved in closed form. We also define contracts of put type with time-dependent strike price that support the explicit optimal exercise boundary.
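For readers unfamiliar with such boundary equations, a standard constant-parameter version (the Kim-type early-exercise-premium representation) is shown below for orientation only; the paper's equations allow time-dependent r, q and sigma and are chosen so that equations of this type admit closed-form solutions.

```latex
% A standard constant-parameter Volterra equation for the put boundary B(t),
% shown only for orientation (not the paper's exact time-dependent version):
\[
  K - B(t) = p\bigl(t, B(t)\bigr)
  + \int_t^T \Bigl[\, r K e^{-r(u-t)}\, \Phi\bigl(-d_2(B(t), B(u), u-t)\bigr)
                   - q B(t) e^{-q(u-t)}\, \Phi\bigl(-d_1(B(t), B(u), u-t)\bigr) \Bigr]\, du,
\]
\[
  d_{1,2}(x, y, \tau) = \frac{\ln(x/y) + (r - q \pm \sigma^2/2)\,\tau}{\sigma \sqrt{\tau}},
\]
% where p denotes the European put price and \Phi the standard normal cdf.
```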
Comparison of the effects of investor attention using search volume data before and after mobile device popularization
Jonghyeon Min
In this study, we examine investor attention measurement using the Search Volume Index (SVI) in the recent market. Since 2009, the popularity of mobile devices and the spread of the Internet have increased the speed of information delivery, and the volume of search data generated by investors seeking investment information has grown dramatically. In these circumstances, investor attention can be measured with search volume data more accurately and more quickly than before the popularization of mobile devices. To confirm this, we compare the effect of measuring investor attention using search volume data before and after mobile device popularization. In addition, we confirm that the attention measured in this way is that of retail traders rather than institutional or professional traders, and we examine the relationship between investor attention and the short-term price pressure theory. Using SVI data provided by Google Trends, we run experiments on Russell 3000 stocks and IPO stocks and compare the results. We further investigate investor attention from various angles: comparing results with and without a noise-ticker group, examining the limitations of existing investor attention measures, and comparing explanatory variables with those of existing IPO-related studies, in order to verify the practicality and significance of the measure.
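A common construction in this literature (the "abnormal SVI" of Da, Engelberg and Gao) is sketched below with pandas; the 8-week rolling-median window is an assumption, and the paper's own attention measure may differ.

```python
import numpy as np
import pandas as pd

def abnormal_svi(weekly_svi, window=8):
    """Sketch: log of this week's SVI minus log of the median SVI over the
    prior `window` weeks -- a standard 'abnormal attention' proxy."""
    svi = pd.Series(weekly_svi, dtype=float)
    prior_median = svi.shift(1).rolling(window).median()
    return np.log(svi) - np.log(prior_median)

asvi = abnormal_svi([40, 42, 38, 45, 41, 39, 44, 43, 80, 52])
print(asvi.round(3).tolist())    # the spike to 80 shows up as elevated attention
```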
Containment efficiency and control strategies for the Corona pandemic costs
Claudius Gros,Roser Valenti,Lukas Schneider,Kilian Valenti,Daniel Gros
The rapid spread of the Coronavirus (COVID-19) confronts policy makers with the problem of measuring the effectiveness of containment strategies, balancing public health considerations with the economic costs of social distancing measures. We introduce a modified epidemic model that we name the controlled-SIR model, in which the disease reproduction rate evolves dynamically in response to political and societal reactions. An analytic solution is presented. The model reproduces official COVID-19 case counts of a large number of regions and countries that surpassed the first peak of the outbreak. A single unbiased feedback parameter is extracted from field data and used to formulate an index that measures the efficiency of containment strategies (the CEI index). CEI values for a range of countries are given. For two variants of the controlled-SIR model, detailed estimates of the total medical and socio-economic costs are evaluated over the entire course of the epidemic. Costs comprise the cost of medical care, the economic cost of social distancing, and the economic value of lives saved. Under plausible parameters, strict measures fare better than a hands-off policy. Strategies based on current case numbers lead to substantially higher total costs than strategies based on the overall history of the epidemic.
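The exact feedback law of the controlled-SIR model is specified in the paper; the discrete-time sketch below only illustrates the general idea of a reproduction rate that falls as cumulative case counts grow, with all parameter values chosen arbitrarily.

```python
import numpy as np

def feedback_sir(beta0=0.3, gamma=0.1, alpha=50.0, days=300, i0=1e-4):
    """Sketch of an SIR model whose contact rate is damped by the cumulative
    fraction of cases X (a stand-in for societal/political reactions)."""
    S, I = 1.0 - i0, i0
    active = []
    for _ in range(days):
        X = 1.0 - S                              # cumulative infected fraction
        beta = beta0 / (1.0 + alpha * X)         # feedback-reduced contact rate
        new_inf = beta * S * I
        S -= new_inf
        I += new_inf - gamma * I
        active.append(I)
    return np.array(active)

peak_day = int(np.argmax(feedback_sir()))        # day of the epidemic peak
```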
Credit Cards and the Floating Rate Channel of Monetary Policy
Grodzicki, Daniel
I quantify the impact of Federal Funds Rate (FFR) movements on consumers' welfare via the floating, or variable, rate on their credit cards. I first newly document that 96% of card rates adjust to the FFR within 3 months of a change in the latter. Exploiting these rate changes, I construct a model of card use and estimate it using a large national database of U.S. card accounts. The model estimates imply that a hypothetical 25 bp rise in the FFR lowers annual consumers' surplus by 0.24% of personal consumption expenditures ($33.4 billion), and disproportionately more so in lower income areas.
Deep Learning-Based Exchange Rate Prediction during the COVID-19
Akhtaruzzaman, Md,Hasan Moon, Mahmudul,Hammami, Helmi,Abedin, Mohammad Zoynul
We propose two ensemble deep learning approaches, Bagging Ridge regression (BR) and Bi-LSTM Bagging Ridge (Bi-LSTM BR), to predict the exchange rates of 21 currencies against the USD during COVID-19 and non-COVID-19 periods. We also apply machine learning algorithms, such as Decision Tree (DT), Support Vector Regression (SVR) and Random Forest Regression (RFR), and deep learning algorithms, such as Long Short-Term Memory (LSTM) and Bi-directional Long Short-Term Memory (Bi-LSTM), for the prediction. Our proposed ensemble deep learning approaches perform well in forecasting exchange rates. However, the performance of the algorithms varies between the COVID-19 and non-COVID-19 periods and across currencies. Our study is beneficial for foreign exchange traders in terms of forecast performance and potential trading profitability.
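As a minimal sketch of the "Bagging Ridge" component (not the authors' full Bi-LSTM BR pipeline), the snippet below bags ridge regressions on lagged values of a synthetic exchange-rate series with scikit-learn; the lag structure, hyperparameters and data are assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
rate = 1.10 + np.cumsum(rng.normal(0.0, 0.005, 500))    # synthetic USD exchange rate

lags = 5
X = np.column_stack([rate[i : len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]

# bootstrap-aggregated ridge regressions ("Bagging Ridge")
model = BaggingRegressor(Ridge(alpha=1.0), n_estimators=25, random_state=0)
model.fit(X[:-50], y[:-50])
forecasts = model.predict(X[-50:])                       # pseudo out-of-sample forecasts
```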
Deep Reinforcement Learning with Function Properties in Mean Reversion Strategies
Sophia Gu
With the recent advancement of Deep Reinforcement Learning in the gaming industry, we are curious whether the same technology would work as well for common quantitative financial problems. In this paper, we investigate whether an off-the-shelf library developed by OpenAI can be easily adapted to a mean reversion strategy. Moreover, we design and test whether we can obtain better performance by narrowing the function space that the agent needs to search. We achieve this by augmenting the reward function with a carefully picked penalty term.
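The abstract does not spell out the penalty term; the wrapper below merely illustrates the mechanics of augmenting an environment's reward with a penalty via a Gym RewardWrapper. The `position` attribute and the absolute-position penalty are hypothetical and not taken from the paper.

```python
import gym

class PenalizedReward(gym.RewardWrapper):
    """Sketch: subtract a penalty from the environment's reward so the agent
    is steered away from part of the state/action space (here, large
    positions of a hypothetical trading environment with a `position`
    attribute)."""

    def __init__(self, env, penalty_coef=0.01):
        super().__init__(env)
        self.penalty_coef = penalty_coef

    def reward(self, reward):
        position = getattr(self.env, "position", 0.0)   # hypothetical attribute
        return reward - self.penalty_coef * abs(position)
```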
Deep reinforcement learning for portfolio management based on the empirical study of chinese stock market
Gang Huang,Xiaohua Zhou,Qingyang Song
The objective of this paper is to verify that current cutting-edge artificial intelligence technology, deep reinforcement learning, can be applied to portfolio management. We improve on the existing Deep Reinforcement Learning Portfolio model and introduce several innovations. Unlike many previous studies on discrete trading signals in portfolio management, we allow the agent to short in a continuous action space, design an arbitrage mechanism based on Arbitrage Pricing Theory, and redesign the activation function for producing action vectors; in addition, we redesign the neural networks for reinforcement learning with reference to deep neural networks that process image data. In experiments, we use our model on several randomly selected portfolios, which include the CSI300 index, representing the market's rate of return, and randomly selected constituents of the CSI500. The experimental results show that, regardless of which stocks we select for our portfolios, we can almost always obtain a higher return than the market itself. That is to say, we can beat the market by using deep reinforcement learning.
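One simple way to turn unconstrained network outputs into an action vector of long/short portfolio weights is shown below; this is an illustrative design with unit gross exposure, not necessarily the activation function proposed by the authors.

```python
import numpy as np

def long_short_weights(raw_scores):
    """Sketch: squash raw scores, then normalise by gross exposure so that
    the absolute weights sum to one while negative entries represent shorts."""
    squashed = np.tanh(np.asarray(raw_scores, dtype=float))
    gross = np.abs(squashed).sum() + 1e-12
    return squashed / gross

weights = long_short_weights([0.7, -1.3, 0.2, 2.0])   # negative weight = short position
```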
Double-Robust Identification for Causal Panel Data Models
Dmitry Arkhangelsky,Guido W. Imbens
We study identification and estimation of causal effects in settings with panel data. Traditionally researchers follow model-based identification strategies relying on assumptions governing the relation between the potential outcomes and the unobserved confounders. We focus on a novel, complementary, approach to identification where assumptions are made about the relation between the treatment assignment and the unobserved confounders. We introduce different sets of assumptions that follow the two paths to identification, and develop a double robust approach. We propose estimation methods that build on these identification strategies.
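As background for the double-robustness idea (not the paper's panel-data estimator), the snippet below implements the classical cross-sectional augmented inverse-probability-weighting (AIPW) estimator of an average treatment effect on simulated data; the data-generating process and model choices are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, T, Y):
    """Sketch: doubly robust (AIPW) ATE estimate, consistent if either the
    propensity model or the outcome model is correctly specified."""
    ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
    mu1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
    mu0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)
    return np.mean(mu1 - mu0 + T * (Y - mu1) / ps - (1 - T) * (Y - mu0) / (1 - ps))

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
T = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))      # confounded treatment
Y = 2.0 * T + X @ np.array([1.0, 0.5, -0.3]) + rng.normal(size=500)
print(aipw_ate(X, T, Y))                                  # should be near the true effect of 2
```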
Dynamics, behaviours, and anomaly persistence in cryptocurrencies and equities surrounding COVID-19
Nick James
This paper uses new and recently introduced methodologies to study the similarity in the dynamics and behaviours of cryptocurrencies and equities surrounding the COVID-19 pandemic. We study two collections: 45 cryptocurrencies and 72 equities, both independently and in conjunction. First, we examine the evolution of cryptocurrency and equity market dynamics, with a particular focus on their change during the COVID-19 pandemic. We demonstrate markedly more similar dynamics during times of crisis. Next, we apply recently introduced methods to contrast trajectories, erratic behaviours, and extreme values among the two multivariate time series. Finally, we introduce a new framework for determining the persistence of market anomalies over time. Surprisingly, we find that although cryptocurrencies exhibit stronger collective dynamics and correlation in all market conditions, equities behave more similarly in their trajectories and extremes, and show greater persistence in anomalies over time.
Early-life Income Shocks and Old-Age Cause-Specific Mortality
Hamid NoghaniBehambari,Farzaneh Noghani,Nahid Tavassoli
This paper investigates the causal relationship between income shocks during the first years of life and adulthood mortality due to specific causes of death. Using all death records in the United States during 1968-2004 for individuals who were born in the first half of the 20th century, we document a sizable and statistically significant association between income shocks early in life, proxied by GDP per capita fluctuations, and old age cause-specific mortality. Conditional on individual characteristics and controlling for a broad array of current and early-life conditions, we find that a 1 percent decrease in the aggregate business cycle in the year of birth is associated with 2.2, 2.3, 3.1, 3.7, 0.9, and 2.1 percent increase in the likelihood of mortality in old ages due to malignant neoplasms, Diabetes Mellitus, cardiovascular diseases, Influenza, chronic respiratory diseases, and all other diseases, respectively.
Enhancing Stock Market Anomalies with Machine Learning
Azevedo, Vitor,Hoegner, Christopher
We examine the predictability of 299 capital market anomalies enhanced by 30 machine learning approaches and over 250 models in a dataset with more than 500 million firm-month-anomaly observations. We find significant monthly (out-of-sample) returns of around 1.8-2.0%, and over 80% of the models yield returns equal to or larger than our linearly constructed baseline factor. The risk-adjusted returns are significant across alternative asset pricing models, considering transaction costs with round-trip costs of up to 2% and including only anomalies after publication. Our results indicate that non-linear models can reveal market inefficiencies (mispricing) that are hard to reconcile with risk-based explanations.
Explicit solution simulation method for the 3/2 model
Iro René Kouarfate,Michael A. Kouritzin,Anne MacKay
An explicit weak solution for the 3/2 stochastic volatility model is obtained and used to develop a simulation algorithm for option pricing purposes. The 3/2 model is a non-affine stochastic volatility model whose variance process is the inverse of a CIR process. This property is exploited here to obtain an explicit weak solution, similarly to Kouritzin (2018). A simulation algorithm based on this solution is proposed and tested via numerical examples. The performance of the resulting pricing algorithm is comparable to that of other popular simulation algorithms.
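The explicit weak solution itself is the paper's contribution; for orientation, the sketch below only simulates the 3/2 variance as the inverse of a CIR process with a naive full-truncation Euler scheme, using the usual parameterisation dV = kappa*V*(theta - V) dt + eps*V^{3/2} dW (so that X = 1/V is a CIR process by Itô's formula). All numerical parameter values are arbitrary.

```python
import numpy as np

def three_halves_terminal_variance(v0=0.04, kappa=2.0, theta=0.04, eps=0.3,
                                   T=1.0, n_steps=250, n_paths=10_000, seed=0):
    """Sketch: simulate X = 1/V, which satisfies the CIR-type SDE
    dX = (kappa + eps^2 - kappa*theta*X) dt - eps*sqrt(X) dW,
    via a full-truncation Euler scheme, and return V_T = 1/X_T."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, 1.0 / v0)
    for _ in range(n_steps):
        xp = np.maximum(x, 0.0)                  # full truncation
        z = rng.standard_normal(n_paths)
        x = (x + (kappa + eps**2 - kappa * theta * xp) * dt
             - eps * np.sqrt(xp * dt) * z)
    return 1.0 / np.maximum(x, 1e-8)

vT = three_halves_terminal_variance()            # terminal variance samples
```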
Exploring the association between R&D expenditure and the job quality in the European Union
Fernando Almeida,Nelson Amoedo
Investment in research and development is a key factor in increasing countries' competitiveness. However, its impact can potentially be broader and include other socially relevant elements like job quality. In effect, the quantity of jobs generated is an incomplete indicator, since it does not allow conclusions about the quality of the jobs generated. In this sense, this paper explores the relevance of R&D investments for job quality in the European Union between 2009 and 2018. For this purpose, we investigate the effects of R&D expenditures made by the business sector, government, and higher education sector on three dimensions of job quality. Three research methods are employed: univariate linear analysis, multiple linear analysis, and cluster analysis. The findings only confirm the association between R&D expenditure and the number of hours worked, such that the European Union countries with the highest R&D expenses are those with the lowest average weekly working hours.
Functional False Discovery Rate in Mutual Fund Performance
Ma, Tren,Kyriakou, Ioannis,Sermpinis, Georgios
We introduce a novel multiple hypothesis testing framework for selecting outperforming mutual funds with control of luck, called the functional False Discovery Rate "plus". We show that our method, which incorporates informative covariates to control the false discovery rate, gains considerable power over the False Discovery Rate "plus" of Barras, Scaillet and Wermers. We test the method with five covariates that commonly affect mutual fund performance by constructing portfolios that generate positive alphas and consistently beat portfolios based on sorting on covariates or on the False Discovery Rate "plus". Our results confirm the informative power of the five covariates and demonstrate, for the first time in the literature, their economic value in mutual fund selection after controlling for luck. Finally, by applying a set of linear combinations and shrinkage regressions, we achieve superior trading performance.
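As a baseline for what the "functional" FDR procedure refines, here is the classical Benjamini-Hochberg step-up rule; the paper's method additionally uses fund-level covariates to weight the tests, which this sketch does not do, and the p-values below are made up.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.10):
    """Sketch: classical BH step-up rule controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = (np.nonzero(passed)[0].max() + 1) if passed.any() else 0
    selected = np.zeros(m, dtype=bool)
    selected[order[:k]] = True        # reject the k smallest p-values
    return selected

print(benjamini_hochberg([0.001, 0.04, 0.20, 0.80, 0.03]))
```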
Going Green by Putting a Price on Pollution: Firm-level Evidence from the EU
De Jonghe, Olivier,Mulier, Klaas,Schepens, Glenn
This paper shows that, when the price of emission allowances is sufficiently high, emission trading schemes improve the emission efficiency of highly polluting firms. The efficiency gain comes from a relative decrease in emissions rather than a relative increase in operating revenue. Part of the improvement is realized via the acquisition of green firms. The size of the improvement depends on the initial allocation of free emission allowances: highly polluting firms receiving more emission allowances for free, such as firms on the carbon leakage list, have a weaker incentive to become more efficient. For identification, we exploit the tightening in EU ETS regulation in 2017, which led to a steep price increase of emission allowances and made the ETS regulation more binding for polluting firms.
Lazy Momentum with Growth-Trend timing: Resilient Asset Allocation (RAA)
Keller, Wouter J.
Resilient Asset Allocation (RAA) is a more aggressive version of our Lethargic Asset Allocation (LAA) strategy. It combines a more robust "All Weather" portfolio with lazy growth-trend (GT) timing, canary crash-protection and breadth momentum. GT timing goes risk-off only when both the US unemployment (UE) and the US capital markets are bearish. To arrive at RAA, we adapt LAA in three steps. First, the (risky, near-static) portfolio is changed to an even more robust and more diversified "all-weather" portfolio, now with five (instead of four) equal weighted assets and with only bonds as risk-off assets ("cash"). Second, the "canary" technology from our DAA paper is used for determining the market trend with a faster filter. Third, we change the unemployment trend filter to a slower one, where we simply compare the recent unemployment level with that of one year ago. As a result, RAA is more aggressive and more robust than LAA, while at the same time nearly as "lazy" with respect to trading and turnover (on average one trading month per year).
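A bare-bones version of the growth-trend (GT) timing signal described above can be written in a few lines of pandas; the 12-month unemployment comparison follows the abstract, while the 10-month moving-average market filter is only a stand-in for RAA's canary/breadth-momentum rule, and monthly input series are assumed.

```python
import pandas as pd

def gt_risk_off(unemployment, index_prices, sma_months=10):
    """Sketch: go risk-off only when BOTH the unemployment trend and the
    market trend are bearish (monthly series assumed)."""
    ue = pd.Series(unemployment, dtype=float)
    px = pd.Series(index_prices, dtype=float)
    ue_bearish = ue > ue.shift(12)                      # worse than a year ago
    mkt_bearish = px < px.rolling(sma_months).mean()    # below its moving average
    return ue_bearish & mkt_bearish                     # True -> hold the risk-off assets
```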
Mean-Variance Investment and Risk Control Strategies -- A Time-Consistent Approach via A Forward Auxiliary Process
Yang Shen,Bin Zou
We consider an optimal investment and risk control problem for an insurer under the mean-variance (MV) criterion. By introducing a deterministic auxiliary process defined forward in time, we formulate an alternative time-consistent problem related to the original MV problem, and obtain the optimal strategy and the value function to the new problem in closed-form. We compare our formulation and optimal strategy to those under the precommitment and game-theoretic framework. Numerical studies show that, when the financial market is negatively correlated with the risk process, optimal investment may involve short selling the risky asset and, if that happens, a less risk averse insurer short sells more risky asset.
Mechanistic Framework of Global Value Chains
Sourish Dutta
Global production, as a system of creating value, is gradually forming a gigantic and complex network of value chains that explains the changing structure of global trade and the development of the global economy. It is truly a new wave of globalisation, which we term global value chains (GVCs), creating a nexus among firms, workers and consumers around the globe. The emergence of this new scenario raises two questions: how do an economy's firms, producers and workers connect to the global economy, and how do they capture the gains from it along different dimensions of economic development? The GVC approach is crucial for understanding the organisation of global industries and firms. It requires attention to the statics and dynamics of the diverse players involved in this complex global production network. Its broad notion deals with different global issues (including regional value chains) from the top down to the bottom up, providing scope for policy analysis (Gereffi & Fernandez-Stark 2011). It is true, however, that, as Feenstra (1998) points out, no single computational framework is sufficient to quantify this whole range of economic activities. We should adopt an integrative framework for an accurate projection of this dynamic, multidimensional phenomenon.
Nutritional deficiency and infants health outcomes
Hossein Shahri
Previous studies show that prenatal shocks to embryos could have adverse impacts on health endowment at birth. Using the universe of birth data and a difference-in-difference-in-difference strategy, I find that exposure to Ramadan during prenatal development is associated with adverse birth outcomes. Exposure to a full month of fasting is associated with a 96-gram lower birth weight. These results are robust across specifications and do not appear to be driven by mothers' selective fertility.
On the RND under Heston's stochastic volatility model
Ben Boukai
We consider Heston's (1993) stochastic volatility model for the valuation of European options, for which (semi-)closed form solutions are available and are given in terms of characteristic functions. We prove that the class of scale-parameter distributions with mean equal to the forward spot price satisfies Heston's solution. Thus, we show that any member of this class can be used for the direct risk-neutral valuation of the option price under Heston's SV model. In fact, we also show that any RND with mean equal to the forward spot price that satisfies Heston's option valuation solution must be a member of a scale family of distributions in that mean. As particular examples, we show that one-parameter versions of the Log-Normal, Inverse-Gaussian, Gamma, Weibull and Inverse-Weibull distributions are all members of this class and thus provide explicit risk-neutral densities (RNDs) for Heston's pricing model. We demonstrate, via exact calculations and Monte Carlo simulations, the applicability and suitability of these explicit RNDs using already published index data with a calibrated Heston model (S&P 500, Bakshi, Cao and Chen (1997), and ODAX, Mrázek and Pospíšil (2017)), as well as current option market data (AMD).
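To make the scale-family idea concrete, the snippet below prices a European call under the simplest member of the class, a one-parameter lognormal RND whose mean equals the forward price, in which case the discounted expectation reduces to the Black (1976) formula. Here `s` denotes the total (not annualized) volatility parameter over the option's life, and the numerical inputs are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def call_under_lognormal_rnd(F, K, T, r, s):
    """Sketch: discounted expectation of (S_T - K)^+ under a lognormal RND
    with mean F; with this normalisation the price is the Black-76 formula."""
    d1 = (np.log(F / K) + 0.5 * s**2) / s
    d2 = d1 - s
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

price = call_under_lognormal_rnd(F=100.0, K=95.0, T=0.5, r=0.02, s=0.20)
```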
Political Sentiment and Syndicated Loan Borrowing Costs of Multinational Enterprises
Karavitis, Panagiotis,Kazakis, Pantelis
International business literature widely recognizes that political forces play a crucial role in modern corporations. Yet, rare are the studies of how foreign operations mitigate the detrimental effect that firm-level political exposure has on the cost of lending. We study such channels in a sample of U.S. corporations with foreign subsidiaries in 69 countries. We proxy firm-level political exposure via political sentiment. We show that firms with lower political sentiment (i.e., higher political exposure) have a higher cost of lending. We document that multinational enterprises with a presence in many countries, and those having an extended network of foreign subsidiaries can lower the harmful effects of increased political uncertainty. This outcome also holds in the presence of foreign economies of scale, and when multinational corporations have foreign subsidiaries in countries with higher political polarization.
Pricing the COVID-19 Vaccine: A Mathematical Approach
Susan Martonosi,Banafsheh Behzad,Kayla Cummings
According to the World Health Organization, development of the COVID-19 vaccine is occurring in record time. Administration of the vaccine has started the same year as the declaration of the COVID-19 pandemic. The United Nations emphasized the importance of providing COVID-19 vaccines as "a global public good", which is accessible and affordable world-wide. Pricing the COVID-19 vaccines is a controversial topic. We use optimization and game theoretic approaches to model the COVID-19 U.S. vaccine market as a duopoly with two manufacturers Pfizer-BioNTech and Moderna. The results suggest that even in the context of very high production and distribution costs, the government can negotiate prices with the manufacturers to keep public sector prices as low as possible while meeting demand and ensuring each manufacturer earns a target profit. Furthermore, these prices are consistent with those currently predicted in the media.
Quantum Technology for Economists
Isaiah Hull,Or Sattath,Eleni Diamanti,Göran Wendin
Research on quantum technology spans multiple disciplines: physics, computer science, engineering, and mathematics. The objective of this manuscript is to provide an accessible introduction to this emerging field for economists that is centered around quantum computing and quantum money. We proceed in three steps. First, we discuss basic concepts in quantum computing and quantum communication, assuming knowledge of linear algebra and statistics, but not of computer science or physics. This covers fundamental topics, such as qubits, superposition, entanglement, quantum circuits, oracles, and the no-cloning theorem. Second, we provide an overview of quantum money, an early invention of the quantum communication literature that has recently been partially implemented in an experimental setting. One form of quantum money offers the privacy and anonymity of physical cash, the option to transact without the involvement of a third party, and the efficiency and convenience of a debit card payment. Such features cannot be achieved in combination with any other form of money. Finally, we review all existing quantum speedups that have been identified for algorithms used to solve and estimate economic models. This includes function approximation, linear systems analysis, Monte Carlo simulation, matrix inversion, principal component analysis, linear regression, interpolation, numerical differentiation, and true random number generation. We also discuss the difficulty of achieving quantum speedups and comment on common misconceptions about what is achievable with quantum computing.
Quantum credit loans
Ardenghi Juan Sebastian
Quantum models based on the mathematics of quantum mechanics (QM) have been developed in the cognitive sciences, game theory and econophysics. In this work a generalization of credit loans is introduced by using the vector space formalism of QM. Operators for the debt, amortization, interest and periodic installments are defined, and their mean values in an arbitrary orthonormal basis of the vector space give the corresponding values at each period of the loan. Endowing the vector space of dimension M, where M is the loan duration, with an SO(M) symmetry, it is possible to rotate the eigenbasis to obtain better scheduled periodic payments for the borrower, by using the rotation angles of the SO(M) transformation. Given that a rotation preserves the length of the vectors, the total amortization, debt and periodic installments are not changed. For a general description of the formalism introduced, the loan operator relations are given in terms of a generalized Heisenberg algebra, where finite-dimensional representations are considered and commutative operators are defined for the specific loan types. The results obtained are an improvement on the usual financial instrument of credit because they introduce several degrees of freedom through the rotation angles, which allows the selection of superposition states of the corresponding commutative operators and enables the borrower to tune the periodic installments in order to obtain better benefits without changing what the lender earns.
Recent Developments in Real Estate Investment Trusts in Spain
Vaquero, Victor G.,Roibas, Irene
Real estate investment trusts in Spain (SOCIMIs by their Spanish abbreviation) are instruments for investing in real estate assets which were regulated for the first time in Spain in 2009. They have grown rapidly in recent years to reach a relative size, approximated by their stock market capitalisation in terms of GDP, which is above the average for this type of companies in the euro area as a whole. In Spain this sector is highly concentrated, since a few, large vehicles exist alongside a sizeable group of small companies. SOCIMIs listed in regulated markets and those listed in alternative markets are notably different in terms of their size, balance sheet composition and ownership structure. The low exposure of Spanish SOCIMIs to the residential real estate segment, although it has risen in recent years, is worth noting, as is the high proportion of their capital owned by non-resident investors.
Relative Arbitrage Opportunities in $N$ Investors and Mean-Field Regimes
Tomoyuki Ichiba,Tianjiao Yang
The relative arbitrage portfolio, formulated in Stochastic Portfolio Theory (SPT), outperforms a benchmark portfolio over a given time horizon with probability one. This paper analyzes the market behavior and optimal investment strategies to attain relative arbitrage in both the $N$-investor and mean-field regimes under some market conditions. An investor competes with a benchmark of the market and peer investors, expecting to outperform the benchmark while minimizing the initial capital.
With market price of risk processes depending on the market portfolio and the investors, we develop a systematic way to solve the multi-agent optimization problem within SPT's framework. The objective can be characterized by the smallest nonnegative continuous solution of a Cauchy problem. By modifying the structure of the extended mean field game with common noise and its notion of uniqueness of the Nash equilibrium, we show a unique equilibrium in $N$-player games and mean field games under mild conditions on the equity market.
Research on the Competitive Consequences of Common Ownership: A Methodological Critique
Azar, José,Schmalz, Martin C.,Tecu, Isabel
This note argues that the evidence presented in several critiques of Azar, Schmalz, and Tecu's "airlines" paper often does not back the conclusion these studies draw. Specifically, widely circulated studies claiming that there are no anticompetitive effects of common ownership, or that there is no evidence of it, either do not attempt to refute AST's findings of anticompetitive effects in the U.S. airlines industry or in fact confirm the evidence by AST and even dispel valid concerns about AST's methodology. Focusing on Kennedy, O'Brien, Song, and Waehrer (KOSW), we note that their panel regressions using market-share-free indices of common ownership concentration confirm the positive correlation between common ownership concentration and price, which AST showed with a measure containing potentially endogenous market shares. We then examine the alternative empirical methods KOSW propose: (i) their conclusion that estimates from a structural model show no evidence of anticompetitive effects is based on an estimation that discards 90% of the available data and therefore, at best, is only valid for that subsample; (ii) their structural model makes no economic sense because it produces a negative effect of route distance on marginal cost; and (iii) they construct an alternative version of the widely used BlackRock-BGI instrument that is arguably invalid. Even absent these methodological concerns, KOSW's structural estimates are so noisy that they do not in fact reject the hypothesis that common ownership concentration has a positive effect on prices. A more recent structural paper by Park and Seo has shown these concerns to be well-founded: using a different and larger subsample of AST's data and more standard estimation methods compared to KOSW, they estimate a positive effect of common ownership on prices, as well as a positive effect of route distance on cost. A lesson for future research -- and for readers of the literature -- is to critically evaluate the conclusions drawn by studies in this field, including those that advertise themselves as providing evidence against the existence of anticompetitive effects of common ownership.
Risk-taking in Impact Investing: The Role of Gender and Experience
Alemany, Luisa,Scarlata, Mariarosa,Zacharakis, Andrew
Relying on gender role congruity theory, this paper investigates the relationship between the gender composition of the top management team of venture philanthropy (VP) firms and their risk-taking orientation. Our research also assesses if and how experience moderates this relationship. Using a combination of survey data to capture the VP firm's risk orientation and biographical data to identify managers' gender and experience, we find that only gender affects the risk-taking orientation in these firms. Yet, this effect is in the opposite direction to what was theorized: teams with a higher proportion of women have a higher risk-taking profile. This suggests the existence of a gender bind dilemma in VP.
Shareholder Monitoring and Discretionary Disclosure
Nagar, Venky,Schoenfeld, Jordan
Theories of delegated monitoring predict that when public disclosure is costly, monitoring by a large investor leads management to supply more private information to that investor, and less public disclosure to other similarly aligned investors who free-ride off the monitor. We test this prediction in the setting where large shareholders contractually bind management to share private information. We find that after the execution of such contracts, firms improve their performance and reduce their public disclosures. The large shareholders in our setting do not trade on their private information, so information asymmetry among trading shareholders, as proxied for by bid-ask spreads, does not change after the disclosure reductions. Overall, our evidence supports the disclosure prediction of delegated monitoring theories, and is inconsistent with performance, expropriation, and trading-based theories of disclosure.
Subjective Return Expectations
How do people form return expectations? Existing studies find overwhelming evidence of people extrapolating from past returns, but remain silent on why people extrapolate. I first document that return expectations can in fact be contrarian: sell-side analysts hold strongly volatile and contrarian return expectations and become more contrarian as they gain experience. Moreover, sell-side analysts' aggregate expectations are positively related to those of buy-side analysts but negatively related to those of CFOs, retail investors and price-to-fundamental ratios. Second, I propose an expectation formation model to explain why people hold heterogeneous expectation dynamics, and show that the model is also compatible with evidence on return predictability. In the model, different forecasters acknowledge the imperfections of return predictors and minimize their own subjective forecast errors. Since not all parameters in the objective predictive system are identifiable, different forecasters rationally learn from past returns, which contain information about the discount rate; they agree to disagree because of their different prior beliefs about a) what fundamental news means for future returns; b) whether discount rates or future fundamentals are more important for asset prices. Model estimation results reveal that buy-side and sell-side analysts believe positive fundamental news leads to lower future returns, while CFOs and retail investors believe the opposite. However, different forecasters agree on future fundamental growth as the dominant force driving asset prices.
Team Production Theory Across the Waves
Cheffins, Brian R.,Williams, Richard
Team production theory, which Margaret Blair developed in tandem with Lynn Stout, has had a major impact on corporate law scholarship. The team production model, however, has been applied sparingly outside the United States. This paper, given as part of a symposium honoring Margaret Blair's scholarship, serves as a partial corrective by drawing on team production theory to assess corporate arrangements in the United Kingdom. Even though Blair and Stout are dismissive of "shareholder primacy" and the U.K. is thought of as a "shareholder-friendly" jurisdiction, deploying team production theory sheds light on key corporate law topics such as directors' duties and the allocation of managerial authority. In particular, the case study offered here shows that board centrality -- a key element of team production thinking -- features prominently in U.K. corporate governance despite Britain's shareholder-oriented legal framework. The case study also draws attention to the heretofore neglected role that private ordering can play in the development of team production-friendly governance arrangements.
The 'COVID' Crash of the 2020 U.S. Stock Market
Min Shu,Ruiqiang Song,Wei Zhu
We employed the log-periodic power law singularity (LPPLS) methodology to systematically investigate the 2020 stock market crash in the U.S. equity sectors with different levels of total market capitalization through four major U.S. stock market indexes, including the Wilshire 5000 Total Market index, the S&P 500 index, the S&P MidCap 400 index, and the Russell 2000 index, representing the stocks overall, the large capitalization stocks, the middle capitalization stocks and the small capitalization stocks, respectively. During the 2020 U.S. stock market crash, all four indexes lost more than a third of their values within five weeks, while both the middle capitalization stocks and the small capitalization stocks suffered much greater losses than the large capitalization stocks and stocks overall. Our results indicate that the price trajectories of these four stock market indexes prior to the 2020 stock market crash clearly featured the LPPLS bubble pattern and were indeed in a positive bubble regime. Contrary to the popular belief that COVID-19 led to the 2020 stock market crash, the 2020 U.S. stock market crash was endogenous, stemming from the increasing systemic instability of the stock market itself. We also performed a complementary post-mortem analysis of the 2020 U.S. stock market crash. Our analyses indicate that the 2020 U.S. stock market crash originated from a bubble which began to form as early as September 2018, and that the bubbles in stocks with different levels of total market capitalization have significantly different starting time profiles. This study not only sheds new light on the making of the 2020 U.S. stock market crash but also creates a novel pipeline for future real-time crash detection and mechanism dissection of any financial market and/or economic index.
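For reference, the LPPLS bubble signature fitted in such studies takes, in its standard form, the expected log-price shown below; the paper's exact estimation details (calibration windows, filter conditions) are not reproduced here.

```latex
% Standard LPPLS form of the expected log-price during a bubble:
\[
  \ln \mathbb{E}[p(t)] \;=\; A \;+\; B\,(t_c - t)^{m}
  \;+\; C\,(t_c - t)^{m}\cos\bigl(\omega \ln(t_c - t) - \phi\bigr),
\]
% where t_c is the critical (crash) time, 0 < m < 1, and omega is the
% log-periodic angular frequency.
```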
The Averaging Principle for Non-autonomous Slow-fast Stochastic Differential Equations and an Application to a Local Stochastic Volatility Model
Filippo de Feo
In this work we study the averaging principle for non-autonomous slow-fast systems of stochastic differential equations. In particular, in the first part we prove the averaging principle assuming the sublinearity, the Lipschitz continuity and the Hölder continuity in time of the coefficients, an ergodic hypothesis and an $\mathcal{L}^2$-bound on the fast component. In this setting we prove the weak convergence of the slow component to the solution of the averaged equation. Moreover, we provide a suitable dissipativity condition under which the ergodic hypothesis and the $\mathcal{L}^2$-bound on the fast component, which are implicit conditions, are satisfied.
In the second part we propose a financial application of this result: we apply the theory developed to a slow-fast local stochastic volatility model. First we prove the weak convergence of the model to a local volatility one. Then under a risk neutral measure we show that the prices of the derivatives, possibly path-dependent, converge to the ones calculated using the limit model.
The Case Against the Universal Basic Income
Le Dong Hai Nguyen
With the cost of implementation shrinking and the robot-to-worker ratio skyrocketing, the effects of automation on our economy and society are more palpable than ever. Over half of our jobs could be fully executed by machines over the next decade or two, with severe impacts concentrated disproportionately on manufacturing-focused developing countries. In response to the threat of mass displacement of labor due to automation, economists, politicians, and even the business community have come to see Universal Basic Income (UBI) as the panacea. This paper argues against a UBI by addressing its implementation costs and its inefficiency in mitigating the impact of automation, drawing on quantitative evidence as well as the results of failed UBI-comparable programs across the world. The author makes a case for the continuation of existing means-tested welfare systems and further investment in education and training schemes for unskilled and low-skilled labor as a more sustainable and effective solution to the automation-induced large-scale displacement of workers.
The ESG - Innovation Disconnect: Evidence from Green Patenting
Cohen, Lauren,Gurun, Umit G.,Nguyen, Quoc
No firm or sector of the global economy is untouched by innovation. In equilibrium, innovators will flock to (and innovation will occur where) the returns to innovative capital are the highest. In this paper, we document a strong empirical pattern in green patent production. Specifically, we find that oil, gas, and energy-producing firms -- firms with lower Environmental, Social, and Governance (ESG) scores, which are often explicitly excluded from ESG funds' investment universe -- are key innovators in the United States' green patent landscape. These energy producers produce more, and significantly higher quality, green innovation. Our findings raise important questions as to whether the current exclusions of many ESG-focused policies -- along with the increasing incidence of explicit divestiture campaigns -- are optimal, or whether reward-based incentives would lead to more efficient innovative outcomes.
The Effect of Mandatory Information Disclosure on Financial Constraints
Cabezon, Felipe
This paper studies the effects of the mandatory implementation of a more informative disclosure regime on firms' financial constraints and investment policies. I run a difference-in-difference analysis and find that firms moving from a voluntary use of the regime to a mandatory use increase debt issuance and investment in tangible assets, and reduce the level of discussion about difficulties in obtaining debt financing. At the same time, they report higher difficulties obtaining external finance through equity. These findings support the hypothesis that mandatory disclosure provides a commitment device to future disclosure but shuts down the signaling value of voluntary disclosure.
The Impact of the Eurozone Crisis and Its Regulation on the Decrease of European Bank Mergers
Nissioti, Evangelia
The paper shall begin by covering the main incentives of banks to engage in mergers and acquisitions. It will shortly present the current European Banking Union situation, with its goals and future agenda. Following that, the legal framework of the supervision of bank M&As will be covered. Having analyzed the incentives of banks to merge, the past five years of European Banking Union history and the current trends in bank mergers, two relevant questions arise. Firstly, it is not clear which shall be considered the appropriate monitoring authority of a European bank merger from an efficiency point of view. Secondly, it is crucial to understand the underlying reasons for the failure of European bank mergers and their potential relation to the regulatory strategies of the monitoring authorities. In order to approach the first research question, it is important to compare the duties, tasks, competencies and toolboxes of both the European Commission and the European Central Bank when it comes to assessing a domestic or cross-border bank merger. For the second question, we shall present an attempt by two banking institutions in Greece to merge and the reasons why the agreement fell through in the end. Other stories of failed or non-executed European bank mergers will be briefly mentioned. As a consequence, we will attempt to infer the general reasons why European bank mergers remain few and to assess whether this phenomenon is to be attributed to the inefficiency of the merger monitoring authorities.
The Quality of Finance Matters: Bank Efficiency, Stock Market Volatility, and Post-Pandemic Recovery
Dissanayake, Ruchith,Wu, Yanhui
Financial development is an essential catalyst of economic growth. A hitherto unexplored finding is the role of financial institutions' efficiency in reducing uncertainty - measured using stock market volatility - during an economic crisis. The differences in the exogenous component of banking efficiency - a component defined by legal origins and creditor protections - explain the heterogeneity in uncertainty across countries during the Covid-19 crisis. Countries with regulation that restricts banks from conducting insurance activities and raises barriers to initial capitalization are associated with higher uncertainty. In addition, we document that countries with efficient banks are associated with superior post-pandemic growth based on economic forecasts.
The 'New Normal' During Normal Times - Liquidity Regulation and Conventional Monetary Policy
Kroon, Sînziana,Bonner, Clemens,van Lelyveld, Iman,Wrampelmeyer, Jan
We analyze the impact of a requirement similar to the Basel III Liquidity Coverage Ratio (LCR) on conventional monetary policy implementation. Combining unique data sets of Dutch banks from 2002 to 2005, we find that the introduction of the LCR impacts banks' behaviour in open market operations. After the introduction of the LCR, banks bid for higher volumes and pay higher interest rates for central bank funds. In line with theory, banks reduce their reliance on overnight and short term unsecured funding. We do not observe a worsening of collateral quality pledged in open market operations. Thus, to correctly anticipate an open market operation's effect on interest rates, monetary policy requires central banks to consider not only the size of the operation, but also how it impacts banks' liquidity management and compliance with the LCR.
Tiered Intermediation in Business Groups and Targeted SME Supports
Shi, Yu,Townsend, Robert M.,Zhu, Wu
Using business registry data from China, we show that internal capital markets in business groups can play the role of financial intermediary and propagate corporate shareholders' credit supply shocks to their subsidiaries. An average of 16.7% local bank credit growth in the locations of corporate shareholders would increase subsidiaries' investment by 1% of their tangible fixed asset value, which accounts for 71% (7%) of the median (average) investment rate among these firms. We argue that equity exchanges are one channel through which corporate shareholders transmit bank credit supply shocks to the subsidiaries, and we provide evidence to support this channel.
Time-to-Build and Capacity Expansion
Jeon, Haejun
We study a firm's optimal investment timing and capacity decisions in the presence of uncertain time-to-build. Because of the time-to-build, the firm can expand its capacity before or after the initial project is completed and the lags of the follow-up investment can be shorter than those of the initial project due to learning by doing. We derive the optimal investment strategies in each scenario and examine the impact of time-to-build on the investment dynamics. We show that both the initial and the follow-up investment can be made earlier in the presence of time-to-build than they would in the absence of the lags, especially in a volatile market. This is in contrast to the case of a single investment, whose timing is always delayed by the time-to-build. Furthermore, the capacity of the follow-up project can dominate that of the initial one in the presence of time-to-build, whereas the latter always dominates the former in the absence of the lags. The capacity choice of each project, however, is non-monotone with respect to the size of the lags. We can endogenize the degree of learning by doing based on the proportion of capacity in each stage of the investment. Endogenous learning by doing is found to be non-monotone with respect to the size of the initial lags because the learning incurs costs of more investment at the earlier stage.
Upswing in Industrial Activity and Infant Mortality during Late 19th Century US
Nahid Tavassoli,Hamid Noghanibehambari,Farzaneh Noghani,Mostafa Toranji
This paper aims to assess the effects of industrial pollution on infant mortality between the years 1850-1940 using full-count decennial censuses. In this period, the US economy experienced a tremendous rise in industrial activity, with significant variation among counties in absorbing manufacturing industries. Since manufacturing industries are shown to be the main source of pollution, we use the share of employment in this industry at the county level to proxy for space-time variation in industrial pollution. Since male embryos are more vulnerable to external stressors like pollution during prenatal development, they face a higher likelihood of fetal death. Therefore, we proxy infant mortality with different measures of the gender ratio. We show that the upswing in industrial pollution during the late nineteenth century and early twentieth century led to an increase in infant mortality. The results are consistent and robust across different scenarios, measures for our proxies, and aggregation levels. We find that infants, and more specifically male infants, paid the price of pollution during the upswing in industrial growth at the dawn of the 20th century. Contemporary datasets are used to verify the validity of the proxies. Some policy implications are discussed.
Using the Econometric Models for Identification of Risk Factors for Albanian SMEs (Case study: SMEs of Gjirokastra region)
Lorenc Kociu,Kledian Kodra
Using econometric models, this paper addresses the ability of Albanian Small and Medium-sized Enterprises (SMEs) to identify the risks they face. For this paper, we studied SMEs operating in the Gjirokastra region. First, qualitative data gathered through a questionnaire were used. Next, the 5-level Likert scale was used to measure them. Finally, the data were processed with the statistical software SPSS version 21, using the binary logistic regression model, which reveals the probability of occurrence of an event when all independent variables are included. Logistic regression belongs to a category of statistical models called General Linear Models. It is used to analyze problems in which one or more independent variables influence a dichotomous dependent variable; in such cases, the latter is seen as the random variable and is dependent on them. To evaluate whether Albanian SMEs can identify risks, we analyzed the factors that SMEs perceive as directly affecting the risks they face. At the end of the paper, we conclude that Albanian SMEs can identify risks.
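For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows a binary logistic regression of the type described above, written in Python with statsmodels; the questionnaire variables, firm count, and coefficients are entirely hypothetical and are not the study's data.

```python
# Minimal sketch (not the authors' SPSS workflow): a binary logistic regression
# on hypothetical 5-point Likert questionnaire responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_firms = 120
# Hypothetical independent variables: Likert-scale (1-5) perceptions of risk factors.
X = rng.integers(1, 6, size=(n_firms, 3)).astype(float)
# Hypothetical dichotomous outcome: 1 = the firm identifies the risk, 0 = it does not.
logit_p = -4.0 + 0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * X[:, 2]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary())                          # coefficients and p-values
print(model.predict(sm.add_constant(X))[:5])    # predicted probabilities of occurrence
```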
Visualizing the Financial Impact of Presidential Tweets on Stock Markets
Ujwal Kandi,Sasikanth Gujjula,Venkatesh Buddha,V S Bhagavan
As more and more data is created every day, all of it can help us make better decisions through data analysis, and data generated in financial markets is no different. Here we examine how the global economy is affected by market sentiment influenced by the micro-blogging data (tweets) of American President Donald Trump. The news feed is gathered from The Guardian and Bloomberg for the period between December 2016 and October 2019 and is used to identify the potential tweets that influenced the markets, as measured by changes in equity indices.
Numerical Study of Natural Convection for Generalized Second-Grade Fluids Confined in a Square Cavity Subjected to Horizontal Heat Flux
Hamza Daghab | Mourad Kaddiri* | Said Raghay | Mohamed Lamsaadi | Hassan El Harfi
Industrial Engineering Laboratory, Sultan Moulay Slimane University, B.P. 523, Béni-Mellal 23000, Morocco
Laboratory of Applied Mathematics and Computing, Cadi Ayyad University, B.P. 549, Marrakech 40000, Morocco
Two-dimensional steady laminar natural convection of a viscoelastic fluid represented by the generalized second-grade fluid model in a square enclosure is studied. The cavity is subjected at its vertical sides to a uniform density of heat flux while the horizontal walls are insulated, with no-slip conditions at all solid boundaries. The governing conservation and constitutive equations with the corresponding boundary conditions are solved by the finite volume method in a collocated grid system. The contributions of the shear-rate-dependent and elastic characteristics of the viscoelastic fluid to momentum and heat transport are investigated. The effects of the elastic number (E) in the range 0 - 1 on heat transfer and fluid motion are interpreted for a power-law index (n) in the range 0.6 - 1.4 and nominal values of the Rayleigh number (Ra) in the range 10³ to 10⁵.
finite volume, generalized second-grade model, natural convection, numerical study, square cavity, viscoelastic fluids
The study of viscoelastic flows is very important due to their wide range of applications in many areas such as geophysics, biomechanical engineering, and the chemical and petroleum industries. Viscoelastic fluids include both viscous and elastic effects, making their mathematical modeling more complicated than that of Newtonian fluids. Different models have been used to describe the behavior of viscoelastic fluids. Amongst these models, Rivlin-Ericksen fluids, or differential-type fluids, proposed by Rivlin and Ericksen [1], have received special attention. Second-grade fluids are a particular case of Rivlin-Ericksen fluids; relevant issues concerning these fluids have been discussed in detail by Dunn and Fosdick [2] and by Fosdick and Rajagopal [3, 4]. It is clear that second-grade fluids show only the normal stress effects. However, many realistic fluids exhibit combined effects of normal stress and shear-thinning/shear-thickening behavior. In such cases, generalized second-grade fluids, proposed by Man and Sun [5], are suitable. Because of this motivation, the model considered in the present study is a generalized second-grade fluid.
Heat transfer in viscoelastic fluids has attracted a tremendous number of studies because of its presence in many areas. Previous investigations have analyzed this phenomenon analytically and numerically in order to interpret and show the effects of elasticity on the flow and heat transfer characteristics of viscoelastic flow in different geometries. In these studies, it has been found that elasticity affects heat transfer through a change in the Nusselt number. Shenoy and Mashelkar [6] have analyzed natural convection of a viscoelastic fluid by using the approximate integral method. They have noted that the effect of elasticity on the Nusselt number depends on the value of the Weissenberg number. Also, a set of studies [7-10] has been carried out for viscoelastic second-grade fluids in different geometries. It has been noticed that elasticity acts as a resistance force, which tends to reduce heat transfer within the flow and decelerate the fluid motion.
Natural convection flows arise from density variations with temperature or concentration within a non-isothermal fluid under the influence of gravity. This is frequently encountered in many industrial and practical engineering areas, including heat exchangers, nuclear reactors, geothermal systems, metallurgical processes, crystal growth, and others. Buoyancy-driven convection for non-Newtonian fluids in enclosures has been broadly studied. One-dimensional flows often allow an analytical solution of the governing conservation and constitutive equations. In this context, Lamsaadi et al. [11] have analytically and numerically studied natural convection of power-law fluids in a shallow cavity subjected to a uniform density of heat flux at its horizontal walls. The parallel flow approximation has been used to simplify the governing non-linear differential equations. They have demonstrated that the flow characteristics are sensitive to the power-law index but not to the Prandtl number when the latter is sufficiently large. Another model of non-Newtonian fluids has been used in the study of Alloui and Vasseur [12], namely the Carreau-Yasuda fluid. They have derived a semi-analytical solution for natural convection of a Carreau-Yasuda fluid in a vertical enclosure heated from the side walls, finding that the pseudo-rheological behavior generates a significant modification in heat transfer characteristics. In contrast, 2-D flows require a numerical solution of the non-linear governing equations. Natural convection of power-law fluids inside square enclosures has been numerically studied by Turan et al. [13-17] for various boundary conditions. It has been noted that the Nusselt number and the flow intensity increase with rising Rayleigh number (Ra) and decreasing power-law index (n), and that heat transfer takes place by conduction for low values of Ra and large values of n. Buoyancy-driven convection for another type of non-Newtonian fluids, viscoplastic fluids, confined in cavities has been numerically investigated by Hassan et al. [18-20]. It has been shown that heat transfer is enhanced with increasing Casson parameter, for Casson viscoplastic fluids, for any value of the Rayleigh number considered in their studies, while an increase in Bingham number generates a decrease in the fluid circulation and heat transfer for Bingham fluids, and for each Rayleigh number there corresponds a critical Bingham number for which the thermal transfer inside the cavity takes place by conduction. Furthermore, natural convection of viscoelastic fluids in 2-D cavities has been numerically studied in [10, 21, 22]. Demir et al. [21, 22] have numerically investigated unsteady natural convection of a viscoelastic Criminale-Erickson-Filbey (CEF) fluid confined in a square cavity heated from below. Recently, Sheremet and Pop [10] have studied natural convection combined with thermal radiation for second-grade fluids confined in a square enclosure subjected to different temperatures on the vertical walls. They have used a finite difference method to solve the governing non-linear conservation and constitutive equations. Results have been obtained only in the nearly Newtonian regime (E = 0.0001-0.001). It has been found that the elasticity effect is more important for high values of the Rayleigh number and radiation parameter as well as small values of the Prandtl number.
From the above-mentioned literature survey, one can note the lack of studies treating natural convection of confined generalized second-grade viscoelastic fluids, which exhibit both normal stress effects and variable viscosity. Consequently, the present study considers a generalized second-grade fluid confined in a square enclosure, subjected at its vertical sides to a uniform density of heat flux while the horizontal walls are insulated. The problem is solved with a numerical algorithm based on a finite volume method, which makes it possible to obtain results for relatively high elastic numbers, one of the main challenges in numerical simulations.
2. Geometry and Governing Equations
2.1 Geometry and Governing Equations
The geometry of interest is a viscoelastic fluid-filled square cavity subjected at its vertical walls to a uniform density of heat flux, while the horizontal ones are insulated, with no-slip conditions at all solid boundaries (Figure 1).
The Cauchy stress tensor of an incompressible homogenous second grade fluid [1] is defined by:
$\Sigma=-p I+\mu A_{1}+\alpha_{1} A_{2}+\alpha_{2} A_{1}^{2}$ (1)
where, –pI denotes the indeterminate part of the stress due to assumption of incompressibility, μ is the coefficient of viscosity, α1 and α2 are the material constants referred to normal stress modulus, the Rivlin-Ericksen tensors A1 and A2 are given by:
$A_{1}=\operatorname{grad} V^{\prime}+\left(\operatorname{grad} V^{\prime}\right)^{T}$ (2)
$A_{2}=\frac{d A_{1}}{d t}+A_{1}\left(\operatorname{grad} V^{\prime}\right)+\left(\operatorname{grad} V^{\prime}\right)^{T} A_{1}$ (3)
Here V' stands for the velocity and d/dt is the material derivative defined as follow:
$\frac{d(.)}{d t}=\frac{\partial(.)}{\partial t}+[\operatorname{grad}(.)] V^{\prime}$ (4)
Then the material parameters must respect the following restrictions [2]:
μ ≥ 0; α1 ≥ 0 and α1 + α2 = 0.
A shortcoming of the second-grade model is that it cannot predict shear-thinning/shear-thickening, which is the decrease/increase of viscosity with increasing shear rate. Generalized second-grade fluids exhibit both normal stress effects and variable viscosity. The stress tensor of such fluids is defined as (see [23, 24]):
$\Sigma=-p I+\mu(|\dot{\gamma}|) A_{1}+\alpha_{1} A_{2}+\alpha_{2} A_{1}^{2}$ (5)
Here $|\dot{\gamma}|$ is the shear rate. The viscosity is represented by the power-law model, due to Ostwald-de Waele, which can be expressed as:
$\mu(|\dot{\gamma}|)=k\left[2\left(\left(\frac{\partial U^{\prime}}{\partial X^{\prime}}\right)^{2}+\left(\frac{\partial V^{\prime}}{\partial Y^{\prime}}\right)^{2}\right)+\left(\frac{\partial U^{\prime}}{\partial Y^{\prime}}+\frac{\partial V^{\prime}}{\partial X^{\prime}}\right)^{2}\right]^{\frac{n-1}{2}}$ (6)
where, k and n are the consistency and the power law indices, respectively.
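As a quick illustration of Eq. (6), the following Python sketch evaluates the Ostwald-de Waele effective viscosity from the velocity-gradient components; the small floor on the shear-rate magnitude is a numerical regularization assumed here to avoid a singularity at zero shear and is not part of the model above.

```python
import numpy as np

def effective_viscosity(dUdX, dUdY, dVdX, dVdY, k=1.0, n=0.6, eps=1e-12):
    """Ostwald-de Waele viscosity mu(|gamma_dot|) = k * |gamma_dot|**(n-1),
    with the shear-rate magnitude built from the gradient terms of Eq. (6)."""
    shear_sq = 2.0 * (dUdX**2 + dVdY**2) + (dUdY + dVdX)**2
    return k * np.maximum(shear_sq, eps) ** ((n - 1.0) / 2.0)

# n < 1: shear-thinning, so the viscosity drops as the shear rate grows.
print(effective_viscosity(0.0, 1.0, 0.0, 0.0, n=0.6))
print(effective_viscosity(0.0, 10.0, 0.0, 0.0, n=0.6))
```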
On the basis of the assumptions commonly used in natural convection problems, the dimensionless governing equations for Boussinesq fluids, written in terms of dimensionless velocity vector components, (U, V), dimensionless pressure, P, and dimensionless temperature, T, in Cartesian coordinate system (X, Y), are:
- Continuity equation:
$\frac{\partial U}{\partial X}+\frac{\partial V}{\partial Y}=0$ (7)
- Momentum equations:
$\left(U \frac{\partial U}{\partial X}+V \frac{\partial U}{\partial Y}\right)=-\frac{\partial P}{\partial X}+\operatorname{Pr}\left[\mu_{a}\left(\frac{\partial^{2} U}{\partial X^{2}}+\frac{\partial^{2} U}{\partial Y^{2}}\right)+2 \frac{\partial \mu_{a}}{\partial X} \frac{\partial U}{\partial X}+\frac{\partial \mu_{a}}{\partial Y}\left(\frac{\partial U}{\partial Y}+\frac{\partial V}{\partial X}\right)\right]+\frac{\partial \tau_{x x}}{\partial X}+\frac{\partial \tau_{x y}}{\partial Y}$ (8)
$\left(U \frac{\partial V}{\partial X}+V \frac{\partial V}{\partial Y}\right)=-\frac{\partial P}{\partial Y}+\operatorname{Pr}\left[\mu_{a}\left(\frac{\partial^{2} V}{\partial X^{2}}+\frac{\partial^{2} V}{\partial Y^{2}}\right)+2 \frac{\partial \mu_{a}}{\partial Y} \frac{\partial V}{\partial Y}+\frac{\partial \mu_{a}}{\partial X}\left(\frac{\partial U}{\partial Y}+\frac{\partial V}{\partial X}\right)+R a T\right]+\frac{\partial \tau_{x y}}{\partial X}+\frac{\partial \tau_{y y}}{\partial Y}$ (9)
- Energy equation:
$U \frac{\partial T}{\partial X}+V \frac{\partial T}{\partial Y}=\frac{\partial^{2} T}{\partial X^{2}}+\frac{\partial^{2} T}{\partial Y^{2}}$ (10)
$\mu_{a}=\left[2\left(\left(\frac{\partial U}{\partial X}\right)^{2}+\left(\frac{\partial V}{\partial Y}\right)^{2}\right)+\left(\frac{\partial U}{\partial Y}+\frac{\partial V}{\partial X}\right)^{2}\right]^{\frac{n-1}{2}}$ (11)
$\tau_{x x}=E\left[2 \frac{d}{d t}\left(\frac{\partial U}{\partial X}\right)+\left(\frac{\partial U}{\partial Y}\right)^{2}-\left(\frac{\partial V}{\partial X}\right)^{2}\right]$ (12)
$\tau_{y y}=E\left[2 \frac{d}{d t}\left(\frac{\partial V}{\partial Y}\right)+\left(\frac{\partial V}{\partial X}\right)^{2}-\left(\frac{\partial U}{\partial Y}\right)^{2}\right]$ (13)
$\tau_{x y}=E\left[\frac{d}{d t}\left(\frac{\partial U}{\partial Y}+\frac{\partial V}{\partial X}\right)+2 \frac{\partial U}{\partial X} \frac{\partial V}{\partial X}+2 \frac{\partial U}{\partial Y} \frac{\partial V}{\partial Y}\right]$ (14)
In the above equations μa is the effective viscosity, τxx, τyy and τxy are the extra-stress tensor components.
The dimensionless parameters, appearing in all these equations, are the Rayleigh number $\left(R a=\frac{g \beta L^{\prime 2 n+2} q^{\prime}}{(k / \rho) \alpha^{n} \lambda}\right)$, the Prandtl number $\left(P r=\frac{(k / \rho) L^{\prime 2-2 n}}{\alpha^{2-n}}\right)$ and the elastic number $\left(E=\frac{\alpha_{1}}{\rho L^{\prime 2}}\right)$. The dimensionless variables used here are:
(X, Y) = (X', Y') / L', (U, V) = (U', V') / (α / L'), T = (T') / (q'L' / λ), where g represents the acceleration due to gravity, β is the thermal expansion coefficient at the reference temperature, q' stands for the constant heat flux per unit area, α is the thermal diffusivity at reference temperature, λ is the thermal conductivity at the reference temperature.
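For illustration only, the following Python helpers evaluate the generalized Rayleigh, Prandtl and elastic numbers exactly as defined above; the property values in the example call are made up and serve only to show the call pattern, not to reproduce the cases studied in this paper.

```python
def rayleigh(g, beta, L, q, k, rho, alpha, lam, n):
    """Generalized Rayleigh number Ra = g*beta*L'^(2n+2)*q' / ((k/rho)*alpha^n*lam)."""
    return g * beta * L**(2 * n + 2) * q / ((k / rho) * alpha**n * lam)

def prandtl(k, rho, L, alpha, n):
    """Generalized Prandtl number Pr = (k/rho)*L'^(2-2n) / alpha^(2-n)."""
    return (k / rho) * L**(2 - 2 * n) / alpha**(2 - n)

def elastic(alpha1, rho, L):
    """Elastic number E = alpha_1 / (rho * L'^2)."""
    return alpha1 / (rho * L**2)

# Illustrative (made-up) property values, only to show the call pattern:
print(rayleigh(9.81, 2.1e-4, 0.1, 100.0, 0.5, 1000.0, 1.4e-7, 0.6, n=1.0))
print(prandtl(0.5, 1000.0, 0.1, 1.4e-7, n=1.0))
print(elastic(1.0e-3, 1000.0, 0.1))
```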
To close the above equations system, the following appropriate boundary conditions are necessary:
$U=V=\frac{\partial T}{\partial X}+1=0$, for $X=0$ and $X=1$ (15)
$U=V=\frac{\partial T}{\partial Y}=0$, for $Y=0$ and $Y=1$ (16)
The physical quantities which present the heat transfer rate by convection are the local Nusselt number, Nu, and the average Nusselt number, $\overline{N u}$ that are expressed as:
$N u=-\frac{q^{\prime} L^{\prime}}{\lambda \Delta T^{\prime}}=-\frac{1}{\Delta T}=\frac{1}{T(0, Y)-T(1, Y)}$ (17)
$\overline{N u}=\int_{0}^{1} N u d y$ (18)
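A minimal sketch of Eqs. (17)-(18) in Python is given below: it builds the local Nusselt number from the wall temperature difference and integrates it over the heated wall with the trapezoidal rule; the wall-temperature profiles used in the example are hypothetical.

```python
import numpy as np

def average_nusselt(T_left, T_right, Y):
    """Average Nusselt number: Nu(Y) = 1/(T(0,Y) - T(1,Y)), integrated over Y in [0, 1]."""
    nu_local = 1.0 / (T_left - T_right)
    dY = np.diff(Y)
    return np.sum(0.5 * (nu_local[:-1] + nu_local[1:]) * dY)  # trapezoidal rule

# Hypothetical wall-temperature profiles on a uniform grid:
Y = np.linspace(0.0, 1.0, 101)
T_left = 0.6 - 0.1 * Y      # dimensionless temperature at X = 0 (heated wall)
T_right = 0.1 - 0.1 * Y     # dimensionless temperature at X = 1 (cooled wall)
print(average_nusselt(T_left, T_right, Y))   # difference is 0.5 everywhere, so Nu = 2.0
```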
3. Numerics
3.1 Discretization method
The above conservation and constitutive equations are solved by using a finite volume formulation in a collocated grid. All of the conservation equations can be written in the following general form of a transport equation [25]:
$\frac{\partial}{\partial x}\left(U \phi-\Gamma \frac{\partial \phi}{\partial x}\right)+\frac{\partial}{\partial y}\left(V \phi-\Gamma \frac{\partial \phi}{\partial y}\right)=S_{\phi}$ (19)
Here ϕ is the working variable, Γ is the diffusion coefficient and Sϕ is the source term, which can be linearised as:
$S_{\phi}=S_{C}+S_{P} \phi_{P}$ (20)
where, Sc is the constant part of Sϕ that is explicitly independent on ϕ, while Sp is the coefficient of ϕp which is made negative to enhance the numerical stability [25].
A finite volume formulation in a collocated grid is adopted for the spatial discretization. The flow domain is divided into a number of control volumes ΔV around P (Figure 2).
By integrating the Eq. (19) over the control volume, the final form of discretized equations relating the variable ϕp to its neighboring grid point values can be given in every control volume as:
$A_{P} \phi_{P}=A_{W} \phi_{W}+A_{E} \phi_{E}+A_{S} \phi_{S}+A_{N} \phi_{N}+S_{\phi}$ (21)
The behavior of the numerical scheme can be improved by the discretization of the source term. To compute it, we need the first gradients of τxx, τyy and τxy. The term ∂τxx/∂X, like the other gradients, is evaluated by assuming a quadratic variation of τxx along the x direction; therefore, ∂τxx/∂X is expressed as 2ax + b [26].
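To make the quadratic-variation idea concrete, the sketch below fits a parabola through three neighbouring nodal values and evaluates its derivative 2ax + b at the central node; the node positions and values are purely illustrative and the helper is not taken from [26].

```python
import numpy as np

def quadratic_gradient(tau_w, tau_p, tau_e, x_w, x_p, x_e):
    """Fit tau = a*x^2 + b*x + c through three neighbouring values and
    return d(tau)/dx = 2*a*x + b evaluated at the central node x_p."""
    a, b, c = np.polyfit([x_w, x_p, x_e], [tau_w, tau_p, tau_e], 2)
    return 2.0 * a * x_p + b

# Check on tau = x**2 (exact derivative at x = 0.5 is 1.0):
print(quadratic_gradient(0.0**2, 0.5**2, 1.0**2, 0.0, 0.5, 1.0))
```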
3.2 Solving method
The resulting discrete system for each control volume consists of a set of linear algebraic equations, which are then easily solved by means of the line-by-line technique based on the tridiagonal matrix algorithm (TDMA) [25, 27].
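A compact Python version of the Thomas algorithm underlying the line-by-line technique is sketched below; it is a generic implementation for a single tridiagonal system, not the authors' solver.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system:
    a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i], with a[0] = c[-1] = 0."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small check on a 1-D diffusion-like system; the exact solution is [1, 1, 1, 1]:
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([1.0, 0.0, 0.0, 1.0])
print(tdma(a, b, c, d))
```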
To solve the discretized equations, an equation for pressure is clearly necessary, since pressure is an unknown in the momentum equations; this requires the use of the Semi-Implicit Method for Pressure Linked Equations (SIMPLE) algorithm, in which the continuity equation is transformed into the pressure equation. To avoid checkerboard velocity and pressure distributions, cell face velocities are evaluated by the Momentum Interpolation Method (MIM), first proposed by Rhie and Chow [28] and widely used in the literature [29-31].
The iterative procedure is stopped when the following convergence criterion is satisfied:
$\operatorname{MAX}\left\{\frac{\phi^{\mathrm{n}+1}-\phi^{\mathrm{n}}}{\phi^{\mathrm{n}+1}}\right\} \leq 10^{-7}$ (22)
where, ϕ = (U, V, P, T).
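The stopping test of Eq. (22) can be coded as follows; this is only a Python sketch, with a small guard against division by zero added as an assumption, and the iterates in the example are artificial.

```python
import numpy as np

def converged(fields_new, fields_old, tol=1e-7):
    """Convergence test of Eq. (22): max relative change over U, V, P, T below tol."""
    worst = 0.0
    for new, old in zip(fields_new, fields_old):
        rel = np.abs(new - old) / np.maximum(np.abs(new), 1e-30)  # guard against zero
        worst = max(worst, rel.max())
    return worst <= tol

# Hypothetical successive iterates of a single field:
U_old = np.ones((5, 5))
U_new = U_old * (1.0 + 5e-8)
print(converged([U_new], [U_old]))   # True: relative change ~5e-8 < 1e-7
```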
To check the mesh independency of the obtained solution, the minimum stream function value, Ψmin, and the average Nusselt number, $\overline{N u}$, are presented for two fixed cases, in which the control parameters are given by n = 1, Ra = 10⁵, Pr = 30, E = 0.4 for the first case and n = 0.6, Ra = 10⁴, Pr = 30, E = 0.06 for the second case. Several tests were made for four different grids. Results are tabulated in Table 1. For sufficient precision, the uniform grid of 250 × 250 has been adopted for the rest of the study.
To validate the elaborated numerical code, our results, expressed in terms of minimum stream function, Ψmin, and average Nusselt number, $\overline{N u}$, for both imposed temperature and flux cases, are compared with previous researches with or without elasticity effect. Hence, as can be seen from Table 2 and Figure 3 a good agreement is generally obtained.
Table 1. Minimum stream function, Ψmin, and average Nusselt number, $\overline{N u}$, for different meshes
n = 1, Ra = 10⁵, Pr = 30 and E = 0.4
$\boldsymbol{\Psi}_{\min }$
$\overline{\boldsymbol{N} \boldsymbol{u}}$
$\Delta_{1}=\left|\frac{\left(\Psi_{\min }\right)_{i \times j}-\left(\Psi_{\min }\right)_{250 \times 250}}{\left(\Psi_{\min }\right)_{i \times j}}\right|$
$\Delta_{2}=\left|\frac{\left(\overline{N u}\right)_{i \times j}-\left(\overline{N u}\right)_{250 \times 250}}{\left(\overline{N u}\right)_{i \times j}}\right|$
n = 0.6, Ra = 10⁴, Pr = 30 and E = 0.06
Table 2. Comparison of our simulation results with previous studies for different values of E, Pr and Ra: case of imposed constant temperature difference
Present Work
Sheremet and Pop [10]
Turan et al. [13]
de Vahl Davis [32]
Ψmin
$\overline{\boldsymbol{N u}}$
Figure 3. Comparison of our simulation results (left) with those of Turan et al. [14] (right) for Pr = 100 and different values of Ra and n: case of imposed constant heat flux
4.3 Flow and heat transfer in viscoelastic fluids
The main objective of this study is to evaluate the effects of Ra, n and E on the flow and heat transfer generated by natural convection of a generalized second-grade fluid. To do this, numerical trials are conducted for various values of the governing parameters: E (0 - 1), n (0.6, 1, 1.4) and Ra (10³ - 10⁵), while the Prandtl number is kept at a constant value of Pr = 30.
4.3.1 Combined effects of Rayleigh and elastic numbers
In this part, the power law index was kept at a constant value, n = 1, while the Rayleigh number was varied between 10³ and 10⁵ and the elasticity parameter ranged between 0 and 1. It was found that the attainable maximum elastic number, at which a converged solution was obtained, strongly depends on the Rayleigh number. At Ra = 10³ the maximum elastic number reached was 1, while at Ra = 10⁵ it was 0.4.
Typical streamlines and isotherms illustrating elasticity effects on the flow structure and temperature patterns are presented in Figures 4a-4b for two different values of Ra (10³ and 10⁵) and various values of E. At Ra = 10³ (Figure 4a), a circular clockwise flow is formed inside the enclosure for all values of the elastic number. In fact, the buoyancy forces generated by the temperature difference between the vertical walls cause the fluid to rise along the hot wall and to descend along the cold wall. At the same time, the corresponding isotherms are nearly parallel to the isothermal walls, indicating that the pseudo-conductive regime dominates the heat transfer mechanism. A growth of the elastic number produces no significant modification of the streamlines and isotherms. An increase in the Rayleigh number to Ra = 10⁵ (relatively high convection), which indicates a strengthening of buoyancy forces compared to viscous forces, generates, in the Newtonian case (E = 0), an elongation of the isotherms and of the cell core of the streamlines along the horizontal axis, which indicates that convection dominates heat transfer. However, for a viscoelastic fluid, a stronger flow resistance is generated as the value of the elastic number increases. Therefore, the cell core of the streamlines becomes less elongated, and the corresponding isotherms become less non-linear with increasing E. Hence, this behavior shows that elasticity reduces the convection process.
Additionally, Figure 5 shows the distribution of the non-dimensional temperature T along the centerline at Y = 0.5 for two values of Ra (10³ and 10⁵) and different values of E. It is clear that, in general, the distribution of T exhibits an increase in boundary layer thickness with rising E. This increasing trend of the thermal boundary layer thickness with growing E indicates that the effects of convection weaken with elasticity. An augmentation of the thermal boundary layer thickness gives rise to a decrease in the magnitude of the heat flux at the vertical walls, which acts to decrease the average Nusselt number $\overline{N u}$, as shown in Figure 6, where the variation of the average Nusselt number $\overline{N u}$ with E is presented. It is clear that the effect of elasticity is more pronounced for Ra = 10⁵ than for Ra = 10³ (this behavior is consistent with the results of Sheremet and Pop [10]), which means that the parameter E plays an inhibiting role against the mixing role of Ra.
Figure 4. Stream lines (right) and isotherms (left) for different values of E and various values of Ra: (a) Ra = 10³ and (b) Ra = 10⁵
Figure 5. Variations of non-dimensional temperature along the mid-length of the cavity for different values of E and various values of Ra: Ra = 10³ (left) and Ra = 10⁵ (right)
Figure 6. Average Nusselt number $\overline{N u}$ for different values of E and various values of Ra: Ra = 10³ (left) and Ra = 10⁵ (right)
Figure 7. Stream lines (top) and isotherms (bottom) for n = 1.4, Ra = 10⁵ and (a) E = 0, (b) E = 0.4 and (c) E = 0.6
Figure 8. Non-dimensional temperature (left) and vertical velocity (right) along the line Y = 0.5 for n = 1.4, Ra = 10⁵ and different values of E
4.3.2 Combined effects of power law index and elastic number
In this part of the study, the mutual effects of the rheological parameters are discussed. When the power law index is varied, the governing equations become heavily non-linear, mainly for shear-thinning fluids (n < 1). Therefore, the attainable maximum elastic number, at which a converged solution was obtained, strongly depends on the Rayleigh number and the power law index. For n = 0.6, the value of Ra was taken equal to 10⁴ and E ranged from 0 to 0.06. Despite this restriction to small values of the elastic number, interesting structural changes were noticed in both streamlines and isotherms. For shear-thickening fluids (n > 1), n = 1.4 and Ra = 10⁵ were considered, and E was varied from 0 to 0.6.
Results presented in the previous section are given for n = 1. Now let us increase the value of n; consequently, the effects of viscous forces become increasingly strong in comparison to buoyancy forces because of shear-thickening. Figure 7 presents typical streamlines and isotherms for Ra = 10⁵, n = 1.4 and different values of E. A very limited qualitative effect can be observed in the flow structure and temperature patterns by changing E. Furthermore, the distributions of the non-dimensional temperature T and the vertical velocity component V along the mid-length of the cavity are presented in Figure 8; a non-significant modification in the temperature distribution and a slight decrease in the magnitude of the vertical velocity are observed. Moreover, Table 3 shows the dependence of the flow intensity (absolute value of the minimum stream function), |Ψmin|, and the mean heat transfer (average Nusselt number), $\overline{N u}$, on E for Ra = 10⁵ and two different values of n (n = 1.4 and n = 1). It is clear from Table 3 that an increase in E causes the flow intensity, |Ψmin|, and the average Nusselt number, $\overline{N u}$, to decrease because of the flow resistance offered by elasticity. Table 3 further shows that, at n = 1.4, elasticity influences the mean heat transfer and flow intensity only slightly. Here, an increase in E from 0 to 0.6 generates a reduction of |Ψmin| and $\overline{N u}$ by 2.08% and 0.77%, respectively, whereas in the case where n = 1 and Ra = 10⁵, a growth of the elastic number from 0 up to 0.4 leads to a decrease in |Ψmin| and $\overline{N u}$ by 10.76% and 12.57%, respectively. In other words, the influence of elasticity is less important for shear-thickening fluids. This is probably due to the fact that this parameter does not affect the flow when the viscous forces are more important.
Table 3. Absolute value of minimum stream function and average Nusselt number for Pr = 30, Ra = 10⁵, n = 1.4 and n = 1 versus E
n = 1.4
|Ψmin|
Figure 9. Stream lines (top) and isotherms (bottom) for n = 0.6, Ra = 10⁴ and (a) E = 0, (b) E = 0.04 and (c) E = 0.06
Figure 10. Non-dimensional temperature (left) and vertical velocity (right) along the line Y = 0.5 for n = 0.6, Ra = 10⁴ and different values of E
For n = 0.6, the effects of convection become stronger. As expected, the flow structure is more sensitive to the change of the elastic parameter although the variation interval of E is smaller. According to Figure 9, with the increase in E, the streamlines become more curved in the vicinity of the walls, and the central cell starts to be less elongated. Also, the isotherms are less non-linear due to the fact that elasticity inhibits the convection process. On the other hand, Figure 10 shows the variation of the non-dimensional temperature T and the vertical velocity component V along the mid-length of the cavity. It is clear that an increase in the elastic parameter E leads to an increase in the thermal boundary layer thickness and a decrease in the magnitude of the vertical velocity, which in turn indicates that the effects of convection weaken with elasticity. Moreover, Table 4 shows the dependence of |Ψmin| and $\overline{N u}$ on E for Ra = 10⁴ and two different values of n (n = 0.6 and n = 1). It is evident from Table 4 that the flow intensity and the average Nusselt number decrease with the enhancement of the elastic number as a result of the flow resistance generated by elasticity. Table 4 also shows that, at n = 0.6, the mean heat transfer and flow intensity are significantly affected by the rise of the elastic number. An increase in E from 0 to 0.06 leads to a reduction of |Ψmin| and $\overline{N u}$ by 19.01% and 19.06%, respectively. This change is more important compared to the case where n = 1 and Ra = 10⁴, in which an increase in E from 0 to 0.6 leads to a reduction of |Ψmin| and $\overline{N u}$ by 5.81% and 3.39%, respectively.
In the present work, the problem of natural convection of Generalized Second-Grade fluids, which exhibit both normal stress and shear-thinning/shear-thickening effects, confined in a square cavity has been numerically solved using a finite volume method and SIMPLE algorithm in a non-staggered grid system. Numerical simulations have been conducted for different values of Rayleigh number, power law index and elastic number. It has been found that the flow and thermal structures as well as the thermo-hydraulic characteristics of the fluid are sensitive to its rheological behavior, since the flow intensity and heat transfer rate decrease with the elasticity effect, which represents a resistance force. Besides, the effect of elasticity is more pronounced for high values of Rayleigh number and low values of power law index.
The authors would like to express their gratitude to the National Center of Scientific and Technical Research (CNRST-Morocco) for providing computing infrastructure during this work.
A1,2
Rivlin-Ericksen tensors
elastic number
gravitational acceleration, m.s-2
consistency index for a power-law fluid
height or width of the enclosure, m
power law index
local Nusselt number
mean Nusselt number
generalized Prandtl number
q'
constant heat flux, W. m-2
generalized Rayleigh number
horizontal and vertical velocities, m.s-1
horizontal and vertical coordinates, m
$\alpha_{l, 2}$
normal stress modulus
thermal conductivity, W.m-1. K-1
τij
elastic stress tensor
$\mu$
dimensionless effective viscosity of fluid
$\rho$
density of fluid, Kg. m-3
working variable
$a$
effective variable
[1] Rivlin, R.S., Ericksen, J.L. (1955). Stress deformation relations for isotropic materials. Collected Papers of RS Rivlin, 911-1013. https://doi.org/10.1007/978-1-4612-2416-7_61
[2] Dunn, J.E., Fosdick, R.L. (1974). Thermodynamics, stability, and boundedness of fluids of complexity 2 and fluids of second grade. Archive for Rational Mechanics and Analysis, 56(3): 191-252. https://doi.org/10.1007/BF00280970
[3] Fosdick, R.L., Rajagopal, K.R. (1979). Anomalous features in the model of second order fluids. Archive for Rational Mechanics and Analysis, 70(2): 145-152. https://doi.org/10.1007/BF00250351
[4] Fosdick, R.L., Rajagopal, K.R. (1980). Thermodynamics and stability of fluids of third grade. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences, 339(1738): 351-377. https://doi.org/10.1098/rspa.1980.0005
[5] Man, C.S., Sun, Q.X. (1987). On the significance of normal stress effects in the flow of glaciers. Journal of Glaciology, 33: 268-273. https://doi.org/10.3189/S0022143000008832
[6] Shenoy, A.V., Mashelkar, R.A. (1978). Laminar natural convection heat transfer to a viscoelastic fluid. Chemical Engineering Science, 33: 769-776. https://doi.org/10.1016/0009-2509(78)80056-6
[7] Mustafa, N., Asghar, S., Hossain, M.A. (2010). Natural convection flow of second-grade fluid along a vertical heated surface with variable heat flux. Int. J. Heat Mass. Transfer, 53(25-26): 5856-5862. https://doi.org/10.1016/j.ijheatmasstransfer.2010.07.060
[8] Prasad, V.R., Bhuvanavijaya, R., Bandaru, M. (2016). Natural convection on heat transfer flow of non-newtonian second grade fluid over horizontal circular cylinder with thermal radiation. Journal of Naval Architecture and Marine Engineering, 13(1): 63-78. https://doi.org/10.3329/jname.v13i1.20703
[9] Ewis, K.M. (2019). Natural convection flow of Rivlin–Ericksen fluid and heat transfer through non-Darcy medium with radiation. Advances in Mechanical Engineering, 11(8). https://doi.org/10.1177/1687814019866033
[10] Sheremet, M.A., Pop, I. (2018). Natural convection combined with thermal radiation in a square cavity filled with a viscoelastic fluid. International Journal of Numerical Methods for Heat and Fluid Flow, 28(3): 624-640. https://doi.org/10.1108/HFF-02-2017-0059
[11] Lamsaadi, M., Naimi, M., Hasnaoui, M. (2005). Natural convection of non-Newtonian power-law fluids in a shallow horizontal rectangular cavity uniformly heated from below. Heat Mass Transfer, 41(3): 239-249. https://doi.org/10.1007/s00231-004-0530-8
[12] Alloui, Z., Vasseur, P. (2015). Natural convection of Carreau-Yasuda non-Newtonian fluids in a vertical cavity heated from the sides. International Journal of Heat and Mass Transfer, 84: 912-924. https://doi.org/10.1016/j.ijheatmasstransfer.2015.01.092
[13] Turan, O., Sachdeva, A., Chakraborty, N., Poole, R.J. (2011). Laminar natural convection of power-law fluids in a square enclosure with differentially heated side walls subjected to constant temperatures. Journal of Non-Newtonian Fluid Mechanics, 166(17-18): 1049-1063. https://doi.org/10.1016/j.jnnfm.2011.06.003
[14] Turan, O., Sachdeva, A., Poole, R.J., Chakraborty, N. (2012). Laminar natural convection of power-law fluids in a square enclosure with differentially heated sidewalls subjected to constant wall heat flux. Journal of Heat Transfer, 134(12). https://doi.org/10.1115/1.4007123
[15] Kaddiri, M., Naïmi, M., Raji, A., Hasnaoui, M. (2012). Rayleigh-Bénard convection of non-Newtonian Power-law fluids with temperature-dependent viscosity. ISRN Thermodynamics. https://doi.org/10.5402/2012/614712
[16] Raisi, A. (2016). Natural convection of non-Newtonian fluids in a square cavity with a localized heat source. Strojniski Vestnik/Journal of Mechanical Engineering 62(10). https://doi.org/10.5545/svjme.2015.3218
[17] Horimek, A., Noureddine, B., Benkhchiba, A., Ait-Messaoudene, N. (2017). Laminar natural convection of power-law fluid in a differentially heated inclined square cavity. Annales de Chimie-Science des Matériaux, 41(3-4): 261-281. https://doi.org/10.3166/acsm.41.261-280
[18] Hassan, M.A., Pathak, M., Khan, M. (2013). Natural convection of viscoplastic fluids in a square enclosure. Journal of Heat Transfer, 135(12). https://doi.org/10.1115/1.4024896
[19] Pop, I., Mikhail, S. (2017). Free convection in a square cavity filled with a Casson fluid under the effects of thermal radiation and viscous dissipation. International Journal of Numerical Methods for Heat & Fluid Flow, 27(10): 2318-2332. https://doi.org/10.1108/HFF-092016-0352
[21] Demir, H., Akyoldoz, F.T. (2000). Unsteady thermal convection of a non-Newtonian fluid. International Journal of Engineering Science, 38(17): 1923-1938. https://doi.org/10.1016/S0020-7225(00)00011-2
[22] Demir, H. (2001). Thermal convection of viscoelastic fluid with Biot boundary conduction. Mathematics and Computers in Simulation (MATCOM), 56(3): 277-296.
[23] Walicki, E., Walicka, A. (2002). Convergent flows of molten polymers modeled by generalized second-grade fluids of power-law type. Mechanics of Composite Materials, 38(1): 89-94. https://doi.org/10.1023/A:1014017125466
[24] Carapau, F. (2010). One-dimensional viscoelastic fluid model where viscosity and normal stress coefficients depend on the shear rate. Nonlinear Analysis: Real World Applications, 11(5): 4342-4354. https://doi.org/10.1016/j.nonrwa.2010.05.020
[25] Patankar, S. (1980). Heat Transfer and Fluid Flow. Hemisphere, New York.
[26] Raghay, S., Hakim, A. (2001). Numerical simulation of White–Metzner fluid in a 4: 1 contraction. International Journal for Numerical Methods in Fluids, 35(5): 559-573. https://doi.org/10.1002/1097-0363(20010315)35:5<559::AID-FLD102>3.0.CO;2-P
[27] Van Doormaal, J.P., Raithby, G.D. (1984). Enhancement of the SIMPLE method for preciding incompressible fluid flows. Numerical Heat Transfer, 7: 147-163. https://doi.org/10.1080/01495728408961817
[28] Rhie, C.M., Chow, W.L. (1983). Numerical study of the turbulent flow past an airfoil with trailing edge separation. AIAA Journal, 21(11): 1525-1532. https://doi.org/10.2514/3.8284
[29] Rahman, M.M., Siikonen, T. (2000). An improved simple method on a collocated grid. Numerical Heat Transfer: Part B: Fundamentals, 38(2): 177-201. https://doi.org/10.1080/104077900750034661
[30] Darwish, M., Sraj, I., Moukalled, F. (2007). A coupled incompressible flow solver on structured grids. Numerical Heat Transfer, Part B, 52: 353-371. https://doi.org/10.1080/10407790701372785
[31] Kolmogorov, D.K., Shen, W.Z., Sørensen, N.N., Sørensen, J.N. (2015). Fully consistent SIMPLE-like algorithms on collocated grids. Numerical Heat Transfer, Part B: Fundamentals, 67(2): 101-123. https://doi.org/10.1080/10407790.2014.949583
[32] de Vahl Davis, G. (1983). Natural convection of air in a square cavity: A bench mark numerical solution. International Journal for Numerical Methods in Fluids, 3(3): 249-264. https://doi.org/10.1002/FLD.1650030305
Mycelial compatibility groups, pathogenic diversity and biological control of Sclerotium rolfsii on turfgrass
Filiz Ünal1,
Ayşe Aşkın1,
Ercan Koca1,
Mesut Yıldırır2 &
M. Ümit Bingöl3
Sclerotium rolfsii Sacc. (the sclerotial state of Athelia rolfsii (Cruzi) Tu and Kimbrough), a soil-borne pathogen of several plants all over the world, has previously been reported from Turkey on certain plants. In this study, turfgrass areas in 9 provinces of Turkey were surveyed for S. rolfsii for the first time, and samples showing chlorotic, reddish-brown, frog-eye-shaped circular patches were collected. In total, 32 Sclerotium rolfsii isolates were obtained from these areas. One mycelial compatibility group (MCG) was identified among the S. rolfsii isolates. Disease severity in pathogenicity tests carried out in the greenhouse ranged from 83.74 to 92.87%. Identification of the fungal and bacterial isolates used in the study was performed by DNA sequencing analysis. Five antagonistic bacterial strains, previously found effective in controlling some fungal pathogens, were tested to determine their antifungal effects against southern blight using the seed coating method under greenhouse conditions. As a result of the biological control studies, Bacillus cereus 44bac and Stenotrophomonas rhizophila 88bfp were found more effective than the other strains, with ratios of 91.00 and 90.11%, respectively.
Southern blight, caused by the soil-borne fungus Sclerotium rolfsii Sacc. (Atheliaceae: Athelia rolfsii (Cruzi) Tu and Kimbrough), is a serious disease for a wide range of plants, including vegetables, fruits, ornamental plants, and field crops (Mullen 2001). The fungus, also, attacks primarily bentgrass, bluegrass, fescues, and ryegrass (Smiley 1992).
The first noticeable symptoms in turfgrass areas are round, crescent-shaped yellow areas about 20 cm in diameter. The grass turns yellowish over time and becomes sparse. As the disease progresses, diseased areas in the form of rings or patches die, but the grass in the center remains green. The color of the dead areas turns reddish brown over time. These rings, formed in dead turfgrass in summer and humid weather, expand quickly (about 20 cm per week). Sometimes, the symptoms observed in these areas resemble a "frog eye." In Agrostis and Poa species, the diseased areas caused by this disease are usually seen in the autumn. Under moist conditions, white mycelial growth develops on the dead grass, and later sclerotia ranging from white or light brown to dark brown are observed on the mycelium (Smiley 1992).
S. rolfsii isolates can be diverged into different mycelial compatibility groups (MCGs) based on mycelial interactions among isolates. The role of MCGs is important in defining field populations of fungi and facilitating genetic variation in fungal species, where the sexual reproductive stage (teleomorph stage) of the life cycle has a minimal impact on the disease cycle (Kohn et al. 1991).
Nowadays, the establishment and maintenance of turfgrass areas has become a huge industrial sector in the world. Cultural practices are not sufficient for controlling the disease, which is why fungicide use is very common and widespread in turfgrass areas all around the world. Chemical fungicides are extensively used in turfgrass areas in Turkey, and their excessive use has led to deteriorating human health, environmental pollution, and the development of pathogen resistance to fungicides (Balcı and Gedikli, 2012). Due to these harmful effects, new studies are needed on alternative plant protection methods that are less dependent on chemicals and more environmentally friendly. In this regard, biological control can be an alternative or supplement to current management practices for S. rolfsii (Sai et al. 2010).
The beneficial microorganisms most commonly used in the control of plant pathogens belong to the genera Bacillus, Pseudomonas, and Trichoderma (Raaijmakers et al. 2010). For the biocontrol of S. rolfsii, several bacterial genera have been tested. Pseudomonas spp. and Bacillus spp. have been commonly studied to control S. rolfsii on various plants. It was detected that Pseudomonas and Bacillus strains restricted in vitro hyphal growth or reduced germination of sclerotia of S. rolfsii (Rakh et al. 2011 and Tonelli et al. 2011). Several commercial preparations containing these bacterial and fungal agents are also recommended against turfgrass diseases around the world. Bio-Trek 22G (Trichoderma harzianum) is the first registered biopesticide for dollar spot, brown patch, and Pythium root rot on turfgrass (Harman and Lo 1996). Eco Guard TM (Bacillus licheniformis), Rhapsody (B. subtilis), Actinovate SP (Streptomyces lydicus WYEC 108), and Botrycid (Pseudomonas aureofaciens) are other microbial biocides used against turfgrass diseases (Corwin et al. 2007). Among these, only Rhapsody is recommended against the southern blight disease caused by S. rolfsii in turfgrass areas. However, no microbial biocide has so far been registered against turfgrass diseases in Turkey.
The objective of this study was to molecularly identify S. rolfsii isolates from turfgrass in Turkey, to determine their virulence and mycelial compatibility groups, and to evaluate biological control of the pathogen using some domestic bacterial isolates under greenhouse conditions.
Survey and isolation of the pathogens
The survey was performed and samples were collected from turfgrass areas in İstanbul, Antalya, Ankara, İzmir, Kayseri, Bursa, Aydın, and Muğla Provinces in 2015. Segments of leaves and roots were surface-sterilized for 1 min in 1% sodium hypochlorite (NaOCl) solution, then washed with sterile water and air dried in a laminar flow cabinet before culturing on potato dextrose agar (PDA, Difco, USA) containing 50 mg/l streptomycin sulfate. Isolates were incubated under alternating light and dark regimes at 28 ± 1 °C for 7 days.
Bacterial isolates
The five antagonistic domestic bacterial strains (215b, 44bac, 88cfp, 166fp, and 88bfp) used in this study were isolated from tomato and cucumber rhizospheres in a previous study, in which isolates 166fp and 88cfp were found to control Pythium deliense, Sclerotinia minor, and Alternaria solani on tomato (Aşkın 2008). Also, isolate 44bac controlled downy mildew on cucumber (Aşkın and Ozan 2013) under field conditions. The molecular identification of the 5 antagonistic bacterial isolates used here was performed for the first time in this study.
Molecular identifications of fungal and bacterial isolates
Isolation of fungal DNA was carried out with the Blood and Tissue Kit (QIAGEN Inc., Valencia, CA), as specified by the manufacturer. The PCR reaction mixture and conditions were modified according to Mahadevakumar et al. (2016). DNA amplification was performed using the optimized cycles on a Techne TC-5000 thermal cycler. Primers ITS-1 (5′-TCC GTA GGT GAA CCT GCG G-3′) and ITS-4 (5′-TCC TCC GCT TAT TGA TAT GC-3′) were used for amplification of the ITS regions (White et al. 1990). The polymerase chain reaction (PCR) was performed in a 50-μl reaction mixture containing 1 μl template DNA, 1 μl forward primer (10 mM), 1 μl reverse primer (10 mM), 5 μl reaction buffer (10×), 4 μl dNTP (each 2.5 mM), 0.5 μl Taq DNA polymerase (5 U/μl), and 37.5 μl sterile double-distilled water. The PCR cycling protocol consisted of initial denaturation at 94 °C for 4 min, followed by 30 cycles of 94 °C for 45 s, 55 °C for 45 s, and 72 °C for 2 min, and a final elongation step of 72 °C for 10 min. As a negative control, the template DNA was replaced by sterile double-distilled water.
Molecular definition of bacteria was made according to the protocol of DNA isolation from Blood and Tissue Kit (QIAGEN Inc. Valencia, CA). The 16S rDNA gene fragments were amplified by PCR using the universal primers 27F 5′AGAGTTTGATCMTGGCTCAG3′ and 1492R 5′TACGGYTACCTTGTTACGACTT3′ (Lane, 1991). The PCR reaction mixture and conditions were modified to carry out the PCR reaction. DNA replications were performed in the ABI Veriti (Applied Biosystem) thermal cycler using the following cycles:
The initial denaturation consisted of 5 min at 94 °C, followed by 35 cycles of amplification consisting of denaturation at 94 °C for 30 s, annealing at 55 °C for 30 s, and extension at 72 °C for 120 s, with a final extension of 10 min at 72 °C (Lane 1991).
The PCR product was directly subjected to Sanger sequence treatment in a special Arge Laboratory (BM Gene Research and Biotechnology Company, Ankara, Turkey).
Bipartite raw sequence electropherograms were compared to the isolate sequences in GenBank after BLAST screening in NCBI (https://blast.ncbi.nlm.nih.gov/Blast.cgi).
Determination of mycelial compatibility groups (MCGs) of S. rolfsii isolates
In order to determine the mycelial compatibility among the 32 isolates obtained from different turfgrass areas, each isolate was paired with itself and with all other isolates (Punja and Grogan 1983). Mating studies were performed on PDA medium with 0.25% food coloring (Ponceau 4R, Turkey). Mycelial discs of two isolates were plated reciprocally on PDA medium at a distance of 3–4 cm. Cultures were incubated at 25 ± 1 °C and colony growth was observed after 7–14 days (Kohn et al. 1991). Hyphal interaction between the paired isolates was observed 7 days after culturing. Compatibility of each pair was evaluated according to the presence of a red line of separation in the region where the hyphae met: when a red line was seen, the pairing was scored as incompatible; otherwise, it was scored as compatible (Punja and Grogan 1983).
Fungal inoculums
S. rolfsii was grown for 15 days at 28 ± 1 °C on wheat bran medium in 500-ml bottles that had been sterilized in an autoclave for 20 min at 121 °C (Aşkın 2008).
Pathogenicity tests
Pathogenicity tests of the S. rolfsii isolates were conducted under greenhouse conditions. The fungal inoculum grown on wheat bran (4 g inoculum/kg soil) was added to a sterilized mixture of garden soil, fine sand, and burnt fertilizer (2:1:1) and then distributed into pots (10 cm in diameter). Control pots contained the same sterilized mixture free from the inoculum. Each treatment was replicated in three pots. All pots were covered with sanitized polyethylene nylon and incubated for 3 days. At the end of this period, 30 seeds of turfgrass (cv. Festuca arundinacea) were placed on the soil surface, covered with 1 cm of sterile natural soil, and watered with 9–10 ml of water. The infected plants were counted and recorded 3 weeks later (Zhang et al. 2014). Evaluation was made according to a scale of 0 to 5: 0 = no disease symptoms, 1 = 1–10% hypocotyl infection and/or shortening, 2 = 11–30% hypocotyl infection and/or shortening, 3 = 31–50% hypocotyl infection and/or shortening, 4 = 51–80% hypocotyl infection and/or shortening, and 5 = entire hypocotyl infected and/or shortened (Ichielevich Auster et al. 1985). Disease severity was calculated according to the Townsend–Heuberger formula (Townsend and Heuberger 1943):
$$ \mathrm{Disease\ severity}\ (\%)=\frac{\sum\left(n \times v\right)}{Z \times N} \times 100 $$
(where n is the number of samples in each disease grade of the scale, v is the scale value, Z is the highest scale value, and N is the total number of samples observed)
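A minimal Python version of this formula is sketched below; the grade counts in the example are hypothetical and are only meant to show how the 0-5 scale feeds into the percentage.

```python
def disease_severity(counts, max_grade=5):
    """Townsend-Heuberger: DS% = sum(n_v * v) / (Z * N) * 100,
    where counts[v] is the number of plants scored with scale grade v."""
    N = sum(counts.values())
    if N == 0:
        return 0.0
    return sum(n * v for v, n in counts.items()) / (max_grade * N) * 100.0

# Hypothetical scoring of 30 seedlings on the 0-5 scale described above:
print(disease_severity({0: 2, 1: 3, 2: 5, 3: 6, 4: 8, 5: 6}))   # 62.0%
```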
Bacterial inoculums
Bacterial isolates were cultured in potato dextrose broth. After 24 h, the bacterial concentration was verified by spectrophotometry at a wavelength (λ) of 600 nm, seeking an absorbance between 0.9 and 1, equivalent to a concentration of 1 × 10⁸ cfu/ml, and by counting the colony-forming units (cfu) per milliliter through the total viable count. Surface-disinfected turfgrass seeds were inoculated with the bacterial suspensions by soaking with agitation for 12 h. Rhizobacterial stock cultures were maintained in nutrient agar medium amended with 15% glycerol and stored at − 80 °C. Before being used in the bioassays, stock cultures were streaked onto nutrient agar plates and incubated at 28 ± 1 °C for 48 h.
Biocontrol assays
This study was carried out using a turfgrass seed mixture containing 4 cvs. (Festuca rubra, Lolium perenne, Poa pratensis, Festuca arundinacea) and the most virulent S. rolfsii isolate (Sr34-10). The soil used in the experiment was prepared as a 2:1:1 mixture of garden soil, stream sand, and burnt fertilizer. The soil mix was sterilized in an autoclave at 121 °C for 45 min. The inoculum of S. rolfsii was developed on wheat bran. Antagonistic bacteria were applied to the seeds by coating. Experiments were carried out in both sterilized and non-sterilized soils, where three treatments were performed: (1) a negative control, sowing uncoated turfgrass seeds in non-infested soil, (2) a positive control, sowing uncoated turfgrass seeds in infested soil to evaluate varietal sensitivity, and (3) sowing coated turfgrass seeds in infested soil to evaluate the biocontrol efficacy of each antagonistic isolate against S. rolfsii. The mixture of inoculum and soil (5 g per 1 kg of soil) was filled into sterile plastic pots (10 cm in diameter). After 4–5 days, 30 coated or uncoated turfgrass seeds were sown at a depth of 1 cm per pot. The plants were grown in a growth room with 12 h of light, 12 h of darkness, and a temperature of 25 ± 1 °C. The experiments were carried out in 3 replicates according to a randomized plot design. After inoculation, observations were made at intervals of 10 days, the 0–5 scale was applied 30 days after sowing, and disease severity was calculated as mentioned before.
Variance analyses were carried out, using the SPSS GLM statistical program, to determine the differences among virulence levels of isolates and disease rates in biocontrol assay.
Disease ratios were scored on the devised scale and converted to disease severity with the Townsend–Heuberger formula (Townsend and Heuberger 1943). The protective activity of the bacterial isolates was then calculated from these disease severity values with the Abbott formula, and disease severities were compared by Tukey's multiple comparison test.
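Abbott's formula expresses biocontrol activity as the relative reduction in disease severity compared with the untreated (pathogen-only) control. The short Python sketch below uses invented numbers, not data from this study:

```python
# Hypothetical illustration of Abbott's formula for biocontrol efficacy.
# The severity values are invented for demonstration only.

def abbott_efficacy(severity_control, severity_treated):
    """Efficacy (%) = (control - treated) / control * 100."""
    return (severity_control - severity_treated) / severity_control * 100

severity_in_control = 80.0    # disease severity (%) in the pathogen-only control
severity_with_bacteria = 8.0  # disease severity (%) with a bacterial seed coating
print(f"Biocontrol efficacy: {abbott_efficacy(severity_in_control, severity_with_bacteria):.1f} %")  # 90.0 %
```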
Survey of the disease and identification of Sclerotium rolfsii
The survey was carried out in large parks, golf courses, stadiums, and recreation areas of 8 provinces containing some of the largest turfgrass areas in different regions of Turkey, and a total of 1400 samples were collected. From these isolations, 32 S. rolfsii isolates were obtained from samples showing the distinguishing symptoms (chlorotic or reddish-brown frog-eye or crescent-shaped circular patches; Figs. 1a and b) and were identified on the basis of both colony morphology and rDNA internal transcribed spacer (ITS) region sequences. The isolates showed rapid, radial mycelial development. The colony color was white on PDA medium, with a large number of round, 1–3-mm-diameter brown sclerotia (Fig. 1c). Clamp connections were observed in the fungal hyphae (Fig. 1d).
General view of disease symptoms on turfgrass areas (a from afar, b close view). Colonial view on PDA (c). Clamp connection on hypha (d) of Sclerotium rolfsii
Molecular studies were also performed on the isolates using the ITS1 and ITS4 primer regions, and the amplicons visualized on a gel transilluminator were 650–680 bp, a size specific for S. rolfsii. These results parallel those of Poornima et al. (2018), who studied genetic variation among S. rolfsii isolates of groundnut using ITS rDNA sequence data and obtained amplification fragments of about 650–700 bp. The sequences of the isolates showed 99–100% similarity to those of S. rolfsii deposited in the NCBI database. It was concluded that S. rolfsii caused more damage in warmer climates than in semi-arid areas of Turkey. S. rolfsii had previously been found on soybean, peanut, sugar beet, tomato, pepper, eggplant, bean, and artichoke in Turkey, and is known by different names depending on the crop (Yaşar and Mert-Türk 2016; Aydoğdu et al. 2016). This is the first confirmed report of southern blight on turfgrass in Turkey.
Identification of the bacterial bioagents
The bacterial bioagents used in this study to assess possible control of southern blight on turfgrass had previously been isolated from different crops but not identified (Aşkın 2008; Aşkın and Ozan 2013). The sequence data of the bacterial isolates showed 99–100% similarity with isolates in GenBank. BLAST analysis identified Stenotrophomonas rhizophila (100% similarity), Pseudomonas putida 166fp (100% similarity), P. putida 88cfp (99.64% similarity), Paenibacillus sp. (99.79% similarity), and Bacillus cereus (100% similarity). The amplicons visualized on a gel transilluminator were between 1500 and 1550 bp. These Pseudomonas spp. and Bacillus spp. had previously been shown to protect against infection by Pythium deliense, Sclerotinia minor, and Alternaria solani on tomato (Aşkın 2008) and against downy mildew on cucumber (Aşkın and Ozan 2013) under field conditions.
Mycelial compatibility groups
In all pairings, no red line was observed in the contact zone between the two colonies; instead, complete fusion occurred, and sclerotia were formed at the junction (Fig. 2). In the compatibility tests among the 32 isolates, all were compatible with one another, so only one mycelial compatibility group (MCG) of S. rolfsii was found on turfgrass in this study (Table 1).
Mycelial compatibility of some Sclerotium rolfsii isolates
Table 1 Origin, number, MCGs, and disease severity values of Sclerotium rolfsii isolates isolated from turfgrass areas
To our knowledge, there is no MCG study of S. rolfsii on turfgrass in the world. However, in studies performed on other crops, MCGs of S. rolfsii isolates obtained from different hosts and geographical regions, and even from the same regions and hosts, have been tested, and no relationship has been reported between geographic region or host specialization and the formation of MCGs in S. rolfsii, although genetic variation among different MCGs has been reported in different crops (Adandonon et al. 2005).
Pathogenicity
Disease severity of the 32 S. rolfsii isolates ranged between 83.74 and 92.87% on the different turfgrass compositions (Table 1). The fungus is a destructive pathogen that can infect more than 500 plant species and is common in tropical and subtropical regions (Smiley et al. 1992). S. rolfsii has been found to cause damage to tomato, pepper, lettuce, bean, peanut, and sugar beet in Turkey (Yaşar and Mert-Türk 2016), but it had not previously been reported from turfgrass areas in Turkey.
In the greenhouse experiments, all identified bacterial isolates were effective in controlling the disease compared with the disease severity in the control treatment. The lowest disease severities were found in the 44bac (B. cereus) and 88bpf (S. rhizophila) treatments, at 8.00 ± 3.138 and 8.80 ± 3.138%, respectively (P < 0.0001) (Table 2 and Fig. 2). The highest disease severity among the treatments, 40.27 ± 3.138%, was obtained with 88cpf (P. putida) (Table 2). The highest protective effect was obtained with isolate 44bac (B. cereus), at 91.00%, followed by isolates 88bpf (Stenotrophomonas rhizophila) at 90.11% and 215b (Paenibacillus sp.) at 64.60% (Table 2 and Fig. 3).
Table 2 Effect of some bacterial strains against the infection by Sclerotium rolfsii on turfgrass
Effect of some bacterial isolates against southern blight caused by Sclerotium rolfsii in greenhouse experiments. a Stenotrophomonas rhizophila 88bpf, b Bacillus cereus 44bac (NK uninoculated plants, PK pathogen-inoculated plants)
Several bacterial strains have been studied worldwide for the biocontrol of S. rolfsii, and effective results have been obtained, most of them concerning the biocontrol of S. rolfsii on vegetables with the genera Pseudomonas and Bacillus (Rakh et al. 2011; Tonelli et al. 2011). However, there are no adequate studies of S. rolfsii on turfgrass.
The presence of S. rolfsii as the causal agent of southern blight disease in turfgrass areas of Turkey was revealed for the first time in this study. Both Bacillus cereus 44bac and Stenotrophomonas rhizophila 88bpf were found to be effective in controlling S. rolfsii under greenhouse conditions. These bacterial strains could be recommended for the management programs of southern blight disease in turfgrass areas; however, further studies under field conditions are needed.
Adandonon A, Aveling TAS, van derMerwe NA, Sanders G (2005) Genetic variation among Sclerotium isolates from Benin and South Africa, determined using mycelial compatibility and ITS rDNA sequence data. Aust Plant Pathol 34:19–25
Aşkın A (2008) The effects of non-pathogenic Pseudomonas on damping-off of tomato seedlings caused by some fungal pathogens in Ankara province. Ph.D. Thesis, Ankara Univ. Grad. School of Nat. and App. Scis., Dept. of Plant Protec.
Aşkın A, Ozan S (2013) Orta Anadolu Bölgesinde örtü altı hıyar yetiştiriciliğinde mildiyö (Pseudoperonospora cubensis Berk. And Curt.) mücadelesinde Bacillus spp. izolatlarının kullanım olanaklarının araştırılması. Bitki Koruma Ürünleri Ve Makineleri Kongresi, 2-4 Nisan; Antalya, 57-68.
Aydoğdu M, Kurbetli İ, Ozan S (2016) First report of Sclerotium rolfsii causing crown rot on globe artichoke in Turkey. Plant Dis 100(10):2161
Balcı V, Gedikli N (2012) Golf Alanlarında Kullanılan Kimyasal İlaçların ve Gübrelerin Çevre ve Uygulayıcılar Üzerine Etkileri-Organik Yaklaşımlar. Spormetre Beden Eğitimi ve Spor Bilimleri Dergisi, 2011 4(4):141–148
Corwin B, Tisserat N, Fresenburg B (2007) Integrated Pest Management. Identification and Management Of Turfgrass Diseases. Plant Protection Programs College of Agriculture, Food and Natural Resources, Columbia, p 55
Harman GE, Lo CT (1996) The First Registered Biological Control Product for Turf Disease: Bio-Trek 22G by G. E. Turfgrass Trends, Cornell University, USA, p 8–14
Ichielevich-Auster M, Sneh B, Koltin Y, Barash I (1985) Suppression of damping-off caused by Rhizoctonia species by a nonpathogenic isolate of R. solani. Phytopathology 75:1080–1084
Kohn LM, Stavoski E, Carbone I, Royer J, Anderson JB (1991) Mycelial incompatibility and molecular markers identify genetic variability in field populations of Sclerotinia sclerotiorum. Phytopathology 81:480–485
Lane DJ (1991) 16S/23S rRNA Sequencing In nucleic acid techniques in bacterial systematics. In: Stackebrandt E, Goodfellow M (eds) . John Wiley and Sons, New York, NY, USA, pp 115–175
Mahadevakumar S, Yadav V, Tejaswini GS, Janardhana GR (2016) Morphological and molecular characterization of Sclerotium rolfsii associated with fruit rot of Cucurbita maxima. Euro J Plant Pathol 145(1):215–219
Mullen J (2001) Southern blight, southern stem blight, white mold. The Plant Health Instructor. DOI: https://doi.org/10.1094/PHI-I-2001-0104-01. https://www.apsnet.org/edcenter/intropp/lessons/fungi/Basidiomycetes/Pages/SouthernBlight.asp.
Poornima, Sunkad G, Sudini H (2018) Molecular variability among the isolates of Sclerotium rolfsii causing stem and pod rot of groundnut collected from Karnataka, India. Int J Curr Microbiol App Sci 7(5):2925–2934
Punja ZK, Grogan RG (1983) Hyphal interactions and antagonism among field isolates and single-basidiospore strains of Athelia (Sclerotium) rolfsii. Phytopathology 73:1279–1284
Raaijmakers JM, de Bruijn I, Nybroe O, Ongena M (2010) Natural functions of lipopeptides from Bacillus and Pseudomonas: more than surfactants and antibiotics. Fems Microbiol Revs 34:1037–1062
Rakh RR, Raut LS, Dalvi SM, Manwar AV (2011) Biological control of Sclerotium rolfsii, causing stem rot of groundnut by Pseudomonas cf. monteilii 9. Recent Res Sci Technol 3:26–34
Sai LV, Anuradha P, Vijayalakshmi K, Reddy NPE (2010) Biocontrol of stem rot of groundnut incited by Sclerotium rolfsii and in vitro compatibility of potential native antagonists with fungicides. J Pure App Microbiol 4:565–570
Smiley R, Dernoeden P, Clarke B (1992) Compendium of Turfgrass Diseases, Second Edition, American Phytopathological Society. APS Press, St. Paul, p 96
Tonelli ML, Furlan A, Taurian T, Castro S, Fabra A (2011) Peanut priming induced by biocontrol agents. Physiol Molec Plant Pathol 75:100–105
Townsend GK, Heuberger JW (1943) Methods for estimating losses caused by diseases in fungicide experiments. Plant Dis Rep 27:340–343
White TJ, Bruns T, Lee S, Taylor JW (1990) Amplification and direct sequencing of fungal ribosomal RNA genes for phylogenetics. In: Innis MA, Gelfand DH, Sninsky JJ, White TJ (eds) PCR Protocols: A Guide to Methods and Applications. Academic Press, Inc, New York, pp 315–322
Yaşar İ, Mert-Türk F (2016) Mycelial compatible groups of the Sclerotium rolfsii isolates and comparison of virulence, VII International Scientific Agricultural Symposium "Agrosym 2016" Jahorina, 06-09 October 2016, Bosnia and Herzegovina.
Zhang XY, Yu XX, Yu Z, Xue YF, Qi LP (2014) A simple method based on laboratory inoculum and field inoculum for evaluating potato resistance to black scurf caused by Rhizoctonia solani. Breed Sci 64(2):156–163
A part of this study was supported by The Scientific and Technological Research Council of Turkey, project number: 114O400.
The part of the survey, isolation, and pathogenicity of S. rolfsii was funded by The Scientific and Technological Research Council of Turkey.
Plant Protection Central Research Institute, Ankara, Turkey
Filiz Ünal, Ayşe Aşkın & Ercan Koca
Soil, Fertilizer and Water Resources Central Research Institute, Ankara, Turkey
Mesut Yıldırır
Ankara University Faculty of Science Department of Biology, Dögol Caddesi 06100, Tandoğan, Ankara, Turkey
M. Ümit Bingöl
Filiz Ünal
Ayşe Aşkın
Ercan Koca
FU contributed to the survey, isolation, identification, pathogenicity, mycelial compatibility group, and biological control studies of the S. rolfsii isolates. AA contributed to the provision of bacterial isolates and to the biological control studies. EK contributed to the identification of the bacterial isolates. MY performed the statistical analyses and contributed to the mycelial compatibility group studies. MÜB identified the turfgrass species. All authors designed the study, wrote the manuscript, and read and approved the final manuscript.
Correspondence to M. Ümit Bingöl.
The authors declare that they have no competing interests.
Ünal, F., Aşkın, A., Koca, E. et al. Mycelial compatibility groups, pathogenic diversity and biological control of Sclerotium rolfsii on turfgrass. Egypt J Biol Pest Control 29, 44 (2019). https://doi.org/10.1186/s41938-019-0144-6
Mycelial compatibility group
Sclerotium rolfsii
Southern blight
doubly periodic functions as tessellations (other than parallelograms)
I think of a snapshot of a single period of a doubly periodic function as one parallelogram-shaped tile in a tessellation. Could a function have a period that repeats like a honeycomb or some other not rectangular tessellation?
analysis special-functions tessellations elliptic-functions
J. M. is a poor mathematician
futurebird
$\begingroup$ What kind of functions are we talking about here? $\endgroup$ – Qiaochu Yuan Apr 28 '11 at 19:38
$\begingroup$ C--> C, complex $\endgroup$ – futurebird Apr 28 '11 at 19:50
$\begingroup$ The fact that there exist functions that "repeat like a honeycomb" does not contradict the fact that the fundamental domain is a parallelogram, and in fact a rectangle. Suppose two parallel sides of the hexagon are horizontal. Go from the center of one hexagon in a horizontal direction until you reach the next center of a hexagon (thus NOT one of the ones that are adjacent to the one you started in) and that's one side of a rectangle. Then go from that same center in a vertical direction until you reach the next center (of a hexagon that IS adjacent to the one where you started$\,\ldots\qquad$ $\endgroup$ – Michael Hardy Aug 25 '16 at 22:23
$\begingroup$ $\ldots\,$and that's another side of the rectangle. $\qquad$ $\endgroup$ – Michael Hardy Aug 25 '16 at 22:23
(many literature searches and Mathematica experiments later...)
The usual Jacobi and Weierstrass elliptic functions have as their "repeating unit" a parallelogram (which can be made rhomboidal or square through appropriate choices of parameters). It is known that apart from parallelograms, hexagons can tile the plane by translation; so, why can't there be a doubly periodic function that has a hexagonal repeating unit?
It turns out that A.C. Dixon (the guy whose book on elliptic functions Hans linked to), in a long 1890(!) paper, studied a class of elliptic functions (now named after him) based on the inversion of the Abelian integral
$$\int\frac{\mathrm dt}{\left(1-t^3\right)^{2/3}}=t {}_2 F_1\left({{\frac13\quad \frac23}\atop{\frac43}}\mid t^3\right)$$
where ${}_2 F_1\left({{a\quad b}\atop{c}}\mid x\right)$ is a Gaussian hypergeometric function.
There are two of these Dixon elliptic functions, $\operatorname{sm}(z,0)=\operatorname{sm}(z)$ and $\operatorname{cm}(z,0)=\operatorname{cm}(z)$, corresponding to the usual sine and cosine respectively. Both functions have a real period $\pi_3=B\left(\frac13,\frac13\right)$ (where $B(a,b)$ is the beta function) and a complex period $\pi_3\exp(2i\pi/3)$, and satisfy the following relations (reminiscent of usual trigonometric identities):
$$\begin{align*} &\operatorname{sm}\left(\frac{\pi_3}{3}-z\right)=\operatorname{cm}(z)\\ &\operatorname{sm}^3(z)+\operatorname{cm}^3(z)=1\\ &\operatorname{sm}^\prime(z)=\operatorname{cm}^2(z),\quad \operatorname{cm}^\prime(z)=-\operatorname{sm}^2(z) \end{align*}$$
and, most relevant to the purposes of this question, a rotational invariance:
$$\exp(-2i\pi/3)\operatorname{sm}(z\exp(2i\pi/3))=\operatorname{sm}(z),\quad \operatorname{cm}(z\exp(2i\pi/3)) =\operatorname{cm}(z)$$
Plots of the Dixon functions on the real line don't look very interesting:
but, as with the usual elliptic functions, the fun starts in the complex plane:
These contour plots clearly display the hexagonal structure of the Dixon functions. Here is a single "fundamental period hexagon" for $\operatorname{sm}(z)$:
Note that a section of the real line (in the plots above, $\left(-\frac{\pi_3}3,\frac{2\pi_3}{3}\right)$) corresponds to a chord of the period hexagon.
Both Dixon elliptic functions possess three poles (once you've identified the congruent poles in the period hexagon) and three zeros within the fundamental hexagon. Of course, one could go the usual route and consider the "repeating unit" of the Dixon functions to be a particular rhombus; this is equivalent, since the rhombus can be appropriately dissected into a regular hexagon, and vice versa.
The Dixon elliptic functions can also be expressed in terms of Weierstrass elliptic functions:
$$\operatorname{sm}(z)=\frac{6\wp\left(z;0,\frac1{27}\right)}{1-3\wp^\prime\left(z;0,\frac1{27}\right)}$$
$$\operatorname{cm}(z)=\frac{3\wp^\prime\left(z;0,\frac1{27}\right)+1}{3\wp^\prime \left(z;0,\frac1{27}\right)-1}$$
(there are also expressions for Dixon functions in terms of Jacobi elliptic functions, but they are rather complicated.)
Finally, if you're interested in knowing more about the Dixon elliptic functions (including combinatorial applications), this paper is a good starting point.
A Mathematica notebook for those interested in exploring the topic further is available from me upon request.
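For readers who would rather experiment without Mathematica, the Dixon functions are also easy to explore numerically: since $\operatorname{sm}^\prime=\operatorname{cm}^2$ and $\operatorname{cm}^\prime=-\operatorname{sm}^2$ with $\operatorname{sm}(0)=0$, $\operatorname{cm}(0)=1$, one can simply integrate this system. The sketch below (plain Python with NumPy/SciPy assumed available; the helper names are mine, not a standard API) checks $\operatorname{sm}^3+\operatorname{cm}^3=1$ and the value $\pi_3=B\left(\frac13,\frac13\right)$ on the pole-free part of the real line.

```python
# Minimal numerical sketch of the Dixon elliptic functions on the real line,
# obtained by integrating sm' = cm^2, cm' = -sm^2 with sm(0) = 0, cm(0) = 1.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import beta

pi3 = beta(1/3, 1/3)  # real period of sm and cm

def rhs(t, y):
    sm, cm = y
    return [cm**2, -sm**2]

# Integrate over [0, pi3/3]; both functions have a real pole at 2*pi3/3,
# so we stay on the pole-free part of the fundamental interval.
t_eval = np.linspace(0.0, pi3 / 3, 1001)
sol = solve_ivp(rhs, (0.0, pi3 / 3), [0.0, 1.0], t_eval=t_eval,
                rtol=1e-10, atol=1e-12)
sm, cm = sol.y

# The cubic identity sm^3 + cm^3 = 1 should hold up to integration error.
print("max |sm^3 + cm^3 - 1| =", np.max(np.abs(sm**3 + cm**3 - 1)))

# sm(pi3/3 - z) = cm(z) implies sm(pi3/3) = 1 and cm(pi3/3) = 0.
print("sm(pi3/3), cm(pi3/3) =", sm[-1], cm[-1])
```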
J. M. is a poor mathematician
$\begingroup$ So this makes me wonder: what about the other 15 wallpaper groups? $\endgroup$ – graveolensa May 2 '11 at 21:40
$\begingroup$ @deoxy: That's an interesting question, too. Maybe you should ask it as a separate question? $\endgroup$ – J. M. is a poor mathematician May 3 '11 at 5:11
$\begingroup$ Yeah, and what about tessellations in other geometries? $\endgroup$ – Raskolnikov May 3 '11 at 7:52
$\begingroup$ @J.M. this question: math.stackexchange.com/questions/36737/… $\endgroup$ – graveolensa May 3 '11 at 17:45
$\begingroup$ @Michael, "None of this should be taken to mean you don't get a parallelogram as a fundamental region." - yes, I did mention the dissection into a rhombus in this answer. $\endgroup$ – J. M. is a poor mathematician Dec 17 '16 at 20:16
I don't think so. If there are exactly two periods (not parallel), then we have a parallelogram, so in your case there must be at least three independent periods. But that implies that the function is constant (or multi-valued), as proved for example in this old book by Dixon on elliptic functions (see §32 on p. 19).
Hans Lundmark
$\begingroup$ On the other hand, Hans displays something almost, but not quite hexagonal here (scroll down to the equianharmonic case of the Weierstrass $\wp$ function). $\endgroup$ – J. M. is a poor mathematician Apr 28 '11 at 19:40
$\begingroup$ I think it depends on how the tessellation works. I just realized that if you create your honeycomb by translation, it can be repartitioned into parallelograms. Now if we add a rotation, then it would not work. I think that this needs to be restricted to tessellations where you can pick up the whole plane and map it to itself after translation and rotation. --- Still, it's not obvious to me that there are more than three periods... Can you expand on that? $\endgroup$ – futurebird Apr 28 '11 at 19:47
$\begingroup$ Well, as you say, you can have a honeycomb pattern, but that's really just a doubly periodic pattern in disguise (with periods 1 and $\exp(i\pi/3)$, say). If you really want to go beyond the case where there is a fundamental region in the shape of a parallelogram lurking somewhere, you will have to look for something with more than two periods, and that is apparently not possible. $\endgroup$ – Hans Lundmark Apr 28 '11 at 20:59
Not the answer you're looking for? Browse other questions tagged analysis special-functions tessellations elliptic-functions or ask your own question.
$f^3 + g^3=1$ for two meromorphic functions
elliptic functions on the 17 wallpaper groups
The torus as a projective plane curve $x^3+y^3+z^3=0$
Tilings and meromorphic functions
How is tessellation defined in Mathematics?
Can we express all doubly periodic functions as one of doubly periodic function?
Can I fix a tiled floor with only one wrong tile left?
Are there periodic functions without a smallest period?
Can the lack of obstruction to deforming a checkerboard tessellation be seen as part of a larger picture?
Search for coverings in hyperbolic tessellations
Relation between periodic and continuous functions?
Decomposing geodesic tessellations over a sphere into parallelograms
Conditions for an entire doubly periodic function to be constant
Finding doubly periodic solutions to partial differential equations | CommonCrawl |
Does anything guarantee that a field theory will have a lower bound on energy, so that a vacuum exists?
If a system of particles is bound, then it has negative energy relative to the same system disassembled into its separated parts. In the nonrelativistic limit, this negative energy is small compared to the sum of the masses of the constituent particles, so the mass of the bound system is still positive.
But relativistically, there is no obvious reason why this has to be true. For example, the electromagnetic charge radius of the pion is about 0.7 fm. A particle-in-a-box calculation for two massless particles in a box of this size gives a kinetic energy of about 1500 MeV, but the observed mass of a pion is about 130 MeV, which suggests an extremely delicate near-cancellation between the positive kinetic energy and the negative potential energy. I see no obvious reason why this couldn't have gone the other way, with the mass coming out negative.
Is one of the following correct?
Some general mechanism in QFT prevents negative masses.
Nothing in QFT prevents negative masses, but something does guarantee that there is always a lower bound on the energy, so that a vacuum exists. If pions could condense from the spontaneous creation of quark-antiquark pairs, then we would just redefine the vacuum.
Nothing guarantees a lower bound on energy. The parameters of the standard model could be chosen in such a way that there would be no lower bound on energy. We don't observe that our universe is that way, so we adjust the parameters so it doesn't happen.
If 1, what is the mechanism that guarantees safety?
If 2, what is it that guarantees that we can successfully redefine the vacuum? In the pion example, pions are bosons, so it's not like we can fill up all the pion states.
If 3, is this natural? Do we need fine-tuning?
Are there no general protections, but protection mechanisms that work in some cases? E.g., in the case of a Goldstone boson, we naturally get a zero mass. Do the perturbations that then make the mass nonzero always make it positive?
Related: Is negative mass for a bound system of two particles forbidden?
quantum-field-theory hilbert-space vacuum unitarity ground-state
Ben Crowell
It's not #1, because it's easy to write down QFTs which are unstable against pair production. One simple example is $$\mathcal{L} = \frac12 (\partial_\mu \phi)^2 + \frac12 m^2 \phi^2$$ which corresponds to particles with negative $m^2$. Since $E^2 = p^2 + m^2$, it is energetically favorable to produce infinitely many particles, since the rest energy is negative. (Note this corresponds to imaginary mass, not negative mass.) There's no vacuum state at all, so #2 doesn't hold either.
For a scalar field theory, as long as we have $$\mathcal{L} = \frac12 (\partial_\mu \phi)^2 - V(\phi)$$ where the potential $V(\phi)$ is bounded below, then we will have a vacuum state, simply because the Hamiltonian is bounded below. Heuristically, if we start, say, at a maximum of the potential rather than a minimum, particle production will occur until the vacuum expectation of the field is shifted to the minimum, which is our true vacuum state. Another way to phrase this is that the particles produced interact with each other (noninteracting particles correspond to a quadratic potential, which cannot have both a maximum and minimum), so as this process goes on it becomes less energetically favorable to create more particles, and the process stops at the true vacuum. All of this is a simplified description of what happened to the Higgs field at some point in the early universe.
Since it's essentially #3, I suppose your real question boils down to: how do people enforce the existence of a stable vacuum in the first place? Well, if you work on formal QFT, you simply assume it. That's called the spectrum condition, and it's a reasonable requirement to make of any field theory, like assuming that a spacetime is time orientable in general relativity.
If you're a model builder adding stuff to the Standard Model, there are a couple things you can use:
if your new particles are weakly coupled, the energies are just $E = \sqrt{p^2 + m^2}$ plus small corrections, so you can essentially read off the result from the potential (this covers most papers)
if your new particles are weakly coupled but the above treatment is too loose, you can try computing the effective potential perturbatively, e.g. the Coleman-Weinberg potential
if your new particles are strongly coupled but analogous to QCD, we assume it's fine because QCD has a stable vacuum, e.g. whenever any paper says "consider a confining hidden sector"
if your theory has spontaneously broken supersymmetry, the vacuum energy density is positive
The trickiest part of deciding whether the Standard Model itself has a stable vacuum is QCD. We know the naive vacuum isn't stable, and lattice simulations tell us there is a vacuum containing a so-called chiral condensate of quarks. So in some sense your question about the sign of the pion mass could have gone the other way, and in fact it already has, because the chiral condensate formed and we now define pions about that.
If you are allowed to assume that QCD with massless quarks has a stable vacuum, then it's straightforward to show that pions have positive $m^2$ once you account for quark masses. But actually showing that statement is difficult nonperturbative physics. I don't know how to do it, and I don't know if anybody knows.
$\begingroup$ Thanks for taking the time to write such a detailed answer! You say that if $V$ is bounded below, then the process of particle formation ends at the true vacuum. In the concept of the Dirac sea (which I guess is just an outdated heuristic?), does this result in an infinite positive energy, which we then have to subtract away somehow? $\endgroup$ – Ben Crowell Nov 11 '19 at 15:43
$\begingroup$ @BenCrowell I guess one can call the Dirac sea outdated, but it's just a special case of how we think about renormalization. In basically every nontrivial QFT, the vacuum energy density isn't equal to what we would naively expect (i.e. the Hamiltonian density evaluated at the classical vacuum). There are always corrections, which are formally infinite with an infinite cutoff and still large with a finite cutoff. But you can absorb this by adding a constant term to the Lagrangian, which has to have been there all along. $\endgroup$ – knzhou Nov 11 '19 at 18:21
$\begingroup$ @BenCrowell There's a different question which is, what happens if you start at a false vacuum and then transition to the true vacuum? In flat spacetime, energy is conserved, which means you end up with a big positive energy relative to the true vacuum, which in practice can manifest as radiation emitted or topological defects or whatever. In a cosmological context, this energy then starts to be redshifted away. $\endgroup$ – knzhou Nov 11 '19 at 18:23
$\begingroup$ Is it conceivable that under conditions like those before the Big Bang, the fields are unstable, but the resulting dynamics leads to conditions under which the fields are stable? $\endgroup$ – S. McGrew Nov 11 '19 at 18:26
$\begingroup$ @S.McGrew If by "big bang" you mean the moment when the universe became extremely hot, then not only is it conceivable, but you've actually just described inflation. The period of inflation occurs because some field is moving toward a stable vacuum, and during this process its potential energy drives accelerating expansion. $\endgroup$ – knzhou Nov 11 '19 at 18:29
Option 3 is the closest match, but it's a bit like saying "Nothing guarantees that spacetime has a Lorentzian signature." We normally only consider spacetimes that do, because so much else depends on it. It's a requirement, not a theorem.
Similarly, for relativistic QFT in flat spacetime, we normally only consider QFTs whose total energy has a finite lower bound. The Lorentz-symmetric statement of this condition is that the spectrum of the generators of spacetime translations is restricted to the future light-cone. This is called the spectrum condition. It's one of the basic conditions that we usually require, just like microcausality. Theories that don't satisfy these basic conditions are rejected as unphysical, because so many other things rely on them. For example, the spin-statistics theorem and the CPT theorem both rely on the spectrum condition.
That's for QFT in flat spacetime. For QFT in a generic curved spacetime, we lose translation symmetry, so there are no "generators of spacetime translations," and the spectrum condition becomes undefined. Candidate replacements have been proposed, like the microlocal spectrum condition, but as far as I know this is still an unsettled research topic. The goal, I suppose, is to find a condition that allows things like the spin-statistics theorem to be derived even in curved spacetime. (If I remember right, this has sort of already been done, but the approach that I'm vaguely remembering relies on the flat spacetime proof, and something about it didn't seem quite satisfying to me. If you're interested, I can try to find that paper and post a link.)
Returning to flat spacetime...
is this natural? Do we need fine-tuning?
Depends on what you mean. If we define the Hamiltonian (total energy operator) to be the generator of time-translations, then the constant term can be shifted by an arbitrary finite value with no observable effects. In that sense, there is no fine-tuning problem. But the real world includes gravity even if our favorite QFT doesn't, and gravity does care about that constant term in the Hamiltonian. In that sense, there is a fine-tuning problem, also known as the cosmological constant problem: if we define our QFT with a short-distance cutoff, then the constant term in the total energy (or the cosmological constant) is extremely sensitive to the precise value of the cutoff, even though the cutoff is artificial. It's not a "real" problem in a QFT that doesn't include gravity anyway, but it's a symptom that QFT and gravity probably don't get along with each other in the way we might have naively expected.
Ben Crowell
Chiral Anomaly
$\begingroup$ Thanks, this is very helpful, and complements knzhou's answer. Too bad I can only accept one. $\endgroup$ – Ben Crowell Nov 11 '19 at 15:43
The operational definition of entropy makes it roughly proportional to the logarithm of the number of accessible microstates of a given equilibrium macrostate. For positive-energy systems, the number of allowed microstates grows quickly with increasing system energy. If one expects a negative-energy regime to be a mirrored version of the positive-energy branch, this would lead to the conclusion that the number of microstates grows quickly with decreasing system energy, and that the system would be thermodynamically unstable.
If one removes the assumption of rapid growth in the number of microstates with decreasing energy, then one might have to consider that some theories with negative-energy states can have stable or metastable vacuums. In such theories, a transition to a lower-than-typical vacuum (or at least what we citizens of galaxies call a typical vacuum) can be enhanced only far away from any other sources of matter or potentials, which might be a viable mechanism for dark energy.
lurscher
Not the answer you're looking for? Browse other questions tagged quantum-field-theory hilbert-space vacuum unitarity ground-state or ask your own question.
Is negative mass for a bound system of two particles forbidden?
Nuclear physics from perturbative QFT
Physics in high lepton chemical potential
Creation of particle anti-particle pairs
What is the difference between quantum fluctuations and thermal fluctuations?
Can the fact that the vacuum energy in curved spacetime is not boost invariant be explained without mathematics?
What's the role of the Dirac vacuum sea in quantum field theory?
Can QFT be constructed without using vacuum? | CommonCrawl |
June 2021, 10(2): 333-351. doi: 10.3934/eect.2020069
Internal feedback stabilization for parabolic systems coupled in zero or first order terms
Elena-Alexandra Melnig 1,2,
Faculty of Mathematics, University "Al. I. Cuza" Iaşi, Romania
Octav Mayer Institute of Mathematics, Romanian Academy, Iaşi Branch, Romania
Received January 2020 Revised March 2020 Published June 2021 Early access June 2020
Fund Project: The author was supported by a grant of the Ministry of Research and Innovation, CNCS - UEFISCDI, project number PN-III-P4-ID-PCE-2016-0011
We consider systems of $ n $ parabolic equations coupled in zero or first order terms with $ m $ scalar controls acting through a control matrix $ B $. We are interested in stabilization with a control in feedback form. Our approach relies on the approximate controllability of the linearized system, which in turn is related to unique continuation property for the adjoint system. For the unique continuation we establish algebraic Kalman type conditions.
Keywords: Parabolic systems, Feedback stabilization, Unique continuation property, Kalman rank condition.
Mathematics Subject Classification: Primary: 35K40, 93D15, 93B52; Secondary: 35K57, 93B18.
Citation: Elena-Alexandra Melnig. Internal feedback stabilization for parabolic systems coupled in zero or first order terms. Evolution Equations & Control Theory, 2021, 10 (2) : 333-351. doi: 10.3934/eect.2020069
Guojie Zheng, Dihong Xu, Taige Wang. A unique continuation property for a class of parabolic differential inequalities in a bounded domain. Communications on Pure & Applied Analysis, 2021, 20 (2) : 547-558. doi: 10.3934/cpaa.2020280
Hamid Maarouf. Local Kalman rank condition for linear time varying systems. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021029
Muriel Boulakia. Quantification of the unique continuation property for the nonstationary Stokes problem. Mathematical Control & Related Fields, 2016, 6 (1) : 27-52. doi: 10.3934/mcrf.2016.6.27
Laurent Bourgeois. Quantification of the unique continuation property for the heat equation. Mathematical Control & Related Fields, 2017, 7 (3) : 347-367. doi: 10.3934/mcrf.2017012
Gunther Uhlmann, Jenn-Nan Wang. Unique continuation property for the elasticity with general residual stress. Inverse Problems & Imaging, 2009, 3 (2) : 309-317. doi: 10.3934/ipi.2009.3.309
Imene Aicha Djebour, Takéo Takahashi, Julie Valein. Feedback stabilization of parabolic systems with input delay. Mathematical Control & Related Fields, 2021 doi: 10.3934/mcrf.2021027
Zhongqi Yin. A quantitative internal unique continuation for stochastic parabolic equations. Mathematical Control & Related Fields, 2015, 5 (1) : 165-176. doi: 10.3934/mcrf.2015.5.165
Agnid Banerjee. A note on the unique continuation property for fully nonlinear elliptic equations. Communications on Pure & Applied Analysis, 2015, 14 (2) : 623-626. doi: 10.3934/cpaa.2015.14.623
Cătălin-George Lefter, Elena-Alexandra Melnig. Feedback stabilization with one simultaneous control for systems of parabolic equations. Mathematical Control & Related Fields, 2018, 8 (3&4) : 777-787. doi: 10.3934/mcrf.2018034
Peng Gao. Unique continuation property for stochastic nonclassical diffusion equations and stochastic linearized Benjamin-Bona-Mahony equations. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2493-2510. doi: 10.3934/dcdsb.2018262
Peng Gao. Carleman estimates and Unique Continuation Property for 1-D viscous Camassa-Holm equation. Discrete & Continuous Dynamical Systems, 2017, 37 (1) : 169-188. doi: 10.3934/dcds.2017007
Giovanni Covi, Keijo Mönkkönen, Jesse Railo. Unique continuation property and Poincaré inequality for higher order fractional Laplacians with applications in inverse problems. Inverse Problems & Imaging, 2021, 15 (4) : 641-681. doi: 10.3934/ipi.2021009
Rohit Gupta, Farhad Jafari, Robert J. Kipka, Boris S. Mordukhovich. Linear openness and feedback stabilization of nonlinear control systems. Discrete & Continuous Dynamical Systems - S, 2018, 11 (6) : 1103-1119. doi: 10.3934/dcdss.2018063
Ilyasse Lamrani, Imad El Harraki, Ali Boutoulout, Fatima-Zahrae El Alaoui. Feedback stabilization of bilinear coupled hyperbolic systems. Discrete & Continuous Dynamical Systems - S, 2021, 14 (10) : 3641-3657. doi: 10.3934/dcdss.2020434
José G. Llorente. Mean value properties and unique continuation. Communications on Pure & Applied Analysis, 2015, 14 (1) : 185-199. doi: 10.3934/cpaa.2015.14.185
A. Alexandrou Himonas, Gerard Misiołek, Feride Tiǧlay. On unique continuation for the modified Euler-Poisson equations. Discrete & Continuous Dynamical Systems, 2007, 19 (3) : 515-529. doi: 10.3934/dcds.2007.19.515
Can Zhang. Quantitative unique continuation for the heat equation with Coulomb potentials. Mathematical Control & Related Fields, 2018, 8 (3&4) : 1097-1116. doi: 10.3934/mcrf.2018047
Ruth F. Curtain, George Weiss. Strong stabilization of (almost) impedance passive systems by static output feedback. Mathematical Control & Related Fields, 2019, 9 (4) : 643-671. doi: 10.3934/mcrf.2019045
Ionuţ Munteanu. Boundary stabilization of non-diagonal systems by proportional feedback forms. Communications on Pure & Applied Analysis, 2021, 20 (9) : 3113-3128. doi: 10.3934/cpaa.2021098
Thomas I. Seidman, Houshi Li. A note on stabilization with saturating feedback. Discrete & Continuous Dynamical Systems, 2001, 7 (2) : 319-328. doi: 10.3934/dcds.2001.7.319
Elena-Alexandra Melnig | CommonCrawl |
April 2005, 13(1): 219-237. doi: 10.3934/dcds.2005.13.219
Multi-dimensional dynamical systems and Benford's Law
Arno Berger 1,
Department of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand
Received December 2003 Revised November 2004 Published March 2005
One-dimensional projections of (at least) almost all orbits of many multi-dimensional dynamical systems are shown to follow Benford's law, i.e. their (base $b$) mantissa distribution is asymptotically logarithmic, typically for all bases $b$. As a generalization and unification of known results it is proved that under a (generic) non-resonance condition on $A\in \mathbb C^{d\times d}$, for every $z\in \mathbb C^d$ real and imaginary part of each non-trivial component of $(A^nz)_{n\in N_0}$ and $(e^{At}z)_{t\ge 0}$ follow Benford's law. Also, Benford behavior is found to be ubiquitous for several classes of non-linear maps and differential equations. In particular, emergence of the logarithmic mantissa distribution turns out to be generic for complex analytic maps $T$ with $T(0)=0$, $|T'(0)|<1$. The results significantly extend known facts obtained by other, e.g. number-theoretical, methods, and also generalize recent findings for one-dimensional systems.
Keywords: shadowing, uniform distribution mod 1, dynamical systems, attractor, Benford's law.
Mathematics Subject Classification: Primary: 11K06, 37A50, 60A10; Secondary: 28D05, 60F05, 70K55.
Citation: Arno Berger. Multi-dimensional dynamical systems and Benford's Law. Discrete & Continuous Dynamical Systems - A, 2005, 13 (1) : 219-237. doi: 10.3934/dcds.2005.13.219
Lianfa He, Hongwen Zheng, Yujun Zhu. Shadowing in random dynamical systems. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 355-362. doi: 10.3934/dcds.2005.12.355
Hiroshi Matano, Ken-Ichi Nakamura. The global attractor of semilinear parabolic equations on $S^1$. Discrete & Continuous Dynamical Systems - A, 1997, 3 (1) : 1-24. doi: 10.3934/dcds.1997.3.1
Noriaki Kawaguchi. Topological stability and shadowing of zero-dimensional dynamical systems. Discrete & Continuous Dynamical Systems - A, 2019, 39 (5) : 2743-2761. doi: 10.3934/dcds.2019115
P.E. Kloeden, Desheng Li, Chengkui Zhong. Uniform attractors of periodic and asymptotically periodic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2005, 12 (2) : 213-232. doi: 10.3934/dcds.2005.12.213
Tohru Nakamura, Shuichi Kawashima. Viscous shock profile and singular limit for hyperbolic systems with Cattaneo's law. Kinetic & Related Models, 2018, 11 (4) : 795-819. doi: 10.3934/krm.2018032
Nicolai Haydn, Sandro Vaienti. The limiting distribution and error terms for return times of dynamical systems. Discrete & Continuous Dynamical Systems - A, 2004, 10 (3) : 589-616. doi: 10.3934/dcds.2004.10.589
Xinyuan Liao, Caidi Zhao, Shengfan Zhou. Compact uniform attractors for dissipative non-autonomous lattice dynamical systems. Communications on Pure & Applied Analysis, 2007, 6 (4) : 1087-1111. doi: 10.3934/cpaa.2007.6.1087
Caidi Zhao, Shengfan Zhou. Compact uniform attractors for dissipative lattice dynamical systems with delays. Discrete & Continuous Dynamical Systems - A, 2008, 21 (2) : 643-663. doi: 10.3934/dcds.2008.21.643
Michael Zgurovsky, Mark Gluzman, Nataliia Gorban, Pavlo Kasyanov, Liliia Paliichuk, Olha Khomenko. Uniform global attractors for non-autonomous dissipative dynamical systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 2053-2065. doi: 10.3934/dcdsb.2017120
Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809
Keonhee Lee, Kazumine Moriyasu, Kazuhiro Sakai. $C^1$-stable shadowing diffeomorphisms. Discrete & Continuous Dynamical Systems - A, 2008, 22 (3) : 683-697. doi: 10.3934/dcds.2008.22.683
Giovanni Forni, Howard Masur, John Smillie. Bill Veech's contributions to dynamical systems. Journal of Modern Dynamics, 2019, 14: ⅴ-xxv. doi: 10.3934/jmd.2019v
Alicia Cordero, José Martínez Alfaro, Pura Vindel. Bott integrable Hamiltonian systems on $S^{2}\times S^{1}$. Discrete & Continuous Dynamical Systems - A, 2008, 22 (3) : 587-604. doi: 10.3934/dcds.2008.22.587
Flavio Abdenur, Lorenzo J. Díaz. Pseudo-orbit shadowing in the $C^1$ topology. Discrete & Continuous Dynamical Systems - A, 2007, 17 (2) : 223-245. doi: 10.3934/dcds.2007.17.223
Nicolai T. A. Haydn, Kasia Wasilewska. Limiting distribution and error terms for the number of visits to balls in non-uniformly hyperbolic dynamical systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2585-2611. doi: 10.3934/dcds.2016.36.2585
Sonja Hohloch, Silvia Sabatini, Daniele Sepe. From compact semi-toric systems to Hamiltonian $S^1$-spaces. Discrete & Continuous Dynamical Systems - A, 2015, 35 (1) : 247-281. doi: 10.3934/dcds.2015.35.247
H. M. Hastings, S. Silberger, M. T. Weiss, Y. Wu. A twisted tensor product on symbolic dynamical systems and the Ashley's problem. Discrete & Continuous Dynamical Systems - A, 2003, 9 (3) : 549-558. doi: 10.3934/dcds.2003.9.549
Cedric Galusinski, Serguei Zelik. Uniform Gevrey regularity for the attractor of a damped wave equation. Conference Publications, 2003, 2003 (Special) : 305-312. doi: 10.3934/proc.2003.2003.305
Tomás Caraballo, David Cheban. On the structure of the global attractor for infinite-dimensional non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2013, 12 (1) : 281-302. doi: 10.3934/cpaa.2013.12.281
Arno Berger | CommonCrawl |
Instruments and methods of investigation
Objectives of the Millimetron Space Observatory science program and technical capabilities of its realization
I.D. Novikov a, b, S.F. Likhachev a, Yu.A. Shchekinov a, c, d, A.S. Andrianov a, A.M. Baryshev a, e, A.I. Vasyunin f, D.Z. Wiebe g, T. de Graauw a, e, A.G. Doroshkevich a, I.I. Zinchenko h, i, N.S. Kardashev a, V.I. Kostenko a, T.I. Larchenkova a, L.N. Likhacheva a, A.O. Lyakhovets a, D.I. Novikov a, S.V. Pilipenko a, A.F. Punanova f, A.G. Rudnitsky a, A.V. Smirnov a, V.I. Shematovich g
a Lebedev Physical Institute, Russian Academy of Sciences, Leninsky prosp. 53, Moscow, 119991, Russian Federation
b Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, Copenhagen, DK-2100, Denmark
c Southern Federal University, Bolshaya Sadovaya Str. 105/42, Rostov-on-Don, 344006, Russian Federation
d Raman Research Institute, C. V. Raman Avenue, Sadashiva Nagar, Bangalore, 560080, India
e Kapteyn Astronomical Institute, University of Groningen, PO Box 72, Groningen, 9700, the Netherlands
f Ural Federal University named after the First President of Russia B N Yeltsin, prosp. Mira 19, Ekaterinburg, 620002, Russian Federation
g Institute of Astronomy, Russian Academy of Sciences, ul. Pyatnitskaya 48, Moscow, 119017, Russian Federation
h Federal Research Center Institute of Applied Physics of the Russian Academy of Sciences, ul. Ulyanova 46, Nizhny Novgorod, 603000, Russian Federation
i N.I. Lobachevskii Nizhnii Novgorod State University, prosp. Gagarina 23, Nizhnii Novgorod, Russian Federation
We present the scientific program of the Spectr-M project aimed at the creation and operation of the Millimetron Space Observatory (MSO) planned for launch in the late 2020s. The unique technical capabilities of the observatory will enable broadband observations of astronomical objects at wavelengths from 50 μm to 10 mm with a record sensitivity (up to ∼ 0.1 μJy) in the single-dish mode and with an unprecedentedly high angular resolution (∼ 0.1 μas) in the ground-space very long baseline interferometer (SVLBI) regime. The program addresses fundamental priority issues of astrophysics and physics in general that can be solved only with the MSO capabilities: 1) the study of physical processes in the early Universe up to redshifts $z\sim 2\times 10^6$ through measuring μ-distortions of the cosmic microwave background (CMB) spectrum, and investigation of the structure and evolution of the Universe at redshifts z < 15 by measuring y-distortions of the CMB spectrum; 2) the investigation of the geometry of space-time around supermassive black holes (SMBHs) in the center of our Galaxy and M87 by imaging the surrounding shadows, the study of plasma properties in the shadow formation regions, and the search for observational appearances of wormholes; 3) the study of observational appearances of the origin of life in the Universe: the search for water and biomarkers in the Galactic interstellar medium. Moreover, the technical capabilities of the MSO can help solve related problems, including the birth of the first galaxies and SMBHs (z ≳ 10), alternative approaches to measuring the Hubble constant, the physics of SMBHs in 'dusty' galactic nuclei, the study of protoplanetary disks and water transport in them, and the study of 'worlds with oceans' in the Solar System.
Keywords: submillimeter astronomy, Millimetron Space Observatory, supermassive black holes, wormholes, cosmic microwave background, early Universe, origin of galaxies, interstellar medium, water and biomarkers in the Galaxy, Solar System
PACS: 07.87.+v, 96.30.−t, 96.55.+z, 97.60.Lf, 98.80.Es (all)
URL: https://ufn.ru/en/articles/2021/4/e/
Citation: Novikov I D, Likhachev S F, Shchekinov Yu A, Andrianov A S, Baryshev A M, Vasyunin A I, Wiebe D Z, de Graauw T, Doroshkevich A G, Zinchenko I I, Kardashev N S, Kostenko V I, Larchenkova T I, Likhacheva L N, Lyakhovets A O, Novikov D I, Pilipenko S V, Punanova A F, Rudnitsky A G, Smirnov A V, Shematovich V I "Objectives of the Millimetron Space Observatory science program and technical capabilities of its realization" Phys. Usp. 64 386–419 (2021)
Received: 14th, April 2020, revised: 8th, December 2020, accepted: 7th, December 2020
Оригинал: Новиков И Д, Лихачёв С Ф, Щекинов Ю А, Андрианов А С, Барышев А М, Васюнин А И, Вибе Д З, де Граау Т, Дорошкевич А Г, Зинченко И И, Кардашёв Н С, Костенко В И, Ларченкова Т И, Лихачёва Л Н, Ляховец А О, Новиков Д И, Пилипенко С В, Пунанова А Ф, Рудницкий А Г, Смирнов А В, Шематович В И «Задачи научной программы космической обсерватории Миллиметрон и технические возможности её реализации» УФН 191 404–443 (2021); DOI: 10.3367/UFNr.2020.12.038898
Heat Conduction in One Dimension
By Jitender Singh on Dec 21, 2019
In general, the temperature at a point (x) varies with time (t), i.e., it is a function $T(x,t)$. In the steady state, the temperature depends on x but not on time t, i.e., $T(x,t)=T(x)$. In this state, the heat that reaches any cross-section is transmitted to the next without accumulation. The discussion in this article is limited to conduction heat transfer in the steady state.
The rate of heat transfer through a rod of length x and cross-sectional area A whose two ends are maintained at temperatures T1 and T2 is given by \begin{align} \frac{\mathrm{d}Q}{\mathrm{d}t}=\frac{KA}{x}(T_1-T_2) \end{align} where K is the thermal conductivity of the material of the rod. The SI unit of thermal conductivity is W/(m-K). The dimensional formula of thermal conductivity is MLT⁻³K⁻¹. Note that heat flows from the end at the higher temperature T1 to the end at the lower temperature T2.
The quantity dQ/dt is also called the heat current or the rate of flow of heat. The quantity $(T_1-T_2)/x$ (and its differential form dT/dx) is called the temperature gradient. The thermal resistance of a body is defined as the ratio of the temperature difference to the heat current \begin{align} R=\frac{\Delta T}{\mathrm{d}Q/\mathrm{d}t}=\frac{x}{KA}. \end{align} In electrical circuits, the analogues of heat current, temperature difference, and thermal resistance are electric current, potential difference, and electrical resistance, respectively.
Consider two rods of thermal resistances $R_1$ and $R_2$. The effective thermal resistance of the two rods connected in series is given by \begin{align} R_s=R_1+R_2. \end{align} If the rods have the same physical dimensions but thermal conductivities $K_1$ and $K_2$, then the effective thermal conductivity of the series combination (treated as a single rod of twice the length) is \begin{align} K_s=\frac{2K_1K_2}{K_1+K_2}. \end{align} The effective thermal resistance of two rods connected in parallel is given by \begin{align} R_p=\frac{R_1R_2}{R_1+R_2}. \end{align} If the parallel rods have the same physical dimensions but thermal conductivities $K_1$ and $K_2$, then the effective thermal conductivity of the parallel combination (treated as a single rod of twice the cross-sectional area) is \begin{align} K_p=\frac{K_1+K_2}{2}. \end{align}
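To make the composite-rod formulas concrete, here is a small Python sketch (with made-up material values, not taken from this article) that computes the series and parallel effective conductivities from the individual resistances and checks them against the closed-form expressions above.

```python
# Illustrative check of series/parallel conduction formulas for two rods of
# equal length L and equal cross-sectional area A (values are made up).

L = 0.5      # length of each rod (m)
A = 1e-3     # cross-sectional area (m^2)
K1, K2 = 400.0, 50.0   # thermal conductivities (W/(m*K)), e.g. copper-like and steel-like

R1 = L / (K1 * A)      # thermal resistance of rod 1 (K/W)
R2 = L / (K2 * A)

# Series: resistances add; the composite behaves as one rod of length 2L.
R_series = R1 + R2
K_series = 2 * L / (R_series * A)
print(K_series, 2 * K1 * K2 / (K1 + K2))   # both give the same value

# Parallel: conductances add; the composite behaves as one rod of area 2A.
R_parallel = R1 * R2 / (R1 + R2)
K_parallel = L / (R_parallel * 2 * A)
print(K_parallel, (K1 + K2) / 2)           # both give the same value

# Heat current through the series combination for a 100 K temperature difference:
dT = 100.0
print("heat current (W):", dT / R_series)
```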
In general, a substance that is a good conductor of heat is also a good conductor of electricity. For metals at a given temperature T, the ratio of the thermal conductivity (K) to the electrical conductivity (σ) is essentially the same, i.e., K/(σT) = L (a constant, the Lorenz number). This is known as the Wiedemann-Franz law.
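A quick numerical illustration is given below; the copper values are approximate textbook figures assumed here, not data from this page.

```python
# Rough check of the Wiedemann-Franz law for copper at room temperature.
# The material constants below are approximate textbook values (assumed here).
import math

k_B = 1.380649e-23   # Boltzmann constant (J/K)
e = 1.602176634e-19  # elementary charge (C)

L_theory = (math.pi**2 / 3) * (k_B / e)**2   # ~2.44e-8 W*Ohm/K^2

K_copper = 401.0        # thermal conductivity, W/(m*K)
sigma_copper = 5.96e7   # electrical conductivity, S/m
T = 300.0               # K

L_copper = K_copper / (sigma_copper * T)
print(f"theoretical Lorenz number: {L_theory:.2e} W*Ohm/K^2")
print(f"copper K/(sigma*T):        {L_copper:.2e} W*Ohm/K^2")
```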
Solved Problems on Conduction Heat Transfer from IIT JEE
The end Q and R of two thin wires, PQ and RS, are soldered (IIT JEE 2016)
Two blocks connected in series and parallel ( IIT JEE 2013)
Two rods connected in series and parallel (IIT JEE 2014)
Two conducting cylinders of equal length but different radii connected in series (IIT JEE 2018)
Temperature at the junction of the three identical rods (IIT JEE 2001)
Three rods forms an isosceles triangle. Temperature at the vertex? (IIT JEE 1995)
Effective thermal conductivity of two concentric cylinders (IIT JEE 1988)
Evaporation of water and melting of ice kept at the two ends of a metal rod AB of length 10x (IIT JEE 2009)
MCQ on a composite block made of slabs A, B, C, D, and E (IIT JEE 2011)
A point source of heat of power P kept at the centre of a spherical shell (IIT JEE 1991)
A double pane window used for insulating a room thermally from outside (IIT JEE 1997)
An electric heater is used in a room of total wall area 137 m² (IIT JEE 1986)
Questions on Conduction Heat Transfer
Question 1: For cooking food, which of the following types of utensil is most suitable?
A. High specific heat and low thermal conductivity
B. High specific heat and high thermal conductivity
C. Low specific heat and low thermal conductivity
D. Low specific heat and high thermal conductivity
Question 2: Two conducting walls of thickness d1 and d2, and thermal conductivities K1 and K2, are joined together. If the temperatures on the outside surfaces are T1 and T2, then the temperature of the common surface is
A. $\frac{K_1T_1d_1+K_2T_2d_2}{K_1d_1+K_2d_2}$
B. $\frac{K_1T_1+K_2T_2}{K_1+K_2}$
C. $\frac{K_1T_1+K_2T_2}{T_1+T_2}$
D. $\frac{K_1T_1d_2+K_2T_2d_1}{K_1d_2+K_2d_1}$
Question 3: Ice has formed on a shallow pond, and a steady state has been reached, with the air above the ice at -5.0°C and the bottom of the pond at 4°C. If the total depth of ice + water is 1.4 m, then the thickness of the ice layer is (Assume that the thermal conductivities of ice and water are 0.40 and 0.12 cal/(m-s-°C), respectively. A numerical sketch of the steady-state condition is given after the answer options.)
A. 1.1 m
B. 0.4 m
C. 2.1 m
D. 3.6 m
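In the steady state of Question 3, the heat current through the ice layer must equal the heat current through the water layer, with the ice-water interface at 0°C. A short Python check of that condition (assuming, as above, that the quoted conductivities are in cal/(m-s-°C)):

K_ice, K_water = 0.40, 0.12   # thermal conductivities, cal/(m-s-°C)
dT_ice, dT_water = 5.0, 4.0   # temperature drops across the ice and water layers (°C)
total_depth = 1.4             # m

# Steady state: K_ice * dT_ice / L_ice = K_water * dT_water / L_water, with L_ice + L_water = total_depth
ratio = (K_water * dT_water) / (K_ice * dT_ice)   # this equals L_water / L_ice
L_ice = total_depth / (1 + ratio)
print(round(L_ice, 2))        # ≈ 1.13 m, closest to option A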
Stefan's Law
Kirchhoff's Law of Radiation
Wien's Displacement Law
See Our Book
IIT JEE Physics by Jitender Singh and Shraddhesh Chaturvedi
| CommonCrawl
Why is the mass of atmosphere of Venus so much greater than that of the Earth?
Earth and Venus have a very similar gravity, but the mass of atmosphere on Venus is much greater (according to this wikipedia article 93 times larger). I know that the chemical composition and temperature are different, but there is just a lot more matter in the Venusian atmosphere. Why is this?
atmosphere planetary-science
Also, I think you'll find this question useful: space.stackexchange.com/questions/22856/… – Will Feb 3 '20 at 13:06
Voted to close because not earth science. The question would be better answered in the astronomy department. Something to read I found; the "why" still seems to be unclear: ui.adsabs.harvard.edu/abs/2013cctp.book...19B/abstract – user18607 Feb 3 '20 at 15:23
earthscience.meta.stackexchange.com/q/2/18590 – user18590 Feb 3 '20 at 15:32
The community's answers to Is planetary science on-topic? as well as my own experience here show that this is definitely considered on-topic! – uhoh Feb 4 '20 at 0:34
FAO those who think this is off-topic: there's a fairly clear consensus on this meta question that discussion of other (real) planets, at least in terms of processes that also exist on Earth, is on-topic here: earthscience.meta.stackexchange.com/questions/2/… – Semidiurnal Simon Feb 4 '20 at 8:47
It is thought that about 4 billion years ago, Venus had an atmosphere and oceans similar to Earth's. Owing to its proximity to the sun and the fact that most of its primeval atmosphere was carbon dioxide, it heated up and the oceans evaporated away, causing more greenhouse heating as they did so. Water vapour is a potent greenhouse gas.
Earth's primeval atmosphere was also mainly CO2, but being about 30 million miles further away from the sun it did not experience the runaway greenhouse effect which overheated Venus. The seas did not evaporate into the atmosphere, as they did on Venus, where the water mixed with sulphur dioxide/trioxide to create clouds of sulphuric acid droplets.
Another thing which has thinned the Earth's atmosphere is billions of years of sequestration by living things, which have transformed atmospheric CO2 into limestone (CaCO3), other carbonates, and various forms of fossil hydrocarbons (Lange et al., 1983). The photosynthetic organisms which produced the hydrocarbons also produced huge quantities of oxygen, which today makes up 21 percent of our atmosphere.
If you could magically turn all the carbon sequestered away in the rocks back into CO2, and evaporate our oceans so that they went up into the atmosphere, the Earth would be much more like Venus, including the runaway greenhouse effect which has heated Venus's lower atmosphere to 460C. Being much further away from the sun than Venus, Earth would not heat up as much as that, but it would still be too hot for life.
Mars also had an atmosphere similar to Earth's and Venus's, but because of its much weaker gravitational field it has been unable to hold onto it. However, the proportions of the gases in the Martian atmosphere are still similar to those on Venus, though the atmosphere itself is far more tenuous.
Lange, M. A. & Ahrens, T. J. (1983): "Shock-induced CO2 production from carbonates and a proto-CO2 atmosphere on the Earth", Lunar and Planetary Science XIV, pp. 419–420. http://adsabs.harvard.edu/full/1983LPI....14..419L
Michael Walsby
Comments are not for extended discussion; this conversation has been moved to chat. – gerrit♦ Feb 4 '20 at 12:37
The facile answer is "Venus's atmosphere has many times the mass of Earth's atmosphere because it has many times the amount of gas". Which makes us wonder why that is.
Many planetary scientists have interpreted the relatively even distribution of impact craters on the surface of Venus as a sign that the planet was resurfaced in some catastrophic event 500–700 million years ago:
From the craters visible in Magellan's Venus maps, scientists believe they are looking at a relatively young planetary surface, perhaps about 500 million years old. Since Venus formed at the same time as Earth 4.6 billion years ago, some event or events 500 million years ago must have resurfaced the planet. Scientists believe that this may have been the work of massive outpourings of lava from planet-wide volcanic eruptions.
-NASA, Magellan Summary Sheet.
The initial estimate of 500 million years has since been pushed back to a bit older, around 700 million years. (Earth was in a snowball phase about this time).
Whether the event on Venus was purely volcanic or the result of an impact is still an issue of debate. Regardless, this was a volcanic event many orders of magnitude greater than the Siberian Traps whose gases caused the end-Permian extinction on Earth.
Earth's atmosphere is replenished from volcanic outgassing. We can speculate that the catastrophe released an enormous amount of gas into Venus's atmosphere.
Between 700 and 750 million years ago, a near-global resurfacing event triggered the release of carbon dioxide from rock on the planet, which transformed its climate.
(Wikipedia: Life on Venus)
Spencer
Where did the atmosphere's nitrogen come from?
Why do some planets have lots of $\mathrm{N_2}$ and others none?
Why does Earth have abundant oxygen in the atmosphere?
Why do the dry and moist adiabatic lapse rates converge with height?
What "g" would be needed to keep helium on Earth?
Why does the emission of aircrafts at higher altitudes have a greater effect on the climate?
What was the density and composition of Earth's atmosphere during the Cretaceous warmest period?
Does the magnetic field really protect Earth from anything?
How much energy is required to hold the earth's atmosphere up against the forces of gravity?
Why is the creation of water from the combustion of hydrocarbons not listed as a cause for rising sea levels? | CommonCrawl |
Can tests for the convergence and divergence of series be used to create undecidable sentences?
Let f(k) be a recursive function which maps the set of positive integers into itself. Let T be a formalized theory which is axiomatizable and contains Peano Arithmetic as a sub-theory. For example, T could be ZFC or second-order arithmetic. Could there be a kind of "incompleteness theorem" that states the following? "If T is consistent, then there always exists a total recursive f(k) such that it cannot be proved in T whether $1/f(1) + 1/f(2) + \cdots + 1/f(k) + \cdots$ is convergent or divergent."

I was motivated to ask this because of the increasingly complicated tests used to determine whether a series of positive real numbers is convergent or divergent. There seems to be some sort of partial ordering among these tests, so that if a test does not work for some particular series, a more complicated test (further along in the partial ordering) may work.

Note that a recursive series of positive rational numbers (which are not necessarily the reciprocals of integers) can also be accommodated in this framework. Each term of such a series may be expressed as a finite sum of so-called "Egyptian fractions", since these always have a numerator equal to 1.
lo.logic
Garabed Gulbenkian
Let $\phi(n)$ be any bounded formula in the language of arithmetic. The sequence of rational numbers $$a_m = \begin{cases} 1/m & \text{if $\exists n \lt m\,\phi(n)$} \\ 1/2^m & \text{if $\forall n \lt m\,\lnot\phi(n)$} \end{cases}$$ is total computable and "$\sum_m a_m$ diverges" is logically equivalent to $\exists n \phi(n)$ over PA. Taking $\phi(n)$ such that $\exists n \phi(n)$ is the Rosser sentence of a theory $T$ extending PA, we obtain a provably total sequence of rational numbers $a_m$ such that, if $T$ is consistent, then $T$ neither proves nor refutes that $\sum_m a_m$ is convergent.
François G. Dorais♦
Very nice. I thought that something like this could be done but was not able to figure it out for myself. – Garabed Gulbenkian Feb 6 '16 at 21:37
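To see the dichotomy in this construction numerically, here is a small Python illustration; the predicate used is an arbitrary stand-in for $\phi$, chosen only to show the two possible behaviours of the partial sums, not the Rosser sentence itself:

def partial_sums(M, phi):
    # a_m = 1/m once some n < m satisfies phi, and a_m = 1/2^m before that
    s, seen, sums = 0.0, False, []
    for m in range(1, M + 1):
        seen = seen or phi(m - 1)
        s += 1.0 / m if seen else 0.5 ** m
        sums.append(s)
    return sums

phi_never = lambda n: False      # no witness: the series is dominated by sum 1/2^m <= 1
phi_at_20 = lambda n: n == 20    # a witness exists: the tail behaves like the harmonic series

for M in (100, 1000, 10000):
    print(M, partial_sums(M, phi_never)[-1], partial_sums(M, phi_at_20)[-1])

With no witness the partial sums stay below 1; with a witness they grow without bound, which is exactly why "$\sum_m a_m$ diverges" is equivalent to $\exists n\,\phi(n)$.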
What is cardinality of the set of true undecidable minimal sentences in a formal theory of aritmetic
Succinctly naming big numbers: ZFC versus Busy-Beaver
Can the Riemann hypothesis be undecidable?
Godel 's Ladder: Undecidable PI_N sentences for N =2, 3, …
How can you formalize the metamathematics conventionally used to state Godel's theorem?
The continuum hypothesis and the diamond principle for $\aleph_1$
Can we have a theory $T$ that is complete for simple sentences in the language of $T$ that are weaker than $ Con(T)$?
Further study of "Elementary geometry" in the sense of Tarski
What can the approximation of a group by some class be used for? | CommonCrawl |
Brain Structure and Function
June 2018, Volume 223, Issue 5, pp 2157–2179
Post-mortem inference of the human hippocampal connectivity and microstructure using ultra-high field diffusion MRI at 11.7 T
Justine Beaujoin
Nicola Palomero-Gallagher
Fawzi Boumezbeur
Markus Axer
Jeremy Bernard
Fabrice Poupon
Daniel Schmitz
Jean-François Mangin
Cyril Poupon
The human hippocampus plays a key role in memory management and is one of the first structures affected by Alzheimer's disease. Ultra-high magnetic resonance imaging provides access to its inner structure in vivo. However, gradient limitations on clinical systems hinder access to its inner connectivity and microstructure. A major target of this paper is the demonstration of diffusion MRI potential, using ultra-high field (11.7 T) and strong gradients (750 mT/m), to reveal the extra- and intra-hippocampal connectivity in addition to its microstructure. To this purpose, a multiple-shell diffusion-weighted acquisition protocol was developed to reach an ultra-high spatio-angular resolution with a good signal-to-noise ratio. The MRI data set was analyzed using analytical Q-Ball Imaging, Diffusion Tensor Imaging (DTI), and Neurite Orientation Dispersion and Density Imaging models. High Angular Resolution Diffusion Imaging estimates allowed us to obtain an accurate tractography resolving more complex fiber architecture than DTI models, and subsequently provided a map of the cross-regional connectivity. The neurite density was akin to that found in the histological literature, revealing the three hippocampal layers. Moreover, a gradient of connectivity and neurite density was observed between the anterior and the posterior part of the hippocampus. These results demonstrate that ex vivo ultra-high field/ultra-high gradients diffusion-weighted MRI allows the mapping of the inner connectivity of the human hippocampus, its microstructure, and to accurately reconstruct elements of the polysynaptic intra-hippocampal pathway using fiber tractography techniques at very high spatial/angular resolutions.
Diffusion MRI Human hippocampus Structural connectivity Neurite density Microstructure imaging
The online version of this article ( https://doi.org/10.1007/s00429-018-1617-1) contains supplementary material, which is available to authorized users.
The human hippocampal formation plays a critical role in learning and memory. Its regions appear to be specialized for preferential functions, such as the specific involvement of the dentate gyrus (DG) and the Cornu Ammonis (CA) subfield 3 in pattern separation and completion, respectively, and that of CA2 in social memory (Leutgeb et al. 2007; Hitti and Siegelbaum 2014). Furthermore, the structure and connectivity of hippocampal regions and layers are known to be selectively affected by multiple neurological disorders such as Alzheimer's disease or temporal lobe epilepsy, as well as by the normal process of aging (Zhou et al. 2008; Wang 2006; Dinkelacker et al. 2015; Coras 2014; Prull et al. 2000; Wilson et al. 2006). Although anterior–posterior (ventral–dorsal in rodents) functional differences within the hippocampal formation have been reported in humans and experimental animals (Fanselow and Dong 2010; Poppenk et al. 2013), detailed anatomical studies are still not available in humans. The complexity of the hippocampus makes it one of the most mysterious regions in the central nervous system and also one of the most extensively studied. The boundaries between different hippocampal subfields have been described in the neuroanatomy literature using cytoarchitectonic features that require histological staining and microscopic resolution to visualize (Gloor 1997; Amaral and Insausti 1990; Duvernoy 2005), but there are still discrepancies concerning whether it makes sense to separate some regions or not.
Few in vivo imaging techniques are available to investigate the human hippocampus. Magnetic resonance imaging (MRI), and more precisely anatomical MRI (aMRI) and diffusion MRI (dMRI), remain the key modalities used nowadays. Studies using aMRI were first employed to segment the hippocampus and enable volumetric analysis to define early markers of Alzheimer's disease or depression (Jack et al. 1992; Videbech and Ravnkilde 2004; Boutet 2014). Several studies have also been conducted using diffusion tensor imaging (DTI) on clinical MRI systems to investigate the connectivity of the human hippocampus to better understand the anatomo-functional mapping of the limbic system (Adnan et al. 2015; Dinkelacker et al. 2015; Zeineh et al. 2012). However, clinical MRI systems have inherent limitations that prevent one from reaching ultra-high spatial and angular resolutions, mainly due to the characteristics of the available gradient coils. The recent development of the Connectome gradient, able to provide 300 mT/m, could offer an alternative, but it is still limited in spatial resolution (1.25 mm) due to the static field being kept moderate (3T) to prevent the strong mechanical constraints that would occur at ultra-high field (McNab 2013; Setsompop 2013).
Alternatively, ex vivo MR imaging can be performed using ultra-high field preclinical scanners. In addition to the ultra-high static field, these systems can be equipped with very strong gradients that can reach 1000 mT/m. The b-values can thus exceed 10,000 \(\hbox {s}/\hbox {mm}^{2}\), diffusion times can be short enough to reach short diffusion time regimes, and diffusion gradient pulses become closer to Dirac shapes. Finally, contrary to in vivo scans that cannot exceed a couple of hours, ex vivo imaging does not suffer from such a limitation. Specifically, ex vivo studies have thus been carried out with medial temporal lobe (MTL) samples containing the hippocampus (Shepherd et al. 2007; Augustinack 2010; Coras 2014; Colon-Perez 2015; Modo et al. 2016), where most of the authors carried out DTI-based tractography to map some of the larger connections of the hippocampus, such as the perforant pathway (Augustinack 2010; Coras 2014). Although they remain landmark studies, these studies rely on a model known to present strong limitations. First, it cannot model multiple fiber populations within a voxel, like bundle crossings or kissings, which is a weakness especially in the case of the hippocampus, because it contains multiple fiber crossings reflecting its complex circuitry. Second, diffusion tensor features like fractional anisotropy (FA) and mean diffusivity (MD) are inherently non-specific, and a reduction in their value can be associated with different types of microstructural changes. For example, a reduction in FA can be due to demyelination, edema, increased neurite dispersion, or other microstructural changes (Takahashi 2002; Beaulieu 2009; De Santis et al. 2014).
To model multiple fiber populations within a voxel, numerous reconstruction techniques have been developed during the last decades. Regarding modeling of regions with a complex fiber architecture, High Angular Resolution Diffusion Imaging (HARDI) is probably the most widely adopted (see Tournier et al. (2011) for a review of such models). HARDI models produce maps of orientation distribution functions (ODF), the peaks of which characterize the diffusion displacement profile. Since the hippocampus has a very complex fiber architecture with crossings, kissings, and splittings of fibers, it becomes mandatory to use HARDI models to robustly infer its structural connectivity using tractography. To our knowledge, HARDI models have only been used once on a human hippocampus, by Colon-Perez (2015), and not to study the human hippocampal connectivity in its entirety.
Several reconstruction techniques have recently emerged in diffusion MRI to characterize the tissue microstructure, yielding new applications aiming at doing virtual biopsy, and known as diffusion MR microscopy. They rely on the development of multi-compartmental models that estimate, for each voxel, the fraction of each compartments and its key characteristics (e.g., density or dimension). The first established technique relying on the Composite Hindered And Restricted ModEl of Diffusion (CHARMED) introduced by Assaf and Basser (2005) assumed a diffusion attenuation resulting from three compartments. This model laid the foundations of the two next techniques aiming at providing estimates of the axon diameter and density, i.e., AxCaliber (Assaf et al. 2008) and ActiveAx (Alexander et al. 2010), as well as their improvements (Zhang et al. 2011; De Santis et al. 2016). More recently, the Neurite Orientation Dispersion and Density Imaging (NODDI) reconstruction technique was introduced (Zhang et al. 2012) to quantify axon and dendrite densities (collectively known as neurites). These parameters have been shown to provide more specific characteristics of brain tissue microstructure than the quantitative parameters derived from the DTI model.
For the first time, we demonstrate on a human medial temporal lobe sample that ex vivo ultra-high field with strong gradients MRI at 11.7 T and 780 mT/m gives the opportunity to map not only the complex anatomy of the hippocampus, but also its inner connectivity and its organization at the mesoscopic scale. First, we investigate the use of combined anatomical and diffusion MRI to segment the inner structures of the hippocampus and the adjacent entorhinal cortex. Second, we demonstrate that HARDI allows one to accurately reconstruct elements of the polysynaptic pathway of the hippocampal formation. Third, we show that diffusion MR microscopy is a powerful technique, that gives access to insights about cell populations of the hippocampal tissues by revealing the laminar structure of the cornu ammonis (CA).
Tissue sample and container
The study was carried out on a post-mortem right temporal lobe from an 87-year-old male with no abnormal neuropathology findings obtained from the donor program of the Institute of Anatomy, Rostock, Germany. The specimen measured approximately \(38\times 50\times 55\ \hbox {mm}^{3}\), and contained the whole hippocampal formation and some of its surrounding structures. It was fixed with \(4\%\) formalin buffer 36 h after death and stored 38 months until further processing.
Prior to MRI acquisition, the sample was transferred for 1 week to a 0.1 M phosphate-buffered saline solution at \(4\,^{\circ }\hbox {C}\) to be rehydrated. Since acquisitions take place at room temperature (approximately \(20\,^{\circ }\hbox {C}\)), the specimen was placed into the imaging container 4–5 h prior to scanning and transferred to the magnet room to stabilize its temperature to \(20\,^{\circ }\hbox {C}\). This process is required to avoid effects related to temperature variations that would induce modifications of the local \(T_{2}\) relaxation time and the local apparent diffusion coefficient (ADC) up to a factor of 1.5–2 between 4 and \(20\,^{\circ }\hbox {C}\) (Thelwall et al. 2006). Note that the ADC of this post-mortem sample being scanned at \(20\,^{\circ }\hbox {C}\), instead of \(37\,^{\circ }\hbox {C}\) as for in vivo imaging, was already reduced by a factor of about 2. Furthermore, to avoid drying of the tissue during the experiment and to reduce imaging artifacts, the sample was immersed in a proton-free fluid, Fluorinert (FC-40, 3M Company, USA). This fluid does not provide any NMR signal and shares a similar susceptibility coefficient to the one of brain tissues, enabling one to avoid the induction of static magnetic field variations close to the interfaces between air and tissue that would induce local geometrical and intensity distortions. A dedicated container was manufactured to exactly fit the inner diameter of the MR coil antenna with a specific design aimed to prevent the formation of air bubbles that would be responsible for severe susceptibility-induced imaging artifacts. The suspension of the sample within its container is guaranteed by a plastic funnel that does not induce any MR signal and avoids motion artifacts during the commutation of the strong diffusion gradients.
MRI hardware
All the acquisitions were performed on a preclinical 11.7 T Bruker MRI system (BioSpec 117/16 USR Bruker MRI, Ettlingen, Germany) equipped with strong gradients (maximum gradient \(\hbox {magnitude} = 780\,\hbox {mT}/\hbox {m}\), slew-\(\hbox {rate} = 9660\ \hbox {T}/\hbox {m}/\hbox {s}\)) using a 60 mm transmit/receive volume coil. Although surface coils are known to provide better SNRs than volume coils, the 60 mm volume coil was preferred, because it corresponded to the best trade-off between the field-of-view (FOV) coverage and the dimensions of the sample. Furthermore, it enabled preservation of a relative homogeneity of the signal through its entire FOV.
Determination of the magnetic and diffusion properties of the sample
It was mandatory to calibrate the distributions of the magnetic and diffusion properties of the hippocampus sample to adequately tune the target diffusion-weighted multiple-shell imaging protocol required to apply the NODDI model. To this aim, a calibration MRI protocol was established including a series of experiments to infer the histograms of the \(T_{2}\) transverse relaxation time and the mean diffusivity D. Their analysis helps to define the maximum value of the diffusion sensitization b to be used. For a conventional Pulsed-Gradient-Spin-Echo (PGSE) diffusion-weighted imaging sequence, \(b = (\gamma \delta |\mathbf {G}|)^{2}(\varDelta -\delta /3) = |\mathbf {q}|^{2}\tau\) with the approximation of rectangular gradients, with G the applied diffusion gradient vector, \(\gamma\) the nuclear gyromagnetic ratio for water protons, \(\delta\) the duration time of gradient pulses, \(\varDelta\) the time between two pulses, and \(\tau\) the diffusion time.
To reach high b-values, one can increase the gradient strength G, the gradient width \(\delta\), or the separation time between the two gradient pulses \(\varDelta\). However, to keep the gradient pulses close enough to Dirac pulses, thus preserving a Fourier relationship between the diffusion propagator and the diffusion NMR signal with respect to the q wavevector, \(\delta\) has to be kept at its minimum possible value, 4.3 ms in our case. Consequently, either \(\varDelta\) or G have to be raised to increase the diffusion sensitization. The side effect of increasing \(\varDelta\) is a net increase of the echo time \(T_\text {E}\). However, the PGSE sequence yields an NRM signal that integrates not only an exponential diffusion decay \(e^{-bD}\), but also a further exponential decay \(e^{-T_\text {E}/T_{2}}\) linked to the \(T_{2}\) transverse relaxation inherited from the spin-echo scheme present in a PGSE sequence.
An estimation of D and \(T_{2}\) was carried out to determine the range of usable echo time \(T_\text {E}\) and diffusion sensitization b, to avoid excessive signal loss.
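As an illustration, the rectangular-pulse b-value expression above can be evaluated directly. A minimal Python sketch (the gradient strength below is an example value within the range of the system, not an acquisition parameter of the study):

GAMMA = 2.675e8   # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(G, delta, Delta):
    # b = (gamma * delta * G)^2 * (Delta - delta/3); G in T/m, delta and Delta in s; result in s/m^2
    return (GAMMA * delta * G) ** 2 * (Delta - delta / 3.0)

b = b_value(G=0.5, delta=4.3e-3, Delta=14.4e-3)   # e.g. 500 mT/m, delta = 4.3 ms, Delta = 14.4 ms
print(b * 1e-6, "s/mm^2")                         # ≈ 4.3 × 10^3 s/mm^2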
Histogram of the transverse relaxation time \(T_{2}\)
Fixed tissues generally suffer from a significant reduction of their transverse relaxation time, drastically reducing the time window available to acquire the signal with a good signal-to-noise ratio (SNR) (Pfefferbaum et al. 2004). To map the transverse relaxation time at each voxel of our sample, a standard multi-spin-multi-echo (MSME) pulse sequence (Meiboom and Gill 1958) was used. In total, 12 echoes were collected corresponding to 12 echo times linearly spaced between 6.4 and 76.8 ms. Imaging parameters for this sequence were: isotropic spatial resolution of \(300\,\upmu \hbox {m}\), 12 averages, \(\hbox {TR} = 16,000\,\hbox {ms}\), and a total scan duration of 10 h 14 min. Collected MSME data were then used to fit the log-linear model corresponding to the \(T_{2}\)-decay with a Levenberg–Marquardt algorithm carefully initialized to get robust estimates. The histogram of the quantitative \(T_{2}\) values was computed from the voxels included in a precomputed mask of the sample and depicts two main modes of \(T_{2}\). The mode corresponding to the lower value is identified as the white matter \(T_{2}\) (\(T_{2w}\) \(\approx\) 36.3 ms), and that corresponding to the higher value is identified as grey matter \(T_{2}\) (\(T_{2g}\) \(\approx\) 46.4 ms): the transverse relaxation time is lower where fibers are more concentrated, i.e., in the white matter.
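The voxelwise \(T_{2}\) estimation amounts to fitting a mono-exponential decay against echo time. A minimal sketch of such a fit in Python, using a simple log-linear least-squares fit on synthetic data rather than the Levenberg–Marquardt fit applied to the actual MSME volumes:

import numpy as np

def fit_t2(echo_times_ms, signal):
    # Fit S(TE) = S0 * exp(-TE / T2) by linear regression of log(S) against TE; returns (S0, T2)
    te = np.asarray(echo_times_ms, dtype=float)
    slope, intercept = np.polyfit(te, np.log(np.asarray(signal, dtype=float)), 1)
    return np.exp(intercept), -1.0 / slope

te = np.linspace(6.4, 76.8, 12)            # 12 echoes, as in the MSME protocol above
signal = 1000.0 * np.exp(-te / 46.0)       # synthetic grey-matter-like decay, T2 = 46 ms
print(fit_t2(te, signal))                  # ≈ (1000.0, 46.0)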
Histogram of the mean diffusivity D
Post-mortem tissues depict diffusion coefficients significantly lower (two to five times lower) than in vivo values. Consequently, probing the anisotropy of the diffusion process requires the use of inversely proportional higher b values, to obtain a comparable diffusion contrast to in vivo images (D'Arceuil et al. 2007). A histogram of the mean diffusivity D was inferred using the DTI model from a diffusion-weighted data set acquired with a single-shell sampling of the q-space at \(b=4500\, \hbox {s}/\hbox {mm}^2\) along 60 uniformly distributed diffusion directions, \(\hbox {TR}/\hbox {TE} = 9000/24.2\,\hbox {ms}\), \(\varDelta /\delta = 14.4/4.3\,\hbox {ms}\), matrix size: \(192 \times 192 \times 176\), and an isotropic resolution of \(300\,\upmu\)m. The distribution indicates a mean diffusivity of \(0.16 \times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\).
Determination of maximum \(T_\text {E}\) and b values
To prevent an excessive loss of signal, a lower acceptable limit of the product \(e^{\frac{-\text{TE}}{T_{2}}}\ e^{-bD}\) was set to 0.05, equally distributed over the two signal decays:
$$\begin{aligned} \left\{ \begin{array}{l} e^{-\text {TE}_{\text {max}}/T_{2}} \ge \sqrt{0.05}\\ e^{-b_{\text {max}}D} \ge \sqrt{0.05},\\ \end{array}\right. \end{aligned}$$
and hence
$$\begin{aligned} \left\{ \begin{array}{l} \text {TE}_{\text {max}} \le 69\ \text {ms}\\ b_{\text {max}} \le 9361\ \hbox {s}/\hbox {mm}^{2}.\\ \end{array}\right. \end{aligned}$$
In practice, \(TE_{\text {max}}\) was set to 59.1ms, which enabled a \(b_{\text {max}}\) of 10,000 \(\hbox {s}/\hbox {mm}^{2}\).
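These two bounds follow directly from splitting the 0.05 attenuation budget equally between the \(T_{2}\) and diffusion decays. A quick numerical check in Python, with the calibrated values \(T_{2} \approx 46\) ms and \(D \approx 0.16 \times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\):

import math

limit = math.sqrt(0.05)    # half of the attenuation budget per decay term
T2 = 46.0                  # ms, grey-matter value from the MSME calibration
D = 0.16e-3                # mm^2/s, mean diffusivity of the sample

TE_max = -T2 * math.log(limit)   # from exp(-TE_max / T2) >= sqrt(0.05)
b_max = -math.log(limit) / D     # from exp(-b_max * D) >= sqrt(0.05)
print(round(TE_max, 1), "ms", round(b_max), "s/mm^2")   # ≈ 68.9 ms and ≈ 9361 s/mm^2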
Imaging protocol
The imaging protocol included anatomical and diffusion scans. The anatomical scan was tuned to reach a very high spatial resolution to perform accurate manual segmentation of the hippocampal subfields and of the entorhinal cortex.
Anatomical scan
The anatomical image was acquired using a standard \(T_{2}\)-weighted spin-echo sequence with isotropic spatial resolution of \(200\,\upmu \hbox {m}\), matrix \(268 \times 268\); 256 slices, thickness \(200\,\upmu \hbox {m}\), \(\hbox {TR}/\hbox {TE}\) = 12,000\(/26.5\,\hbox {ms}\), 15 averages, and a total scan time of 10 h 46 min.
Diffusion scan
Diffusion data were acquired using a conventional PGSE sequence. The protocol included a first single-shell high angular resolution diffusion imaging (HARDI) data set used to infer the structural connectivity of the human hippocampal formation, to which a multiple-shell hybrid diffusion imaging (HYDI) data set was added to infer quantitative microstructural features using diffusion MR microscopy.
The structural connectivity was established from the HARDI data set collected at \(\hbox {b} = 4500\ \hbox {s}/\hbox {mm}^{2}\) along 500 directions uniformly distributed over a sphere. It was split into 15 blocks of 32 directions and one of 16 because of the memory limitation of the system. For each block, 6 \({b} = 0\) images were acquired. Scanning parameters were: isotropic spatial resolution of \(300\,\upmu \hbox {m}\); \(\hbox {TR}/\hbox {TE} = 9000/24.2\,\hbox {ms}\); \(\varDelta /\delta = 14.4/4.3\,\hbox {ms}\); matrix size: \(192 \times 192\); 176 slices; and a total scan time of 8 days and 18 h. The b value was calibrated taking into account the reduction of the average diffusivity from \(D = 0.7 \times 10^{-3}\,\hbox {mm}^2/\hbox {s}\) in vivo to \(D = 0.16 \times 10^{-3}\,\hbox {mm}^2/\hbox {s}\) in our case, to compensate for the loss of contrast due to the observed reduction factor. As mentioned earlier, the data set was used first to perform tractography, but also to map microstructural features when merged with the next HYDI data set.
In addition to the HARDI data set, a further multiple-shell HYDI data set was acquired for 3 different shells corresponding to 3 different diffusion sensitization at \(\hbox {b} = 4500\, \hbox {s}/\hbox {mm}^{2}\), \(\hbox {b} = 7500\, \hbox {s}/\hbox {mm}^{2}\) and \(\hbox {b}\) = 10,000 \(\hbox {s}/\hbox {mm}^{2}\). The choice of the b-values was carried out based on the usual sensitizations taken for NODDI or ActiveAx models with a scaling factor of around 3 applied to account for the attenuation of the diffusivities observed ex vivo. For each shell, 60 diffusion-weighted volumes were acquired along 60 uniformly distributed diffusion directions. The acquisition was divided into two blocks of 30 directions each. Each block lasted 12 h 57 min, giving a total of 3 days and 7 h. The parameters were tuned as listed in Table 1. It is important at this step to note that the three shells were acquired with linearly increasing separation times \(\varDelta\) of 14.4, 30.0, and 45.0 ms while keeping the gradient pulse width to \(\delta = 4.3\,\hbox {ms}\), to vary the diffusion time. This choice was motivated by the willingness to be able to use alternative models to NODDI in the future, such as ActiveAx, which also requires sampling of the diffusion time. A further specificity of the HYDI protocol was to use the minimum echo time for each shell and not to impose the largest of the three minimum echo times. As a consequence, the data stemming from each shell have to be preprocessed to remove the \(T_{2}\)-decay dependency, which is possible thanks to the quantitative \(T_{2}\) calibration performed previously to characterize the magnetic and diffusion properties of the hippocampus sample.
Table 1 Per-shell acquisition parameters of the NODDI data set: b (\(\hbox {s}/\hbox {mm}^{2}\)), \(N_{\text {dir}}\), TR (ms), TE (ms), \(\varDelta\) (ms), and G (mT/m)
Delineation of the inner and surrounding structure of the hippocampus
In-house developed software (by FP), PtkVoi, was used to trace the hippocampal subfields and surrounding structures. It was performed manually by two independent experts from the \(T_{2}\)-weighted anatomical scan, using anatomical landmarks and following the different strategies prescribed and detailed in the literature (Insausti 1998; Duvernoy 2005; Wisse et al. 2012; Insausti and Amaral 2012; Boutet 2014).
Segmentations were traced in the coronal plane and the three-dimensional consistency was ensured by checking the axial and sagittal planes, as well as the 3D shapes of the delineated structures with respect to their known morphology. Considering the main intra-hippocampal circuits, we endeavored to identify: the entorhinal cortex, the dentate gyrus (including the hilus), CA2/CA3, CA1, the alveus, the fimbria, and the subicular complex.
Inner structures of the hippocampus
The number of inner segmented structures of the hippocampus resulted from a trade-off between the feasibility to identify their boundaries on the high-resolution anatomical MRI data (thus depending on their contrast to noise ratio), and the target regions involved in the circuits of interest. The segmentation process used a coarse to fine strategy involving a first step in which the hippocampal head, body, and tail were identified, followed by a second step to delineate regions and/or layers.
Coarse scale: segmentation of head, body, and tail
The segregation of the hippocampus in three parts was based on the protocol presented in Boutet (2014). The very anterior limit is not present in our sample, because the head was partly cut. The most anterior part of the body corresponds to the first coronal slice where the median part of the uncus is no longer visible. The most posterior coronal slice of the body was identified at the level of the enlargement of the fimbria and the loss of the specific C-shape of the body.
Fine scale: segmentation of subregions
In a second step, we segmented the following structures (Fig. 3):
Alveus and fimbria. These two structures belong to the white matter and are visible in the anatomical \(T_{2}\)-weighted MRI data by the hypointense contrast at the level of the outer boundary of the hippocampus. This specific contrast was exploited to guide their delineation. In addition, the change in orientation of the underlying fibers between the alveus (oriented mainly parallel to the coronal plane) and the fimbria (oriented mainly parallel to the sagittal plane) can be easily identified in diffusion ODF fields, thus allowing the identification of the boundary between the alveus and the fimbria. The lateral boundary of the fimbria was set at the end of the fibers orientation shift zone, where the fibers are entirely contained in the sagittal plane (white line in Fig. 1). The inferior lateral boundary of the alveus was set at the junction with the collateral eminence of the lateral ventricle. The separation between the alveus and the fimbria was only possible in the body and the tail of the hippocampus. At the level of the hippocampal head, all white matter was associated with the alveus.
The lacunosum-molecular layer of the CA1–CA3 regions. This zone can easily be identified in the anatomical \(T_{2}\)-weighted MRI scan as a dark line in the CA region of the hippocampus, corresponding to an area of very low concentration of neural bodies (which are mainly located in the pyramidal layer).
The pyramidal and radiatum layers of the CA1 region. This region is the thickest Ammon's field, while CA2 is the thinnest, thus easily enabling definition of the boundary between both portions of the hippocampus. The border with the subicular complex could often be identified based on differences in grey values, since the CA1 region appeared brighter when compared to the adjacent subicular component. In the tail region of the hippocampus, where differences in grey values were only very subtle, we applied the method suggested by Boutet (2014), which consists of tracing the largest diameter of the hilum and a perpendicular line passing through the midpoint of that diameter, which corresponds to the target boundary.
The pyramidal layer and radiatum of the CA2 and CA3 regions. These two regions were merged into a single ROI (CA2/CA3), because the boundary between them is almost impossible to identify in MR images. Furthermore, not only CA3 neurons project to the CA1 region, but also CA2 pyramids make strong excitatory synaptic contacts with CA1 neurons (Chevaleyre and Siegelbaum 2010). The limit with the dentate gyrus was defined by the end of the ribbon-like aspect of the Ammon's horn (Boutet 2014).
The dentate gyrus. Its boundaries were defined by the other structures already segmented and the cisterna ambiens enlarging the cerebro-spinal fluid-filled space lateral to the cerebral crus.
The subicular complex. The lateral limit of this region was identified based on differences in cortical thickness, as described by Wisse et al. (2012). Thus, the border was defined by drawing a line between the most medial part of the grey matter and the most medial part of the white matter of the temporal stem.
The entorhinal cortex. Its lateral limit is slightly upstream of the collateral sulcus, itself located under the collateral eminence. In Wisse et al. (2012), the posterior border of the entorhinal cortex was set 0.7 mm beyond the hippocampal head which corresponds to four slices for our spatial resolution.
Close view of the shift in fiber orientation between alveus and fimbria. Yellow arrowheads highlight typical inferior–superior orientation of ODFs in the alveus (fibers in the coronal plane, as shown on upper left). Orange arrowheads highlight typical anterior–posterior orientation of ODFs in the fimbria (fibers in the sagittal plane, as shown on upper left). White line shows the boundary between alveus and fimbria, defined within the transition zone corresponding to the shift in the fiber orientation
Connectivity and microstructure mapping
As mentioned earlier, the diffusion-weighted imaging protocol included the acquisition of a multiple-shell HYDI data set at three different b values (4500, 7500, 10,000 \(\hbox {s}/\hbox {mm}^{2}\)) with minimum echo times set for each individual shell. This choice was motivated by the optimization of the SNR per shell, but it requires correcting for the \(T_{2}\)-weighting equal to \(e^{-\text {TE}/T_{2}}\), which differs from one shell to another. To this aim, a correction was applied to the two larger shells at 7500 and 10,000 \(\hbox {s}/\hbox {mm}^{2}\) that consisted of multiplying voxelwise all the diffusion-weighted images by the compensating factor \(e^{\varDelta \text {TE}/T_{2}(v)}\) for each voxel v. This resulted in an intensity comparable to the one obtained with an acquisition done with the TE used at \({b} = 4500\,\hbox {s}/\hbox {mm}^{2}\) for all the shells.
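This echo-time compensation is a simple voxelwise rescaling of the longer-TE shells. A schematic NumPy sketch (array names are placeholders for illustration, not the study's actual data structures):

import numpy as np

def compensate_te(dwi, t2_map, delta_te_ms):
    # Rescale a shell acquired at TE_ref + delta_te back to the intensity it would have at TE_ref,
    # using the voxelwise T2 map (same shape as dwi, in ms)
    return dwi * np.exp(delta_te_ms / t2_map)

# e.g. dwi_7500_corrected = compensate_te(dwi_7500, t2_map, te_7500 - te_4500)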
The conventional PGSE sequence is not sensitive to field inhomogeneities in comparison with its corresponding echoplanar version, such that the collected DW data are free from distortions induced by eddy currents or susceptibility effects. Consequently, there is no need to apply any correction. In addition, the measured SNRs, established at 9.9, 7.6, and 4.2 for the three shells of increasing b-value, respectively, are large enough to approximate the Rician noise distribution by a Gaussian distribution, thus allowing the use of conventional mean-square estimators without loss of information. Finally, all the acquisitions were performed over 13 consecutive days, avoiding the need to remove the sample from its container, and thus avoiding the presence of any motion between two diffusion-sensitized volumes. Consequently, the remaining transformation existing between the two anatomical and diffusion data sets turns out to be the simple rigid transformation between their two fields of view. This transformation was inferred using the registration tool of the Connectomist toolbox (Duclap et al. 2012; Assaf 2013) based on a mutual information matching criterion optimized using a standard Nelder–Mead simplex algorithm. After registration, the two data sets perfectly matched, allowing navigation between the structures inferred from the anatomical scans and the connectivity or microstructural quantitative maps inferred from diffusion MRI scans.
Choice of the local reconstruction model
Inference of the local orientation distribution function of the diffusion process or of the fiber orientation distribution can be done using two classes of HARDI techniques: model-dependent reconstructions and model-free reconstructions. Among the model-dependent reconstructions, which impose a specific impulse response of a fiber bundle to the diffusion process, are the Ball and Sticks model of Behrens (2003), the Constrained Spherical Deconvolution of Tournier and Calamante (2007), and the Sharpening Deconvolution Transform of Descoteaux et al. (2009). Among the model-free reconstructions, which do not make any assumption about the response of a heterogeneous population to the diffusion process, are the Diffusion Spectrum Imaging model of Wedeen et al. (2000), the analytical Q-ball model of Descoteaux et al. (2007), the Diffusion Orientation Transform of Özarslan et al. (2006), and the latest Simple Harmonic Oscillator-Based Reconstruction and Estimation (SHORE) propagator model of Özarslan et al. (2013).
The current trend is to use spherical deconvolution approaches to obtain sharp fiber orientation distributions resulting from the deconvolution of ODFs using an auto-estimated convolution kernel. While this approach is suitable to reconstruct the connectivity of the entire brain, it cannot be considered for this study of the inner hippocampal connectivity where the complex configuration of fibers does not enable the definition of an adequate convolution kernel: there is no equivalent of the corpus callosum within the hippocampal complex. Taking this into consideration, the analytical Q-ball model (Descoteaux et al. 2007; Descoteaux 2010) was adopted in our study. This model relies on the decomposition of the DW signal onto a modified spherical harmonics basis, linked to the decomposition of the ODF onto the same basis by the Funk–Hecke matrix. Its use is relevant in our case, because preclinical diffusion MRI allows one to use high diffusion sensitizations (\(\ge \,4500\ \hbox {s}/\hbox {mm}^2\)) with short echo times (thus preventing severe \(T_{2}\)-decay and consequently SNR loss) yielding sharp ODFs. The DTI model was also computed to compare with the aQBI model.
Inference of the structural connectivity using diffusion MRI
The diffusion analyses were performed using the Connectomist toolbox. The HARDI data set was used to compute fields of ODFs stemming from the analytical Q-ball model and from the DTI model, as well as conventional quantitative maps stemming from the tensor model, including the FA map, the MD maps, and the color-encoded direction (CED) map. The analytical Q-ball model reconstruction was applied using a spherical harmonics order 8 and a regularization factor \(\lambda = 0.006\), as defined in Descoteaux et al. (2007).
Streamline regularized deterministic (SRD) and streamline regularized probabilistic (SRP) tractography algorithms (Perrin et al. 2005) were applied to the whole sample using the maps of aQBI and DTI ODFs previously computed, with the following parameters: 8 seeds per voxel yielding dense tractograms containing 11.6 million fibers, forward step \(70\,\upmu \hbox {m}\), maximum solid angle \(30^{\circ }\), and minimum/maximum fiber length 0.5/100 mm to avoid loops and discard spurious fibers. This results in four tractograms (DTI/SRD, QBI/SRD, DTI/SRP, and QBI/SRP) that will be used to analyze the differences in the inference of the structural connectivity with respect to the model and to the fiber tracking algorithm.
Following the connectomics approach introduced in Hagmann (2010) to macroscopically describe the level of connectivity between two sets of regions of interest, we computed, for the deterministic and the probabilistic tractogram obtained with the aQBI model, a \(22 \times 22\) connectivity matrix that, for each pair of hippocampal subfields, counts the number of connections present in the corresponding tractogram and linking them. The matrix is symmetric, because efferent and afferent projections cannot be distinguished with diffusion MRI, and the exclusion of self-connections implies a zero diagonal. This connectivity matrix gives a concise overview of the main hubs of connections present in the hippocampus. In practice, whole brain studies performed on clinical systems rely on data suffering from low SNRs at high b value, causing overweighting of short reconstructed fibers with respect to long ones. A simple way to counterbalance such effects is to normalize each point of the matrix by the logarithm of the average length of fibers connecting to the concerned subregions. In our case, the hippocampus remains a small structure internally connected with relatively short fibers (average length 34.15 mm; standard deviation of length 21.85 mm) and tractography relied on a high SNR diffusion MRI data set, thus less prone to fiber tracking degeneracy. Consequently, there is no need to apply any normalization of the connectivity matrix.
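In this connectomics spirit, the matrix is essentially a count of streamlines whose two endpoints fall within a given pair of labelled subfields. A schematic Python sketch (the label volume and streamline list are placeholders for illustration, not the study's data structures):

import numpy as np

def connectivity_matrix(streamlines, labels, n_regions):
    # streamlines: list of (N_i, 3) arrays of voxel coordinates
    # labels: 3D integer array, 0 = background, 1..n_regions = segmented subfields
    M = np.zeros((n_regions, n_regions), dtype=int)
    for s in streamlines:
        a = labels[tuple(np.round(s[0]).astype(int))]
        b = labels[tuple(np.round(s[-1]).astype(int))]
        if a > 0 and b > 0 and a != b:   # exclude background and self-connections
            M[a - 1, b - 1] += 1
            M[b - 1, a - 1] += 1         # symmetric: afferent and efferent cannot be distinguished
    return M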
Finally, elements of the trisynaptic pathway, one of the most extensively studied pathways of the brain, were reconstructed from the four tractograms. The trisynaptic pathway, as presented in Duvernoy (2005), is composed of three elements:
Perforant path: axons from neurons in the entorhinal cortex that make synaptic contacts with dendrites of the granule cells in the dentate gyrus, located in the molecular layer of the dentate gyrus.
Mossy fibers: the granule cells of the dentate gyrus project to the pyramids of the CA3 (and partly also CA2) region. Synaptic contacts are located in the lucidum layer, which is between the pyramidal and radiatum layers of CA3.
Schaffer collaterals: pyramids of the CA3 region send their axons via the alveus–fimbria–fornix to the mammillary bodies. In addition, collaterals of these axons terminate on the dendrites of CA1 pyramids. These synaptic contacts are located in the lacunosum-molecular layer of CA1.
The four tractograms (QBI/SRP, QBI/SRD, DTI/SRP, and DTI/SRD) were used to extract mossy fibers and the perforant pathway with the aim of performing a comparison of the accuracy of aQBI versus DTI as well as SRD versus SRP tractography algorithms in reconstructing known pathways. To extract trisynaptic elements from the four tractograms, analysis pipelines were developed to intersect the connectograms with the starting and ending regions of interest corresponding to the termination of each element, plus a set of intermediate regions crossed by fibers to avoid the selection of false positives. The four tractograms were analyzed using a single filtering pipeline for each pathway.
Inference of tissue microstructure using diffusion MRI
Because of its practical implementation, from the acquisition point of view with respect to alternative models such as ActiveAx and AxCaliber, the NODDI model has become very popular to map the tissue microstructure in vivo in the frame of clinical applications (Kunz 2014; Chang 2015; Jelescu et al. 2015; Kodiweera et al. 2016). This model was specifically designed to map the biodistribution of dendrites and axons in the brain. It has been mostly applied in vivo in human using low-field conventional MRI scanners, limited to the millimeter spatial resolution. However, some ex vivo preclinical studies have also been performed using NODDI in animal models [mice in Sepehrband et al. (2015) and monkeys in Alexander et al. (2010)]. However, to our knowledge, this is the first time that this model is used to explore the human hippocampus ex vivo. The NODDI model consists of four diffusive compartments (Alexander et al. 2010; Zhang et al. 2012) with no exchange between them. Each compartment contributes to the global diffusion attenuation A resulting from a linear combination of the individual signal attenuations associated with each compartment:
\(A_{\text {ic}}\), the signal attenuation stemming from the compartment of highly restricted water molecules trapped within axons and dendrites (i.e., neurites) modeled as cylinders of zero diameter (i.e., sticks) and characterized by a volume fraction \(f_{\text {ic}}\),
\(A_{\text {ec}}\), the signal attenuation stemming from the extra-cellular compartment of water molecules surrounding the neurites characterized by a volume fraction \(f_{\text {ec}}\). This compartment is modeled by a cylindrically symmetric tensor, assuming a Gaussian anisotropic diffusion independent from the diffusion time,
\(A_{\text {iso}}\), the signal attenuation stemming from the CSF compartment containing free molecules with an isotropic displacement probability and characterized by a volume fraction \(f_{\text {iso}}\),
\(A_{\text {stat}}\), the signal attenuation stemming from the compartment of stationary water molecules trapped within glial cells modeled as spheres of zero diameter (i.e., points) and characterized by a volume fraction \(f_{\text {stat}}\). This additional compartment results from the process of fixation, in particular with the \(4\%\) formaldehyde, that reduces the membrane permeability of glial cells due to an interaction with aquaporin channels (Thelwall et al. 2006).
The net diffusion signal attenuation A corresponds to the following linear combination:
$$\begin{aligned} A = f_{\text {ic}} A_{\text {ic}} + f_{\text {ec}} A_{\text {ec}} + f_{\text {iso}} A_{\text {iso}} + f_{\text {stat}} A_{\text {stat}} \end{aligned}$$
with \(f_{\text {ic}} + f_{\text {ec}} + f_{\text {stat}} + f_{\text {iso}} = 1\).
The signal from the stationary population remains unattenuated by diffusion weighting, yielding \(A_{\text {stat}} = 1\). Equation (3) can, therefore, be written as follows:
$$\begin{aligned} A&=(1-f_{\text {iso}}) \left[ f_{\text {ic}}^{'} A_{\text {ic}}+f_{\text {ec}}^{'} A_{\text {ec}} + f_{\text {stat}}^{'} \right] + f_{\text {iso}} A_{\text {iso}}\\ &=(1-f_{\text {iso}})\left[ (1-f_{\text {stat}}^{'})(f_{\text {ic}}^{*} A_{\text {ic}}+f_{\text {ec}}^{*} A_{\text {ec}}) + f_{\text {stat}}^{'} \right] + f_{\text {iso}} A_{\text {iso}} \end{aligned}$$
with \(f_{\text {ic}}^{'} + f_{\text {ec}}^{'} + f_{\text {stat}}^{'} = 1\) and \(f_{\text {ic}}^{*} + f_{\text {ec}}^{*} = 1\), hence:
$$\begin{aligned} A = (1-f_{\text {iso}})\left[ (1-f_{\text {stat}}^{'})\left( f_{\text {ic}}^{*} A_{\text {ic}}+(1-f_{\text {ic}}^{*}) A_{\text {ec}}\right) + f_{\text {stat}}^{'} \right] + f_{\text {iso}} A_{\text {iso}} \end{aligned}$$
with \(f_{\text {stat}} = (1-f_{\text {iso}}) f_{\text {stat}}^{'}\) and \(f_{\text {ic}} = (1-f_{\text {iso}})(1-f_{\text {stat}}^{'}) f_{\text {ic}}^{*}\).
To speed up the fitting procedure, some parameters were fixed as suggested by Zhang et al. (2012). Watson's distribution was preferred to Bingham's distribution. Whereas, for in vivo studies, the intrinsic and isotropic diffusivities are usually set to \(1.7 \times 10^{-3}\) and \(3.0 \times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\), respectively (Zhang et al. 2012), here they were set to \(0.16 \times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\) (the mean diffusivity found in the grey matter of our sample) and to \(2.0 \times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\) (the diffusion coefficient of water at \(20\,^{\circ }\hbox {C}\)), respectively.
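For completeness, the four-compartment combination of Eq. (5) is easy to write down once the per-compartment attenuations are available. A sketch of the forward model only, in Python; the attenuations A_ic, A_ec, and A_iso passed in are stand-ins for the Watson-dispersed stick, cylindrically symmetric tensor, and isotropic models actually used:

def noddi_signal(f_iso, f_stat_prime, f_ic_star, A_ic, A_ec, A_iso):
    # Forward model of Eq. (5); the stationary compartment is unattenuated (A_stat = 1)
    # f_iso, f_stat_prime (= f'_stat) and f_ic_star (= f*_ic) are volume fractions in [0, 1]
    # A_ic, A_ec, A_iso are the per-compartment attenuations for the current gradient direction and b-value
    hindered_restricted = f_ic_star * A_ic + (1.0 - f_ic_star) * A_ec
    return (1.0 - f_iso) * ((1.0 - f_stat_prime) * hindered_restricted + f_stat_prime) + f_iso * A_iso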
Statistics of the intra-cellular volume fraction \(f_{\text {ic}}\) (assumed to represent the neurite density) were analyzed for each segmented structure of the hippocampus from the histogram of its values within the structure, to study their variation according to the structure of interest.
Anatomical MRI/3D rendering of anatomical hippocampal structures
The anatomical T2-weighted MRI data set (\(200\ \upmu \hbox {m}\), Fig. 2a, b) presented a very good contrast and SNR (33.7), thus enabling accurate delineation of the entorhinal cortex and of several components of the hippocampal complex: dentate gyrus, pyramidal and radiatum layers of the CA1 and CA2/CA3 regions, lacunosum-molecular layer, alveus, fimbria, and subicular complex. Figure 3 shows series of coronal sections with all the delineated areas. A three-dimensional rendering of the manual segmentation of the hippocampal regions and layers as well as of the entorhinal cortex is available in Supplementary Material. The accuracy of the segmentations is a key factor to successfully discriminate the fiber tracts connecting them.
Raw images obtained with the anatomical acquisition at \(200\ \upmu \hbox {m}\) (a, b), and with the diffusion acquisition at \({b} = 4500\ \hbox {s}/\hbox {mm}^{2}\) (c)
The upper figure shows a sagittal view with references to all the coronal images. Coronal images of the hippocampal formation are shown in an anterior-to-posterior direction from a to h. The head is displayed in a–d, the body in e and f, and the tail in g and h. The segmentation is shown in a\(^{'}\)–h\(^{'}\)
DTI and Q-ball imaging
Figures 4, 5 depict the obtained color-encoded direction map, as well as the Q-ball ODF field and the tractogram obtained with a probabilistic algorithm superimposed on the \(T_{2}\)(\({b}=0\)) reference map. The two figures were obtained using the HARDI data set. The color-encoded maps shown in Fig. 4 reveal a plethora of fine anatomical details and the ODF peaks shown in Fig. 5b, c seem in good agreement with the underlying structural connectivity.
The high anisotropy in the fimbria, oriented in the sagittal plane (orange arrowheads in Figs. 1, 4), can be related to efferent axons from CA3, CA1, and the subicular complex, along with afferent axons from structures in the diencephalon and basal forebrain. These fibers run parallel to the septal–temporal axis of the hippocampus. This main orientation is also visible in Fig. 4a, where the fimbria appears in blue near the head, then pink, and almost red when it goes towards the fornix.
Regarding the alveus, oriented in the coronal plane (yellow arrowheads in Figs. 1 and 4), it contains the axons from the CA1 region and the subicular complex, reaching the fimbria through the alveus in an oblique septal direction. The warping of the fibers around the surface of the hippocampus is also visible in Fig. 4c, where the alveus appears green and then pink when it gets close to the fimbria.
The fiber orientation observed in the pyramidal layer (Figs. 4, 5) can be attributed to the projection of the large apical dendrites of CA1 and CA3 through the lucidum (only in CA3) and radiatum layers towards their termination in the lacunosum-molecular layer. It is also affected by the perforant pathway. Orthogonally to the projection of CA1, CA2, and CA3 dendrites towards the lacunosum-molecular layer, the Schaffer collaterals run from CA3 to CA1 (probably corresponding to the red part in the pyramidal layer from CA3 to CA1 in Fig. 4c). Voxels between CA3 and CA1 contain multiple fiber crossings, which cannot be resolved by the DTI model. This demonstrates the relevance of using the HARDI/Q-ball model. Figure 5b shows ODFs recovering multiple fiber crossings in the pyramidal layer with a shape revealing two main peaks (one for each principal direction). Figure 5c clearly depicts ODFs with two main peaks that can also be attributed to the Schaffer collateral crossing the projection of CA1, CA2, and CA3 dendrites towards the lacunosum-molecular layer.
In the lacunosum-molecular layer, and to a lesser extent in the radiatum layer, the apical dendrites of CA-pyramids diverge orthogonally into the terminal arborizations that tend to run parallel to the hippocampal sulcus. This corresponds to the area in the radiatum and lacunosum-molecular layers with ODF orthogonal to the coronal plane (pink arrowheads in Figs. 4c, 5b).
Hence, although color-encoded maps obtained with the DTI model show the main direction in each voxel and provide a partial understanding of the fiber pathways, the HARDI/Q-ball model is required to resolve the fiber crossings involved in the Schaffer collaterals and the perforant pathway.
Color-encoded diffusion directions at \(300\,\upmu \hbox {m}\) and 500 directions. The white dotted lines indicate the correspondence between axial (a), sagittal (b), and coronal (c) slices. Orange arrowheads point at the fimbria, yellow arrowheads at the alveus, and the pink arrowhead at the lacunosum-molecular layer. The color-encoding cross at the bottom left of the image indicates the color coding of the orientation of largest displacement probability. \({A} \, \hbox {anterior}\), \({P}\, \hbox {posterior}\), \({M}\, \hbox {medial}\), \({L} \,\hbox {lateral}\), \({S}\, \hbox {superior}\), \({I} \,\hbox {inferior}\)
On the left, fusion of the \(T_{2}\)(\({b}=0\)) image and the color-encoded diffusion direction map obtained with the DTI model (a). The color-encoding cross at the bottom left indicates the color coding of the orientation of largest displacement probability. On the right, the orientation distribution function field computed with the Q-ball model (b), with a zoom on ODFs showing a crossing at the level of the Schaffer collaterals (c), and the SRP tractogram calculated from this field (d), superimposed on (a). The pink arrowhead points at the radiatum and lacunosum-molecular layers, where ODFs are orthogonal to the coronal plane
Analysis of the connectivity matrix and reconstruction of elements of the polysynaptic pathway
Figures 6 and 7 show the connectivity matrices of the hippocampal substructures obtained with the streamline regularized deterministic (Fig. 6) and streamline regularized probabilistic (Fig. 7) algorithms applied to the Q-ball model. As a general trend, the two matrices reveal a higher level of connectivity in the head than in the body and the tail. The high connectivity between the lacunosum-molecular layer and CA1 can be explained by the location of the apical dendrites of CA1 pyramidal neurons in this layer. The connectivity between CA1 and the alveus in the head is likely due to pyramids of the CA1 region that send their axons via the alveus–fimbria–fornix to the mammillary bodies, representing one source of output from the hippocampus; the same applies to the connectivity between CA2/CA3 and the alveus. Moreover, the high connectivity between CA1 and the alveus may be due not only to the presence in this structure of efferent axons from CA1 pyramids, but also to the basal dendrites of the pyramidal neurons bending into the alveus, as described in both the polysynaptic and direct pathways (Duvernoy 2005). The matrices also reveal connections that extend along the length of the hippocampus, as well as connections within each structure: the subicular complex of the body is connected with the subicular complexes of the head and the tail, and the same holds for the dentate gyrus and CA2/CA3. This is in agreement with a primate study (Kondo et al. 2008) showing that the projections of the dentate gyrus extend bidirectionally along much of the length of the hippocampus. Finally, longitudinal connectivity also occurs between related regions, such as the entorhinal cortex in the hippocampal head and the subicular complex in its body. Thus, while the subfields are usually studied in the coronal plane, connections between subfields extend both in cross section and longitudinally. All these results are in agreement with a recent literature review (Strange et al. 2014) showing a gradient of connectivity that varies along the length of the hippocampus. The two matrices also depict different connectivity levels: for instance, the level of connectivity between the dentate gyrus and CA2/CA3 appears lower with probabilistic than with deterministic tractography, whereas the level of connectivity between CA1 and the alveus is lower with the deterministic approach. These observations would benefit from further analysis and comparison with a gold standard, which is beyond the scope of this study.
Connectivity matrix of the hippocampal substructures obtained with the streamline regularized deterministic algorithm. Each matrix element represents the number of fibers connecting the ROIs indicated by the column and the row. Self-connections are excluded, which implies a zero diagonal in the matrix. The map is symmetric, because efferent and afferent projections cannot be distinguished with diffusion MRI. The heat scale represents the number of fibers connecting the ROIs
Connectivity matrix of the hippocampal substructures obtained with the streamline regularized probabilistic algorithm. Each matrix element represents the number of fibers connecting the ROIs indicated by the column and the row. Self-connections are excluded, which implies a zero diagonal in the matrix. The map is symmetric, because efferent and afferent projections cannot be distinguished with diffusion MRI. The heat scale represents the number of fibers connecting the ROIs
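The construction underlying these matrices can be summarized by the following sketch: each streamline whose two endpoints fall in two different ROIs increments the corresponding cell, the matrix is symmetric by construction (afferent and efferent projections are indistinguishable), and the diagonal is left at zero (self-connections excluded). Variable names are hypothetical; this is an illustration of the principle, not the toolbox implementation used here.

import numpy as np

def connectivity_matrix(endpoint_labels, n_rois):
    # endpoint_labels: (n_streamlines, 2) integer array giving the ROI label of
    # the two endpoints of each streamline (0 = outside any ROI).
    conn = np.zeros((n_rois, n_rois), dtype=int)
    for a, b in endpoint_labels:
        if a > 0 and b > 0 and a != b:      # skip background and self-connections
            conn[a - 1, b - 1] += 1
            conn[b - 1, a - 1] += 1         # symmetric by construction
    return conn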
Figure 8 shows how elements of the polysynaptic pathway could be extracted with high field/strong gradient diffusion MR-based tractography using the QBI/SRP tractogram, and also illustrates how the inferred structural connectivity depends on the model and on the fiber tracking algorithm. Mossy fibers could be extracted with all four approaches, although the probabilistic approaches display fibers with a more realistic distribution of their origins along the granular layer of the dentate gyrus. By contrast, for the extraction of the perforant pathway, the probabilistic approaches were the only ones to give satisfactory results, which argues in favour of probabilistic fiber tracking. The QBI/SRD method leads to a bundle with very few fibers, and the DTI/SRD tractogram does not allow the extraction of any fiber of the perforant pathway. This can be attributed to the fact that probabilistic tractography with multiple fiber orientations is more robust to noise and more sensitive than standard deterministic tractography, as it explores all possible options and allows suboptimal directions to be selected temporarily during the streamlining process (Behrens et al. 2007). The comparison of the QBI/SRP and DTI/SRP results shows the benefit of the Q-ball model, since the resulting bundle contains more fibers with a more regular distribution.
3D-rendering of the perforant pathway and mossy fibers in the body of the hippocampus extracted from four tractograms (DTI/SRD, QBI/SRD, DTI/SRP, QBI/SRP) a–d mossy fibers in the body, from the granular layer of the dentate gyrus to the lucidum layer of CA3. 3D-rendering only shows the body segmentation; e, f perforant pathway from the entorhinal cortex to the molecular layer of the dentate gyrus in the body of the hippocampus. 3D-rendering shows the entire segmentation (head, body, and tail) with the same color code as the one presented in Fig. 3 and a transparency of 0.2, except for the entorhinal cortex (opaque grey) and the dentate gyrus in the body (opaque dark blue)
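The bundle extraction illustrated in Fig. 8 amounts to filtering the whole tractogram with pairs of ROI masks (e.g., entorhinal cortex and dentate gyrus for the perforant pathway). A minimal sketch of such a filter is given below, assuming streamlines expressed in the voxel grid of the masks; the names and the axis-aligned indexing are assumptions, not the exact implementation used in this study.

import numpy as np

def select_bundle(streamlines, roi_a, roi_b, voxel_size):
    # streamlines: list of (n_points, 3) coordinate arrays in mm
    # roi_a, roi_b: boolean 3D masks of the two ROIs; voxel_size: (3,) mm per voxel
    def endpoint_in(point, mask):
        i, j, k = (np.asarray(point) / voxel_size).astype(int)
        inside = (0 <= i < mask.shape[0]) and (0 <= j < mask.shape[1]) and (0 <= k < mask.shape[2])
        return inside and bool(mask[i, j, k])
    bundle = []
    for s in streamlines:
        a0, a1 = endpoint_in(s[0], roi_a), endpoint_in(s[-1], roi_a)
        b0, b1 = endpoint_in(s[0], roi_b), endpoint_in(s[-1], roi_b)
        if (a0 and b1) or (a1 and b0):      # one endpoint in each ROI
            bundle.append(s)
    return bundle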
Analysis of the hippocampal tissue microstructure
Investigating neurite density in the hippocampus
Neurite density gradient in hippocampal grey matter. The intra-cellular volume fraction map obtained with the NODDI model is displayed with an upper limit of 0.2 to highlight gradients in grey matter regions. The white circle indicates a region of higher intra-cellular volume fraction in grey matter. The heat scale represents the intra-cellular volume fraction. \({A} \, \hbox {anterior}\), \({P}\, \hbox {posterior}\), \({M}\, \hbox {medial}\), \({L} \,\hbox {lateral}\)
Neurite density gradient in hippocampal white matter. The intra-cellular volume fraction map obtained with the NODDI model is displayed with an upper limit of 0.7 to highlight gradients in white matter regions where the axonal density is high. The white circle indicates a zone of higher density in the posterior hippocampus. The heat scale represents the intra-cellular volume fraction. \({A} \,\hbox {anterior}\), \({P}\, \hbox {posterior}\), \({S} \,\hbox {superior}\), \({I} \, \hbox {inferior}\)
As already mentioned, the restricted intra-cellular volume fraction inferred from the NODDI model corresponds to the neurite density. Within the hippocampus, Figs. 9 and 10 clearly depict a positive gradient of this neurite density in the grey matter from the posterior to the anterior part of the hippocampus and, conversely, a negative gradient in the white matter.
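One simple way to quantify this rostro-caudal gradient is to average the NODDI intra-cellular volume fraction slice by slice along the anterior–posterior axis within a tissue mask, as sketched below. The array names, and the assumption that one array axis is aligned with the anterior–posterior direction, are hypothetical.

import numpy as np

def ap_profile(f_ic, tissue_mask, ap_axis=1):
    # f_ic: 3D NODDI intra-cellular volume fraction map
    # tissue_mask: boolean 3D mask of the grey- (or white-) matter compartment
    # Returns the mean f_ic per slice along the assumed anterior-posterior axis.
    n = f_ic.shape[ap_axis]
    profile = np.full(n, np.nan)
    for s in range(n):
        values = np.take(f_ic, s, axis=ap_axis)
        mask = np.take(tissue_mask, s, axis=ap_axis)
        if mask.any():
            profile[s] = values[mask].mean()
    return profile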
Mean intra-cellular volume fractions were computed for each segmented structure and are reported in Table 2. Table 2 also confirms the positive gradient of intra-cellular fraction towards the anterior part of the hippocampus, with a higher neurite density in grey matter regions, especially in the subicular complex and CA1. This result is consistent with the tractography results, which also revealed a higher connectivity in the head than in the body or tail portions of the hippocampus (Figs. 6, 7). Only CA2/CA3 does not follow the gradient described for the grey matter regions. This might be because the cell packing density within CA2/CA3 varies along the rostro-caudal axis of the hippocampus; it has been reported to be higher in the body than in the head in monkeys (Willard et al. 2013). Partial volume effects arising from the segmentation can also have an impact, e.g., voxels labeled as CA2/CA3 that actually contain white matter.
Table 2 also shows the increased intra-cellular fraction in the tail, with a higher neurite density in white matter, i.e., the alveus and fimbria. This result can be interpreted on the basis of the polysynaptic pathway, since the principal outputs to the cortex merge along the different rostro-caudal levels of the hippocampus in the fimbria, which could explain the gradual increase in axonal density in the fimbria along the anterior–posterior direction.
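The per-structure means of Table 2 boil down to averaging the \(f_{\text {ic}}\) map within each label of the segmentation; a minimal sketch, with a hypothetical label volume and label dictionary, is given below.

import numpy as np

def mean_fic_per_label(f_ic, label_volume, label_names):
    # label_volume: integer 3D volume, one index per segmented structure (0 = background)
    # label_names: dict mapping label index -> structure name
    return {name: float(f_ic[label_volume == idx].mean())
            for idx, name in label_names.items()}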
Table 2: Mean intra-cellular fraction (\(f_{\text {ic}}\)) for each label: entorhinal cortex, subicular complex, CA2/CA3, dentate gyrus, lacunosum-molecular layer, and fimbria.
Investigating the laminar structure of the Ammon's horn
The main hippocampal layers, referred to as layers I, II, and III in Fig. 11c taken from Duvernoy (2005), can be segmented from \(T_{2}\)-weighted images, but their contrast does not provide an indisputable boundary for each layer. As depicted in Fig. 11a, the boundaries remain only partially defined: green dotted lines show the boundaries inferred from the \(T_{2}\)-weighted contrast alone, and orange lines show the limits obtained by combining the \(T_{2}\)-weighted image with the neurite density map (Fig. 11b). Figure 11b clearly demonstrates that the intra-cellular volume fraction of the neurite population significantly enhances the contrast between layers and facilitates their segregation, particularly for the boundary separating layers II and III. Furthermore, the physical principle underlying this novel contrast is coherent with anatomical knowledge. Layer I, with a high neurite density, corresponds to the alveus and oriens layers. Layer II, in contrast, shows a very low neurite density and corresponds to the pyramidal layer, mostly composed of neuronal somas. Layer III, adjoining the vestigial hippocampal sulcus, appears with a higher neurite density than the pyramidal layer, which is consistent with the fact that it contains the arborizations of the apical dendrites of pyramidal neurons, and corresponds to the molecular zone of the CA region, i.e., to the radiatum and lacunosum-molecular layers.
The intra-cellular volume fraction map thus provides new contrasts, consistent with histological compartments, that could be applied in the future to other cortical brain regions to improve the quality of segmentation at the level of cortical layers.
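A simple way to exploit this complementary contrast, in the spirit of Fig. 11, is to rescale both images to a common range, blend them, and use the spatial gradient of the intra-cellular volume fraction map as a boundary indicator between layers. The sketch below is a hypothetical illustration of this idea, assuming both volumes are resampled to the same grid; it is not the delineation protocol used in this study.

import numpy as np

def layer_boundary_contrast(t2w, f_ic, w=0.5):
    # t2w: T2-weighted volume; f_ic: NODDI intra-cellular volume fraction map
    def rescale(x):
        x = x.astype(float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)
    blended = w * rescale(t2w) + (1.0 - w) * rescale(f_ic)   # combined contrast
    gx, gy, gz = np.gradient(rescale(f_ic))                  # f_ic edges ~ layer limits
    boundary = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    return blended, boundary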
Comparison of the contrast between the three hippocampal layers using a a standard anatomical \(T_{2}\)-weighted spin-echo sequence and b the intra-cellular volume fraction map obtained with the NODDI model. The heat scale represents the intra-cellular volume fraction. The theoretical layers of the cornu Ammonis are drawn in c, adapted from Duvernoy (2005). Their boundaries are delineated from the \(T_{2}\)-weighted image alone by the green dotted lines in a, and with the additional help of the intra-cellular volume fraction map by the orange dotted lines in a and the white dotted lines in b
The main contributions of this work are:
The presentation of a novel ultra-high field \(T_{2}\)-weighted and diffusion MRI protocol providing data of high spatial and angular resolution. These data made it possible to propose a novel segmentation approach for the human hippocampal subfields that combines the available sources of contrast to enhance their segregation.
A proof-of-concept that ultra-high field high angular resolution diffusion imaging (UHF-HARDI) can robustly map the inner connectivity of the hippocampus and gives evidence of a higher level of connectivity in the uncal region than in the body or tail portions, a result that holds regardless of the tractography approach. Known pathways were reconstructed to illustrate the potential of UHF-HARDI. The comparison of the results obtained from the four tractograms highlights the need for a probabilistic tractography algorithm and, for the reconstruction of more complex pathways such as the perforant pathway, the benefit of the Q-ball model.
A proof-of-concept that ultra-high field diffusion MR microscopy could play an increasing role in deciphering the cytoarchitecture of the hippocampus. In our case, it provides evidence of a rostro-caudal heterogeneity that could be associated with differences in gene expression patterns and could support the long-axis functional specialization of the human hippocampus (Strange et al. 2014). Indeed, the anterior portion of the hippocampus, but not its posterior one, is activated by tasks probing emotional and motivational aspects of cognitive processes (Kim and Fanselow 1992; Viard et al. 2011). Furthermore, whereas the caudal portion of the hippocampus is involved in the encoding of memories and in the local component of spatial representation, its rostral part plays a crucial role in retrieval processes and in the global component of spatial representation (Poppenk et al. 2013; Zeidman et al. 2015; Zeidman and Maguire 2016).
Comparison with existing studies
Several studies have been published that aim to design a robust segmentation pipeline for the hippocampal substructures. Most of those relying on MRI were designed to exploit a single contrast, either \(T_{2}\) or \(T_{2}^{*}\) weighting (Wisse et al. 2012; Boutet 2014; Adler et al. 2014; Yushkevich 2009). The segmentation pipeline proposed in this study combines several contrasts, stemming from \(T_{2}\)-weighted anatomical MRI and diffusion-weighted MRI, to benefit from all the available information. Not only was the scalar information of the diffusion MRI data exploited, but also the angular profiles of the ODFs: their singularities could be used to better identify the boundaries between substructures of the hippocampal formation where the \(T_{2}\) contrast did not provide enough information, e.g., the transition from alveus to fimbria.
Most existing studies investigating hippocampal microstructure rely on a diffusion imaging protocol acquired with a single-shell sampling of q-space, which is not compatible with the novel models emerging from the field of diffusion MR microscopy that require multiple-shell HYDI acquisition schemes. Most of them used the diffusion tensor model (DTI) and investigated the variations of FA, ADC, or simply the contrast of the \(T_{2}\)-weighted image between subregions and/or layers of the hippocampus (Shepherd et al. 2007; Coras 2014). Unfortunately, those invariant DTI-based scalar features are not specific to a particular cellular organization: a decrease in FA may correspond either to the degeneration of axons or to their dispersion, and a decrease of the ADC may correspond either to a reduction of the axon diameter or to disorders in the extra-cellular space. For instance, Coras (2014) showed the seven hippocampal layers (i.e., alveus, strata pyramidale, radiatum, lacunosum, and moleculare, as well as the stratum moleculare and granule cell layer of the DG) identified with the contrast of a \(T_{2}\)-weighted image for a healthy sample, whereas sclerotic hippocampal samples depicted only four layers, and the non-specificity of the method prevented the causes of this alteration from being established. The HYDI acquisition scheme implemented in our study allowed us to use more advanced diffusion MRI models giving more specific features to characterize the microstructure, like the neurite density inferred from the NODDI model. Such quantitative features lead to a better understanding of the variations occurring in the rostro-caudal direction at the cellular level and enable correlations with the long-axis functional specialization of the human hippocampus.
Established pathways have already been reconstructed or identified in other studies. Zeineh et al. (2012) reconstructed, from in vivo data, the best-known pathways of the medial temporal lobe, including the perforant pathway and the Schaffer collaterals. Coras (2014) also identified the perforant pathway on a DTI-based tractogram, and Augustinack (2010) reconstructed it from ex vivo DTI-based data. However, our study is one of the first to go beyond diffusion tensor imaging to probe the circuits and the inner connectivity of the human hippocampus. Assuming a Gaussian distribution of water molecule displacements, DTI can support only one fiber population per voxel, and is thus unable to render complex fiber configurations like crossings, kissings, or splittings. HARDI models, to which the analytical Q-ball belongs, were designed to overcome this limitation and are particularly suitable for the hippocampus, where such configurations are likely to occur. Finally, this is the first time, to our knowledge, that connectivity matrices have been used to assess the gradient of connectivity existing along the rostro-caudal axis of the hippocampus. Such observations were hypothesized from functional studies, but had never been investigated from a structural point of view using diffusion MRI and tools from the field of connectomics.
In this study, we have established methods to delineate subfields and substructures of the hippocampal formation and to infer their structural connectivity and microstructure. We have given evidence that diffusion-based microstructural maps enable the segmentation of smaller hippocampal structures, such as its laminae. This potential should be generalized in the future to develop a novel MRI-based post-mortem atlas of the hippocampal complex. We have also demonstrated that UHF diffusion MRI on a preclinical system allows the reconstruction of known hippocampal pathways, like the perforant path and the mossy fibers. To go a step further, clustering techniques applied to the connectogram would provide clusters of co-localized fibers sharing similar geometries and belonging to the same white matter tract. Combined with the integration of more samples to better capture inter-subject variability, fiber clustering should accelerate the construction of a probabilistic atlas of the hippocampal inner connectivity. However, this construction is out of the scope of this paper.
In the present study, the sample was fixed by immersion, and there is a risk of the fixation being inhomogeneous, as the fixative has to diffuse from the surface to the deepest structures. The time it takes for the fixative to permeate the brain can lead to longer fixation times for the deepest structures, inducing greater degradation of the tissue, for instance from autolysis. This inhomogeneity could be a confound when reporting gradients in microstructure maps. To minimise this risk, the sample was immersed for 2 years, which is enough to fix the tissue entirely. High inhomogeneities resulting from the fixation would produce clear non-anatomical borders in the structural MRI images; since no such severe contrasts (independent of anatomical structures) were observed in the high-resolution MRI measurements, the fixation can be assumed to be homogeneous. In addition, the hippocampus is located at the periphery of the brain, which further limits the risk of fixation inhomogeneities. However, it is impossible to completely eliminate this confound with one specimen, and a further study would benefit from having more samples.
Another aspect that could impact the diffusion contrast and the quality of our results is the choice of the diffusion sensitization. Given the reduction of the mean diffusivity from \({D} = 0.7\times 10^{-3}\ \hbox {mm}^2/\hbox {s}\) (standard value reported in vivo) to \({D} = 0.16\times 10^{-3}\ \hbox {mm}^{2}/\hbox {s}\), the diffusion attenuation at \({b}=4500\,\hbox {s}/\hbox {mm}^{2}\) should be equivalent to that of an in vivo scan performed at \({b}=1000\ \hbox {s}/\hbox {mm}^{2}\). A higher diffusion sensitization is generally recommended to obtain sharp ODFs (Hess et al. 2006). The use of a single shell at \(4500\,\hbox {s}/\hbox {mm}^2\) was motivated by the literature, which typically mentions b values of at least \(4000\,\hbox {s}/\hbox {mm}^2\) for scanning post-mortem specimens. In particular, Dyrby et al. (2011) demonstrated that any HARDI acquisition with a b value between 2000 and \(8000\,\hbox {s}/\hbox {mm}^2\) allows the inference of multiple fiber populations from ex vivo fixed specimens scanned at room temperature, with an optimal value around \(4000\,\hbox {s}/\hbox {mm}^2\). Furthermore, the use of a probabilistic fiber tracking method contributed to managing fiber crossings more robustly than deterministic approaches. It would also be of great value to investigate alternative reconstructions using the acquired HYDI data set, to go beyond HARDI models. Advanced models could be considered, like the mean apparent propagator (MAP-MRI) reconstruction (Özarslan et al. 2013) or the fiber orientation distribution (FOD) reconstruction (Jeurissen et al. 2014), both relying on a multiple-shell sampling of q-space. On the one hand, MAP-MRI would provide further information such as the return-to-origin probability, sensitive to compartment sizes, or the non-Gaussianity, providing insights into tissue complexity. On the other hand, FODs inferred from multiple-shell acquisitions are based on a multi-tissue constrained spherical deconvolution that would provide more precise fiber orientation estimates at the interfaces between tissues, thus yielding improved tractograms. In further work, such HYDI-based models may improve the quality of the obtained tractograms. Nevertheless, this investigation is beyond the scope of this study.
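As a quick sanity check of this equivalence, assuming a mono-exponential signal decay \(S = S_{0}\,e^{-bD}\), the attenuation exponents of the two regimes are indeed comparable: \(b_{\mathrm{ex\,vivo}}\, D_{\mathrm{ex\,vivo}} = 4500 \times 0.16 \times 10^{-3} \approx 0.72\), versus \(b_{\mathrm{in\,vivo}}\, D_{\mathrm{in\,vivo}} = 1000 \times 0.7 \times 10^{-3} = 0.70\).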
Diffusion MR microscopy has become a growing topic of interest in the diffusion MRI community, and models are constantly improving. Nowadays, the NODDI model has become very popular owing to its easy implementation, from the acquisition protocol to the analysis pipeline. Alternative models should be investigated in the future, in particular those probing further features like the mean axon diameter or the myelin water fraction. Because we took special care to establish an acquisition protocol densely sampling the q and diffusion time spaces, the ActiveAx model could be investigated in the future using our HYDI diffusion data set; this investigation is ongoing, but beyond the scope of this preliminary study. The latest improvements of the model now integrate a time dependence for the extra-axonal space that should allow maps of the mean axon diameter within the hippocampus to be probed finely and with less bias.
Validation and comparison with alternative modalities
Ex vivo MRI is able to bridge the gap between the in vivo world and meso-scale configurations with a spatial resolution of 100–\(200\ \upmu \hbox {m}\). In this study, we chose to limit the spatial resolution to 200 and \(300\ \upmu \hbox {m}\) for the anatomical and diffusion-weighted images, respectively, in order to reach high b values (10,000\(\ \hbox {s}/\hbox {mm}^{2}\)) with a reasonable SNR and explore the properties of the tissue.
Novel optical methods are able to go down even further, to the microscopic scale. These methods include, in particular, optical coherence tomography (OCT), serial two-photon (STP) tomography, and 3D-polarized light imaging (3D-PLI). STP tomography (Ragan 2012) combines fluorescence imaging with two-photon microscopy, but requires histochemical dyes to label the cells with type-specific fluorescent proteins, and is therefore sensitive only to the targeted cell populations; this is not the case for the two following methods. OCT (Magnain 2015) is a high-resolution (up to \(1\,\upmu \hbox {m}\) in plane) optical technique analogous to ultrasound imaging, as it measures the light backscattered by the sample, and is sensitive to differences in the refractive index of the tissue. 3D-PLI (Axer 2011) gives the opportunity to observe the 3D orientation of myelinated fibers without any staining procedure, thanks to the birefringence of the myelin sheath, with an in-plane resolution of \(1.3\,\upmu \hbox {m}\) and slices of \(70\,\upmu \hbox {m}\); in contrast with the other optical methods, whole human brain imaging is feasible, even if axons with a diameter in the range of the spatial resolution cannot be distinguished. This optical technique has already been applied to ex vivo human hippocampi by Zeineh et al. (2016), but the results have only been compared with in vivo DTI-based color-encoded maps. Despite their remarkable spatial resolution, optical methods also present inherent limitations compared with MRI. First, the sample has to be cut into slices for PLI and STP, and at least into blocks with a flat surface for OCT (Magnain 2014). Furthermore, contrary to diffusion MRI, no real 3D acquisition is possible; given the extremely large image size, supercomputing facilities are required to precisely align the serial sections and produce three-dimensional reconstructions. Finally, diffusion MRI gives access to quantitative microstructural characteristics, like axonal density and diameter, which is not easily feasible using the novel optical methods described above.
Despite its own limitations, 3D-PLI can probably be considered mature enough for the validation of diffusion MRI (Zeineh et al. 2016; Mollink et al. 2017). The human hippocampus sample scanned in the frame of this study is currently being analyzed using the 3D-PLI setup of our research partner. Tractography will also be performed from the 3D-PLI data and compared to the results of this work.
Clinical prospects
This work has been done at an intermediate, mesoscopic scale, between the micrometer scale reached by optical methods and the millimeter scale of in vivo MRI. It prefigures what could be achieved with ultra-high field clinical MRI and is just the beginning of a new era of brain exploration. Advances in knowledge from ex vivo studies, together with improvements in microstructural models and hardware, will ultimately allow a translational approach towards in vivo clinical routine.
Nowadays, there is no atlas of human hippocampal connectivity and microstructure at the mesoscopic scale, although such an atlas would be of great value to clinical and cognitive neuroscience. Several tools are available to segment the hippocampus from MRI data [the object-based ROI module of the Anatomist software (http://www.brainvisa.info/index.html) in Boutet (2014), FIRST (FSL) or Freesurfer in Morey (2009)], and numerical atlases of the human hippocampal subfields have been established (Chupin 2009; Yushkevich 2010; Iglesias 2015). However, to our knowledge, no numerical atlas of their connectivity or microstructure exists. Yet, in most pathologies affecting the hippocampus, there is a need to better understand which subfields are affected, at what rate, and whether the modifications induced by the pathology affect the neuronal cell bodies and/or their connections. For instance, Coras (2014) showed, in the case of hippocampal sclerosis (HS), that types 1 and 2 depicted different rates of cell loss, with a more pronounced cell loss in CA3 and CA4 in type 1. Both kinds of HS samples showed a contrast that did not allow the discrimination of the seven hippocampal layers, contrary to normal samples, likely because of pathological shrinkage and fiber alteration.
Regarding structural modifications occurring with normal aging, it is known that the hippocampus undergoes a particular volume decrease with age. MRI studies have suggested that, in typical aging, volume loss is more specific to the CA1 and DG/CA3 subregions (Mueller and Weiner 2009). This volume loss is probably not due to neuronal cell loss (Riddle 2007), but rather to synapse loss (Burke and Barnes 2006), and occurs especially in the cortical inputs into the hippocampus such as the perforant pathway (Yassa et al. 2011). Wilson et al. (2006) also suggested that aging-related changes strengthen the auto-associative network of CA3, amplifying pattern completion (the retrieval of previously stored information from a partial cue). The subject studied in this paper was an 87-year-old male; in light of our results, this would mean that the outputs of the entorhinal cortex may be less prominent than in a young hippocampus. As self-connections are excluded, the strengthening of the CA3 auto-associative network has no impact on our connectivity matrices. Comparing these results to others obtained with a young hippocampus could highlight the reduction of the perforant path induced by normal aging.
From a fundamental point of view, having access to a fine description of the hippocampal anatomy, including its subfields, its connectivity, and its microstructure, and being able to perform functional imaging using various memory tasks, opens the way to an improved functional neuroanatomy of the sensory, short-term, working, and long-term memories. A better understanding of the neural networks driving these various cognitive processes might be useful, for instance, to design novel educational tools that improve how effectively young children learn.
Finally, the protocol established for the human hippocampus could be generalized to the entire brain, and the ensuing findings may help to push tractography algorithms forward. One of the limitations of tractography is that when a technique shows high sensitivity, i.e., a high rate of true positives, it will most likely show low specificity, i.e., a high rate of false positives (Thomas et al. 2014). Therefore, adding constraints arising from anatomical priors, like fine connectivity or microstructural characteristics, is intended to drastically reduce the creation of false positives.
This study forms a proof-of-concept of how ultra-high field MRI with strong gradients can be applied to analyze hippocampal connectivity and microstructure ex vivo. It introduces a unique acquisition and segmentation protocol, and demonstrates that diffusion-weighted MRI offers a new opportunity to map the inner structural connectivity and microstructure of the human hippocampus, in good agreement with histology and current functional studies. The tractography and microstructure models highlight a higher connectivity and neurite density in the anterior hippocampus, whereas the intra-cellular volume fraction map reveals the laminar structure of the Ammon's horn and could be used to improve segmentation protocols. In the future, these results could be of potential benefit to better correlate hippocampal atrophy, observed at low field in Alzheimer's patients, with modifications of its inner connectivity and neurite density.
This project has received funding from the European Union's Horizon 2020 Framework Programme for Research and Innovation under Grant Agreement no. 720270 (Human Brain Project SGA1).
Supplementary material 1 (mp4 139142 KB)
Adler DH, Pluta J, Kadivar S, Craige C, Gee JC, Avants BB, Yushkevich PA (2014) Histology-derived volumetric annotation of the human hippocampal subfields in postmortem MRI. Neuroimage 84:505–523
Adnan A, Barnett A, Moayedi M, McCormick C, Cohn M, McAndrews MP (2015) Distinct hippocampal functional networks revealed by tractography-based parcellation. Brain Struct Funct 1–14
Alexander DC, Hubbard PL, Hall MG, Moore EA, Ptito M, Parker GJM, Dyrby TB (2010) Orientationally invariant indices of axon diameter and density from diffusion MRI. Neuroimage 52(4):1374–1389
Amaral DG, Insausti R (1990) Hippocampal formation. In: Paxinos G (ed) The human nervous system, pp 711–755
Assaf Y, Basser PJ (2005) Composite hindered and restricted model of diffusion (CHARMED) MR imaging of the human brain. Neuroimage 27(1):48–58
Assaf Y, Blumenfeld-Katzir T, Yovel Y, Basser PJ (2008) AxCaliber: a method for measuring axon diameter distribution from diffusion MRI. Magn Reson Med 59(6):1347–1354
Assaf Y et al (2013) The CONNECT project: combining macro- and micro-structure. Neuroimage 80:273–282
Augustinack J (2010) Direct visualization of the perforant pathway in the human brain with ex vivo diffusion tensor imaging
Axer M et al (2011) A novel approach to the human connectome: ultra-high resolution mapping of fiber tracts in the brain. Neuroimage 54(2):1091–1101
Beaulieu C (2009) The biological basis of diffusion anisotropy. In: Diffusion MRI: from quantitative measurement to in vivo neuroanatomy, pp 105–126
Behrens TEJ et al (2003) Characterization and propagation of uncertainty in diffusion-weighted MR imaging. Magn Reson Med 50(5):1077–1088
Behrens TEJ, Berg HJ, Jbabdi S, Rushworth MFS, Woolrich MW (2007) Probabilistic diffusion tractography with multiple fibre orientations: what can we gain? Neuroimage 34(1):144–155
Boutet C et al (2014) Detection of volume loss in hippocampal layers in Alzheimer's disease using 7 T MRI: a feasibility study. NeuroImage Clin 5:341–348
Burke SN, Barnes CA (2006) Neural plasticity in the ageing brain. Nat Rev Neurosci 7(1):30–40
Chang YS et al (2015) White matter changes of neurite density and fiber orientation dispersion during human brain maturation. PLoS One 10(6):e0123656
Chevaleyre V, Siegelbaum SA (2010) Strong CA2 pyramidal neuron synapses define a powerful disynaptic cortico-hippocampal loop. Neuron 66(4):560–572
Chupin M et al (2009) Fully automatic hippocampus segmentation and classification in Alzheimer's disease and mild cognitive impairment applied on data from ADNI. Hippocampus 19(6):579
Colon-Perez LM et al (2015) High-field magnetic resonance imaging of the human temporal lobe. NeuroImage Clin 9:58–68
Coras R et al (2014) 7T MRI features in control human hippocampus and hippocampal sclerosis: an ex vivo study with histologic correlations. Epilepsia 55(12):2003–2016
D'Arceuil HE, Westmoreland S, de Crespigny AJ (2007) An approach to high resolution diffusion tensor imaging in fixed primate brain. Neuroimage 35(2):553–565
De Santis S, Jones DK, Roebroeck A (2016) Including diffusion time dependence in the extra-axonal space improves in vivo estimates of axonal diameter and density in human white matter. NeuroImage 130:91–103
De Santis S, Drakesmith M, Bells S, Assaf Y, Jones DK (2014) Why diffusion tensor MRI does well only some of the time: variance and covariance of white matter tissue microstructure attributes in the living human brain. Neuroimage 89:35–44
Descoteaux M (2010) High angular resolution diffusion MRI: from local estimation to segmentation and tractography. PhD Thesis, INRIA Sophia Antipolis, France, p 49
Descoteaux M, Angelino E, Fitzgibbons S, Deriche R (2007) Regularized, fast, and robust analytical Q-ball imaging. Magn Reson Med 58(3):497–510
Descoteaux M, Deriche R, Knosche TR, Anwander A (2009) Deterministic and probabilistic tractography based on complex fibre orientation distributions. IEEE Trans Med Imaging 28(2):269–286
Dinkelacker V, Valabregue R, Thivard L, Lehericy S, Baulac M, Samson S, Dupont S (2015) Hippocampal-thalamic wiring in medial temporal lobe epilepsy: enhanced connectivity per hippocampal voxel. Epilepsia 56(8):1217–1226
Duclap D et al (2012) Connectomist-2.0: a novel diffusion analysis toolbox for BrainVISA. In: 29th ESMRMB, Lisbon, Portugal
Duvernoy HM (2005) The human hippocampus: functional anatomy, vascularization and serial sections with MRI. Springer, Berlin
Dyrby TB, Baaré WFC, Alexander DC, Jelsing J, Garde E, Søgaard LV (2011) An ex vivo imaging pipeline for producing high-quality and high-resolution diffusion-weighted imaging datasets. Hum Brain Mapp 32(4):544–563
Fanselow MS, Dong H-W (2010) Are the dorsal and ventral hippocampus functionally distinct structures? Neuron 65(1):7–19
Gloor P (1997) The temporal lobe and limbic system. Oxford University Press, USA
Hagmann P et al (2010) MR connectomics: principles and challenges. J Neurosci Methods 194(1):34–45
Hess CP, Mukherjee P, Han ET, Xu D, Vigneron DB (2006) Q-ball reconstruction of multimodal fiber orientations using the spherical harmonic basis. Magn Reson Med 56(1):104–117
Hitti FL, Siegelbaum SA (2014) The hippocampal CA2 region is essential for social memory. Nature 508(7494):88–92
Iglesias JE et al (2015) A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: application to adaptive segmentation of in vivo MRI. Neuroimage 115:117–137
Insausti R, Amaral DG (2012) Hippocampal formation. In: Mai JK, Paxinos G (eds) The human nervous system, 3rd edn. Academic Press, Amsterdam, pp 896–942
Insausti R et al (1998) MR volumetric analysis of the human entorhinal, perirhinal, and temporopolar cortices. Am J Neuroradiol 19(4):659–671
Jack CR, Petersen RC, O'Brien PC, Tangalos EG (1992) MR-based hippocampal volumetry in the diagnosis of Alzheimer's disease. Neurology 42(1):183
Jelescu IO, Veraart J, Adisetiyo V, Milla SS, Novikov DS, Fieremans E (2015) One diffusion acquisition and different white matter models: how does microstructure change in human early development based on WMTI and NODDI? Neuroimage 107:242–256
Jeurissen B, Tournier J-D, Dhollander T, Connelly A, Sijbers J (2014) Multi-tissue constrained spherical deconvolution for improved analysis of multi-shell diffusion MRI data. NeuroImage 103:411–426
Kim J, Fanselow MS (1992) Modality-specific retrograde amnesia of fear. Science 256(5):675–677
Kodiweera C, Alexander AL, Harezlak J, McAllister TW, Yu-Chien W (2016) Age effects and sex differences in human brain white matter of young to middle-aged adults: a DTI, NODDI, and q-space study. NeuroImage 128:180–192
Kondo H, Lavenex P, Amaral DG (2008) Intrinsic connections of the macaque monkey hippocampal formation: I. Dentate gyrus. J Comp Neurol 511(4):497–520
Kunz N et al (2014) Assessing white matter microstructure of the newborn with multi-shell diffusion MRI and biophysical compartment models. Neuroimage 96:288–299
Leutgeb JK, Leutgeb S, Moser M-B, Moser EI (2007) Pattern separation in the dentate gyrus and CA3 of the hippocampus. Science 315(5814):961–966
Magnain C et al (2014) Blockface histology with optical coherence tomography: a comparison with Nissl staining. NeuroImage 84:524–533
Magnain C et al (2015) Optical coherence tomography visualizes neurons in human entorhinal cortex. Neurophotonics 2(1):015004
McNab JA et al (2013) The Human Connectome Project and beyond: initial applications of 300 mT/m gradients. Neuroimage 80:234–245
Meiboom S, Gill D (1958) Modified spin-echo method for measuring nuclear relaxation times. Rev Sci Instrum 29(8):688–691
Modo M, Kevin Hitchens T, Liu JR, Mark Richardson R (2016) Detection of aberrant hippocampal mossy fiber connections: ex vivo mesoscale diffusion MRI and microtractography with histological validation in a patient with uncontrolled temporal lobe epilepsy. Hum Brain Mapp 37(2):780–795
Mollink J et al (2017) Evaluating fibre orientation dispersion in white matter: comparison of diffusion MRI, histology and polarized light imaging. NeuroImage
Morey RA et al (2009) A comparison of automated segmentation and manual tracing for quantifying hippocampal and amygdala volumes. Neuroimage 45(3):855–866
Mueller SG, Weiner MW (2009) Selective effect of age, Apo e4, and Alzheimer's disease on hippocampal subfields. Hippocampus 19(6):558–564
Özarslan E, Shepherd TM, Vemuri BC, Blackband SJ, Mareci TH (2006) Resolution of complex tissue microarchitecture using the diffusion orientation transform (DOT). NeuroImage 31(3):1086–1103
Özarslan E, Koay CG, Shepherd TM, Komlosh ME, Okan İrfanoǧlu M, Pierpaoli C, Basser PJ (2013) Mean apparent propagator (MAP) MRI: a novel diffusion imaging method for mapping tissue microstructure. NeuroImage 78:16–32
Perrin M et al (2005) Fiber tracking in q-ball fields using regularized particle trajectories. In: Information processing in medical imaging. Springer, Berlin, pp 52–63
Pfefferbaum A, Sullivan EV, Adalsteinsson E, Garrick T, Harper C (2004) Postmortem MR imaging of formalin-fixed human brain. Neuroimage 21(4):1585–1595
Poppenk J, Evensmoen HR, Moscovitch M, Nadel L (2013) Long-axis specialization of the human hippocampus. Trends Cogn Sci 17(5):230–240
Prull MW, Gabrieli JDE, Bunge SA (2000) Age-related changes in memory: a cognitive neuroscience perspective
Ragan T et al (2012) Serial two-photon tomography for automated ex vivo mouse brain imaging. Nat Methods 9(3):255–258
Riddle DR (2007) Brain aging: models, methods, and mechanisms. CRC Press, Boca Raton
Sepehrband F, Clark KA, Ullmann JFP, Kurniawan ND, Leanage G, Reutens DC, Yang Z (2015) Brain tissue compartment density estimated using diffusion-weighted MRI yields tissue parameters consistent with histology. Hum Brain Mapp 36(9):3687–3702
Setsompop K et al (2013) Pushing the limits of in vivo diffusion MRI for the Human Connectome Project. Neuroimage 80:220–233
Shepherd TM, Özarslan E, Yachnis AT, King MA, Blackband SJ (2007) Diffusion tensor microscopy indicates the cytoarchitectural basis for diffusion anisotropy in the human hippocampus. Am J Neuroradiol 28(5):958–964
Strange BA, Witter MP, Lein ES, Moser EI (2014) Functional organization of the hippocampal longitudinal axis. Nat Rev Neurosci 15(10):655–669
Takahashi M et al (2002) Magnetic resonance microimaging of intraaxonal water diffusion in live excised lamprey spinal cord. Proc Natl Acad Sci 99(25):16192–16196
Thelwall PE, Shepherd TM, Stanisz GJ, Blackband SJ (2006) Effects of temperature and aldehyde fixation on tissue water diffusion properties, studied in an erythrocyte ghost tissue model. Magn Reson Med 56(2):282–289
Thomas C, Ye FQ, Okan Irfanoglu M, Modi P, Saleem KS, Leopold DA, Pierpaoli C (2014) Anatomical accuracy of brain connections derived from diffusion MRI tractography is inherently limited. Proc Natl Acad Sci 111(46):16574–16579
Tournier J-D, Calamante F (2007) Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. NeuroImage 35(4):1459–1472
Tournier J-D, Mori S, Leemans A (2011) Diffusion tensor imaging and beyond. Magn Reson Med 65(6):1532–1556
Viard A, Doeller CF, Hartley T, Bird CM, Burgess N (2011) Anterior hippocampus and goal-directed spatial decision making. J Neurosci 31(12):4613–4621
Videbech P, Ravnkilde B (2004) Hippocampal volume and depression: a meta-analysis of MRI studies. Am J Psychiatry 161(11):1957–1966
Wang L et al (2006) Changes in hippocampal connectivity in the early stages of Alzheimer's disease: evidence from resting state fMRI. Neuroimage 31(2):496–504
Wedeen VJ, Reese TG, Tuch DS, Weigel MR, Dou JG, Weiskoff RM, Chessler D (2000) Mapping fiber orientation spectra in cerebral white matter with Fourier transform diffusion MRI. In: Proceedings of the 8th annual meeting of ISMRM, Denver, p 82
Willard SL, Riddle DR, Elizabeth Forbes M, Shively CA (2013) Cell number and neuropil alterations in subregions of the anterior hippocampus in a female monkey model of depression. Biol Psychiatry 74(12):890–897
Wilson IA, Gallagher M, Eichenbaum H, Tanila H (2006) Neurocognitive aging: prior memories hinder new hippocampal encoding. Trends Neurosci 29(12):662–670
Wisse LEM, Gerritsen L, Zwanenburg JJM, Kuijf HJ, Luijten PR, Biessels GJ, Geerlings MI (2012) Subfields of the hippocampal formation at 7T MRI: in vivo volumetric assessment. Neuroimage 61(4):1043–1049
Yassa MA, Mattfeld AT, Stark SM, Stark CEL (2011) Age-related memory deficits linked to circuit-specific disruptions in the hippocampus. Proc Natl Acad Sci 108(21):8873–8878
Yushkevich PA et al (2009) A high-resolution computational atlas of the human hippocampus from postmortem magnetic resonance imaging at 9.4 T. Neuroimage 44(2):385–398
Yushkevich PA et al (2010) Nearly automatic segmentation of hippocampal subfields in in vivo focal T2-weighted MRI. Neuroimage 53(4):1208–1224
Zeidman P, Lutti A, Maguire EA (2015) Investigating the functions of subregions within anterior hippocampus. Cortex 73:240–256
Zeidman P, Maguire EA (2016) Anterior hippocampus: the anatomy of perception, imagination and episodic memory. Nat Rev Neurosci 17(3):173–182
Zeineh MM, Holdsworth S, Skare S, Atlas SW, Bammer R (2012) Ultra-high resolution diffusion tensor imaging of the microscopic pathways of the medial temporal lobe. Neuroimage 62(3):2065–2082
Zeineh MM et al (2016) Direct visualization and mapping of the spatial course of fiber tracts at microscopic resolution in the human hippocampus. Cereb Cortex, bhw010
Zhang H, Hubbard PL, Parker GJM, Alexander DC (2011) Axon diameter mapping in the presence of orientation dispersion with diffusion MRI. Neuroimage 56(3):1301–1315
Zhang H, Schneider T, Wheeler-Kingshott CA, Alexander DC (2012) NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain. Neuroimage 61(4):1000–1016
Zhou Y, Dougherty JH, Hubner KF, Bai B, Cannon RL, Hutson RK (2008) Abnormal connectivity in the posterior cingulate and hippocampus in early Alzheimer's disease and mild cognitive impairment. Alzheimer's Dement 4(4):265–270
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
1. CEA NeuroSpin/UNIRS, Gif-sur-Yvette, France
2. Université Paris-Saclay, Orsay, France
3. France Life Imaging, Orsay, France
4. Forschungszentrum Jülich, Jülich, Germany
5. Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Aachen, Germany
6. CEA NeuroSpin/UNATI, Gif-sur-Yvette, France
7. CATI Neuroimaging Platform, http://catineuroimaging.com
Beaujoin, J., Palomero-Gallagher, N., Boumezbeur, F. et al. Brain Struct Funct (2018) 223: 2157. https://doi.org/10.1007/s00429-018-1617-1
Received 17 February 2017
Dark Energy Survey Year 1 Results: Cosmological Constraints from Galaxy Clustering and Weak Lensing (1708.01530)
DES Collaboration: T. M. C. Abbott, F. B. Abdalla, A. Alarcon, J. Aleksić, S. Allam, S. Allen, A. Amara, J. Annis, J. Asorey, S. Avila, D. Bacon, E. Balbinot, M. Banerji, N. Banik, W. Barkhouse, M. Baumer, E. Baxter, K. Bechtol, M. R. Becker, A. Benoit-Lévy, B. A. Benson, G. M. Bernstein, E. Bertin, J. Blazek, S. L. Bridle, D. Brooks, D. Brout, E. Buckley-Geer, D. L. Burke, M. T. Busha, D. Capozzi, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, R. Cawthon, C. Chang, N. Chen, M. Childress, A. Choi, C. Conselice, R. Crittenden, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, R. Das, T. M. Davis, C. Davis, J. De Vicente, D. L. DePoy, J. DeRose, S. Desai, H. T. Diehl, J. P. Dietrich, S. Dodelson, P. Doel, A. Drlica-Wagner, T. F. Eifler, A. E. Elliott, F. Elsner, J. Elvin-Poole, J. Estrada, A. E. Evrard, Y. Fang, E. Fernandez, A. Ferté, D. A. Finley, B. Flaugher, P. Fosalba, O. Friedrich, J. Frieman, J. García-Bellido, M. Garcia-Fernandez, M. Gatti, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, M. S. S. Gill, K. Glazebrook, D. A. Goldstein, D. Gruen, R. A. Gruendl, J. Gschwend, G. Gutierrez, S. Hamilton, W. G. Hartley, S. R. Hinton, K. Honscheid, B. Hoyle, D. Huterer, B. Jain, D. J. James, M. Jarvis, T. Jeltema, M. D. Johnson, M. W. G. Johnson, T. Kacprzak, S. Kent, A. G. Kim, A. King, D. Kirk, N. Kokron, A. Kovacs, E. Krause, C. Krawiec, A. Kremin, K. Kuehn, S. Kuhlmann, N. Kuropatkin, F. Lacasa, O. Lahav, T. S. Li, A. R. Liddle, C. Lidman, M. Lima, H. Lin, N. MacCrann, M. A. G. Maia, M. Makler, M. Manera, M. March, J. L. Marshall, P. Martini, R. G. McMahon, P. Melchior, F. Menanteau, R. Miquel, V. Miranda, D. Mudd, J. Muir, A. Möller, E. Neilsen, R. C. Nichol, B. Nord, P. Nugent, R. L. C. Ogando, A. Palmese, J. Peacock, H.V. Peiris, J. Peoples, W. J. Percival, D. Petravick, A. A. Plazas, A. Porredon, J. Prat, A. Pujol, M. M. Rau, A. Refregier, P. M. Ricker, N. Roe, R. P. Rollins, A. K. Romer, A. Roodman, R. Rosenfeld, A. J. Ross, E. Rozo, E. S. Rykoff, M. Sako, A. I. Salvador, S. Samuroff, C. Sánchez, E. Sanchez, B. Santiago, V. Scarpine, R. Schindler, D. Scolnic, L. F. Secco, S. Serrano, I. Sevilla-Noarbe, E. Sheldon, R. C. Smith, M. Smith, J. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, G. Tarle, D. Thomas, M. A. Troxel, D. L. Tucker, B. E. Tucker, S. A. Uddin, T. N. Varga, P. Vielzeuf, V. Vikram, A. K. Vivas, A. R. Walker, M. Wang, R. H. Wechsler, J. Weller, W. Wester, R. C. Wolf, B. Yanny, F. Yuan, A. Zenteno, B. Zhang, Y. Zhang, J. Zuntz
March 1, 2019 astro-ph.CO
We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg$^2$ of $griz$ imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while blind to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat $\Lambda$CDM and $w$CDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for $\Lambda$CDM) or 7 (for $w$CDM) cosmological parameters including the neutrino mass density and including the 457 $\times$ 457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions, and from their combination obtain $S_8 \equiv \sigma_8 (\Omega_m/0.3)^{0.5} = 0.783^{+0.021}_{-0.025}$ and $\Omega_m = 0.264^{+0.032}_{-0.019}$ for $\Lambda$CDM for $w$CDM, we find $S_8 = 0.794^{+0.029}_{-0.027}$, $\Omega_m = 0.279^{+0.043}_{-0.022}$, and $w=-0.80^{+0.20}_{-0.22}$ at 68% CL. The precision of these DES Y1 results rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for $S_8$ and $\Omega_m$ are lower than the central values from Planck ...
Dark Energy Survey Year 1 Results: Weak Lensing Shape Catalogues (1708.01533)
J. Zuntz, E. Sheldon, S. Samuroff, M. A. Troxel, M. Jarvis, N. MacCrann, D. Gruen, J. Prat, C. Sánchez, A. Choi, S. L. Bridle, G. M. Bernstein, S. Dodelson, A. Drlica-Wagner, Y. Fang, R. A. Gruendl, B. Hoyle, E. M. Huff, B. Jain, D. Kirk, T. Kacprzak, C. Krawiec, A. A. Plazas, R. P. Rollins, E. S. Rykoff, I. Sevilla-Noarbe, B. Soergel, T. N. Varga, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, C. Davis, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, T. F. Eifler, J. Estrada, A. E. Evrard, A. Fausti Neto, E. Fernandez, B. Flaugher, P. Fosalba, J. Frieman, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, J. Gschwend, G. Gutierrez, W. G. Hartley, K. Honscheid, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, K. Kuehn, S. Kuhlmann, N. Kuropatkin, O. Lahav, T. S. Li, M. Lima, M. A. G. Maia, M. March, P. Martini, P. Melchior, F. Menanteau, C. J. Miller, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, R. L. C. Ogando, N. Roe, A. K. Romer, A. Roodman, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, R. H. Wechsler, Y. Zhang
Sept. 7, 2018 astro-ph.CO
We present two galaxy shape catalogues from the Dark Energy Survey Year 1 data set, covering 1500 square degrees with a median redshift of $0.59$. The catalogues cover two main fields: Stripe 82, and an area overlapping the South Pole Telescope survey region. We describe our data analysis process and in particular our shape measurement using two independent shear measurement pipelines, METACALIBRATION and IM3SHAPE. The METACALIBRATION catalogue uses a Gaussian model with an innovative internal calibration scheme, and was applied to $riz$-bands, yielding 34.8M objects. The IM3SHAPE catalogue uses a maximum-likelihood bulge/disc model calibrated using simulations, and was applied to $r$-band data, yielding 21.9M objects. Both catalogues pass a suite of null tests that demonstrate their fitness for use in weak lensing science. We estimate the 1$\sigma$ uncertainties in multiplicative shear calibration to be $0.013$ and $0.025$ for the METACALIBRATION and IM3SHAPE catalogues, respectively.
Astronomers' and Physicists' Attitudes Towards Education & Public Outreach: A Case Study with The Dark Energy Survey (1805.04034)
A. Farahi, R. R. Gupta, C. Krawiec, A. A. Plazas, R. C. Wolf
May 11, 2018 physics.ed-ph
We present a case study of physicists' and astronomers' attitudes towards education and public outreach (EPO) using 131 survey responses from members of the Dark Energy Survey. We find a disparity between the types of EPO activities scientists deem valuable and those in which they participate. Most respondents are motivated to engage in EPO by a desire to educate the public. Lack of time is the main deterrent to engagement, but a perceived cultural stigma surrounding EPO is also a factor. We explore the value of centralized EPO efforts and conclude with a list of recommendations for increasing scientists' engagement.
Dark Energy Survey Year 1 Results: Cosmological Constraints from Cosmic Shear (1708.01538)
M. A. Troxel, N. MacCrann, J. Zuntz, T. F. Eifler, E. Krause, S. Dodelson, D. Gruen, J. Blazek, O. Friedrich, S. Samuroff, J. Prat, L. F. Secco, C. Davis, A. Ferté, J. DeRose, A. Alarcon, A. Amara, E. Baxter, M. R. Becker, G. M. Bernstein, S. L. Bridle, R. Cawthon, C. Chang, A. Choi, J. De Vicente, A. Drlica-Wagner, J. Elvin-Poole, J. Frieman, M. Gatti, W. G. Hartley, K. Honscheid, B. Hoyle, E. M. Huff, D. Huterer, B. Jain, M. Jarvis, T. Kacprzak, D. Kirk, N. Kokron, C. Krawiec, O. Lahav, A. R. Liddle, J. Peacock, M. M. Rau, A. Refregier, R. P. Rollins, E. Rozo, E. S. Rykoff, C. Sánchez, I. Sevilla-Noarbe, E. Sheldon, A. Stebbins, T. N. Varga, P. Vielzeuf, M. Wang, R. H. Wechsler, B. Yanny, T. M. C. Abbott, F. B. Abdalla, S. Allam, J. Annis, K. Bechtol, A. Benoit-Lévy, E. Bertin, D. Brooks, E. Buckley-Geer, D. L. Burke, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, F. J. Castander, M. Crocce, C. E. Cunha, C. B. D'Andrea, L. N. da Costa, D. L. DePoy, S. Desai, H. T. Diehl, J. P. Dietrich, P. Doel, E. Fernandez, B. Flaugher, P. Fosalba, J. García-Bellido, E. Gaztanaga, D. W. Gerdes, T. Giannantonio, D. A. Goldstein, R. A. Gruendl, J. Gschwend, G. Gutierrez, D. J. James, T. Jeltema, M. W. G. Johnson, M. D. Johnson, S. Kent, K. Kuehn, S. Kuhlmann, N. Kuropatkin, T. S. Li, M. Lima, H. Lin, M. A. G. Maia, M. March, J. L. Marshall, P. Martini, P. Melchior, F. Menanteau, R. Miquel, J. J. Mohr, E. Neilsen, R. C. Nichol, B. Nord, D. Petravick, A. A. Plazas, A. K. Romer, A. Roodman, M. Sako, E. Sanchez, V. Scarpine, R. Schindler, M. Schubnell, M. Smith, R. C. Smith, M. Soares-Santos, F. Sobreira, E. Suchyta, M. E. C. Swanson, G. Tarle, D. Thomas, D. L. Tucker, V. Vikram, A. R. Walker, J. Weller, Y. Zhang
April 30, 2018 astro-ph.CO
We use 26 million galaxies from the Dark Energy Survey (DES) Year 1 shape catalogs over 1321 deg$^2$ of the sky to produce the most significant measurement of cosmic shear in a galaxy survey to date. We constrain cosmological parameters in both the flat $\Lambda$CDM and $w$CDM models, while also varying the neutrino mass density. These results are shown to be robust using two independent shape catalogs, two independent photo-$z$ calibration methods, and two independent analysis pipelines in a blind analysis. We find a 3.5\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.782^{+0.027}_{-0.027}$ at 68\% CL, which is a factor of 2.5 improvement over the fractional constraining power of our DES Science Verification results. In $w$CDM, we find a 4.8\% fractional uncertainty on $\sigma_8(\Omega_m/0.3)^{0.5} = 0.777^{+0.036}_{-0.038}$ and a dark energy equation-of-state $w=-0.95^{+0.33}_{-0.39}$. We find results that are consistent with previous cosmic shear constraints in $\sigma_8$ -- $\Omega_m$, and see no evidence for disagreement of our weak lensing data with data from the CMB. Finally, we find no evidence preferring a $w$CDM model allowing $w\ne -1$. We expect further significant improvements with subsequent years of DES data, which will more than triple the sky coverage of our shape catalogs and double the effective integrated exposure time per galaxy. | CommonCrawl
Formatting Sandbox II: please test stuff here
Spoiler warning: Be aware that this page contains a lot of MathJax, so it will probably need quite a while to load completely.
Users of the old sandbox will have noticed that with the number of answers gathered there, it is very hard on your computer to get past the posts to the answer field. (Even deleting answers would not solve this problem for 10k users.)
Here you have a new playground... have fun.
(Use this for testing stuff instead of bumping random posts to the mainspace a million times.)
Before you delete a post here, please reduce it to one line without MathJax.
Old formatting sandboxes:
Formatting Sandbox I: please test stuff here
discussion editing formatting faq-proposed
orthocresol
Wrong space around stretchy parentheses
$$\begin{align} y&=a\cdot b/(c\cdot d)\tag{ok}\\[6pt] y&=a\cdot b/\left(c\cdot d\right)\tag{?!}\\[6pt] y&=a\cdot b/{\left(c\cdot d\right)}\tag{ok}\\[6pt] y&=a\cdot b/\mathord{\left(c\cdot d\right)}\tag{ok} \end{align}$$
$$\begin{align} y&=\sum\limits_{i = 1}^n a_i\cdot b_i/(c_i^2\cdot d_i^2)\tag{ok}\\[6pt] y&=\sum\limits_{i = 1}^n a_i\cdot b_i/\left(c_i^2\cdot d_i^2\right)\tag{?!}\\[6pt] y&=\sum\limits_{i = 1}^n a_i\cdot b_i/{\left(c_i^2\cdot d_i^2\right)}\tag{ok}\\[6pt] y&=\sum\limits_{i = 1}^n a_i\cdot b_i/\mathord{\left(c_i^2\cdot d_i^2\right)}\tag{ok} \end{align}$$
$$\begin{align} f(x)&=x^2\tag{ok}\\[6pt] f\left(x\right)&=x^2\tag{?!}\\[6pt] f{\left(x\right)}&=x^2\tag{ok}\\[6pt] f\mathord{\left(x\right)}&=x^2\tag{ok} \end{align}$$
$$\begin{align} y&=\log(x)\tag{?}\\[6pt] y&=\log\left(x\right)\tag{?}\\[6pt] y&=\log{\left(x\right)}\tag{?}\\[6pt] y&=\log\mathord{\left(x\right)}\tag{?} \end{align}$$
Loong
$\begingroup$ This is weird here too. But, your example does the same thing with LaTeX which mine doesn't. We probably need a \renewcommand that manipulates the padding. $\endgroup$ – pentavalentcarbon Oct 13 '16 at 14:22
Decision threshold and detection limit
General aspects
$$\textbf{Quantities and symbols}\\ \begin{array}{ll} \hline \text{Symbol}&\text{Name}\\ \hline y&\text{Estimate of the measurand}\\[-3pt] &\text{(e.g. measurement result of the measurand)}\\ u{\left(y\right)}&\text{Standard uncertainty of the measurand}\\[-3pt] &\text{(associated with the measurement result }y)\\ \tilde y&\text{True value of the measurand }\\ \tilde u{\left(\tilde y\right)}&\text{Standard uncertainty of an estimator of the measurand}\\[-3pt] &\text{(as a function of the true value of the measurand }\tilde y)\\ y^*&\text{Decision threshold}\\ y^\#&\text{Detection limit}\\ \hline x&\text{Estimate of the input quantity}\\ \hline \alpha&\text{Probability of the error of the first kind}\\ \beta&\text{Probability of the error of the second kind}\\ k_{1-\alpha}&\text{Quantile of the standardized normal distribution for the probability }\alpha\\ k_{1-\beta}&\text{Quantile of the standardized normal distribution for the probability }\beta\\ \hline \end{array}$$
In general, the result of a measurement $y$ is only an approximation or estimate of the value of the measurand and thus is complete only when accompanied by a statement of the uncertainty $u{\left(y\right)}$ of that estimate. Nevertheless, it is understood that the result of the measurement $y$ is the best estimate of the value of the measurand. The alternative best estimate $\hat y$, which explicitly takes into account the fact that the measurand is non-negative according to ISO 11929 and which differs from the measurement result $y$, is not used in the following examples.
For the provision and numerical calculation of the decision threshold $y^*$ and of the detection limit $y^\#$, the standard uncertainty $\tilde u$ of the measurand is needed as a function $\tilde u{\left(\tilde y\right)}$ of the true value $\tilde y$ of the measurand. This function is to be determined in a way similar to $u{\left(y\right)}$ in accordance with ISO/IEC Guide 98-3. Usually, $\tilde u{\left(\tilde y\right)}$ can be explicitly specified, provided that $\tilde u{\left(\tilde y\right)}$ is given as a function of the primary measurement result $x$, which is taken as input quantity in the model of the evaluation $y{\left(x\right)}$ for the calculation of $y$. In this case, $y$ is to be formally replaced by $\tilde y$ and the equation for $y{\left(x\right)}$ is solved for $x$. The result replaces $x$ in the equation of $u{\left(y\right)}$, which finally yields $\tilde u{\left(\tilde y\right)}$.
The decision threshold $y^*$ is a value of the measurand that allows the conclusion that the physical effect of interest is present if the measurement result $y$ exceeds the decision threshold $y^*$; i.e. a determined measurement result $y$ is only significant for the true value of the measurand to differ from zero $(\tilde y>0)$ if it is larger than the decision threshold $(y>y^*)$. Otherwise, the result cannot be attributed to the physical effect; nevertheless, it cannot be concluded that the physical effect is absent.
If the physical effect is really absent $(\tilde y=0)$, the probability of taking the wrong decision that the effect is present $(\tilde y>0)$ shall not exceed the specified probability $\alpha$ (error of the first kind). The choice of the probability $\alpha$ of the error of the first kind depends on the application. A frequently cited choice is $\alpha=5\ \%=0.05$. The corresponding quantile of the standardized normal distribution for the probability $1-\alpha=0.95$ is $k_{1-\alpha}\approx1.645$.
According to ISO 11929, the equation for the decision threshold $y^*$ is given as
$$y^*=k_{1-\alpha}\cdot\tilde u{\left(0\right)}\tag1$$
where $\tilde u{\left(0\right)}$ is the standard uncertainty $\tilde u{\left(\tilde y\right)}$ of the measurand for a true value of the measurand of $\tilde y=0$.
The detection limit $y^\#$ is the smallest true value of the measurand, for which (by applying the decision rule according to the decision threshold $y^*$) the probability of the wrong assumption that the physical effect of interest is absent (error of the second kind) does not exceed the specified probability $\beta$. A frequently cited choice is $\beta=5\ \%=0.05$. The corresponding quantile of the standardized normal distribution for the probability $1-\beta=0.95$ is $k_{1-\beta}\approx1.645$.
According to ISO 11929, the equation for the detection limit $y^\#$ is given as
$$y^\#=y^*+k_{1-\beta}\cdot\tilde u{\left(y^\#\right)}\tag2$$
where $\tilde u{\left(y^\#\right)}$ is the standard uncertainty $\tilde u{\left(\tilde y\right)}$ of the measurand for a true value of the measurand of $\tilde y=y^\#$.
The comparison of the detection limit with a given guideline value allows a decision on whether or not the measurement procedure satisfies the requirements set forth by the guideline value and is therefore suitable for the intended measurement purpose. The measurement procedure satisfies the requirement if the detection limit is smaller than the guideline value.
The determined measurement result $y$ of the measurand is compared with the decision threshold $y^*$. If $y\gt y^*$, the physical effect quantified by the measurand is recognized as present, and the primary measurement result $y$ and its standard uncertainty $u{\left(y\right)}$ are reported. Otherwise, it is decided that the effect is absent. In this case the value of the detection limit $y^\#$ is reported as $\lt y^\#$.
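As a side note, the general scheme of $\text{(1)}$, $\text{(2)}$ and the reporting rule can be written down in a few lines of code. The following Python sketch is not part of ISO 11929; the function names, the use of SciPy for the quantiles, the fixed-point loop and the tolerance are our own choices, and `u_tilde` stands for whatever application-specific uncertainty function $\tilde u{\left(\tilde y\right)}$ applies (see the examples below).

```python
from scipy.stats import norm  # only used for the quantiles k_{1-alpha}, k_{1-beta}

def characteristic_limits(u_tilde, alpha=0.05, beta=0.05, tol=1e-10, max_iter=1000):
    """Decision threshold y* (Eq. 1) and detection limit y# (Eq. 2)
    for a given uncertainty function u_tilde(y_true)."""
    k1a = norm.ppf(1 - alpha)       # k_{1-alpha}, ~1.645 for alpha = 0.05
    k1b = norm.ppf(1 - beta)        # k_{1-beta},  ~1.645 for beta  = 0.05
    y_star = k1a * u_tilde(0.0)     # Eq. (1): y* = k_{1-alpha} * u~(0)
    y_sharp = 2.0 * y_star          # initial approximation for the iteration
    for _ in range(max_iter):
        y_next = y_star + k1b * u_tilde(y_sharp)   # Eq. (2) as a fixed-point step
        if abs(y_next - y_sharp) < tol:
            return y_star, y_next
        y_sharp = y_next
    return y_star, y_sharp          # fallback if the iteration did not converge

def report(y, u_y, y_star, y_sharp):
    """Reporting rule: quote y with u(y) if y > y*, otherwise report '< y#'."""
    return f"{y:.3f} +/- {u_y:.3f}" if y > y_star else f"< {y_sharp:.3f}"
```

With the $\tilde u{\left(\tilde y\right)}$ of the examples below plugged in as `u_tilde`, this should reproduce the decision thresholds and detection limits listed in the numerical tables.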
Example 1 – net count rate (without calibration factor)
$$\textbf{Quantities and symbols}\\ \begin{array}{ll} \hline \text{Symbol}&\text{Name}\\ \hline n_\mathrm g&\text{Number of counted pulses of the gross effect}\\ t_\mathrm g&\text{Measurement duration of the measurement of the gross effect}\\ r_\mathrm g&\text{Estimate of the gross count rate}\\ \hline n_0&\text{Number of counted pulses of the background effect}\\ t_0&\text{Measurement duration of the measurement of the background effect}\\ r_0&\text{Estimate of the background count rate}\\ \hline r_\mathrm n&\text{Estimate of the net count rate}\\ u{\left(r\right)}&\text{Standard uncertainty of the net count rate}\\[-3pt] &\text{(associated with the measurement result }r)\\ \tilde r_\mathrm n&\text{True value of the net count rate}\\ \tilde u{\left(\tilde r\right)}&\text{Standard uncertainty of an estimator of the net count rate}\\[-3pt] &\text{(as a function of the true value of the net count rate }\tilde r)\\ r_\mathrm n^*&\text{Decision threshold of the net count rate}\\ r_\mathrm n^\#&\text{Detection limit of the net count rate}\\ \hline \alpha&\text{Probability of the error of the first kind}\\ \beta&\text{Probability of the error of the second kind}\\ k_{1-\alpha}&\text{Quantile of the standardized normal distribution for the probability }\alpha\\ k_{1-\beta}&\text{Quantile of the standardized normal distribution for the probability }\beta\\ \hline \end{array}$$
This example relates to a sample of radioactive material. In particular, the measurand is the net count rate $r_\mathrm n$ of the sample. It is determined from counting the gross effect and the background effect with preselection of time.
The estimate of the gross count rate is given by
$$r_\mathrm g=\frac{n_\mathrm g}{t_\mathrm g}\tag3$$
and the estimate of the background count rate is given by
$$r_0=\frac{n_0}{t_0}\tag4$$
Assuming Poisson statistics for the gross counts
$$\begin{align} u{\left(n_\mathrm g\right)}&=\sqrt{n_\mathrm g}\tag5\\[6pt] u^2{\left(n_\mathrm g\right)}&=n_\mathrm g\tag6\\[6pt] u{\left(r_\mathrm g\right)}&=\frac{u{\left(n_\mathrm g\right)}}{t_\mathrm g}\tag7\\[6pt] &=\frac{\sqrt{n_\mathrm g}}{t_\mathrm g}\tag8\\[6pt] u^2{\left(r_\mathrm g\right)}&=\frac{u^2{\left(n_\mathrm g\right)}}{t_\mathrm g^2}\tag9\\[6pt] &=\frac{n_\mathrm g}{t_\mathrm g^2}=\frac{r_\mathrm g}{t_\mathrm g}\tag{10} \end{align}$$
as well as for the background counts
$$\begin{align} u{\left(n_0\right)}&=\sqrt{n_0}\tag{11}\\[6pt] u^2{\left(n_0\right)}&=n_0\tag{12}\\[6pt] u{\left(r_0\right)}&=\frac{u{\left(n_0\right)}}{t_0}\tag{13}\\[6pt] &=\frac{\sqrt{n_0}}{t_0}\tag{14}\\[6pt] u^2{\left(r_0\right)}&=\frac{u^2{\left(n_0\right)}}{t_0^2}\tag{15}\\[6pt] &=\frac{n_0}{t_0^2}=\frac{r_0}{t_0}\tag{16} \end{align}$$
The standard uncertainties $u{\left(t_\mathrm g\right)}$ and $u{\left(t_0\right)}$ of the measurement durations $t_\mathrm g$ and $t_0$ are neglected since the measurement duration can be measured far more exactly than all the other quantities involved and can thus be taken as a constant.
The model of evaluation for the net count rate $r_\mathrm n$ is
$$\begin{align} r_\mathrm n&=r_\mathrm g-r_0\tag{17}\\[6pt] &=\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\tag{18}\\[6pt] \end{align}$$
The corresponding uncertainty is
$$\begin{align} u^2{\left(r_\mathrm n\right)}&=u^2{\left(r_\mathrm g\right)}+u^2{\left(r_0\right)}\tag{19}\\[6pt] &=\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}=\frac{r_\mathrm g}{t_\mathrm g}+\frac{r_0}{t_0}\tag{20}\\[6pt] u{\left(r_\mathrm n\right)}&=\sqrt{\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}}=\sqrt{\frac{r_\mathrm g}{t_\mathrm g}+\frac{r_0}{t_0}}\tag{21} \end{align}$$
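For illustration only, $\text{(18)}$ and $\text{(21)}$ translate directly into code. The following lines are a minimal sketch (the function name is our own; the check values are taken from the first row of the numerical examples further below):

```python
from math import sqrt

def net_count_rate(n_g, t_g, n_0, t_0):
    """Net count rate r_n (Eq. 18) and its standard uncertainty u(r_n) (Eq. 21)."""
    r_n = n_g / t_g - n_0 / t_0
    u_r_n = sqrt(n_g / t_g**2 + n_0 / t_0**2)
    return r_n, u_r_n

# First row of the numerical examples below: expected r_n = 0.833 s^-1, u(r_n) = 0.264 s^-1
print(net_count_rate(n_g=150, t_g=60, n_0=100, t_0=60))
```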
Standard uncertainty of the estimator as a function of the true value of the measurand
According to $\text{(18)}$, the equation for the true value $\tilde r_\mathrm n$ of the net count rate is expected as
$$\begin{alignat}{2} &&\tilde r_\mathrm n&=\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\tag{22}\\[6pt] &\Leftrightarrow\quad&n_\mathrm g&=\left(\tilde r_\mathrm n+\frac{n_0}{t_0}\right)\cdot t_\mathrm g\tag{23} \end{alignat}$$
and according to $\text{(19)}$ and $\text{(20)}$, the corresponding equation for the standard uncertainty $\tilde u$ of the true value $\tilde r_\mathrm n$ of the net count rate is expected as
$$\begin{align} \tilde u^2{\left(\tilde r_\mathrm n\right)}&=u^2{\left(r_\mathrm g\right)}+u^2{\left(r_0\right)}\tag{24}\\[6pt] &=\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\tag{25}\\[6pt] \end{align}$$
Inserting $\text{(23)}$ into $\text{(25)}$ yields:
$$\begin{align} \tilde u^2{\left(\tilde r_\mathrm n\right)}&=\frac{\left(\tilde r_\mathrm n+\frac{n_0}{t_0}\right)\cdot t_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\tag{26}\\[6pt] &=\frac{\tilde r_\mathrm n}{t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)\tag{27}\\[6pt] \tilde u{\left(\tilde r_\mathrm n\right)}&=\sqrt{\frac{\tilde r_\mathrm n}{t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)}\tag{28} \end{align}$$
Decision threshold
According to $\text{(1)}$, the equation for the decision threshold of the net count rate is:
$$r_\mathrm n^*=k_{1-\alpha}\cdot\tilde u{\left(0\right)}\tag{29}$$
Inserting $\text{(28)}$ with $\tilde r_\mathrm n=0$ yields:
$$\begin{align} r_\mathrm n^*&=k_{1-\alpha}\cdot\sqrt{\frac0{t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)}\tag{30}\\[6pt] &=k_{1-\alpha}\cdot\sqrt{\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)}\tag{31} \end{align}$$
According to $\text{(2)}$, the equation for the detection limit of the net count rate is:
$$r_\mathrm n^\#=r_\mathrm n^*+k_{1-\beta}\cdot\tilde u{\left(r_\mathrm n^\#\right)}\tag{32}$$
Inserting $\text{(28)}$ with $\tilde r_\mathrm n=r_\mathrm n^\#$ yields:
$$r_\mathrm n^\#=r_\mathrm n^*+k_{1-\beta}\cdot\sqrt{\frac{r_\mathrm n^\#}{t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)}\tag{33}$$
Since, according to $\text{(32)}$, $r_\mathrm n^\#$ depends on $\tilde u{\left(r_\mathrm n^\#\right)}$ and, according to $\text{(28)}$, $\tilde u{\left(r_\mathrm n^\#\right)}$ depends on $r_\mathrm n^\#$, the detection limit $r_\mathrm n^\#$ cannot be directly calculated using $\text{(33)}$. In principle, such equations can be solved for the detection limit; however, the procedure can be elaborate and the result can be unwieldy. In this case, solving $\text{(33)}$ for $r_\mathrm n^\#$ yields:
$$r_\mathrm n^\#=r_\mathrm n^*+\frac{k_{1-\beta}\cdot\left(k_{1-\beta}\cdot t_0+\sqrt{4\cdot r_\mathrm n^*\cdot t_0^2\cdot t_\mathrm g+k_{1-\beta}^2\cdot t_0^2+4\cdot n_0\cdot t_0\cdot t_\mathrm g+4\cdot n_0\cdot t_\mathrm g^2}\right)}{2\cdot t_\mathrm g\cdot t_0}\tag{34}$$
In practice, however, this step is usually unnecessary since typical spreadsheet software can automatically calculate formulas with circular references (i.e. when a formula refers back to its own cell) such as $\text{(33)}$ by using iteration (i.e. the repeated recalculation of a worksheet until a specific numeric condition is met). In Microsoft Office Excel, for example, iterative calculations are turned off by default and have to be enabled in the calculation options section. If necessary, $r_\mathrm n^\#\approx2\cdot r_\mathrm n^*$ may be used as initial approximation for the iteration.
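In code, the iteration described above is a simple fixed-point loop. The sketch below is one possible implementation, not taken from ISO 11929; it assumes $k_{1-\alpha}=k_{1-\beta}\approx1.645$ and uses the first row of the numerical examples below as a check.

```python
from math import sqrt

def limits_net_count_rate(n_0, t_0, t_g, k=1.645, tol=1e-10):
    """Decision threshold (Eq. 31) and detection limit (Eq. 33) of the net count rate."""
    bkg = n_0 / t_0 * (1 / t_g + 1 / t_0)
    r_star = k * sqrt(bkg)                        # Eq. (31)
    r_sharp = 2.0 * r_star                        # initial approximation
    while True:
        r_next = r_star + k * sqrt(r_sharp / t_g + bkg)   # Eq. (33) as a fixed-point step
        if abs(r_next - r_sharp) < tol:
            return r_star, r_next
        r_sharp = r_next

# First row of the table below: expected r_n* = 0.388 s^-1 and r_n# = 0.820 s^-1
print(limits_net_count_rate(n_0=100, t_0=60, t_g=60))
```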
$$\textbf{Numerical examples}\\ \begin{array}{llll|llll|l} \hline n_\mathrm g&t_\mathrm g&n_0&t_0&r_\mathrm n&u{\left(r_\mathrm n\right)}&r_\mathrm n^*&r_\mathrm n^\#&\text{Reported}\ r_\mathrm n\\ &\text{in}\ \mathrm s&&\text{in}\ \mathrm s&\text{in}\ \mathrm s^{-1}&\text{in}\ \mathrm s^{-1}&\text{in}\ \mathrm s^{-1}&\text{in}\ \mathrm s^{-1}&\text{in}\ \mathrm s^{-1}\\ \hline \hphantom{0}150&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&0.833&0.264&0.388&0.820&0.83\pm0.26\\ \hphantom{0}140&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&0.667&0.258&0.388&0.820&0.67\pm0.26\\ \hphantom{0}130&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&0.500&0.253&0.388&0.820&0.50\pm0.25\\ \hphantom{0}120&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&0.333&0.247&0.388&0.820&\lt0.82\\ \hphantom{0}110&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&0.167&0.242&0.388&0.820&\lt0.82\\ \hline \hphantom{0}150&\hphantom{0}60&6000&3600&0.833&0.205&0.276&0.598&0.83\pm0.21\\ \hphantom{0}140&\hphantom{0}60&6000&3600&0.667&0.198&0.276&0.598&0.67\pm0.20\\ \hphantom{0}130&\hphantom{0}60&6000&3600&0.500&0.191&0.276&0.598&0.50\pm0.19\\ \hphantom{0}120&\hphantom{0}60&6000&3600&0.333&0.184&0.276&0.598&0.33\pm0.18\\ \hphantom{0}110&\hphantom{0}60&6000&3600&0.167&0.176&0.276&0.598&\lt0.60\\ \hline 1500&600&\hphantom{0}100&\hphantom{00}60&0.833&0.179&0.288&0.580&0.83\pm0.18\\ 1400&600&\hphantom{0}100&\hphantom{00}60&0.667&0.178&0.288&0.580&0.67\pm0.18\\ 1300&600&\hphantom{0}100&\hphantom{00}60&0.500&0.177&0.288&0.580&0.50\pm0.18\\ 1200&600&\hphantom{0}100&\hphantom{00}60&0.333&0.176&0.288&0.580&0.33\pm0.18\\ 1100&600&\hphantom{0}100&\hphantom{00}60&0.167&0.176&0.288&0.580&\lt0.58\\ \hline \end{array}$$
Example 2 – activity (with calibration factor)
$$\textbf{Quantities and symbols}\\ \begin{array}{ll} \hline \text{Symbol}&\text{Name}\\ \hline n_\mathrm g&\text{Number of counted pulses of the gross effect}\\ t_\mathrm g&\text{Measurement duration of the measurement of the gross effect}\\ r_\mathrm g&\text{Estimate of the gross count rate}\\ \hline n_0&\text{Number of counted pulses of the background effect}\\ t_0&\text{Measurement duration of the measurement of the background effect}\\ r_0&\text{Estimate of the background count rate}\\ \hline r_\mathrm n&\text{Estimate of the net count rate}\\ \tilde r_\mathrm n&\text{True value of the net count rate}\\ r_\mathrm n^*&\text{Decision threshold of the net count rate}\\ r_\mathrm n^\#&\text{Detection limit of the net count rate}\\ \hline \varphi&\text{Calibration factor}\\ u{\left(\varphi\right)}&\text{Standard uncertainty of the calibration factor}\\ \hline A&\text{Estimate of the activity}\\ u{\left(A\right)}&\text{Standard uncertainty of the activity}\\[-3pt] &\text{(associated with the measurement result }A)\\ \tilde A&\text{True value of the activity}\\ \tilde u{\left(\tilde A\right)}&\text{Standard uncertainty of an estimator of the activity}\\[-3pt] &\text{(as a function of the true value of the activity }\tilde A)\\ A^*&\text{Decision threshold of the activity}\\ A^\#&\text{Detection limit of the activity}\\ \hline \alpha&\text{Probability of the error of the first kind}\\ \beta&\text{Probability of the error of the second kind}\\ k_{1-\alpha}&\text{Quantile of the standardized normal distribution for the probability }\alpha\\ k_{1-\beta}&\text{Quantile of the standardized normal distribution for the probability }\beta\\ \hline \end{array}$$
This example is an expansion of the first example (see above). Again, it relates to a sample of radioactive material. In this case, the measurand is the activity $A$ (in $\mathrm{Bq}$) of the sample. It is determined from the net count rate $r_\mathrm n$ by multiplication by a calibration factor $\varphi$. This calibration factor may include various calibration, correction or influence quantities, or conversion factors that apply to the sample preparation and the actual measurement procedure. The value of the calibration factor may also be obtained from the measurement of the net count rate for a calibration source with a known activity.
$$A=\varphi\cdot r_\mathrm n\tag{35}$$
It is usually convenient to calculate the corresponding standard uncertainty $u{\left(A\right)}$ based on the propagation of the relative standard uncertainties:
$$\begin{align} \left(\frac{u{\left(A\right)}}{A}\right)^2&=\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\left(\frac{u{\left(r_\mathrm n\right)}}{r_\mathrm n}\right)^2\tag{36}\\[6pt] \frac{u{\left(A\right)}}{A}&=\sqrt{\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\left(\frac{u{\left(r_\mathrm n\right)}}{r_\mathrm n}\right)^2}\tag{37}\\[6pt] u{\left(A\right)}&=\sqrt{\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\left(\frac{u{\left(r_\mathrm n\right)}}{r_\mathrm n}\right)^2}\cdot A\tag{38}\\[6pt] \end{align}$$
Inserting $\text{(35)}$ yields:
$$\begin{align} u{\left(A\right)}&=\sqrt{\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\left(\frac{u{\left(r_\mathrm n\right)}}{r_\mathrm n}\right)^2}\cdot\varphi\cdot r_\mathrm n\tag{39}\\[6pt] &=\sqrt{r_\mathrm n^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot u^2{\left(r_\mathrm n\right)}}\tag{40}\\[6pt] u^2{\left(A\right)}&=r_\mathrm n^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot u^2{\left(r_\mathrm n\right)}\tag{41} \end{align}$$
Note that the same result may be obtained using the general equation for the law of propagation of uncertainty as described in ISO/IEC Guide 98-3:
$$\begin{align} u^2{\left(A\right)}&=\left(\frac{\partial A}{\partial\varphi}\right)^2 u^2{\left(\varphi\right)}+\left(\frac{\partial A}{\partial r_\mathrm n}\right)^2 u^2{\left(r_\mathrm n\right)}\tag{42}\\[6pt] &=r_\mathrm n^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot u^2{\left(r_\mathrm n\right)}\tag{43} \end{align}$$
The general equation may be useful in case of complicated expressions for the standard uncertainty. Note that, when the nonlinearity of the considered function is significant, it might be necessary to include higher-order terms in the Taylor series expansion in the expression for its uncertainty.
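The partial-derivative form $\text{(42)}$–$\text{(43)}$ can also be reproduced symbolically, which is a convenient check once the expressions become longer. The following SymPy fragment is only an illustration of that remark; the variable names are our own.

```python
import sympy as sp

# symbols for the calibration factor, the net count rate and their uncertainties
phi, r_n, u_phi, u_r_n = sp.symbols('varphi r_n u_varphi u_r_n', positive=True)

A = phi * r_n                                    # Eq. (35)
u2_A = (sp.diff(A, phi))**2 * u_phi**2 \
     + (sp.diff(A, r_n))**2 * u_r_n**2           # Eq. (42), law of propagation of uncertainty

# prints r_n**2 * u_varphi**2 + varphi**2 * u_r_n**2 (up to term ordering), i.e. Eq. (43)
print(sp.simplify(u2_A))
```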
Inserting $\text{(17)}$ or $\text{(18)}$ into $\text{(35)}$ yields the complete model of evaluation:
$$\begin{align} A&=\varphi\cdot\left(r_\mathrm g-r_0\right)\tag{44}\\[6pt] &=\varphi\cdot\left(\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)\tag{45} \end{align}$$
Inserting $\text{(18)}$ and $\text{(20)}$ in $\text{(41)}$ or $\text{(43)}$ yields the corresponding uncertainty:
$$\begin{align} u^2{\left(A\right)}&=\left(\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot\left(\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\right)\tag{46}\\[6pt] u{\left(A\right)}&=\sqrt{\left(\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot\left(\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\right)}\tag{47} \end{align}$$
According to $\text{(45)}$, the equation for the true value $\tilde A$ of the activity is expected as
$$\begin{alignat}{2} &&\tilde A&=\varphi\cdot\left(\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)\tag{48}\\[6pt] &\Leftrightarrow\quad&n_\mathrm g&=\left(\frac{\tilde A}{\varphi}+\frac{n_0}{t_0}\right)\cdot t_\mathrm g\tag{49} \end{alignat}$$
and according to $\text{(46)}$, the corresponding equation for the standard uncertainty $\tilde u$ of the true value $\tilde A$ of the activity is expected as
$$\tilde u^2{\left(\tilde A\right)}=\left(\frac{n_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot\left(\frac{n_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\right)\tag{50}\\[6pt]$$
Inserting $\text{(49)}$ into $\text{(50)}$ yields:
$$\begin{align} \tilde u^2{\left(\tilde A\right)}&=\left(\frac{\left(\frac{\tilde A}{\varphi}+\frac{n_0}{t_0}\right)\cdot t_\mathrm g}{t_\mathrm g}-\frac{n_0}{t_0}\right)^2\cdot u^2{\left(\varphi\right)}+\varphi^2\cdot\left(\frac{\left(\frac{\tilde A}{\varphi}+\frac{n_0}{t_0}\right)\cdot t_\mathrm g}{t_\mathrm g^2}+\frac{n_0}{t_0^2}\right)\tag{51}\\[6pt] &=\tilde A^2\cdot\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\varphi^2\cdot\left(\frac{\tilde A}{\varphi\cdot t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)\right)\tag{52}\\[6pt] \tilde u{\left(\tilde A\right)}&=\sqrt{\tilde A^2\cdot\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\varphi^2\cdot\left(\frac{\tilde A}{\varphi\cdot t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)\right)}\tag{53} \end{align}$$
According to $\text{(1)}$, the equation for the decision threshold of the activity is:
$$A^*=k_{1-\alpha}\cdot\tilde u{\left(0\right)}\tag{54}$$
Inserting $\text{(53)}$ with $\tilde A=0$ yields:
$$\begin{align} A^*&=k_{1-\alpha}\cdot\sqrt{0^2\cdot\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\varphi^2\cdot\left(\frac0{\varphi\cdot t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)\right)}\tag{55}\\[6pt] &=k_{1-\alpha}\cdot\varphi\cdot\sqrt{\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)}\tag{56} \end{align}$$
Note that the decision threshold $A^*$ of the activity $A$ is not affected by the uncertainty $u{\left(\varphi\right)}$ of the calibration factor $\varphi$.
A comparison of $\text{(31)}$ and $\text{(56)}$ shows that the decision threshold $A^*$ of the activity $A$ is equal to the decision threshold $r_\mathrm n^*$ of the net count rate $r_\mathrm n$ multiplied by the calibration factor $\varphi$:
$$A^*=\varphi\cdot r_\mathrm n^*\tag{57}$$
which is similar to the initial model that is expressed as $\text{(35)}$. However, the analogous simple relationship does not apply to the following detection limit.
According to $\text{(2)}$, the equation for the detection limit of the activity is:
$$A^\#=A^*+k_{1-\beta}\cdot\tilde u{\left(A^\#\right)}\tag{58}$$
Inserting $\text{(53)}$ with $\tilde A=A^\#$ yields:
$$A^\#=A^*+k_{1-\beta}\cdot\sqrt{{A^\#}^2\cdot\left(\frac{u{\left(\varphi\right)}}{\varphi}\right)^2+\varphi^2\cdot\left(\frac{A^\#}{\varphi\cdot t_\mathrm g}+\frac{n_0}{t_0}\cdot\left(\frac1{t_\mathrm g}+\frac1{t_0}\right)\right)}\tag{59}$$
Similar to the case of $\text{(33)}$, according to $\text{(58)}$, $A^\#$ depends on $\tilde u{\left(A^\#\right)}$ and, according to $\text{(53)}$, $\tilde u{\left(A^\#\right)}$ depends on $A^\#$. Therefore, $A^\#$ cannot be directly calculated using $\text{(59)}$. Again, it is usually unnecessary to solve $\text{(59)}$ for $A^\#$ since typical spreadsheet software can automatically calculate formulas with circular references by using iteration.
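The same fixed-point approach carries over to the activity. The sketch below is one way to do it (again a sketch, not part of ISO 11929, with $k_{1-\alpha}=k_{1-\beta}\approx1.645$ assumed); the check values are the first row of the table that follows.

```python
from math import sqrt

def limits_activity(n_0, t_0, t_g, phi, u_phi, k=1.645, tol=1e-10):
    """Decision threshold (Eq. 56) and detection limit (Eq. 59) of the activity."""
    bkg = n_0 / t_0 * (1 / t_g + 1 / t_0)
    A_star = k * phi * sqrt(bkg)                  # Eq. (56)
    A_sharp = 2.0 * A_star                        # initial approximation
    while True:
        A_next = A_star + k * sqrt(A_sharp**2 * (u_phi / phi)**2
                                   + phi**2 * (A_sharp / (phi * t_g) + bkg))  # Eq. (59)
        if abs(A_next - A_sharp) < tol:
            return A_star, A_next
        A_sharp = A_next

# First row of the table below: expected A* = 1.551 Bq and A# = 3.304 Bq
print(limits_activity(n_0=100, t_0=60, t_g=60, phi=4.0, u_phi=0.2))
```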
$$\textbf{Numerical examples}\\ \begin{array}{llllll|llll|l} \hline n_\mathrm g&t_\mathrm g&n_0&t_0&\varphi&u{\left(\varphi\right)}&A&u{\left(A\right)}&A^*&A^\#&\text{Reported}\ A\\ &\text{in}\ \mathrm s&&\text{in}\ \mathrm s&\text{in}\ \mathrm{Bq\ s}&\text{in}\ \mathrm{Bq\ s}&\text{in}\ \mathrm{Bq}&\text{in}\ \mathrm{Bq}&\text{in}\ \mathrm{Bq}&\text{in}\ \mathrm{Bq}&\text{in}\ \mathrm{Bq}\\ \hline \hphantom{0}150&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&4.0&0.2&3.333&1.067&1.551&3.304&3.3\pm1.1\\ \hphantom{0}140&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&4.0&0.2&2.667&1.041&1.551&3.304&2.7\pm1.0\\ \hphantom{0}130&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&4.0&0.2&2.000&1.016&1.551&3.304&2.0\pm1.0\\ \hphantom{0}120&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&4.0&0.2&1.333&0.991&1.551&3.304&\lt3.3\\ \hphantom{0}110&\hphantom{0}60&\hphantom{0}100&\hphantom{00}60&4.0&0.2&0.667&0.967&1.551&3.304&\lt3.3\\ \hline \hphantom{0}150&\hphantom{0}60&6000&3600&4.0&0.2&3.333&0.838&1.106&2.408&3.3\pm0.8\\ \hphantom{0}140&\hphantom{0}60&6000&3600&4.0&0.2&2.667&0.805&1.106&2.408&2.7\pm0.8\\ \hphantom{0}130&\hphantom{0}60&6000&3600&4.0&0.2&2.000&0.771&1.106&2.408&2.0\pm0.8\\ \hphantom{0}120&\hphantom{0}60&6000&3600&4.0&0.2&1.333&0.738&1.106&2.408&1.3\pm0.8\\ \hphantom{0}110&\hphantom{0}60&6000&3600&4.0&0.2&0.667&0.705&1.106&2.408&\lt2.4\\ \hline 1500&600&\hphantom{0}100&\hphantom{00}60&4.0&0.2&3.333&0.734&1.150&2.334&3.3\pm0.7\\ 1400&600&\hphantom{0}100&\hphantom{00}60&4.0&0.2&2.667&0.724&1.150&2.334&2.7\pm0.7\\ 1300&600&\hphantom{0}100&\hphantom{00}60&4.0&0.2&2.000&0.716&1.150&2.334&2.0\pm0.7\\ 1200&600&\hphantom{0}100&\hphantom{00}60&4.0&0.2&1.333&0.707&1.150&2.334&1.3\pm0.7\\ 1100&600&\hphantom{0}100&\hphantom{00}60&4.0&0.2&0.667&0.703&1.150&2.334&\lt2.3\\ \hline \end{array}$$
ISO 11929:2010 Determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measurements of ionizing radiation – Fundamentals and application
ISO/IEC Guide 98-3:2008 Uncertainty of measurement Part 3: Guide to the expression of uncertainty in measurement (GUM:1995)
JCGM 100:2008 Evaluation of measurement data – Guide to the expression of uncertainty in measurement
ISO/IEC Guide 99:2007 International vocabulary of metrology – Basic and general concepts and associated terms (VIM)
JCGM 200:2012 International Vocabulary of Metrology – Basic and General Concepts and Associated Terms (VIM 3rd edition)
ISO 80000-1:2009 Quantities and units – Part 1: General
$\begingroup$ I have looked into it, but could not find a valid way how to break these long names in the first column. I believe there is no handling of \parboxes implemented. Right now it just wouldn't fit into the margins and the units are swallowed by the right menu... $\endgroup$ – Martin - マーチン♦ Aug 3 '16 at 4:31
$\begingroup$ Loong, are you bored? $\endgroup$ – M.A.R. ಠ_ಠ Aug 7 '16 at 20:14
Image size test for small mobile devices
1) width and height
2) width
(test results)
Normal resolution (6.25 KB):
2 subpixels per pixel (10.74 KB):
2 or 3 subpixels per pixel (34.56 KB):
Comparison of font size
(use of MathJax font for function graphs and similar images)
Image (png):
MathJax: $$0\quad1\quad2\quad3\quad4\quad5\quad6\quad7\quad8\quad9\quad10$$ $$x\ \text{axis title}$$
\left< \right> outputs the same thing as \left\langle \right\rangle and is much easier to type!
\left< test^2 \middle| test^3 \right>
$\left< test^2 \middle| test^3 \right>$
If $\text{MathJax} \implies \LaTeX$ this wouldn't be such a problem...
\left\langle \psi \middle| \hat{A} \middle| \psi \right\rangle
$$\Large \left\langle \psi \middle| \hat{A} \middle| \psi \right\rangle$$
\left\langle \chi_{\mu} | \mathbf{rr}^{T} | \chi_{\nu} \right\rangle
$$\Large \left\langle \chi_{\mu} | \mathbf{rr}^{T} | \chi_{\nu} \right\rangle$$
\left\langle \chi_{\mu} \middle| \mathbf{rr}^{T} \middle| \chi_{\nu} \right\rangle
$$\Large \left\langle \chi_{\mu} \middle| \mathbf{rr}^{T} \middle| \chi_{\nu} \right\rangle$$
\left\langle \chi_{\mu} \left| \mathbf{rr}^{T} \right| \chi_{\nu} \right\rangle
$$\Large \left\langle \chi_{\mu} \left| \mathbf{rr}^{T} \right| \chi_{\nu} \right\rangle$$
\left\langle \, \chi_{\mu} \middle| \left. \! \mathbf{rr}^{T} \right| \chi_{\nu} \! \right\rangle
$$\Large \left\langle \, \chi_{\mu} \middle| \left. \! \mathbf{rr}^{T} \right| \chi_{\nu} \! \right\rangle$$
ಠ_ಠ
pentavalentcarbon
$\begingroup$ $$\Huge \color{\red}{\text{ಠ_ಠ}}$$ $\endgroup$ – orthocresol♦ Nov 4 '16 at 14:05
Nested environments for the vertical alignment inside a table
$$ \begin{array}{clr} \hline \text{Reaction} & E^\circ~(\pu{V}) \\ \hline \begin{align} \ce{Fe^3+ + e- &-> Fe^2+} \\ \ce{Cu^2+ + 2e- &-> Cu} \\ \ce{Fe^3+ + 3e- &-> Fe} \\ \ce{Fe^2+ + 2e- &-> Fe} \end{align} & \begin{array}{r} +0.77 \\ +0.34 \\ -0.04 \\ -0.41 \end{array} \\ \hline \end{array} $$
Polymer bond through bracket
$$\require{enclose}\ce{\enclose{horizontalstrike}{(}HNCH2CONHCH2CH2NHCO\enclose{horizontalstrike}{)}}$$
edited Apr 6 at 12:06
andselisk
Smallcaps experiments
D-Glucose and L-rhamnose.
(plain text; incorrect normal size capital D/L)
(<small>D/L</small> supported already or not?)
ᴅ-Glucose and ʟ-rhamnose.
(U+1D05/U+029F LATIN LETTER SMALL CAPITAL D/L; potentially problematic)
(<span style="font-variant: small-caps">d/l</span> unsupported (ever?), problematic – plaintext/non-functional rendering has lowercase d/l)
$\ce{\small\text{D}\normalsize\text{-Glucose}}$ and $\ce{\small\text{L}\normalsize\text{-rhamnose}}$.
($\ce{\small\text{D/L}\normalsize}$)
mykhal
Comment test sandbox ↓
$\begingroup$ Keyboard <kbd> test: <kbd>Ctrl</kbd>+<kbd>C</kbd> <kbd>Ctrl</kbd>+<kbd>V</kbd> $\endgroup$ – mykhal Oct 8 '18 at 17:30
$\begingroup$ [ tour ] teplate test: tour $\endgroup$ – mykhal Nov 29 '18 at 9:42
$\begingroup$ italic test (1*R*)-, (1<i>R</i>)-, (1_R_)-, :( $\endgroup$ – mykhal Jan 4 at 17:40
$\begingroup$ italic ................ (1_R_) $\endgroup$ – mykhal Jan 13 at 14:28
$\begingroup$ (2*R*)- ............. Fooboo Foo_boo_ (1*R*,2*S*)- $\endgroup$ – mykhal Jan 14 at 17:28
$\begingroup$ (4aR,8aR)-decahydroquinoline $\endgroup$ – mykhal Jan 15 at 13:07
To find the effective mass we have to find the total energy $$E=\int_m\frac12u^2\,\mathrm dm$$ where $E$ is the total energy and $\mathrm dm=\left(\frac{\mathrm dy}{L}\right)m$, which gives us
$$E=\int_0^L\frac12u^2\left(\frac{\mathrm dy}L\right)m$$ We assume that the velocity in the spring varies linearly with $y$, giving $u=\frac{vy}L$, which on substituting gives us
$$E=\frac12\frac mL\int_0^L\left(\frac{vy}L\right)^2\,\mathrm dy$$ Since $\int_0^L y^2\,\mathrm dy=\frac{L^3}3$, evaluating the integral gives
$$E=\frac12\frac m3v^2$$
edited Jun 16 at 7:36
Advil Sell
HBr, and HCl are strong acids.
answered Jun 16 at 7:38
$\begingroup$ This answer actually was "HI, HBr, and HCl are strong acids." However, salutations (e.g. "Hi") are automatically removed, which leaves "HBr, and HCl are strong acids." $\endgroup$ – Loong♦ Jun 16 at 7:41
Here are some test GIFs and a picture.
edited Jul 4 at 23:42
Ed V
| CommonCrawl
Contraction (operator theory)
contracting operator, contractive operator, compression
A bounded linear mapping $T$ of a Hilbert space $H$ into a Hilbert space $H _ { 1 }$ with $\| T \| \leq 1$. For $H = H _ { 1 }$, a contractive operator $T$ is called completely non-unitary if it is not a unitary operator on any $T$-reducing subspace different from $\{ 0 \}$. Such are, for example, the one-sided shifts (in contrast to the two-sided shifts, which are unitary). Associated with each contractive operator $T$ on $H$ there is a unique orthogonal decomposition, $H = H _ { 0 } \oplus H _ { 1 }$, into $T$-reducing subspaces such that $T _ { 0 } = T | _ { H _ { 0 } }$ is unitary and $T _ { 1 } = T | _ { H _ { 1 } }$ is completely non-unitary. $T = T _ { 0 } \oplus T _ { 1 }$ is called the canonical decomposition of $T$.
A dilation of a given contractive operator acting on $H$ is a bounded operator $B$ acting on some large Hilbert space $K \supset H$ such that $T ^ { n } = P B ^ { n }$, $n = 1,2 , \dots,$ where $P$ is the orthogonal projection of $K$ onto $H$. Every contractive operator in a Hilbert space $H$ has a unitary dilation $U$ on a space $K \supset H$, which, moreover, is minimal in the sense that $K$ is the closed linear span of $\{ U ^ { n } H \} _ { n = - \infty } ^ { + \infty }$ (the Szökefalvi-Nagy theorem). Minimal unitary dilations and functions of them, defined via spectral theory, allow one to construct a functional calculus for contractive operators. This has been done essentially for bounded analytic functions in the open unit disc $D$ (the Hardy class $H ^ { \infty }$). A completely non-unitary contractive operator $T$ belongs, by definition, to the class $C _ { 0 }$ if there is a function $u \in H ^ { \infty }$, $u ( \lambda ) \not \equiv 0$, such that $u ( T ) = 0$. The class $C _ { 0 }$ is contained in the class $C_{00}$ of contractive operators $T$ for which $T ^ { n } \rightarrow 0$, $T ^ { * n } \rightarrow 0$ as $n \rightarrow \infty$. For every contractive operator of class $C _ { 0 }$ there is the so-called minimal function $m _ { T } ( \lambda )$ (that is, an inner function $u \in H ^ { \infty }$, $| u ( \lambda ) | \leq 1$ in $D$, $| u ( e ^ { i t } ) | = 1$ almost-everywhere on the boundary of $D$) such that $m _ { T } ( T ) = 0$ and $m _ { T } ( \lambda )$ is a divisor of all other inner functions with the same property. The set of zeros of the minimal function $m _ { T } ( \lambda )$ of a contractive operator $T$ in $D$, together with the complement in the unit circle of the union of the arcs along which $m _ { T } ( \lambda )$ can be analytically continued, coincides with the spectrum $\sigma ( T )$. The notion of a minimal function of a contractive operator $T$ of class $C _ { 0 }$ allows one to extend the functional calculus for this class of contractive operators to certain meromorphic functions in $D$.
The theorem on unitary dilations has been obtained not only for individual contractive operators but also for discrete, $\{ T ^ { n } \}$, $n = 0,1 , \ldots,$ and continuous, $\{ T ( s ) \}$, $0 \leq s \leq \infty$, semi-groups of contractive operators.
As for dissipative operators (cf. Dissipative operator), also for contractive operators a theory of characteristic operator-valued functions has been constructed and, on the basis of this, also a functional model, which allows one to study the structure of contractive operators and the relations between the spectrum, the minimal function and the characteristic function (see [1]). By the Cayley transformation
\begin{equation*} A = ( I + T ) ( I - T ) ^ { - 1 } , \quad 1 \notin \sigma _ { p } ( T ), \end{equation*}
a contractive operator $T$ is related to a maximal accretive operator $A$, that is, $A$ is such that $i A$ is a maximal dissipative operator. Constructed on this basis is the theory of dissipative extensions $B_0$ of symmetric operators $A _ { 0 }$ (respectively, Philips dissipative extensions $i B _ { 0 }$ of conservative operators $i A _ { 0 }$).
The theories of similarity, quasi-similarity and unicellularity have been developed for contractive operators. The theory of contractive operators is closely connected with the prediction theory of stationary stochastic processes and scattering theory. In particular, the Lax–Philips scheme [2] can be considered as a continual analogue of the Szökefalvi-Nagy–Foias theory of contractive operators of class $C_{00}$.
[1] B. Szökefalvi-Nagy, Ch. Foiaş, "Harmonic analysis of operators in Hilbert space" , North-Holland (1970) (Translated from French)
[2] P.D. Lax, R.S. Philips, "Scattering theory" , Acad. Press (1967)
A reducing subspace for an operator $T$ is a closed subspace $K$ such that there is a complement $K ^ { \prime }$, i.e. $H = K \oplus K ^ { \prime }$, such that both $K$ and $K ^ { \prime }$ are invariant under $T$, i.e. $T ( K ) \subset K$, $T ( K ^ { \prime } ) \subset K ^ { \prime }$.
[a1] I.C. [I.Ts. Gokhberg] Gohberg, M.G. Krein, "Introduction to the theory of linear nonselfadjoint operators" , Transl. Math. Monogr. , 18 , Amer. Math. Soc. (1969) (Translated from Russian)
[a2] I.C. [I.Ts. Gokhberg] Gohberg, M.G. Krein, "Theory and applications of Volterra operators in Hilbert space" , Amer. Math. Soc. (1970) (Translated from Russian)
Contraction. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Contraction&oldid=19348
| CommonCrawl
$L_{\infty \lambda }$-equivalence, isomorphism and potential isomorphism
Authors: Mark Nadel and Jonathan Stavi
Journal: Trans. Amer. Math. Soc. 236 (1978), 51-74
MSC: Primary 02H10; Secondary 02K05, 02H13
DOI: https://doi.org/10.1090/S0002-9947-1978-0462942-3
MathSciNet review: 0462942
Abstract: It is well known that two structures are ${L_{\infty \omega }}$-equivalent iff they are potentially isomorphic [that is, isomorphic in some (Cohen) extension of the universe]. We prove that no characterization of ${L_{\infty \lambda }}$-equivalence along these lines is possible (at least for successor cardinals $\lambda$) and the potential-isomorphism relation that naturally comes to mind in connection with ${L_{\infty \lambda }}$ is often not even transitive and never characterizes ${ \equiv _{\infty \lambda }}$ for $\lambda > \omega$. A major part of the work is the construction of ${\kappa ^ + }$-like linear orderings (also Boolean algebras) A, B such that ${N_{{\kappa ^ + }}}({\mathbf {A}},{\mathbf {B}})$, where ${N_\lambda }({\mathbf {A}},{\mathbf {B}})$ means: A and B are nonisomorphic ${L_{\infty \lambda }}$-equivalent structures of cardinality $\lambda$.
Article copyright: © Copyright 1978 American Mathematical Society
| CommonCrawl
https://doi.org/10.1140/epjc/s10052-011-1733-z
Determination of αS using OPAL hadronic event shapes at centre-of-mass energies between 91 GeV and 209 GeV and resummed NNLO calculations
The OPAL Collaboration
G. Abbiendi2, C. Ainsley5, P.F. Åkesson7, G. Alexander21, G. Anagnostou1, K.J. Anderson8, S. Asai22,23, D. Axen27, I. Bailey26, E. Barberio7, T. Barillari32, R.J. Barlow15, R.J. Batley5, P. Bechtle25, T. Behnke25, K.W. Bell19, P.J. Bell1, G. Bella21, A. Bellerive6, G. Benelli4, S. Bethke32, O. Biebel31, O. Boeriu9, P. Bock10, M. Boutemeur31, S. Braibant2, R.M. Brown19, H.J. Burckhart7, S. Campana4, P. Capiluppi2, R.K. Carnegie6, A.A. Carter12, J.R. Carter5, C.Y. Chang16, D.G. Charlton1, C. Ciocca2, A. Csilling29, M. Cuffiani2, S. Dado20, M. Dallavalle2, A. Roeck7, E.A. Wolf7, K. Desch25, B. Dienes30, J. Dubbert31, E. Duchovni24, G. Duckeck31, I.P. Duerdoth15, E. Etzion21, F. Fabbri2, P. Ferrari7, F. Fiedler31, I. Fleck9, M. Ford15, A. Frey7, P. Gagnon11, J.W. Gary4, C. Geich-Gimbel3, G. Giacomelli2, P. Giacomelli2, M. Giunta4, J. Goldberg20, E. Gross24, J. Grunhaus21, M. Gruwé7, A. Gupta8, C. Hajdu29, M. Hamann25, G.G. Hanson4, A. Harel20, M. Hauschild7, C.M. Hawkes1, R. Hawkings7, G. Herten9, R.D. Heuer7, J.C. Hill5, D. Horváth29, P. Igo-Kemenes10, K. Ishii22,23, H. Jeremie17, P. Jovanovic1, T.R. Junk6, J. Kanzaki22,23, D. Karlen26, K. Kawagoe22,23, T. Kawamoto22,23, R.K. Keeler26, R.G. Kellogg16, B.W. Kennedy19, S. Kluth32, T. Kobayashi22,23, M. Kobel3, S. Komamiya22,23, T. Krämer25, A. Krasznahorkay30, P. Krieger6, J. Krogh10, T. Kuhl25, M. Kupper24, G.D. Lafferty15, H. Landsman20, D. Lanske13, D. Lellouch24, J. Letts, L. Levinson24, J. Lillich9, S.L. Lloyd12, F.K. Loebinger15, J. Lu27, A. Ludwig3, J. Ludwig9, W. Mader3, S. Marcellini2, A.J. Martin12, T. Mashimo22,23, P. Mättig, J. McKenna27, R.A. McPherson26, F. Meijers7, W. Menges25, F.S. Merritt8, H. Mes6, N. Meyer25, A. Michelini2, S. Mihara22,23, G. Mikenberg24, D.J. Miller14, W. Mohr9, T. Mori22,23, A. Mutter9, K. Nagai12, I. Nakamura22,23, H. Nanjo22,23, H.A. Neal33, S.W. O'Neale1, A. Oh7, M.J. Oreglia8, S. Orito22,23, C. Pahl32, G. Pásztor4, J.R. Pater15, J.E. Pilcher8, J. Pinfold28, D.E. Plane7*, O. Pooth13, M. Przybycień7, A. Quadt32, K. Rabbertz7, C. Rembser7, P. Renkel24, J.M. Roney26, A.M. Rossi2, Y. Rozen20, K. Runge9, K. Sachs6, T. Saeki22,23, E.K.G. Sarkisyan7, A.D. Schaile31, O. Schaile31, P. Scharff-Hansen7, J. Schieck32, T. Schörner-Sadenius7, M. Schröder7, M. Schumacher3, R. Seuster13, T.G. Shears7, B.C. Shen4, P. Sherwood14, A. Skuja16, A.M. Smith7, R. Sobie26, S. Söldner-Rembold15, F. Spano8, A. Stahl13, D. Strom18, R. Ströhmer31, S. Tarem20, M. Tasevsky7, R. Teuscher8, M.A. Thomson5, E. Torrence18, D. Toya22,23, I. Trigger7, Z. Trócsányi30, E. Tsur21, M.F. Turner-Watson1, I. Ueda22,23, B. Ujvári30, C.F. Vollmer31, P. Vannerem9, R. Vértesi30, M. Verzocchi16, H. Voss7, J. Vossebeld7, C.P. Ward5, D.R. Ward5, P.M. Watkins1, A.T. Watson1, N.K. Watson1, P.S. Wells7, T. Wengler7, N. Wermes3, G.W. Wilson15, J.A. Wilson1, G. Wolf24, T.R. Wyatt15, S. Yamashita22,23, D. Zer-Zion4 and L. Zivkovic20
1 School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, UK
2 Dipartimento di Fisica dell' Università di Bologna and INFN, 40126, Bologna, Italy
3 Physikalisches Institut, Universität Bonn, 53115, Bonn, Germany
4 Department of Physics, University of California, Riverside, CA, 92521, USA
5 Cavendish Laboratory, Cambridge, CB3 0HE, UK
6 Ottawa-Carleton Institute for Physics, Department of Physics, Carleton University, Ottawa, Ontario, K1S 5B6, Canada
7 CERN, European Organisation for Nuclear Research, 1211, Geneva 23, Switzerland
8 Enrico Fermi Institute and Department of Physics, University of Chicago, Chicago, IL, 60637, USA
9 Fakultät für Physik, Albert-Ludwigs-Universität Freiburg, 79104, Freiburg, Germany
10 Physikalisches Institut, Universität Heidelberg, 69120, Heidelberg, Germany
11 Department of Physics, Indiana University, Bloomington, IN, 47405, USA
12 Queen Mary and Westfield College, University of London, London, E1 4NS, UK
13 Technische Hochschule Aachen, III Physikalisches Institut, Sommerfeldstrasse 26–28, 52056, Aachen, Germany
14 University College London, London, WC1E 6BT, UK
15 School of Physics and Astronomy, Schuster Laboratory, The University of Manchester, M13 9PL, Manchester, UK
16 Department of Physics, University of Maryland, College Park, MD, 20742, USA
17 Laboratoire de Physique Nucléaire, Université de Montréal, Montréal, Québec, H3C 3J7, Canada
18 Department of Physics, University of Oregon, Eugene, OR, 97403, USA
19 Rutherford Appleton Laboratory, Chilton, Didcot, Oxfordshire, OX11 0QX, UK
20 Department of Physics, Technion-Israel Institute of Technology, Haifa, 32000, Israel
21 Department of Physics and Astronomy, Tel Aviv University, Tel Aviv, 69978, Israel
22 International Centre for Elementary Particle Physics and Department of Physics, University of Tokyo, Tokyo, 113-0033, Japan
23 Kobe University, Kobe, 657-8501, Japan
24 Particle Physics Department, Weizmann Institute of Science, Rehovot, 76100, Israel
25 Institut für Experimentalphysik, Universität Hamburg/DESY, Notkestrasse 85, 22607, Hamburg, Germany
26 University of Victoria, PO Box 3055, Victoria, BC V8W 3P6, Canada
27 Department of Physics, University of British Columbia, Vancouver, BC, V6T 1Z1, Canada
28 Department of Physics, University of Alberta, Edmonton, AB, T6G 2J1, Canada
29 Research Institute for Particle and Nuclear Physics, PO Box 49, 1525, Budapest, Hungary
30 Institute of Nuclear Research, PO Box 51, 4001, Debrecen, Hungary
31 Ludwig-Maximilians-Universität München, Sektion Physik, Am Coulombwall 1, 85748, Garching, Germany
32 Max-Planck-Institute für Physik, Föhringer Ring 6, 80805, München, Germany
33 Department of Physics, Yale University, New Haven, CT, 06520, USA
* e-mail: [email protected]
Hadronic event shape distributions from e+e− annihilation measured by the OPAL experiment at centre-of-mass energies between 91 GeV and 209 GeV are used to determine the strong coupling αS. The results are based on QCD predictions complete to the next-to-next-to-leading order (NNLO), and on NNLO calculations matched to the resummed next-to-leading-log-approximation terms (NNLO + NLLA). The combined NNLO result from all variables and centre-of-mass energies is [Equation not available: see fulltext.] while the combined NNLO + NLLA result is [Equation not available: see fulltext.] The stability of the NNLO and NNLO + NLLA results with respect to missing higher order contributions, studied by varying the renormalization scale, is improved compared to previous results based on NLO or NLO + NLLA predictions only. The observed energy dependence of αS agrees with the QCD prediction of asymptotic freedom and excludes the absence of running.
© Springer-Verlag / Società Italiana di Fisica, 2011
Determination of the strong coupling αS from hadronic event shapes with $O(\alpha_s^3)$ and resummed QCD predictions using JADE data
Study of moments of event shapes and a determination of αS using e+e− annihilation data from JADE
Measurement of event shape distributions and moments in e+e− → hadrons at 91-209 GeV and a determination of αs
Measurement of αs with radiative hadronic events
Eur. Phys. J. C (2008) 53: 21-39
A study of event shapes and determinations of $\alpha_s$ using data of e$^+$e$^-$ annihilations at $\sqrt{s} = 22$ to 44 GeV
Eur. Phys. J. C 1, 461-478 (1998) | CommonCrawl |
Basic and Clinical Andrology
Resveratrol ameliorates bisphenol A-induced testicular toxicity in adult male rats: a stereological and functional study
Hossein Bordbar1,2,
Seyedeh-Saeedeh Yahyavi1,2,
Ali Noorafshan1,2,
Elham Aliabadi2 &
Maryam Naseh ORCID: orcid.org/0000-0003-4254-51751
Basic and Clinical Andrology volume 33, Article number: 1 (2023) Cite this article
Bisphenol A (BPA) is one of the most widely used synthetic chemicals worldwide. As an endocrine disruptor, BPA affects the reproductive system through its estrogenic and antiandrogenic properties. Resveratrol (RES), a natural polyphenol and potent antioxidant, exhibits protective effects against reproductive toxicity by inhibiting oxidative stress. Forty-eight male rats were divided into eight groups (n=6): CONTROL, OLIVE OIL (0.5 ml/day), carboxy methylcellulose (CMC) (1 ml of 10 g/l), RES (100 mg/kg/day), low dose of BPA (25 mg/kg/day), high dose of BPA (50 mg/kg/day), low dose of BPA + RES, and high dose of BPA + RES. All treatments were administered orally once daily for 56 days. At the end of the 8th week, blood samples were collected for hormone assays. The sperm parameters were then analyzed, and the left testis was removed for stereological study.
We observed a significant decrease in sperm parameters in the low- and high-dose BPA groups compared to the control group (P<0.05). The volume of the testicular components as well as the diameter and length of the seminiferous tubules were significantly reduced (11-64 %), and the total number of each testicular cell type decreased (34-67 %) on average in the low- and high-dose BPA groups. Moreover, serum follicle-stimulating hormone (FSH), luteinizing hormone (LH), and testosterone concentrations showed a significant reduction in both BPA groups (P<0.01). Nonetheless, treatment with RES ameliorated all of the above-mentioned changes in the low- and high-dose BPA groups (P<0.05).
RES could prevent BPA-induced structural changes in the testis and preserve sperm quality by improving gonadotropin and testosterone levels.
Résumé
Bisphenol A (BPA) is one of the most widely used synthetic chemicals in the world. As an endocrine disruptor, BPA affects the reproductive system through its estrogenic and anti-androgenic properties. Resveratrol (RES), a natural polyphenol and potent antioxidant, shows protective effects against reproductive toxicity by inhibiting oxidative stress. Forty-eight male rats were divided into eight groups (n = 6): CONTROL, OLIVE OIL (0.5 ml/day), carboxy methylcellulose (CMC) (1 ml of 10 g/L), RES (100 mg/kg/day), low dose of BPA (25 mg/kg/day), high dose of BPA (50 mg/kg/day), low dose of BPA + RES, and high dose of BPA + RES. All treatments were administered orally, daily, for 56 days. At the end of the 8th week, blood samples were taken for hormone assays. Sperm parameters were then analyzed and the left testis was removed for a stereological study.
We showed a significant decrease in sperm parameters in the groups treated with low and high doses of BPA compared with the control group (P<0.05). The volume of the testicular components as well as the diameter and length of the seminiferous tubules were considerably reduced (11-64 %); the total number of testicular cell types decreased (34-67 %) on average in the groups treated with low and high doses of BPA. In addition, serum concentrations of follicle-stimulating hormone (FSH), luteinizing hormone (LH) and testosterone showed a significant reduction in the treated groups at either dose of BPA (P<0.01). Nevertheless, treatment with RES ameliorated all of the above-mentioned changes in the groups treated with low and high doses of BPA (P<0.05).
RES could have a positive effect on BPA-induced testicular structural changes, as well as on sperm quality, by improving serum gonadotropin and testosterone levels.
Bisphenol A, Resveratrol, Testicular toxicity, Sperm parameters, Stereology
Bisphenol A (BPA) is one of the most widely used synthetic chemicals worldwide. It is found in a large number of consumer products such as polycarbonate plastics, epoxy resins, linings of cans, medical devices, dental sealants, and many other products that are part of our daily lives [1,2,3]. Public health concerns have been raised about the widespread applications and toxic effects of BPA [4]. Exposure to BPA can occur directly or indirectly through inhalation, dermal contact and ingestion [5, 6]. The main route of exposure in humans has been reported to be oral, accounting for about 90% of BPA exposure. BPA has been shown to contribute to several endocrine disorders, including reproductive dysfunction, infertility, precocious puberty and hormone-dependent tumors [7, 8]. Evidence suggests that BPA exerts its toxic effects on the reproductive system via different mechanisms. As an endocrine disruptor, BPA seems to mediate reproductive failure through estrogenic and antiandrogenic properties [9]. BPA can interfere with estrogenic signaling pathways by interacting with estrogen receptors (ERs), or by producing a small but potent estrogenic metabolite [10]. BPA can also bind to the androgen receptor (AR) as an antagonist [11], which can disrupt the hypothalamic-pituitary-testicular axis, thereby affecting gene expression and the enzymatic activity of testicular steroidogenesis and leading to hypogonadotropic hypogonadism [12, 13].
In this regard, several animal studies have confirmed the reproductive toxicity of BPA in rats and mice [14,15,16]. BPA has been shown to decrease testis weight, reduce the diameter and thickness of the seminiferous tubules, and compromise spermatogenesis. These morphological alterations and the abnormal spermatogenesis appear to be induced by reduced reproductive hormone production and the promotion of germ cell apoptosis [17, 18]. In addition, exposure to BPA is associated with reduced activity of antioxidant enzymes, which could contribute to oxidative stress and sperm damage [19, 20].
Resveratrol (RES; 3,5,4'-trihydroxy-trans-stilbene), a natural polyphenol and potent antioxidant, is found in a wide range of foods, especially grapes, berries, and peanuts [21]. Several reports have demonstrated that RES exhibits protective effects against reproductive toxicity by suppressing lipid peroxidation [22, 23]. Moreover, RES may improve sperm count and motility, as well as decrease germ cell apoptosis, by stimulating the hypothalamic–pituitary–gonadal axis and enhancing blood testosterone levels [24]. Accordingly, this study was designed, for the first time, to evaluate the protective effects of RES against the deleterious effects of low (25 mg/kg/day) and high (50 mg/kg/day) doses of BPA on the structure and function of the testis, using stereological assessment, hormonal measurements, and a quantitative-qualitative study of sperm parameters.
Forty-eight male Sprague-Dawley rats (age 6–8 weeks; weight 180–210 g) were purchased from the Animal Laboratory Center of Shiraz University of Medical Sciences. The animals were kept under standard conditions at room temperature (22 ± 2 °C), with normal humidity and a 12–12 h light–dark cycle, and had free access to standard food and water. All animal experiments were carried out in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publication No. 8023, revised 1978). The animal procedures were also performed under the standard rules established by the Animal Care and Ethics Committee of Shiraz University of Medical Sciences (IR.SUMS.REC.1398.392).
The rats were randomly divided into eight groups (n=6). The CONTROL group received distilled water orally every day for 56 days (the length of spermatogenesis); the OLIVE OIL group received 0.5 ml/day of olive oil orally for 56 days; the carboxy methylcellulose (CMC) group received 1 ml of 10 g/l CMC orally [25] per day for 56 days; the RES group received 100 mg/kg/day RES, diluted in CMC and administered orally at a dosing volume of 1 ml [26, 27], for 56 days; the BPA-LOW group received a low dose of BPA (25 mg/kg/day) orally for 56 days; the BPA-HIGH group received a high dose of BPA (50 mg/kg/day) [28] for 56 days (BPA was diluted in olive oil and administered orally every day at a dosing volume of 0.5 ml); the BPA-LOW + RES group received the low dose of BPA plus RES (100 mg/kg/day) orally for 56 days; and the BPA-HIGH + RES group received the high dose of BPA plus RES (100 mg/kg/day) orally for 56 days (Fig. 1).
Flow chart of the experimental design
It should be noted that the dosages of BPA (CAS 80-05-7, Sigma–Aldrich Co., St. Louis, USA) used in the current study were based on the dose previously reported as the highest dose with no observable adverse effect on reproductive and developmental toxicity (50 mg/kg BW/day) in rats [13, 29].
Hormone measurements
At the end of the 8th week (on day 56), fasted rats were killed by cervical dislocation and blood samples were collected from the heart through a cardiac puncture and stored in heparin-free tubes. Then, the samples were centrifuged at 3500 rpm for 15 min. The serum was obtained and stored at -70 °C for subsequent hormone evaluation.
The serum concentrations of follicle-stimulating hormone (FSH; Catalog No. CK-30597), luteinizing hormone (LH; Catalog No. CK-E90904), and testosterone (Catalog No. E90243) were determined with rat ELISA kits (Eastbiopharm Company) using a microplate reader (BioTek, USA). Briefly, 100 μL of standard or sample was pipetted into each well and incubated for 2 hours at 37 °C. After removal of any unbound substances, 100 μL of anti-biotin antibody was added to each well. After washing, 100 μL of avidin-conjugated horseradish peroxidase (HRP) was added to each well and incubated for 1 hour at 37 °C. Then, 90 μL of 3,3',5,5'-tetramethylbenzidine (TMB) substrate was added to each well and incubated for 20 minutes at 37 °C. Finally, color development was stopped and the absorbance was read at 450 nm using a microplate reader.
Spermatozoa counts, morphology and motility
Immediately after blood collection, the proximal part of the vas deferens just distal to the cauda epididymis (10 mm) was removed and placed in a Petri dish containing 3 mL of normal saline solution. The suspension was gently shaken at 37 °C for 5–10 min to let the spermatozoa diffuse. The spermatozoa were counted in a hemocytometer. Ten fields were then randomly selected and evaluated for motility grading to distinguish immotile sperm from sperm with progressive or non-progressive motility. Sperm smears were also stained with 1% eosin Y for assessment of morphology [30].
There are two types of progressive motility: rapid and slow. The efficient passage of spermatozoa through cervical mucus depends on rapid progressive motility.
It is therefore necessary to distinguish between these two types of progressive motility: neglecting this distinction discards useful information contained in the semen sample and impoverishes the semen analysis [31].
Stereological study
The left testis was removed and weighed. Then, according to the immersion method, it was immersed in an isotonic saline-filled jar to measure the primary volume "V(testis)" [32]. Afterwards, the samples were fixed in 4% buffered formaldehyde solution for the stereological studies. The orientator method was applied to obtain isotropic uniform random (IUR) sections [32]; about 8–12 slabs were collected from each testis through this procedure. To estimate shrinkage, a circle was punched out of a random testis slab with a trocar (diameter 5 mm), and the area of this circle (πr², with r the trocar radius) was taken as "Area (before)". After tissue processing and paraffin embedding, 5 and 25 μm sections were cut with a microtome and stained with hematoxylin-eosin (H&E). The area of the punched circle was measured again after processing as "Area (after)", and the degree of shrinkage "d(shr)" was calculated by the following formula:
$$\mathrm d(\mathrm{shr})=1-{\lbrack\mathrm{Area}(\mathrm{after})/\mathrm{Area}(\mathrm{before})\rbrack}^{1.5}$$
Then, the total volume of the testis was evaluated with regard to tissue shrinkage [V(shrunk)] using the following formula:
$$\mathrm V(\mathrm{shrunk})=\mathrm V(\mathrm{unshrunk})\times\lbrack1-\mathrm d(\mathrm{shr})\rbrack$$
Estimation of the testicular components volume
The volume density of the testis sections was analyzed with a video microscopy system. A point grid was superimposed on the microscopic images of the H&E-stained sections (5 μm thickness) on a monitor using software designed at the Histomorphometry and Stereology Research Center. The volume density "Vv (structure/testis)" of the testicular components, including the seminiferous tubules, interstitial tissue, and germinal epithelium, was estimated by the point-counting method [33, 34]. Finally, the total volume of each component was obtained by the following formula:
$$\mathrm V(\mathrm{structure})=\mathrm{Vv}(\mathrm{structure}/\mathrm{testis})\times\mathrm V(\mathrm{shrunk})$$
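For readers who want to retrace the arithmetic, the volume estimation above can be written out in a few lines of code. The Python sketch below only illustrates the formulas; every numeric input (immersion volume, point counts, trocar circle areas) is an invented placeholder, not a value from this study.

```python
# Minimal sketch of the shrinkage-corrected volume estimation described above.
def shrinkage(area_before_mm2, area_after_mm2):
    """d(shr) = 1 - [Area(after)/Area(before)]^1.5"""
    return 1.0 - (area_after_mm2 / area_before_mm2) ** 1.5

def total_volume(v_immersion_mm3, d_shr):
    """V(shrunk) = V(unshrunk) * [1 - d(shr)]"""
    return v_immersion_mm3 * (1.0 - d_shr)

def component_volume(points_on_component, points_on_testis, v_shrunk_mm3):
    """V(structure) = Vv(structure/testis) * V(shrunk), with Vv from point counting."""
    return (points_on_component / points_on_testis) * v_shrunk_mm3

# Hypothetical numbers: a 5 mm trocar gives Area(before) = pi * 2.5**2 ≈ 19.63 mm^2.
d = shrinkage(area_before_mm2=19.63, area_after_mm2=15.90)
v_testis = total_volume(v_immersion_mm3=1500.0, d_shr=d)
v_tubules = component_volume(points_on_component=820, points_on_testis=1000, v_shrunk_mm3=v_testis)
print(f"d(shr) = {d:.3f}, V(testis) = {v_testis:.1f} mm^3, V(tubules) = {v_tubules:.1f} mm^3")
```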
Estimation of the length and diameter of seminiferous tubules
The length density (Lv) of the seminiferous tubules was measured on the tubules sampled with an unbiased counting frame applied to the 5 μm thick sections (H&E staining) [35], and was calculated by the following formula:
$$\mathrm{Lv}=2\Sigma\mathrm{Q}/\lbrack\Sigma\mathrm{P}\times(\mathrm a/\mathrm f)\rbrack$$
Where "ΣQ" is the total number of the selected tubules, "ΣP" represents the total points superimposed on the testis, and "a/f" indicates the area of the counting frame. The total length of the seminiferous tubules "L(tubules)" was calculated by multiplying the lengths density (Lv) by V(structure) [36].
$$\mathrm L(\mathrm{tubules})=\mathrm{Lv}\times\mathrm V(\mathrm{structure})$$
The diameter of the seminiferous tubules was also measured on the sampled tubules in the counting frame. The diameter was measured perpendicular to the long axis of the tubule at its widest point [35]. An average of 100 tubules was measured per testis.
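As with the volumes, the length and diameter estimates reduce to simple arithmetic once the counts are available. The Python sketch below restates the formulas; the counts, frame area, reference volume and diameter readings are invented placeholders, not data from this study.

```python
import statistics

def length_density(sum_Q, sum_P, frame_area_mm2):
    """Lv = 2*ΣQ / [ΣP * (a/f)], in mm of tubule per mm^3 of reference volume."""
    return 2.0 * sum_Q / (sum_P * frame_area_mm2)

lv = length_density(sum_Q=150, sum_P=600, frame_area_mm2=0.01)
total_length_mm = lv * 900.0   # L(tubules) = Lv * V(structure), with V(structure) in mm^3

# Diameters are taken at the widest point perpendicular to the long axis of each sampled tubule.
diameters_um = [265.0, 248.0, 251.0, 239.0, 260.0]
print(f"L(tubules) ≈ {total_length_mm:.0f} mm, mean diameter ≈ {statistics.mean(diameters_um):.1f} µm")
```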
Estimation of number of testicular cell types
A computer linked to a light microscope (Nikon E200, Japan) with a 40× oil-immersion lens (NA=1.4) was used to assess the total number of testicular cell types, including spermatogonia (A and B), spermatocytes, round spermatids (steps 1–8 of spermiogenesis), long spermatids (steps 9–16 of spermiogenesis), Sertoli cells and Leydig cells.
The total number of each testicular cell type was estimated using the optical disector method applied to the H&E-stained sections (25 μm thickness) [37]. The microscopic fields were scanned by moving the microscope stage at equal distances in the X and Y directions, based on systematic uniform random sampling. Movement in the Z direction was performed using a microcator (MT12, Heidenhain, Germany) fixed on the microscope stage. The Z-axis distribution of the sampled cells in different focal planes was plotted to determine the guard zones and the disector height [38]. The numerical density (Nv) was estimated using the following formula:
$$\mathrm{Nv}=\Sigma\mathrm{Q}/(\Sigma\mathrm{A}\times\mathrm h)\times(\mathrm t/\mathrm{BA})$$
Where "ΣQ" was the number of each cell type nuclei coming into focus, "ΣA" indicated the total area of the unbiased counting frame, "h" represented the disector's height, "t" was the mean section thickness, and "BA" was the microtome block advance. Finally, the total number of the testicular cell types was calculated by multiplying the numerical density (Nv) by V(structure):
$$\mathrm N(\mathrm{cells})=\mathrm{Nv}\times\mathrm V(\mathrm{structure})$$
Here, V(structure) is the total volume of the germinal epithelium for the germinal-layer cells and the total volume of the interstitial tissue for the Leydig cells.
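The disector bookkeeping can be written out the same way. The sketch below simply codes the two formulas; all input values are hypothetical placeholders rather than measurements from this work.

```python
# Nv = ΣQ / (ΣA * h) * (t / BA);  N(cells) = Nv * V(structure)
def numerical_density(sum_Q, total_frame_area_mm2, disector_height_mm,
                      mean_section_thickness_um, block_advance_um):
    return (sum_Q / (total_frame_area_mm2 * disector_height_mm)) * \
           (mean_section_thickness_um / block_advance_um)

def total_number(nv_per_mm3, v_structure_mm3):
    return nv_per_mm3 * v_structure_mm3

nv = numerical_density(sum_Q=210, total_frame_area_mm2=0.5, disector_height_mm=0.010,
                       mean_section_thickness_um=23.5, block_advance_um=25.0)
print(f"Estimated cells per testis: {total_number(nv, v_structure_mm3=650.0):.3e}")
```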
The data are expressed as mean ± standard error of the mean (SEM). The results were analyzed by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test using GraphPad Prism 6 software (San Diego, CA, USA). P<0.05 was considered statistically significant.
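For illustration only, an equivalent analysis can be run outside GraphPad Prism, for example with SciPy and statsmodels. The sketch below uses simulated data with n = 6 per group purely to show the workflow (one-way ANOVA followed by Tukey's HSD); it does not reproduce the study's numbers.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
groups = {
    "CONTROL":     rng.normal(60, 5, 6),   # invented sperm counts (x10^6), n = 6 per group
    "BPA-LOW":     rng.normal(45, 5, 6),
    "BPA-HIGH":    rng.normal(35, 5, 6),
    "BPA-LOW+RES": rng.normal(55, 5, 6),
}
F, p = f_oneway(*groups.values())          # one-way ANOVA across the groups
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 6)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # Tukey's post hoc comparisons
```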
Spermatozoa count, normal morphology and motility
According to Table 1, a significant decrease was observed in the count, percentage of normal morphology, and motility of spermatozoa in the rats exposed to low and high doses of BPA compared to the control group (P<0.05 and P<0.01, respectively). However, these parameters improved in the BPA-LOW + RES and BPA-HIGH + RES groups compared to the corresponding BPA groups (P<0.01 and P<0.05, respectively).
Table 1 Comparison of sperm parameters. Mean ± SEM of the count (×10⁶), normal morphology (%), motility (%), and immotile sperm (%) in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. n = 6 in each group. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. * P<0.05, ** P<0.01 vs. CONTROL; ## P<0.01 vs. BPA-LOW; $ P<0.05 vs. BPA-HIGH
Qualitative changes
Qualitative evaluation of the testis is presented in Fig. 2. The histological sections of the rats given low and high doses of BPA showed structural changes, including atrophy and a reduced number of seminiferous tubules. Concomitant treatment of these groups with RES ameliorated these destructive effects.
Testicular histological evaluation. Representative photomicrographs of testis sections stained with hematoxylin & eosin (H&E) in the CONTROL (A), low dose of Bisphenol A (BPA-LOW) (B), high dose of BPA (BPA-HIGH) (C), BPA-LOW + resveratrol (RES) (D), and BPA-HIGH + RES (E) groups. All plates are to the same scale (Scale bar = 200 μm). The images indicate the normal seminiferous tubules (asterisk), and atrophied seminiferous tubules (arrow)
Stereological assays
The volume of the testicle
The results showed a significant reduction in testis volume, by 11.7 % and 13.5 %, in the rats exposed to low and high doses of BPA compared to the control groups, respectively (P<0.01 and P<0.001). However, the testis volume recovered considerably in the BPA-LOW + RES group compared to the BPA-LOW group (P<0.01) (Fig. 3A).
The evaluation of volume. The box plots represent the volume of the testis (A), germinal epithelium (B), interstitial tissue (C), and seminiferous tubules (D) in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. n = 6 in each group. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. Data are presented as mean ± SEM. *p < 0.05, **p < 0.01, and ***p < 0.001 vs. CONTROL; #p < 0.05, ##p < 0.01, and ###p < 0.001 vs. BPA-LOW; $p < 0.05 and $$$p < 0.001 vs. BPA-HIGH
The volume of germinal epithelium
The total epithelial volume in the rats treated with low and high doses of BPA decreased by 43% and 64%, respectively, in comparison with the control groups (P<0.001). Treatment with RES ameliorated the epithelial volume changes in both the low- and high-dose BPA groups (P<0.001) (Fig. 3B).
The volume of interstitial tissue
The results indicated that the interstitial tissue volume was reduced by 25.3% and 27.3% in the low- and high-dose BPA groups compared to the control groups, respectively (P<0.01 and P<0.05). However, this parameter increased significantly in the rats treated with RES in both the low- and high-dose BPA groups (P<0.05) (Fig. 3C).
The volume of seminiferous tubules
A significant reduction was seen in the total volume of the seminiferous tubules, by 26.2% and 34%, in the low- and high-dose BPA groups compared to the control groups, respectively (P<0.01 and P<0.001). Nevertheless, the seminiferous tubule volume was significantly ameliorated in the BPA-LOW + RES and BPA-HIGH + RES groups compared to the corresponding BPA groups (P<0.01 and P<0.05, respectively) (Fig. 3D).
Diameter of the seminiferous tubules
The diameter of the seminiferous tubules decreased by 29.7% and 37.3% in the rats treated with low and high doses of BPA, respectively, compared to the control group (P<0.001). Treatment with RES increased this parameter in both the low- and high-dose BPA groups (P<0.01) (Fig. 4A).
The evaluation of diameter and length of seminiferous tubules. The box plots show the diameter (A) and the length (B) of the seminiferous tubules in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. n = 6 in each group. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. Data are presented as mean ± SEM. **p < 0.01, and ***p < 0.001 vs. CONTROL; #p < 0.05, ##p < 0.01 vs. BPA-LOW; $$p < 0.01 vs. BPA-HIGH
Length of the seminiferous tubules
The results showed that the length of the seminiferous tubules was reduced by 20.6% and 29.8% in the low- and high-dose BPA groups compared to the control groups (P<0.01 and P<0.001, respectively). Nonetheless, the tubule length was significantly improved in the rats treated with RES in the low- and high-dose BPA groups (P<0.05 and P<0.01, respectively) (Fig. 4B).
Number of spermatogonia A and B
The total number of spermatogonia A was reduced by 40.03% and 55.2%, and that of spermatogonia B by 51.27% and 70.05%, in the low- and high-dose BPA groups compared to the control groups, respectively (P<0.001). However, treatment with RES increased the number of these cells in both the low- and high-dose BPA groups (P<0.001) (Fig. 5A and B).
Evaluation of the number of germinal cells. The box plots represent the number of spermatogonia A (A), spermatogonia B (B), spermatocytes (C), and round spermatids (D) in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. Data are presented as mean ± SEM. ***p < 0.001 vs. CONTROL; #p < 0.05, ##p < 0.01, and ###p < 0.001 vs. BPA-LOW; $$p < 0.01 and $$$p < 0.001 vs. BPA-HIGH
Number of spermatocytes
Statistical analysis showed a 34.85% and 53% reduction in the number of spermatocytes in the low- and high-dose BPA groups, respectively, compared to the control groups (P<0.001). Treatment with RES ameliorated these changes in the BPA-LOW + RES and BPA-HIGH + RES groups compared to the corresponding BPA groups (P<0.01 and P<0.001, respectively) (Fig. 5C).
Number of round and long spermatid
The number of round spermatids decreased by 40.76% and 66.72%, and that of long spermatids by 28.7% and 60.35%, in the low- and high-dose BPA groups, respectively, compared to the control groups (P<0.001). Moreover, ameliorative effects of RES on the number of these cells were seen in the rats treated with low and high doses of BPA (P<0.001) (Figs. 5D and 6A).
Evaluation of the numbers of long spermatids, Leydig cells and Sertoli cells. The box plots represent the number of long spermatids (A), Leydig cells (B), and Sertoli cells (C) in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. Data are presented as mean ± SEM. ***p < 0.001 vs. CONTROL; #p < 0.05, ##p < 0.01, and ###p < 0.001 vs. BPA-LOW; $$p < 0.01 and $$$p < 0.001 vs. BPA-HIGH
Number of Leydig and Sertoli cells
A significant reduction was seen in the number of Leydig cells, by 45.78% and 62.85%, and in the number of Sertoli cells, by 32.28% and 52.76%, in the low- and high-dose BPA groups compared to the control groups, respectively (P<0.001). Treatment with RES recovered the number of Leydig and Sertoli cells in the BPA-LOW + RES (P<0.01 and P<0.05, respectively) and BPA-HIGH + RES (P<0.001 and P<0.01, respectively) groups compared to the corresponding BPA groups (Fig. 6B and C).
Hormone assays
The gonadotropin assessment showed a significant reduction in serum LH and FSH levels in the BPA-LOW (P<0.001 and P<0.01, respectively) and BPA-HIGH (P<0.001) groups compared to the control group. The testosterone concentration of the rats given low or high doses of BPA was also lower than that of the control group (P<0.001). RES administration led to a significant increase in serum LH and testosterone levels in the BPA-LOW (P<0.01 and P<0.001, respectively) and BPA-HIGH (P<0.05 and P<0.001, respectively) groups, while serum FSH levels increased significantly only in the BPA-HIGH + RES group (P<0.05) (Fig. 7).
Serum concentrations of luteinizing hormone (LH), follicle-stimulating hormone (FSH), and testosterone hormones. The column graphs represent the concentrations of LH (A), FSH (B), and testosterone (C) in the CONTROL, OLIVE OIL, carboxy methylcellulose (CMC), resveratrol (RES), low dose of Bisphenol A (BPA-LOW), high dose of BPA (BPA-HIGH), BPA-LOW + RES, and BPA-HIGH + RES groups. n = 6 in each group. The results were analyzed by one-way analysis of variance (ANOVA) and Tukey's post hoc test. Data are presented as mean ± SEM. **p < 0.01, and ***p < 0.001 vs. CONTROL; #p < 0.05, ##p < 0.01vs. BPA-LOW; $$p < 0.01 vs. BPA-HIGH
The current study revealed the ameliorative effects of RES on BPA-induced testicular damage in rats. The first part of our findings showed the deleterious effects of two doses of BPA (25 and 50 mg/kg/day for 8 weeks) on sperm quality and testicular structure. Earlier studies considered 50 mg/kg/day the maximum permissible dose with no observable adverse effect on reproductive and developmental toxicity [39], but we found that ingestion of BPA at these dosages had adverse effects on the count, morphology, and motility of spermatozoa. In line with our results, a dose-dependent reduction in epididymal sperm motility and count has been observed in rats exposed to BPA at 10 and 50 mg/kg [40]. It has also been demonstrated that BPA at 5 and 25 mg/kg/day reduced sperm production, sperm reserves, and the transit time through the epididymis [13]. Moreover, long-term exposure to 0.2 mg/kg BPA in rats decreased sperm count and inhibited spermiation [41].
The reduction in sperm count and quality is in accord with the decreased stereological parameters. The changes in structural indices, including the volume, diameter and length of the seminiferous tubules, suggest atrophy of these tubules and testicular abnormalities due to BPA. Loss of germinal epithelial cells was also seen after exposure to both doses of BPA. The reduction in germinal epithelial volume could be a consequence of the decline in the number of germinal cells, and the reduction in sperm production could be related to the disruption of spermatogenesis. Jin et al. (2013) also reported that BPA exposure could decrease sperm count by reducing the numbers of type A spermatogonia, spermatocytes and spermatids; BPA impaired spermatogenesis by suppressing reproductive hormones and activating germ cell apoptosis mediated by the Fas/FasL signaling pathway [42, 43]. Sertoli cells are another cell type in the seminiferous tubules, with supportive and nutritive functions. Since Sertoli cells affect the proliferation and differentiation of germinal cells and support the process of spermatogenesis, the loss of these supporting cells in BPA-treated rats could lead to a deficiency of supportive functions and to the loss of spermatogenic cells. It has been indicated that Sertoli cells are targets of pituitary-derived FSH and of testosterone, transducing these signals into paracrine regulation of spermatogenesis [44,45,46]. Accordingly, the Sertoli cell depletion following BPA treatment in the present study may be due to the decrease in FSH and testosterone levels. On the other hand, testosterone is produced by the Leydig cells of the testicular interstitium in response to LH [47]. Therefore, the lack of LH stimulation in the BPA-treated groups could explain the reduction in Leydig cells, the atrophy of the interstitial tissue, and the decrease in testosterone production. Testosterone is an essential hormone for maintaining normal spermatogenesis and preventing germ cell apoptosis in adult rats [48]. It is therefore reasonable to assume that the inhibition of reproductive hormone production contributed to the spermatogenesis impairment induced by BPA. Similarly, BPA has been reported to cause defective spermatozoa by disrupting the hypothalamic–pituitary–gonadal axis, causing a state of hypogonadotropic hypogonadism [13, 42, 49]. Another mechanism possibly involved in the spermatogenesis dysfunction is BPA-induced oxidative damage: BPA exposure can induce ROS production by reducing the activity of the antioxidant system [50]. The adverse effects of BPA on sperm count and quality due to oxidative stress have been described in previous studies [40, 49].
The second step of our study demonstrated the protective effects of RES against BPA-induced testicular structural changes and impaired sperm quality. Our results indicated that concomitant treatment of the BPA groups with RES for 8 weeks could significantly restore the sperm parameters and prevent testicular atrophy and apoptosis of the testicular cell types. Furthermore, RES enhanced testosterone, FSH, and LH levels in the BPA groups. The improvement in testicular structure and sperm quality seems to be related to the increased gonadotropin and testosterone levels. Consistent with these findings, previous reports have shown that the levels of FSH, LH, and testosterone increased in cisplatin + RES-treated rats compared to the cisplatin group, thereby improving sperm parameters and testicular apoptosis [24]. That study also showed that RES enhanced hormonal levels as well as sperm motility and count compared to the control group; in our study, however, there was no significant difference between the RES and control groups. The difference between our results and those of Shati's study may be due to the different route of administration and dose of RES [24].
Another earlier study also reported that RES could ameliorate BPA-induced reproductive toxicity in mice by reducing oxidative stress [51]. Our results support the contribution of reproductive hormones to the ameliorative effects of RES on BPA-induced testicular toxicity in rats. A reduction of oxidative stress, as shown by other studies [52], may also be involved in the protective effects of RES, which requires further study to be confirmed.
One limitation of our study is that we did not investigate the signaling pathways through which RES acts on the hypothalamic–pituitary–gonadal axis to restore the reproductive hormones and thereby improve spermatogenesis after BPA exposure.
In conclusion, the present study demonstrated the protective effects of RES against BPA-induced testicular structural changes and impaired sperm quality, mediated by the improvement of gonadotropin and testosterone levels.
BPA: Bisphenol A
CMC: Carboxy methylcellulose
LH: Luteinizing hormone
FSH: Follicle-stimulating hormone
ER: Estrogen receptor
AR: Androgen receptor
Biedermann S, Tschudin P, Grob K. Transfer of bisphenol A from thermal printer paper to the skin. Anal Bioanal Chem. 2010;398:571–6. https://doi.org/10.1007/s00216-010-3936-9.
Vandenberg LN, Hauser R, Marcus M, Olea N, Welshons WV. Human exposure to bisphenol A (BPA). Reprod Toxicol. 2007;24:139–77. https://doi.org/10.1016/j.reprotox.2007.07.010.
Kang J-H, Kito K, Kondo F. Factors influencing the migration of bisphenol A from cans. J Food Prot. 2003;66:1444–7. https://doi.org/10.4315/0362-028x-66.8.1444.
Konieczna A, Rutkowska A, Rachon D. Health risk of exposure to Bisphenol A (BPA). Rocz Panstw Zakl Hig. 2015;66(1):5–11.
Williams C, Bondesson M, Krementsov DN, Teuscher C. Gestational bisphenol A exposure and testis development. Endocrine Disruptors. 2014;2:e29088. https://doi.org/10.4161/endo.29088.
Kang J-H, Kondo F, Katayama Y. Human exposure to bisphenol A. Toxicology. 2006;226:79–89. https://doi.org/10.1016/j.tox.2006.06.009.
Matuszczak E, Komarowska MD, Debek W, Hermanowicz A. The impact of bisphenol A on fertility, reproductive system, and development: a review of the literature. Int J Endocrinol. 2019 Apr;10(2019):4068717. https://doi.org/10.1155/2019/4068717.
Santiago J, Silva JV, Santos MA, Fardilha M. Fighting Bisphenol A-Induced Male Infertility: The Power of Antioxidants. Antioxid. 2021;10:289. https://doi.org/10.3390/antiox10020289.
Maffini MV, Rubin BS, Sonnenschein C, Soto AM. Endocrine disruptors and reproductive health: the case of bisphenol-A. Mol Cell Endocrinol. 2006;254:179–86. https://doi.org/10.1016/j.mce.2006.04.033.
Alonso-Magdalena P, Ropero AB, Soriano S, García-Arévalo M, Ripoll C, Fuentes E, et al. Bisphenol-A acts as a potent estrogen via non-classical estrogen triggered pathways. Mol Cell Endocrinol. 2012;355:201–7. https://doi.org/10.1016/j.mce.2011.12.012.
Lee HJ, Chattopadhyay S, Gong E-Y, Ahn RS, Lee K. Antiandrogenic effects of bisphenol A and nonylphenol on the function of androgen receptor. Toxicol Sci. 2003;75:40–6. https://doi.org/10.1093/toxsci/kfg150 Epub 2003 Jun 12.
Chimento A, Sirianni R, Casaburi I, Pezzi V. Role of estrogen receptors and G protein-coupled estrogen receptor in regulation of hypothalamus–pituitary–testis axis and spermatogenesis. Front Endocrinol. 2014;5:1. https://doi.org/10.3389/fendo.2014.00001.
Wisniewski P, Romano RM, Kizys MM, Oliveira KC, Kasamatsu T, Giannocco G, et al. Adult exposure to bisphenol A (BPA) in Wistar rats reduces sperm quality with disruption of the hypothalamic–pituitary–testicular axis. Toxicology. 2015;329:1–9. https://doi.org/10.1016/j.tox.2015.01.002 Epub 2015 Jan 6.
El Ghazzawy IF, Meleis AE, Farghaly EF, Solaiman A. Histological study of the possible protective effect of pomegranate juice on bisphenol-A induced changes of the caput epididymal epithelium and sperms of adult albino rats. Alexandria J Med. 2011;47:125–37.
Tohei A, Suda S, Taya K, Hashimoto T, Kogo H. Bisphenol A inhibits testicular functions and increases luteinizing hormone secretion in adult male rats. Exp Biol Med. 2001;226:216–21. https://doi.org/10.1177/153537020122600309.
Takahashi O, Oishi S. Testicular toxicity of dietary 2, 2-bis (4-hydroxyphenyl) propane (bisphenol A) in F344 rats. Arch Toxicol. 2001;75:42–51. https://doi.org/10.1007/s002040000204.
Urriola-Muñoz P, Lagos-Cabré R, Moreno RD. A mechanism of male germ cell apoptosis induced by bisphenol-A and nonylphenol involving ADAM17 and p38 MAPK activation. PLoS One. 2014;9:e113793. https://doi.org/10.1371/journal.pone.0113793.
Akingbemi BT, Sottas CM, Koulova AI, Klinefelter GR, Hardy MP. Inhibition of testicular steroidogenesis by the xenoestrogen bisphenol A is associated with reduced pituitary luteinizing hormone secretion and decreased steroidogenic enzyme gene expression in rat Leydig cells. Endocrinol. 2004;145:592–603. https://doi.org/10.1210/en.2003-1174.
Hulak M, Gazo I, Shaliutina A, Linhartova P. In vitro effects of bisphenol A on the quality parameters, oxidative stress, DNA integrity and adenosine triphosphate content in sterlet (Acipenser ruthenus) spermatozoa. Comp Biochem Physiol Part - C: Toxicol. 2013;158:64–71. https://doi.org/10.1016/j.cbpc.2013.05.002.
Meli R, Monnolo A, Annunziata C, Pirozzi C, Ferrante MC. Oxidative stress and BPA toxicity: An antioxidant approach for male and female reproductive dysfunction. Antioxid. 2020;9:405. https://doi.org/10.3390/antiox9050405.
Burns J, Yokota T, Ashihara H, Lean ME, Crozier A. Plant foods and herbal sources of resveratrol. J Agric Food Chem. 2002;50:3337–40. https://doi.org/10.1021/jf0112973.
de Oliveira FA, Costa WS, Sampaio FJ, Gregorio BM. Resveratrol attenuates metabolic, sperm, and testicular changes in adult Wistar rats fed a diet rich in lipids and simple carbohydrates. Asian J Androl. 2019;21:201. https://doi.org/10.4103/aja.aja_67_18.
Collodel G, Federico M, Geminiani M, Martini S, Bonechi C, Rossi C, et al. Effect of trans-resveratrol on induced oxidative stress in human sperm and in rat germinal cells. Reprod Toxicol. 2011;31:239–46. https://doi.org/10.1016/j.reprotox.2010.11.010.
Shati AA. Resveratrol improves sperm parameter and testicular apoptosis in cisplatin-treated rats: effects on ERK1/2, JNK, and Akt pathways. Syst Biol Reprod Med. 2019;65:236–49. https://doi.org/10.1080/19396368.2018.1541114.
Sengottuvelan M, Viswanathan P, Nalini N. Chemopreventive effect of trans-resveratrol-a phytoalexin against colonic aberrant crypt foci and cell proliferation in 1, 2-dimethylhydrazine induced colon carcinogenesis. Carcinogenesis. 2006;27:1038–46. https://doi.org/10.1093/carcin/bgi286 Epub 2005 Dec 7.
Isa A, Mohammed A, Ayo J, Muhammad M, Imam M, Emmanuel N. Serum electrolytes and haematological profiles in adult wistar rats following oral administration of resveratrol during hot-humid season in northern nigeria. Niger J Sci. 2019;18:404–12.
Bitgul G, Tekmen I, Keles D, Oktay G. Protective effects of resveratrol against chronic immobilization stress on testis. Int Sch Res Notices. 2013. https://doi.org/10.1155/2013/278720.
Mahdavinia M, Ahangarpour A, Zeidooni L, Samimi A, Alizadeh S, Dehghani MA, et al. Protective effect of naringin on bisphenol A-induced cognitive dysfunction and oxidative damage in rats. Int J Mol Cell Med. 2019;8:141. https://doi.org/10.22088/IJMCM.BUMS.8.2.141.
Schwetz B, Harris M. Developmental toxicology: status of the field and contribution of the National Toxicology Program. Environ Health Perspect. 1993;100:269–82. https://doi.org/10.1289/ehp.93100269.
Aminsharifi A, Hekmati P, Noorafshan A, Karbalay-Doost S, Nadimi E, Aryafar A, et al. Scrotal cooling to protect against cisplatin-induced spermatogenesis toxicity: preliminary outcome of an experimental controlled trial. Urology. 2016;91:90–8. https://doi.org/10.1016/j.urology.2015.12.062.
Björndahl L. The usefulness and significance of assessing rapidly progressive spermatozoa. Asian J Androl. 2010;12:33. https://doi.org/10.1038/aja.2008.50.
Mandarim-de-Lacerda CA. Stereological tools in biomedical research. An Acad Bras Cienc. 2003;75:469–86. https://doi.org/10.1590/s0001-37652003000400006.
Tschanz S, Schneider JP, Knudsen L. Design-based stereology: planning, volumetry and sampling are crucial steps for a successful study. Ann Anatomy Anatomischer Anzeiger. 2014;196:3–11. https://doi.org/10.1016/j.aanat.2013.04.011.
Khodabandeh Z, Dolati P, Zamiri MJ, Mehrabani D, Bordbar H, Alaee S, et al. Protective effect of quercetin on testis structure and apoptosis against lead acetate toxicity: an stereological study. Biol Trace Elem Res. 2021;199:3371–81. https://doi.org/10.1007/s12011-020-02454-8.
Dalgaard M, Pilegaard K, Ladefoged O. In Utero Exposure to Diethylstilboestrol or 4-n-Nonylphenol in Rats: Number of Sertoli Cells, Diameter and Length of Seminiferous Tubules Estimated by Stereological Methods. Pharmacol Toxicol. 2002;90:59–65. https://doi.org/10.1034/j.1600-0773.2002.900202.x.
Howard V, Reed M. Unbiased stereology: three-dimensional measurement in microscopy. Garland Science; 2004. https://doi.org/10.4324/9780203006399.
Wreford NG. Theory and practice of stereological techniques applied to the estimation of cell number and nuclear volume in the testis. Microsc Res Tech. 1995;32:423–36. https://doi.org/10.1002/jemt.1070320505.
von Bartheld CS. Distribution of particles in the z-axis of tissue sections: relevance for counting methods. Neuroquantology. 2012;10(1):66–75.
FAO/WHO. Reproductive and Developmental Toxicity of Bisphenol A in Mammalian Species. Ottawa: WHO Press; 2010. http://apps.who.int/iris/bitstream/handle/10665/44624/97892141564274_eng.pdf;jsessionid=3FD3F82CCD3B53154BD8C8392E03ECE3?sequence=1.
Kourouma A, Peng D, Chao Q, Changjiang L, Chengmin W, Wenjuan F, et al. Bisphenol A induced reactive oxygen species (ROS) in the liver and affect epididymal semen quality in adults Sprague-Dawley rats. J Toxicol Environ. 2014;6:103–12. https://doi.org/10.5897/JTEHS2014.0309.
Liu C, Duan W, Li R, Xu S, Zhang L, Chen C, et al. Exposure to bisphenol A disrupts meiotic progression during spermatogenesis in adult rats through estrogen-like activity. Cell Death Dis. 2013;4:e676–6. https://doi.org/10.1038/cddis.2013.203.
Jin P, Wang X, Chang F, Bai Y, Li Y, Zhou R, et al. Low dose bisphenol A impairs spermatogenesis by suppressing reproductive hormone production and promoting germ cell apoptosis in adult rats. J Biomed Res. 2013;27(2):135–44. https://doi.org/10.7555/JBR.27.20120076.
Wang P, Luo C, Li Q, Chen S, Hu Y. Mitochondrion-mediated apoptosis is involved in reproductive damage caused by BPA in male rats. Environ Toxicol Pharmacol. 2014;38:1025–33. https://doi.org/10.1016/j.etap.2014.10.018.
Oduwole OO, Peltoketo H, Huhtaniemi IT. Role of follicle-stimulating hormone in spermatogenesis. Front Endocrinol. 2018;9:763. https://doi.org/10.3389/fendo.2018.00763.
Walker WH, Cheng J. FSH and testosterone signaling in Sertoli cells. Reproduction. 2005;130:15–28. https://doi.org/10.1530/rep.1.00358.
Smith LB, Walker WH. The regulation of spermatogenesis by androgens. Semin Cell Dev. 2014:2–13. https://doi.org/10.1016/j.semcdb.2014.02.012.
Li X, Zhu Q, Wen Z, Yuan K, Su Z, Wang Y, et al. Androgen and Luteinizing Hormone Stimulate the Function of Rat Immature Leydig Cells Through Different Transcription Signals. Front Endocrinol. 2021;12:205. https://doi.org/10.3389/fendo.2021.599149.
Aitken RJ, Roman SD. Antioxidant systems and oxidative stress in the testes. Oxid Med Cell Longev. 2008;1(1):15–24. https://doi.org/10.4161/oxim.1.1.6843.
Mohamed DA, Arafa MH. Testicular toxic changes induced by bisphenol A in adult albino rats: a histological, biochemical, and immunohistochemical study. Egypt J Histol. 2013;36:233–45. https://doi.org/10.1097/01.EHX.0000426163.95597.40.
Chitra K, Latchoumycandane C, Mathur P. Induction of oxidative stress by bisphenol A in the epididymal sperm of rats. Toxicology. 2003;185:119–27. https://doi.org/10.1016/s0300-483x(02)00597-8.
Golmohammadi MG, Khoshdel F, Salimnejad R. Protective effect of resveratrol against bisphenol A-induced reproductive toxicity in male mice. Toxin Rev. 2021:1–9. https://doi.org/10.1080/15569543.2021.1965625.
Pasquariello R, Verdile N, Brevini TA, Gandolfi F, Boiti C, Zerani M, et al. The role of resveratrol in mammalian reproduction. Molecules. 2020;25:4554. https://doi.org/10.3390/molecules25194554.
This article was extracted from the M.Sc. thesis in Anatomy of Seyedeh-Saeedeh Yahyavi. This work was performed at the Histomorphometry and Stereology Research Center and was financially supported by grant No. 97-01-21-18338 from Shiraz University of Medical Sciences. The authors would like to thank Ms. A. Keivanshekouh at the Research Consultation Center (RCC) of Shiraz University of Medical Sciences for improving the use of English in the manuscript.
This work was performed at the Histomorphometry and Stereology Research Center and was financially supported by grant No. 97-01-21-18338 from Shiraz University of Medical Sciences.
Histomorphometry and Stereology Research Center, Shiraz University of Medical Sciences, Zand Ave., Shiraz, 71348-45794, Iran
Hossein Bordbar, Seyedeh-Saeedeh Yahyavi, Ali Noorafshan & Maryam Naseh
Department of Anatomy, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
Hossein Bordbar, Seyedeh-Saeedeh Yahyavi, Ali Noorafshan & Elham Aliabadi
Hossein Bordbar
Seyedeh-Saeedeh Yahyavi
Ali Noorafshan
Elham Aliabadi
Maryam Naseh
H.B: Designing the study, supervising laboratory works and revising the manuscript. S.Y: Performing laboratory works and collecting the data. A.N. and E.A.A: Conceptualization, Methodology, Software. M.N: Analysis of the data, writing and editing the manuscript. The authors read and approved the final manuscript.
Correspondence to Maryam Naseh.
All experimental procedures in the current study were done in accordance with the National Institutes of Health guide for the care and use of laboratory animals (NIH Publications No. 8023, revised 1978) and were approved by the Medical and Research Ethics Committee of Shiraz University of Medical Sciences, Shiraz, Iran (Approval No. IR.SUMS.REC.1398.392). All procedures were carried in accordance with the ARRIVE (Animal Research: Reporting in Vivo Experiments) guidelines.
The authors have no conflict of interest to report.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Bordbar, H., Yahyavi, SS., Noorafshan, A. et al. Resveratrol ameliorates bisphenol A-induced testicular toxicity in adult male rats: a stereological and functional study. Basic Clin. Androl. 33, 1 (2023). https://doi.org/10.1186/s12610-022-00174-8
Testicular toxicity
Sperm parameters
Stereology
The Annals of Mathematical Statistics
Ann. Math. Statist.
Volume 43, Number 5 (1972), 1702-1708.
A Threshold for Log-Concavity for Probability Generating Functions and Associated Moment Inequalities
J. Keilson
Let $\{p_n\}_0^N$ be a discrete distribution on $0 \leqq n \leqq N$ and let $g(u) = \sum^\infty_0 p_n u^n$ be its $\operatorname{pgf}$. Then for $0 \leqq t < \infty$, $g_t(u) = g(u + t)/g(1 + t) = \sum^N_0 p_n(t)u^n$ is a family of $\operatorname{pgf}$'s indexed by $t$. It is shown that there is a unique value $t^\ast$ such that $\{p_n(t)\}_0^N$ is log-concave $(PF_2)$ for all $t \geqq t^\ast$ and is not log-concave for $0 < t < t^\ast$. As a consequence one finds the infinite set of moment inequalities $\{\mu_{\lbrack r\rbrack}/r!\}^{1/r} \geqq \{\mu_{\lbrack r+1\rbrack}/(r + 1)!\}^{1/(r+1)}$, $r = 1, 2, 3, \cdots$, where $\mu_{\lbrack r\rbrack}$ is the $r$th factorial moment of $\{p_n\}_0^N$, when the lattice distribution is log-concave. The known set of inequalities for the continuous analogue is shown to follow from the discrete inequalities.
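The threshold behaviour described in the abstract is easy to probe numerically for a particular $\{p_n\}_0^N$: expand $g(u+t)$, renormalize by $g(1+t)$, and test log-concavity of the resulting coefficients. The Python sketch below does this for an arbitrary toy distribution (not one taken from the paper).

```python
# Probe the log-concavity threshold t* for a toy pmf {p_n} on {0,...,N}.
# Coefficients of g(u+t) follow from the binomial expansion of (u+t)^n.
from math import comb

p = [0.40, 0.05, 0.05, 0.50]   # arbitrary illustrative pmf, not log-concave at t = 0

def shifted_pmf(p, t):
    N = len(p) - 1
    q = [sum(p[n] * comb(n, k) * t ** (n - k) for n in range(k, N + 1)) for k in range(N + 1)]
    total = sum(q)             # equals g(1 + t), so dividing normalizes to a pmf
    return [x / total for x in q]

def is_log_concave(q, eps=1e-12):
    return all(q[k] ** 2 + eps >= q[k - 1] * q[k + 1] for k in range(1, len(q) - 1))

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    print(t, is_log_concave(shifted_pmf(p, t)))   # flips to True once t passes t*
```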
Ann. Math. Statist., Volume 43, Number 5 (1972), 1702-1708.
First available in Project Euclid: 27 April 2007
https://projecteuclid.org/euclid.aoms/1177692406
doi:10.1214/aoms/1177692406
Keilson, J. A Threshold for Log-Concavity for Probability Generating Functions and Associated Moment Inequalities. Ann. Math. Statist. 43 (1972), no. 5, 1702--1708. doi:10.1214/aoms/1177692406. https://projecteuclid.org/euclid.aoms/1177692406
The Institute of Mathematical Statistics
Some Sharp Multivariate Tchebycheff Inequalities
Mudholkar, Govind S. and Rao, Poduri S. R. S., The Annals of Mathematical Statistics, 1967
Selection Procedures for Restricted Families of Probability Distributions
Barlow, Richard E. and Gupta, Shanti S., The Annals of Mathematical Statistics, 1969
On a Certain Class of Limit Distributions
Shantaram, R. and Harkness, W., The Annals of Mathematical Statistics, 1972
Asymptotic Distributions of "Psi-Squared" Goodness of Fit Criteria for $m$-th Order Markov Chains
Goodman, Leo A., The Annals of Mathematical Statistics, 1958
The Convergence of Certain Functions of Sample Spacings
Weiss, Lionel, The Annals of Mathematical Statistics, 1957
The Posterior $t$ Distribution
Stone, M., The Annals of Mathematical Statistics, 1963
Limit Theorems for the Multi-urn Ehrenfest Model
Iglehart, Donald L., The Annals of Mathematical Statistics, 1968
On the Distribution of the Number of Successes in Independent Trials
Gleser, Leon Jay, The Annals of Probability, 1975
Extreme Values in the GI/G/1 Queue
The General Moment Problem, A Geometric Approach
Kemperman, J. H. B., The Annals of Mathematical Statistics, 1968
Are Newton's laws invalid in real life? [closed]
Closed. This question needs details or clarity. It is not currently accepting answers.
One of my friends and I had an argument over this topic. He stressed the fact that in real life many forces exist, whereas in physics we deal only with ideal situations. He put forward the following arguments:
Newton's First Law is invalid because friction exists in real life.
Newton's second law is invalid due to the same reasons.
Newton's third law is invalid because in a trampoline, there is excessive reaction.
In defence, I put forward the following arguments:
Newton's laws are true but the equations have to be modified to take into account the other forces in real life.
For example, if a force $F$ is applied on a body of mass $m$, and $f_s$ is the force of friction, then, the equation becomes $F - f_s = ma$. Thus, we have just modified the equation $F = ma$.
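Just as a rough numerical illustration of that modified equation (all numbers made up, and $f_s$ modelled as kinetic friction $\mu m g$):

```python
# Net-force bookkeeping for a block pushed across a rough floor: F - f = m*a.
mu = 0.30           # assumed kinetic friction coefficient
m = 10.0            # kg
g = 9.81            # m/s^2
F_applied = 50.0    # N

f_friction = mu * m * g                 # ≈ 29.4 N opposing the motion
a = (F_applied - f_friction) / m        # second law with all forces included
print(f"friction = {f_friction:.1f} N, acceleration = {a:.2f} m/s^2")
```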
So basically I mean to say that we have to adjust the laws to suit our purpose.
In the end, there was a stalemate between us. Even now I am confused after this argument. Please clarify my doubt.
newtonian-mechanics forces friction
sammy gerbil
Mriganka ParasarMriganka Parasar
$\begingroup$ Why has been my question downvoted? I was just clarifying my doubt. $\endgroup$ – Mriganka Parasar Jul 28 '16 at 2:17
$\begingroup$ It's being down voted because it basically answers itself in the question. All forces must be accounted for, and friction is a force. It seems like you know the answer already so why ask the question? At least that's my analysis of the down votes, maybe I am wrong. $\endgroup$ – Max von Hippel Jul 28 '16 at 3:55
$\begingroup$ I've deleted some nonconstructive and/or obsolete comments. $\endgroup$ – David Z♦ Jul 29 '16 at 17:09
$\begingroup$ @DavidZ: Then you deleted the only physically relevant comments. :-) $\endgroup$ – CuriousOne Jul 29 '16 at 18:32
$\begingroup$ It's being downvoted because it has an incendiary title coupled to a poorly researched question. The answer is in every decent highschool textbook. $\endgroup$ – Emilio Pisanty Jul 30 '16 at 20:13
Regardless of relativistic effects:
False: the first law deals with the case when no net force is present; if forces are present, go to the second law.
False, you add friction to the total force.
Newton's third law is invalid because in a trampoline, there is excessive reaction.
False, why do you think there is excessive reaction?
Wolphram jonnyWolphram jonny
$\begingroup$ I've deleted some nonconstructive and/or otherwise inappropriate comments. $\endgroup$ – David Z♦ Jul 31 '16 at 13:34
Let's review what Newton's first law says:
When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by a net force.
Your friend is right that friction exists in real life. But your friend is wrong that the first law is invalid because of it. The friction is the net force acting upon the object. The law holds out in space where there is (virtually) no friction, and it holds in places on earth where there is a lot of friction.
Again, let's review the second law:
In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
Once again, friction is a force that goes into that vector sum. So like he said, for the same reasons. Except it's valid for the same reasons, not invalid.
Just bought a trampoline. There isn't any excessive reaction. There's just more bounce. Say you jump from the ground. You push off with some force, go up in the air, hang for a sec, and fall back down. At the top, you have lots of potential energy and at the bottom you've got lots of kinetic energy. And because the earth is made of very rigid rock it doesn't flex very much and absorbs the energy you put into it. Probably your knees and ankles absorb some too.
On a trampoline, this isn't what happens. Your potential energy turns to kinetic energy as you fall as before, but instead of that energy going into the ground with a thud, it goes into the trampoline as potential energy. At the bottom of the trampoline bounce, the resistance of the trampoline overcomes your motion and starts pushing you back up.
So the difference between the ground and the trampoline is that on the ground your energy from jumping goes into shockwaves in the ground, while on a trampoline it goes into potential energy that's used to push you back up.
If you want to try this, try taking a rubber bouncy ball and a stuffed animal. Throw them both at a wall and see which one bounces back more. Same energy going into both of them, but one is able to turn that kinetic energy into potential energy and reverse the flow while the other just kinda flops. Newton's laws account for both scenarios quite well.
We may have been able to find scenarios where Newton's laws don't apply (i.e. relativity) but the laws remain fundamental to engineering and have very, very real world applications to this day.
corsiKa
Wolphram jonny gave a good explanation, but the trampoline case deserves a bit more detail.
Whenever you jump from the earth, then according to the third law the force you exert on the earth is equal to the force (the reaction force) the earth exerts on you; that reaction force is what pushes you off the ground. The force you apply to the earth actually moves the earth (negligibly, because of its enormous mass), while the reaction force is what makes you jump. So I hope it is now clear where the third law acts.
Coming to the trampoline case: when you jump and land on a trampoline, your kinetic energy is stored thanks to its spring-like property, so when you jump again the force is exerted by both you and the trampoline, and you get more acceleration (Newton's second law). This net force is higher than when you jump from the ground alone (when you land on the ground, the energy is not stored as it is in the trampoline, so there is no extra force). Since the net force is larger, the reaction force on the earth is larger too, and the earth moves more (yet still negligibly).
Volker Siegel
Rahul J A
As the saying goes: "All models are wrong, but some of them are useful."
For me, laws of physics are actually just models, i.e., simplifications of reality that give satisfactory answers to certain questions. Newton's laws are useful to build bridges, build skyscrapers, land rockets on the Moon, etc.
You could also argue that:
In reality, a force is never perfectly constant (e.g., combustion engines produce force in bursts)
In reality, mass is never constant (e.g., even the prototype kilogram loses mass)
Then again, how often do you "care" in reality about a change of $50\,\mu g$? If you do calculations for a car, you might decide to model its mass as constant during its trip. As long as the model gives you a reasonable answer it is useful. Otherwise, you need to adjust or change the model (e.g., add fictitious forces, relativistic effects, etc.)
P.S. No clue who had the idea of giving models in physics such a solemn name as "laws". Scientific marketing, I guess. :D
$\begingroup$ This answer is misleading. Newton's laws become wrong at high velocities, small distances or high densities (or your favorite combination of those). Forces not being constant, masses not being constant, the presence of friction, or any other complications you might encounter in "real life" can be handled by Newton with zero problems. $\endgroup$ – Javier Jul 28 '16 at 22:02
$\begingroup$ "Become wrong" is a bit subjective: For example, below certain speeds we choose to ignore relativistic effects (but if you really want super-precision, nothing stops from taking them into account). Maybe I'm splitting the hair, but Newton's law can handle non-constant quantities by essentially splitting time or space in infinitesimal units, in which the quantities can be assumed constant. $\endgroup$ – user1202136 Jul 28 '16 at 22:32
$\begingroup$ As an example of why I think Newton's laws are essentially a model: To make them work inside a non-inertial frame of reference, one needs to add fictitious forces. $\endgroup$ – user1202136 Jul 28 '16 at 22:34
$\begingroup$ @Javier It absolutely is misleading. Newton's laws cope perfectly well with non-constant forces and non-constant masses: you just have to use the calculus with which Newton (and Leibniz) conveniently also provided us. $\endgroup$ – David Richerby Jul 29 '16 at 14:13
$\begingroup$ And, by the way, you absolutely do not want to model the mass of a plane as being constant through its trip. Fuel can be more than half the take-off weight of a large jet departing on a long flight. $\endgroup$ – David Richerby Jul 29 '16 at 14:14
Newton's laws are a good approximation for how the world works when the velocity is less than about 1% of the speed of light, gravity isn't too strong, the object is composed of enough elementary particles that quantum uncertainty is insignificant, and the region of space involved is small compared to the observable universe, so that the expansion of space is insignificant. Friction doesn't disprove Newton's laws, because friction is itself a force. Because we live on a planet where objects tend not to stay off the ground for long without technology, friction affects nearly everything in our daily lives, so much so that until people performed proper experiments on motion it wasn't obvious that friction was acting on objects, which is why it wasn't obvious that moving objects would continue to move in the absence of any force.
Friction does not disprove the second law as friction has to be taken into account to find the total force acting on an object.
The trampoline example doesn't disprove Newton's third law, because the law states that for every force there is an equal and opposite force, which is different from the effects being equal. Because $F=ma$, the acceleration of a lighter object will be greater than that of a heavier object even when the same force is applied to both.
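For example (again with made-up numbers): the same $10\ \mathrm{N}$ force gives a $1\ \mathrm{kg}$ ball an acceleration of $10\ \mathrm{m/s^2}$ but a $5\ \mathrm{kg}$ ball only $2\ \mathrm{m/s^2}$; the forces are equal even though the effects are not.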
Anders Gustafson
$\begingroup$ I wouldn't say it's about the strength of gravity or the velocity, but rather, their relative values with respect to the objects you care about. E.g. relativity is important when considering ISS versus the surface of the Earth, but not versus an object floating inside of the ISS. $\endgroup$ – Luaan Jul 28 '16 at 9:49
Your friend is wrong, and seems to suffer from a serious case of anti-intellectualism, not to mention arrogance and authority issues.
It's true that the models of physics we know are approximations, but the reason we're using them - after centuries of research - is that they're extremely good approximations.
To suggest that our physical laws can't cope with something as simple as a trampoline is utterly ridiculous.
So your friend doesn't understand the finer points of how a trampoline works. That's fine, many people don't. The anti-intellectualism lies in what he does with this lack of knowledge. A healthy approach is to be curious, to try to analyze the phenomenon, to try to understand why it works the way it does, to wonder if Newton's laws mean what he thinks they mean, and to ask questions along the way. To realize that if physicists - many of whom have seen trampolines in their lives - haven't modified the models to accommodate them, it's because they already do, for reasons he needs to investigate.
Instead, he chooses to assume that trampolines are magic, that they are outside the natural order of things, that they cannot be explained. That physicists are stumped in the face of trampolines. "We've placed a man on the moon, but we just can't figure out the damn trampoline thing!".
As for specifically how trampolines work - that's the smaller issue, but I'll address it anyway.
Your friend seems to confuse force with energy/velocity. He looks at a person gaining energy by jumping on the trampoline, and for some reason assumes the trampoline exerts more force on them than they exert on it. That's completely baseless.
Your friend seems to assume trampolines give you energy for free. They don't. It's not like the video game jumping blocks that you just step on and get flung in the air. If you stop moving while jumping on a trampoline, you'll lose energy, speed and height until you come to a stop. To bounce higher, you need to use your muscles to generate energy - the trampoline helps you convert your downward velocity to upwards velocity, and you gain more energy with your muscles, so you're able to jump higher than before.
As for the friction thing, this was a completely inane argument. Friction is a force. It's not some mysterious "non-force force".
Edited to add based on smci's comments:
Based on the OP's account (which is the only thing I know of said friend), what he displays is not healthy skepticism. It's ok not to get how friction or trampolines work. It's not ok to deduce that physics isn't applicable to real-life, or that it can't explain these two things. "Magic" is the appropriate word to describe the friend's stance because he insists science can't explain these things. The friend's approach is defeatist rather than inquisitive - he doesn't try to reconcile and explain his counterexamples, he just assumes they cannot be explained.
It's true that skepticism can lead to new discoveries, but you have 1,000,000 people who misunderstand something for every person who discovers a genuine deficiency in the current paradigm. So some level of humility is required. Furthermore, you need to understand the current paradigm very well before you can find that it's wrong. If the friend "gave the system a chance" and sought answers, he would have found them. Or, if the confusion happened to be that once-in-a-million time where he actually discovered a mistake, he'd continue investigating until he could demonstrate the error convincingly. But the friend didn't do any of that, he just rage-quit and adopted his own pet theory that "Physics doesn't apply to real life". If young Richard Feynman had had an approach like this, he wouldn't have become the Feynman we know.
As I wrote originally, reconciling friction and trampolines with Newton's laws (and whether the textbooks do a good job with the explanations or not) isn't the main issue here, and explaining those things in the answers doesn't address the main issue. The main issue is the friend's approach, so I focused on that in the answer, and I'm calling a spade a spade. Even more so if their age is 13 - that's a good age to start adopting a healthy approach to science.
Meni Rosenfeld
$\begingroup$ Your first sentence is really unhelpful. A healthy level of skepticism and searching for counterexamples is always good. Sure the guy is mistaken, but the answers are educational. "chooses to assume that trampolines are magic"... "inane" is very disparaging. Would you apply the same language to a young Richard Feynman? I bet you wouldn't. $\endgroup$ – smci Jul 30 '16 at 21:55
$\begingroup$ Wrt "Newton's laws are true but the equations have to be modified to take into account the other forces in real life", high-school students typically don't get taught the vector formulation of Newton's laws, or the crucial clarification that it's "net force". Everyday situations do indeed involve lots of reactions and friction forces, so the textbook emphasis on one single force externally applied is indeed misleading. There's really no need to be disparaging. We also don't know whether they're age 13 or what. $\endgroup$ – smci Jul 30 '16 at 22:01
$\begingroup$ @smci: I've written some comments but they're actually best as an appendix to the answer, stay tuned. $\endgroup$ – Meni Rosenfeld Jul 31 '16 at 7:28
Short Answer: Your friend is wrong because our models of friction (within certain parameters) are derived from Newton's laws. Tidal forces, for example, are a form of friction.
The argument that Newton's "laws" are "invalid" in a pure philosophical sense would better rest on Newton simply pulling the concepts of gravity, force and inertia out of thin air in order to give meaningless names to parts of his highly predictive geometric and mathematical models.
One can also argue that since they remain accurate only within a specific range of measurements, they don't capture any real truth about reality but just approximate it to some degree.
It's very, very important to remember that scientific "laws" or "models" do not predict concrete reality, they predict measurements. New scientific laws/rules/models arise when the old laws/rules/models failed to be able to reproduce new measurements.
TL;DR unless you've got time to kill
The core scientific method is to take a large number of measurements and create a huge data set, then create a geometric, mathematical (or, as some now argue, computational) system that will, given one part of the set, be able to reproduce another part of the set. Scientific laws are really mere conveniences to compress vast numbers of observations down to a mathematical equation, geometric ratios or (as some argue today) a computation.
That scientific laws reproduce sets of measurement data and not reality is seen easiest in statistics, which is really a second layer of math/modeling that tells us how far off from reality our measurements might be, e.g. no one has 2.1 kids.
But... the equations don't always reproduce the observed measurement sets unless we cheat a bit and cram in some arbitrary numbers, i.e. numbers that are not derived from measurement and don't seem to represent any observable or measurable phenomena, but which nevertheless make the equation predictive of its data set.
The great-granddaddy of them all is Newton's gravitational constant. Cavendish just measured fine gravitational deflections, then started punching in numbers till he found one that reproduced the deflections in the data.
There are many constants crammed into scientific equations for which we have no explanation of why they are needed or what they represent. A large number of constants in the math of a system is a strong predictor that the model will fail once superior measurements become available.
One of the sources of the great friction between Leibniz and Newton was that Newton just made things up: things like "gravity", "force", "inertia". Leibniz followed the French Cartesian idea that all motion in the universe resulted from the collisions of particles; to him, positing a mysterious "force" called "gravity" that instantly began pulling two objects towards each other, or claiming that objects kept moving unless acted upon by another object or "force", all bordered on outright mysticism.
Newton replied that his geometric and mathematical rule-sets/models made vastly better predictions than anything Leibniz had come up with, that naming the parts of the model was just a conceptual convenience, and that he would not even try to guess what inertia, forces or gravity were, or what caused them; he merely asserted that using the concepts worked very well in making predictions.
But the idea that forces, gravity and inertia were just arbitrary names didn't catch on, and most people still talk about them as if they exist and have some concrete reality we understand.
Because scientific models reproduce measurements and not reality, we often invoke metaphors (call them "as-if" fantasies) to explain what unobservable, unmeasurable phenomena cause the phenomena we can observe and measure.
Thus Newton modeled gravity between two objects "as-if" they were instantly connected by a contracting elastic band (or a non-stretching string on the axis of an ellipse). Likewise, Einstein modeled gravity "as-if" nothingness were elastic, bent and curved, and caused objects to follow the bends and curves.
Newton modeled gravity as a "force" that was transmitted instantly between objects. Later, scientists showed that it propagates at the speed of light. Einstein then modeled gravity not as a "force" like an elastic band between objects, but as each object "bending" space around it such that any object traveling through the bent space finds itself deflected toward the initial object.
We're still doing it. The Higgs particle isn't a particle but a convenient way to model changes in the Higgs field (I think).
But it bears repeating that scientific laws/rules/models reproduce/predict measurements, not the reality that scientists attempt to measure. Since we can't measure with perfect accuracy, our scientific models will always be off a bit from reality.
But "a little bit off" is just fine when you're calculating the toss of a horseshoe or blowing something up with a nuke.
TechZen
Your friend actively misunderstands Newton's laws and the ideas behind them. He might as well argue that the formula $E=mc^2$ is invalid because it states that Young's modulus ($E$) is equal to mass ($m$) times the hypotenuse ($c$) squared. And he is supporting his arguments with several logical fouls.
The first law states that when there are no forces applied to a body, it remains still or moves with constant velocity. The argument from friction is a foul, because friction is a force and must be included in the equation. The law also implies that if the velocity is not changing, there is zero net force: all applied forces cancel each other completely.
The second law describes the relation between the net force and the change in momentum of a point mass*. For real, extended bodies we also have to account for rotation (angular momentum $L=J\omega$). Note that $F=\frac{dp}{dt}$; for constant mass (the non-relativistic case) we can derive $F=ma$.
The third law states that for every force (action) there is an equal force (reaction) in the opposite direction. In the case of the (idealised) trampoline there is gravity ($G=mg$), the force of the trampoline pushing you up ($F_{tr}=-k\Delta y$) and the force of your body pulling the trampoline down ($F_{body}=ma=m\,\ddot{\Delta y}$). If you draw a proper diagram you will see that $G+F_{tr}+F_{body}=0$, which means $F_{tr}=-(G+F_{body})$. Call either side the action; the other side is the reaction.
We can also say that the third law is a consequence of the conservation of momentum and energy: if the net momentum and net angular momentum do not change, the net force is zero.
* I could not find the proper English term for the Czech "hmotný bod", German "Massepunkt", Polish "punkt materialny". If such a term exists, feel free to edit.
Crowley
$\begingroup$ Point mass? $\endgroup$ – deltab Jul 29 '16 at 23:28
$\begingroup$ Whoa, you know the term in three languages but not English?! Fascinating. @deltab It's kinda weird those entries are not connected on wiki. Although, Czech entries have it in "See also" sections. $\endgroup$ – luk32 Jul 30 '16 at 4:50
Newton's laws are valid for all situations where velocities are small (compared to the speed of light, ie relativity is not important) and where quantum effects are negligible (mostly where objects are much bigger than elementary particles). The problem with your argument is that you and your friend are using idealized expressions for Newton's laws, not their most general form. That is completely understandable because the more general forms require mathematical concepts that are mostly restricted to physicists and mathematicians (in fact, Newton invented the calculus in order to formulate these laws). Rest assured that the laws are not just valid for those idealized situations that are expressed in terms of elementary math.
Lewis Miller
$\begingroup$ Thank you very much. I know that Newton's laws are not valid at or near the velocity of light. But guess what? That friend of mine believes that Einstein is wrong. I think I will leave him to his own thoughts. $\endgroup$ – Mriganka Parasar Jul 28 '16 at 2:32
$\begingroup$ I was just making the most general statement that I could regarding their validity. I thought your question was a good one so I up voted it. As for leaving your friend to his own thoughts, that seems advisable. $\endgroup$ – Lewis Miller Jul 28 '16 at 2:38
$\begingroup$ @MrigankaParasar Thinking Einstein is wrong is fine. We should not give too much extra consideration to authority figures in science. If your friend thinks relativity is wrong, please ask him to explain what experimental evidence makes him think that relativity is wrong. $\endgroup$ – DanielSank Jul 28 '16 at 3:41
$\begingroup$ If expressed in the correct form (for example $F=dp/dt$ and not $F=ma$), Newton's laws are valid in special relativity (and notice that Newton never wrote $F=ma$ in the Principia...). Also, the problem is not that his friend is not using "their most general form". The usual middle school textbook form is perfectly fine to explain every one of the situations described by the OP. $\endgroup$ – valerio Jul 28 '16 at 7:00
Remember the sum $\sum$ symbol: $$\sum \vec F=0$$ $$\sum \vec F=m\vec a$$
A very important detail.
It's not about "adjusting" anything to make the laws work, it's just about including all forces. Which might be tricky in real life, but that's another story.
And by the way, you can tell your friend that friction is not at all always present - space is a quite good example.
Steeven
Feedback Integrator Networks
Giorgio Sancristoforo's recent software noise synth Bentō is a marvel of both creative DSP and graphic design. Its "Generators" particularly caught my eye. The manual states:
Bentō has two identical sound generators, these are not traditional oscillators, but rather these generators are models of analog computing patches that solve a differential equation. The Generators are designed to be very unstable and therefore, very alive!
There's a little block diagram embedded in the synth's graphics that gives us a good hint as to what's going on.
We're looking at a variant of a state variable filter (SVF), which consists of two integrators in series in a feedback loop with gain stages in between. Sancristoforo adds an additional feedback loop around the second integrator, and what appears to be an additional parallel path (not entirely sure what it's doing). It's not possible for chaotic behavior to happen in an SVF without a nonlinearity, so presumably there's a clipper (or something else?) in the feedback loops.
While thinking about possible generalized forms of this structure, I realized that the signal flow around integrators can be viewed as 2 x 2 feedback matrix. A form of cross-modulation across multiple oscillators can be accomplished with a feedback matrix of greater size. I thought, why not make it like a feedback delay network in artificial reverberation, with integrators in place of delays? And so a new type of synthesis is born: the "feedback integrator network."
An N x N feedback integrator network consists of N parallel signals that are passed through leaky integrators, then an N x N mixing matrix. First-order highpass filters are added to block dc, preventing the network from blowing up and getting stuck at -1 or +1. The highpass filters are followed by clippers. Finally, the clipper outputs are added back to the inputs with single-sample delays in the feedback path. I experimented with a few different orders in the signal chain, and the order presented here is the product of some trial and error. Here's a block diagram of the 3 x 3 case:
A rich variety of sounds ranging from traditional oscillations to noisy, chaotic behavior results. Here are a few sound snippets of an 8 x 8 feedback integrator network, along with SuperCollider code. I'm using a completely randomized mixing matrix with values ranging from -1000 to 1000. Due to the nonlinearities, the scaling of the matrix is important to the sound. I'm driving the entire network with a single impulse on initialization, and I've panned the 8 parallel outputs across the stereo field.
(
{
    var snd, n;
    n = 8;
    snd = Impulse.ar(0); // single impulse to excite the network
    snd = snd + LocalIn.ar(n); // single-sample feedback from LocalOut below
    snd = Integrator.ar(snd, 0.99); // leaky integrators
    snd = (snd * ({ { Rand(-1, 1) * 1000 } ! n } ! n)).sum; // random n x n mixing matrix
    snd = LeakDC.ar(snd); // first-order highpass filters to block dc
    snd = snd.clip2; // clippers
    LocalOut.ar(snd);
    Splay.ar(snd) * 0.3;
}.play(fadeTime: 0);
)
The four snippets above were not curated -- they were the first four timbres I got out of randomization.
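For anyone who doesn't read SuperCollider, here is a rough per-sample sketch of the same idea in Python/numpy. The dc-blocker coefficient, the leak amount, and the impulse excitation on every channel are my own assumptions rather than values taken from Bentō or from the snippet above.

import numpy as np

def feedback_integrator_network(matrix, num_samples, leak=0.99, hp_coeff=0.999):
    # matrix: (n, n) mixing matrix; returns a (num_samples, n) array of outputs.
    n = matrix.shape[0]
    out = np.zeros((num_samples, n))
    integrator = np.zeros(n)   # leaky integrator state
    lowpass = np.zeros(n)      # one-pole lowpass used to build a dc blocker
    feedback = np.zeros(n)     # single-sample delayed feedback
    for i in range(num_samples):
        x = feedback.copy()
        if i == 0:
            x += 1.0                                  # impulse excitation
        integrator = leak * integrator + x            # leaky integrators
        mixed = matrix @ integrator                   # n x n mixing matrix
        lowpass = hp_coeff * lowpass + (1 - hp_coeff) * mixed
        highpassed = mixed - lowpass                  # first-order dc blocker
        feedback = np.clip(highpassed, -1.0, 1.0)     # clipper, fed back next sample
        out[i] = feedback
    return out

rng = np.random.default_rng(0)
audio = feedback_integrator_network(rng.uniform(-1000, 1000, (8, 8)), 48000)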
There's great fun to be had by modulating the feedback matrix. Here's the result of using smooth random LFOs.
I've experimented with adding other elements in the feedback loop. Resonant filters sound quite interesting. I've found that if I add anything containing delays, the nice squealing high-frequency oscillations go away and the outcome sounds like distorted sludge. I've also tried different nonlinearities, but only clipping-like waveshapers sound any good to my ears.
It seems that removing the integrators entirely also generates interesting sounds! This could be called a "feedback filter network," since we still retain the highpass filters. Even removing the highpass filters, resulting in effectively a single-sample nonlinear feedback delay network, generates some oscillations, although less interesting than those with filters embedded.
While using no input sounds interesting enough on its own, creating a digital relative of no-input mixing, you can also drive the network with an impulse train to create tonal sounds. Due to the nonlinearities involved, the feedback integrator network is sensitive to how hard the input signal is driven, and creates pleasant interactions with its intrinsic oscillations. Here's the same 8 x 8 network driven by a 100 Hz impulse train, again with matrix modulation:
Further directions in this area could include designing a friendly interface. One could use a separate knob for each of the N^2 matrix coefficients, but that's unwieldy. I have found that using a fixed random mixing matrix and modulated, independent gain controls for each of the N channels produces results just as diverse as modulating the entire mixing matrix. An interface could be made by supplying N unlabeled knobs (likely fewer) and letting the user twist them for unpredictable and fun results.
OddVoices Dev Log 3: Pitch Contours
This is part of an ongoing series of posts about OddVoices, a singing synthesizer I've been building. OddVoices has a Web version, which you can now access at the newly registered domain oddvoices.org.
Unless we're talking about pitch correction settings, the pitch of a human voice is generally not piecewise constant. A big part of any vocal style is pitch inflections, and I'm happy to say that these have been greatly improved in OddVoices based on studies of real pitch data. But first, we need...
Pitch detection
A robust and high-precision monophonic pitch detector is vital to OddVoices for two reasons: first, the input vocal database needs to be normalized in pitch during the PSOLA analysis process, and second, the experiments we conduct later in this blog post require such a pitch detector.
There's probably tons of Python code out there for pitch detection, but I felt like writing my own implementation to learn a bit about the process. My requirements are that the pitch detector should work on speech signals, have high accuracy, be as immune to octave errors as possible, and not require an expensive GPU or a massive dataset to train. I don't need real time capabilities (although reasonable speed is desirable), high background noise tolerance, or polyphonic operation.
I shopped around a few different papers and spent long hours implementing different algorithms. I coded up the following:
Cepstral analysis [Noll1966]
Autocorrelation function (ACF) with prefiltering [Rabiner1977]
Harmonic Product Spectrum (HPS)
A simplified variant of Spectral Peak Analysis (SPA) [Dziubinski2004]
Special Normalized Autocorrelation (SNAC) [McLeod2008]
Fourier Approximation Method (FAM) [Kumaraswamy2015]
There are tons more algorithms out there, but these were the ones that caught my eye for some reason or another. All methods have their own upsides and downsides, and all of them are clever in their own ways. Some algorithms have parameters that can be tweaked, and I did my best to experiment with those parameters to try to maximize results for the test dataset.
I created a test dataset of 10000 random single-frame synthetic waveforms with fundamentals ranging from 60 Hz to 1000 Hz. Each one has harmonics ranging up to the Nyquist frequency, and the amplitudes of the harmonics are randomized and multiplied by \(1 / n\) where \(n\) is the harmonic number. Whether this is really representative of speech is not an easy question, but I figured it would be a good start.
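For concreteness, here is a sketch of how one such test frame can be generated. The random phases and the log-uniform distribution of fundamentals are my assumptions; only the random 1/n-weighted harmonic amplitudes are described above.

import numpy as np

def synthetic_frame(f0, length=2048, sample_rate=48000, rng=np.random):
    # Harmonic signal with random amplitudes weighted by 1/n, harmonics up to Nyquist.
    t = np.arange(length) / sample_rate
    frame = np.zeros(length)
    for n in range(1, int(sample_rate / 2 / f0) + 1):
        amplitude = rng.uniform(0, 1) / n
        phase = rng.uniform(0, 2 * np.pi)
        frame += amplitude * np.sin(2 * np.pi * n * f0 * t + phase)
    return frame / np.max(np.abs(frame))

# The real test set used 10000 frames; 100 keeps this example quick to run.
f0s = np.exp(np.random.uniform(np.log(60), np.log(1000), 100))
frames = [synthetic_frame(f0) for f0 in f0s]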
I scored each algorithm by how many times it produced a pitch within a semitone of the actual fundamental frequency. We'll address accuracy issues in a moment. The scores are:
(The score table did not survive formatting; cepstral analysis and simplified SPA scored highest, with HPS the clear outlier on the low end, as discussed below.)
All the algorithms performed quite acceptably with the exception of the Harmonic Product Spectrum, which leads me to conclude that HPS is not really appropriate for pitch detection, although it does have other applications such as computing the chroma [Lee2006].
What surprised me most is that one of the simplest algorithms, cepstral analysis, also appears to be the best! Confusingly, a subjective study of seven pitch detection algorithms by McGonegal et al. [McGonegal1977] ranked the cepstrum as the 2nd worst. Go figure.
I hope this comparison was an interesting one in spite of how small and unscientific the study is. Be reminded that it is always possible that I implemented one or more of the algorithms wrong, didn't tweak it in the right way, or didn't look much into strategies for improving it.
The final algorithm
I arrived at the following algorithm by crossbreeding my favorite approaches:
Compute the "modified cepstrum" as the absolute value of the IFFT of \(\log(1 + |X|)\), where \(X\) is the FFT of a 2048-sample input frame \(x\) at a sample rate of 48000 Hz. The input frame is not windowed -- for whatever reason that worked better!
Find the highest peak in the modified cepstrum whose quefrency is above a threshold derived from the maximum frequency we want to detect.
Find all peaks that exceed 0.5 times the value of the highest peak.
Find the peak closest to the last detected pitch, or if there is no last detected pitch, use the highest peak.
Convert quefrency into frequency to get the initial estimate of pitch.
Recompute the magnitude spectrum of \(x\), this time with a Hann window.
Find the values of the three bins around the FFT peak at the estimated pitch.
Use an artificial neural network (ANN) on the bin values to interpolate the exact frequency.
The idea of the modified cepstrum, i.e. adding 1 before taking the logarithm of the magnitude spectrum, is borrowed from Philip McLeod's dissertation on SNAC, and prevents taking the logarithm of values too close to zero. The peak picking method is also taken from the same resource.
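Here is a minimal sketch of the coarse, cepstral part of the estimate (steps 1 through 5). For simplicity it always takes the highest peak rather than tracking the previously detected pitch.

import numpy as np

def coarse_pitch(frame, sample_rate=48000, f_min=60.0, f_max=1000.0):
    # "Modified cepstrum": abs of the IFFT of log(1 + |FFT(x)|); the frame is unwindowed.
    spectrum = np.fft.rfft(frame)
    modified_cepstrum = np.abs(np.fft.irfft(np.log(1 + np.abs(spectrum))))
    q_min = int(sample_rate / f_max)   # quefrency bounds from the allowed pitch range
    q_max = int(sample_rate / f_min)
    peak_quefrency = q_min + np.argmax(modified_cepstrum[q_min:q_max])
    return sample_rate / peak_quefrency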
The use of an artificial neural network to refine the estimate is from the SPA paper [Dziubinski2004]. The ANN in question is a classic feedforward perceptron, and takes as input the magnitudes of three FFT bins around a peak, normalized so the center bin has an amplitude of 1.0. This means that the center bin's amplitude is not needed and only two input neurons are necessary. Next, there is a hidden layer with four neurons and a tanh activation function, and finally an output layer with a single neuron and a linear activation function. The output format of the ANN ranges from -1 to +1 and indicates the offset of the sinusoidal frequency from the center bin, measured in bins.
The ANN is trained on a set of synthetic data similar to the test data described above. I used the MLPRegressor in scikit-learn, set to the default "adam" optimizer. The ANN works astonishingly well, yielding average errors less than 1 cent against my synthetic test set.
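In scikit-learn the refinement network can be set up roughly as follows. The training-data generation is elided, and the placeholder arrays below only illustrate the shapes involved.

import numpy as np
from sklearn.neural_network import MLPRegressor

# X: normalized magnitudes of the two bins neighboring the spectral peak (center bin == 1).
# y: true frequency offset from the center bin, measured in bins, in [-1, +1].
X = np.random.rand(1000, 2)             # placeholder training inputs
y = np.random.uniform(-1, 1, 1000)      # placeholder training targets

refiner = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                       solver="adam", max_iter=2000)
refiner.fit(X, y)
offset_in_bins = refiner.predict(X[:1])   # refine one detected peak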
In spite of the efforts to find a nearly error-free pitch detector, the above algorithm still sometimes produces errors. Errors are identified as pitch data points that exceed a manually specified range. Errors are corrected by linearly interpolating the surrounding good data points.
Source code for the pitch detector is in need of some cleanup and is not yet publicly available as of this writing, but should be soon.
Vocal pitch contour phenomena
I'm sure the above was a bit dry for most readers, but now that we're armed with an accurate pitch detector, we can study the following phenomena:
Drift: low frequency noise from 0 to 6 Hz [Cook1996].
Jitter: high frequency noise from 6 to 12 Hz.
Vibrato: deliberate sinusoidal pitch variation.
Portamento: lagging effect when changing notes.
Overshoot: when moving from one pitch to another, the singer may extend beyond the target pitch and slide back into it [Lai2009].
Preparation: when moving from one pitch to another, the singer may first move away from the target pitch before approaching it.
There is useful literature on most of these six phenomena, but I also wanted to gather my own data and do a little replication work. I had a gracious volunteer sing a number of melodies consisting of one or two notes, with and without vibrato, and I ran them through my pitch detector to determine the pitch contours.
Drift and jitter: In his study, Cook reported drift of roughly -50 dB and jitter at about -60 to -70 dB. Drift has a roughly flat spectrum and jitter has a sloping spectrum of around -8.5 dB per octave. My data is broadly consistent with these figures, as can be seen in the below spectra.
Drift and jitter are modeled as \(f \cdot (1 + x)\), where \(f\) is the static base frequency and \(x\) is the relative deviation signal. The deviation \(x\) is treated as an amplitude and converted to decibels, and this is what is meant by drift and jitter having a decibel value.
Cook also notes that drift and jitter also exhibit a small peak around the natural vibrato frequency, here around 3.5 Hz. Curiously, I don't see any such peak in my data.
Synthesis can be done with interpolated value noise for drift and "clipped brown noise" for jitter, added together. Interpolated value noise is downsampled white noise with sine wave segment interpolation. Clipped brown noise is defined as a random walk that can't exceed the range [-1, +1].
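Here is a sketch of the two noise generators; the random-walk step size, the interpolation details, and the example mixing amplitudes are my own choices.

import numpy as np

def interpolated_value_noise(num_samples, frequency, sample_rate=48000, rng=np.random):
    # White noise held at `frequency` control points, joined by half-cosine ("sine segment") curves.
    num_points = int(num_samples * frequency / sample_rate) + 2
    points = rng.uniform(-1, 1, num_points)
    position = np.arange(num_samples) * frequency / sample_rate
    index = position.astype(int)
    smooth = (1 - np.cos(np.pi * (position - index))) / 2
    return points[index] * (1 - smooth) + points[index + 1] * smooth

def clipped_brown_noise(num_samples, step=0.01, rng=np.random):
    # Random walk constrained to [-1, +1].
    out = np.zeros(num_samples)
    value = 0.0
    for i in range(num_samples):
        value = float(np.clip(value + rng.uniform(-step, step), -1.0, 1.0))
        out[i] = value
    return out

deviation = 0.003 * interpolated_value_noise(48000, 3.0) + 0.0005 * clipped_brown_noise(48000)
# frequency contour = f * (1 + deviation)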
Vibrato is, not surprisingly, a sine wave LFO. However, a perfect sine wave sounds pretty unrealistic. Based on visual inspection of vibrato data, I multiplied the sine wave by random amplitude modulation with interpolated value noise. The frequency of the interpolated value noise is the same as the vibrato frequency.
Also note that vibrato takes a moment to kick in, which is simple enough to emulate with a little envelope at the beginning of each note.
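A corresponding vibrato sketch is below; the 5.5 Hz rate, 3% depth, and onset time are illustrative values, and plain linear interpolation stands in for the value noise used as amplitude modulation.

import numpy as np

def vibrato_contour(num_samples, rate=5.5, depth=0.03, onset_time=0.3, sample_rate=48000):
    # Sine LFO, amplitude-modulated by value noise at the vibrato rate, with an onset envelope.
    t = np.arange(num_samples) / sample_rate
    lfo = np.sin(2 * np.pi * rate * t)
    num_points = int(num_samples * rate / sample_rate) + 2
    am = np.interp(t * rate, np.arange(num_points), np.random.uniform(0.7, 1.3, num_points))
    envelope = np.clip(t / onset_time, 0.0, 1.0)
    return depth * lfo * am * envelope   # multiply the note's base frequency by (1 + this)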
Portamento, overshoot, and preparation I couldn't find much research on, so I sought to collect a good amount of data on them. I asked the singer to perform two-note melodies consisting of ascending and descending m2, m3, P4, P5, and P8, each four times, with instructions to use "natural portamento." I then ran all the results through the pitch tracker and visually measured rough averages of preparation time, preparation amount, portamento time, overshoot time, and overshoot amount. Here's the table of my results.
(Table: rows are the ascending and descending forms of m2, m3, P4, P5, and P8; columns are preparation time, preparation amount, portamento time, overshoot time, and overshoot amount. The numeric entries did not survive formatting; several rows are marked "no preparation" or "no overshoot".)
As one might expect, portamento time gently increases as the interval gets larger. There is no preparation for downward intervals, and spotty overshoot for upward intervals, both of which make some sense physiologically -- you're much more likely to involuntarily relax in pitch rather than tense up. Overshoot and preparation amounts have a slight upward trend with interval size. The overshoot time seems to have a downward trend, but overshoot measurement is pretty unreliable.
Worth noting is the actual shape of overshoot and preparation.
In OddVoices, I model these three pitch phenomena by using quarter-sine-wave segments, and assuming no overshoot when ascending and no preparation when descending.
Pitch detection and pitch contours consumed most of my time and energy recently, but there are a few other updates too.
As mentioned earlier, I registered the domain oddvoices.org, which currently hosts a copy of the OddVoices Web interface. The Web interface itself looks a little bland -- I'd even say unprofessional -- so I have plans to overhaul it especially as new parameters are on the way.
The README has been heavily updated, taking inspiration from the article "Art of README". I tried to keep it concise and prioritize information that a casual reader would want to know.
[Noll1966]
Noll, A. Michael. 1966. "Cepstrum Pitch Determination."
[Rabiner1977]
Rabiner, L. 1977. "On the Use of Autocorrelation Analysis for Pitch Detection."
[Dziubinski2004]
Dziubinski, M. and Kostek, B. 2004. "High Accuracy and Octave Error Immune Pitch Detection Algorithms."
[McLeod2008]
McLeod, Philip. 2008. "Fast, Accurate Pitch Detection Tools for Music Analysis."
[Kumaraswamy2015]
Kumaraswamy, B. and Poonacha, P. G. 2015. "Improved Pitch Detection Using Fourier Approximation Method."
[Cook1996]
Cook, P. R. 1996. "Identification of Control Parameters in an Articulatory Vocal Tract Model with Applications to the Synthesis of Singing."
[Lai2009]
Lai, Wen-Hsing. 2009. "An F0 Contour Fitting Model for Singing Synthesis."
[Lee2006]
Lee, Kyogu. 2006. "Automatic Chord Recognition from Audio Using Enhanced Pitch Class Profile."
[McGonegal1977]
McGonegal, Carol A. et al. 1977. "A Subjective Evaluation of Pitch Detection Methods Using LPC Synthesized Speech."
OddVoices Dev Log 2: Phase and Volume
This is the second in an ongoing series of dev updates about OddVoices, a singing synthesizer I've been developing over the past year. Since we last checked in, I've released version 0.0.1. Here are some of the major changes.
New voice!
Exciting news: OddVoices now has a third voice. To recap, we've had Quake Chesnokov, a powerful and dark basso profondo, and Cicada Lumen, a bright and almost synth-like baritone. The newest voice joining us is Air Navier (nav-YEH), a soft, breathy alto. Air Navier makes a lovely contrast to the two more classical voices, and I'm imagining it will fit in great in a pop or indie rock track.
Goodbye GitHub
OddVoices makes copious use of Git LFS to store original recordings for voices, and this caused some problems for me this past week. GitHub's free tier caps the amount of Git LFS storage and the monthly download bandwidth at 1 gigabyte. It is possible to pay $5 to add 50 GB to both storage and bandwidth limits. These purchases are "data packs" and are orthogonal to GitHub Pro.
What's unfortunate is that all downloads by anyone (including those on forks) contribute to the monthly download bandwidth, and even worse, downloads from GitHub Actions do also. I am easily running CI dozens of times per week, and multiplied by the gigabyte or so of audio data, the plan is easily maxed out.
A free GitLab account has a much more workable storage limit of 10 GB, and claims unlimited bandwidth for now. GitLab it is. Consider this a word of warning for anyone making serious use of Git LFS together with GitHub, and especially GitHub Actions.
Goodbye MBR-PSOLA
OddVoices, taking after speech synthesizers of the 90's, is based on concatenation of recorded segments. These segments are processed using PSOLA, which turns them into a sequence of frames (grains), each for one pitch period. PSOLA then allows manipulation of the segment in pitch, time, and formants, and sounds pretty clean. The synthesis component is also computationally efficient.
One challenge with a concatenative synthesizer is making the segments blend together nicely. We are using a crossfade, but a problem arises -- if the phases of the overlapping frames don't approximately match, then unnatural croaks and "doubling" artifacts happen.
There is a way to solve this: manually. If one lines up the locations of the frames so they are centered on the exact times when the vocal folds close (the so-called "glottal closure instant" or GCI), the phases will match. Since it's difficult to find the GCI from a microphone signal, an electroglottograph (EGG) setup is typically used. I don't have an EGG on hand, and I'm working remotely with singers, so this solution has to be ruled out.
A less daunting solution is to use FFT processing to make all phases zero, or set every frame to minimum phase. These solve the phase mismatch problem but sound overtly robotic and buzzy. (Forrest Mozer's TSI S14001A speech synthesis IC, memorialized in chipspeech's Otto Mozer, uses the zero phase method -- see US4214125A.) MBR-PSOLA softens the blows of these methods by using a random set of phases that are fixed throughout the voice database. Dutoit recommends only randomizing the lower end of the spectrum while leaving the highs untouched. It sounds pretty good, but there is still an unnatural hollow and phasey quality to it.
I decided to search around the literature and see if there's any way OddVoices can improve on MBR-PSOLA. I found [Stylianou2001], which seems to fit the bill. It recommends computing the "center" of a grain, then offsetting the frame so it is centered on that point. The center is not the exact same as the GCI, but it acts as a useful stand-in. When all grains are aligned on their centers, their phases should be roughly matched too -- and all this happens without modifying the timbre of the voice, since all we're doing is a time offset.
I tried this on the Cicada voice, and it worked! I didn't conduct any formal listening experiment, but it definitely sounded clearer and lacking the weird hollowness of the MBROLA voice. Then I tried it on the Quake voice, and it sounded extremely creaky and hoarse. This is the result of instabilities in the algorithm, producing random timing offsets for each grain.
Frame adjustment
Let \(x[t]\) be a sampled quasiperiodic voice signal with period \(T\), with a sample rate of \(f_s\). We round \(T\) to an integer, which works well enough for our application. Let \(w[t]\) be a window function (I use a Hann window) of length \(2T\). Brackets are zero-indexed, because we are sensible people here.
The PSOLA algorithm divides \(x\) into a number of frames of length \(2T\), where the \(n\)-th frame is given by \(s_n[t] = w[t] x[t + nT]\).
Stylianou proposes the "differentiated phase spectrum" center, or DPS center, which is computed like so:
\begin{equation*} \eta = \frac{T}{2\pi} \arg \sum_{t = -T}^{T - 1} s_n^2[t] e^{2 \pi j t / T} \end{equation*}
\(\eta\) is here expressed in samples. The DPS center is not the GCI. It's... something else, and it's admitted in the paper that it isn't well defined. However, it is claimed that it will be close enough to the GCI, hopefully by a near-constant offset. To normalize a frame on its DPS center, we recalculate the frame with an offset of \(\eta\): \(s'_n[t] = w[t] x[t + nT + \text{round}(\eta)]\).
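In code, the DPS center and the recentering step look roughly like this. This is my reading of the formula above, not code from the paper, and the window is assumed to be a Hann window of length 2T.

import numpy as np

def dps_center(frame, period):
    # Differentiated-phase-spectrum center of a 2T-sample frame, in samples.
    t = np.arange(-period, period)
    phasor = np.exp(2j * np.pi * t / period)
    return period / (2 * np.pi) * np.angle(np.sum(frame ** 2 * phasor))

def recentered_frame(x, n, period):
    # Re-extract frame n of signal x, offset so that it sits on its DPS center.
    # (Boundary checks at the edges of x are omitted for brevity.)
    window = np.hanning(2 * period)
    start = n * period
    eta = int(round(dps_center(window * x[start:start + 2 * period], period)))
    return window * x[start + eta:start + eta + 2 * period]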
The paper also discusses the center of gravity of a signal as a center close to the GCI. However, the center of gravity is less robust than the DPS center, as it can be shown that the center of gravity can be computed from just a single bin of the discrete Fourier transform, whereas the DPS center involves the entire spectrum.
Here's where we go beyond the paper. As discussed above, for certain signals \(\eta\) can be noisy, and using this algorithm as-is can result in audible jitter in the result. The goal, then, is to find a way to remove noise from \(\eta\).
After many hours of experimenting with different solutions, I ended up doing a lowpass filter on \(\eta\) to remove high-frequency noise. A caveat is that \(\eta\) is a circular value that wraps around with period \(T\), and performing a standard lowpass filter will smooth out discontinuities produced by wrapping, which is not what we want. The trick is to use an encoding common in circular statistics, and especially in machine learning: convert it to sine and cosine, perform filtering on both signals, and convert it back with atan2. A rectangular FIR filter worked perfectly well for my application.
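The circular smoothing trick, sketched below; the length of the rectangular FIR kernel is a guess rather than the value used in OddVoices.

import numpy as np

def smooth_circular(eta, period, kernel_size=15):
    # Lowpass-filter a circular signal (values wrap with `period`) via sin/cos encoding.
    angle = 2 * np.pi * np.asarray(eta) / period
    kernel = np.ones(kernel_size) / kernel_size        # rectangular FIR
    smoothed_sin = np.convolve(np.sin(angle), kernel, mode="same")
    smoothed_cos = np.convolve(np.cos(angle), kernel, mode="same")
    return period * np.arctan2(smoothed_sin, smoothed_cos) / (2 * np.pi)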
Overall the result sounds pretty good. There are still some minor issues with it, but I hope to iron those out in future versions.
Volume normalization
I encountered two separate but related issues regarding the volume of the voices. The first is that the voices are inconsistent in volume -- Cicada was much louder than the other two. The second, and the more serious of the two, is that segments can have different volumes when they are joined, and this results in a "choppy" sound with discontinuities.
I fixed global volume inconsistency by taking the RMS amplitude of the entire segment database and normalizing it to -20 dBFS. For voices with higher dynamic range, this caused some of the louder consonants to clip, so I added a safety limiter that ensures the peak amplitude of each frame is no greater than -6 dBFS.
Segment-level volume inconsistency can be addressed by examining diphones that join together and adjusting their amplitudes accordingly. Take the phoneme /k/, and gather a list of all diphones of the form k* and *k. Now inspect the amplitudes at the beginning of k* diphones, and the amplitudes at the end of *k diphones. Take the RMS of all these amplitudes together to form the "phoneme amplitude." Repeat for all other phonemes. Then, for each diphone, apply a linear amplitude envelope so that the beginning frames match the first phoneme's amplitude and the ending frames match the second phoneme's amplitude. The result is that all joined diphones will have a matched amplitude.
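Sketched in Python; the data structures (a dict mapping two-character diphone names to per-frame amplitudes) are invented here for illustration and are not OddVoices' actual representation.

import numpy as np

def phoneme_amplitudes(diphones):
    # `diphones` maps a two-phoneme name like "ke" to a list of per-frame amplitudes.
    samples = {}
    for name, amps in diphones.items():
        first, second = name[0], name[1]
        samples.setdefault(first, []).append(amps[0])    # start of a k* diphone
        samples.setdefault(second, []).append(amps[-1])  # end of a *k diphone
    return {p: float(np.sqrt(np.mean(np.square(a)))) for p, a in samples.items()}

def match_diphone(amps, start_target, end_target):
    # Linear amplitude envelope so that joined diphones meet at the same level.
    amps = np.asarray(amps, dtype=float)
    gains = np.linspace(start_target / amps[0], end_target / amps[-1], len(amps))
    return amps * gains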
The volume normalization problem in particular taught me that developing a practical speech or singing synthesizer requires a lot more work than papers and textbooks might make you think. Rather, the descriptions in the literature are only baselines for a real system.
More is on the way for OddVoices. I haven't yet planned out the 0.0.2 release, but my hope is to work on refining the existing voices for intelligibility and naturalness instead of adding new ones.
[Stylianou2001]
Stylianou, Yannis. 2001. "Removing Linear Phase Mismatches in Concatenative Speech Synthesis."
OddVoices Dev Log 1: Hello World!
The free and open source singing synthesizer landscape has a few projects worth checking out, such as Sinsy, eCantorix, meSing, and MAGE. While each one has its own unique voice and there's no such thing as a bad speech or singing synthesizer, I looked into all of them and more and couldn't find a satisfactory one for my musical needs.
So, I'm happy to announce OddVoices, my own free and open source singing synthesizer based on diphone concatenation. It comes with two English voices, with more on the way. If you're not some kind of nerd who uses the command line, check out OddVoices Web, a Web interface I built for it with WebAssembly. Just upload a MIDI file and write some lyrics and you'll have a WAV file in your browser.
OddVoices is based on MBR-PSOLA, which stands for Multi-Band Resynthesis Pitch Synchronous Overlap Add. PSOLA is a granular synthesis-based algorithm for playback of monophonic sounds such that the time, formant, and pitch axes can be manipulated independently. The MBR part is a slight modification to PSOLA that prevents unwanted phase cancellation when crossfading between concatenated samples, and solves other problems too. For more detail, check out papers from the MBROLA project. The MBROLA codebase itself has some tech and licensing issues I won't get into, but the algorithm is perfect for what I want in a singing synth. Note that OddVoices doesn't interface with MBROLA.
I'll use this post to discuss some of the more interesting challenges I had to work on in the course of the project so far. This is the first in a series of posts I will be making about the technical side of OddVoices.
Vowel mergers
OddVoices currently only supports General American English (GA), or more specifically the varieties of English that I and the singers speak. I hope in the future that I can correct this bias by including other languages and other dialects of English.
When assembling the list of phonemes, the cot-caught merger immediately came up. I decided to merge them, and make /O/ and /A/ aliases except for /Or/ and /Ar/ (here using X-SAMPA). To reduce the number of phonemes and therefore phoneme combinations, I represent /Or/ internally as /oUr/.
A more interesting merger concerns the problem of the schwa. In English, the schwa is used to represent an unstressed syllable, but the actual phonetics of that syllable can vary wildly. In singing, a syllable that would be unstressed in spoken English can be drawn out for multiple seconds and become stressed. The schwa isn't actually sung in these cases, and is replaced with another phoneme. As one of the singers put it, "the schwa is a big lie."
This matters when working with the CMU Pronouncing Dictionary, which I'm using for pronouncing text. Take a word like "imitate" -- the second syllable is unstressed, and the CMUDict transcribes it as a schwa. But when sung, it's more like /I/. This is simply a limitation of the CMUDict that I don't have a good solution for. In the end I merge /@/ with /V/, since the two are closely related in GA. Similarly, /3`/ and /@`/ are merged, and the CMUDict doesn't even distinguish those.
Real-time vs. semi-real-time operation
A special advantage of OddVoices over alternative offerings is that it's built from scratch to work in real time. That means that it can become a UGen for platforms like SuperCollider and Pure Data, or even a VST plugin in the far future. I have a SuperCollider UGen in the works, but there's some tricky engineering work involving communication between RT and NRT threads that I haven't tackled yet. Stay tuned.
There is a huge caveat to real time operation: singers don't operate in perfect real time! To see why, imagine the lyrics "rice cake," sung with two half notes. The final /s/ in "rice" has to happen before the initial /k/ in "cake," and the latter happens right on the third beat, so the singer has to anticipate the third beat with the consonant /s/. But in MIDI and real-time keyboard playing, there is no way to predict when the note off will happen until the third beat has already arrived.
VOCALOID handles this by being its own DAW with a built-in sequencer, so it can look ahead as much as it needs. chipspeech and Alter/Ego work in real time. In their user guides, they ask the user to shorten every MIDI note to around 50%-75% of its length to accommodate final consonant clusters. If this is not done, a phenomenon I call "lyric drift" happens and the lyrics misalign from the notes.
OddVoices supports two possible modes: true real-time mode and semi-real-time mode. In true real-time mode, we don't know the durations of notes, so we trigger the final consonant cluster on a note off. Like chipspeech and Alter/Ego, this requires manual shortening of notes to prevent lyric drift. Alternatively, OddVoices supports a semi-real-time mode where every note on is accompanied by the duration of the note. This way OddVoices can predict the timing of the final consonant cluster, but still operate in otherwise real-time.
Semi-real-time mode is used in OddVoices' MIDI frontend, and can also be used in powerful sequencing environments like SC and Pd by sending a "note length" signal along with the note on trigger. I think it's a nice compromise between the constraints of real-time and the omniscience of non-real-time.
Syllable compression
After I implemented semi-real-time mode, another problem remained that reared its head in fast singing passages. This happens when, say, the lyric "rice cake" is sung very quickly, and the diphones _r raI aIs (here using X-SAMPA notation), when concatenated, will be longer than the note length. The result is more lyric drift -- the notes and the lyrics diverge.
The fix for this was to peek ahead in the diphone queue and find the end of the final consonant cluster, then add up all the segment lengths from the beginning to that point. This is how long the entire syllable would last. This is then compared to the note length, and if it is longer, the playback speed is increased for that syllable to compensate. In short, consonants have to be spoken quickly in order to fit in quickly sung passages.
The result is still not entirely satisfactory to my ears, and I plan to improve it in future versions of the software. Syllable compression is of course only available in semi-real-time mode.
Syllable compression is evidence that fast singing is phonetically quite different from slow singing, and perhaps more comparable to speech.
Stray thoughts
This is my second time using Emscripten and WebAssembly in a project, and I find it an overall pleasant technology to work with (especially with embind for C++ bindings). I did run into an obstacle, however, which was that I couldn't figure out how to compile libsndfile to WASM. The only feature I needed was writing a 16-bit mono WAV file, so I dropped libsndfile and wrote my own code for that.
I was surprised by the compactness of this project so far. The real-time C++ code adds up to 1,400 lines, and the Python offline analysis code only 600.
Hearing Graphs
My latest project is titled Hearing Graphs, and you can access it by clicking on the image below.
Hearing Graphs is a sonification of graphs using the graph spectrum -- the eigenvalues of the graph's adjacency matrix. Both negative and positive eigenvalues are represented by piano samples, and zero eigenvalues are interpreted with a bass drum. The multiplicity of each eigenvalue is represented by hitting notes multiple times.
Most sonifications establish some audible relationship between the source material and the resulting audio. This one remains mostly incomprehensible, especially to a general audience, so I consider it a failed experiment in that regard. Still, it was fun to make.
Moisture Bass
If you haven't heard of the YouTube channel Bunting, it gets my strong recommendation. Bunting creates excellent style imitations of experimental bass music artists and breaks them down with succinct explanations. Notable is his minimal tooling: he uses mostly Ableton Live stock plugins and the free and open source wavetable synth Vital.
His latest tutorial, mimicking the style of the artist Resonant Language, contains several bass sounds with a property he calls "moisture" (timestamp). These bass sounds are created by starting with a low saw wave, boosting the highs, and running the result through Ableton Live's vocoder set on "Modulator" mode. According to the manual, this enables self-vocoding, where the same signal is the modulator and carrier. An abstract view of a vocoder would suggest that this does little or nothing to the saw wave other than change its spectral tilt, but the reality is much more interesting. Hear for yourself an EQ'd saw wave before and after self-vocoding:
A closer inspection of the latter waveform shows why the self-vocoded saw sounds the way it does. Here's a single pitch period:
The discontinuity in the saw signal is decorated with a chirp, or a sine wave that rapidly descends in frequency. This little 909 kick drum every pitch period is responsible for the "moisture" sound. Certainly there have been no studies on the psychoacoustics of moisture bass (for lack of a better term), but I suspect that it mimics dispersive behavior, lending a vaguely acoustic sound.
The chirp originates from the bandpass filters in the vocoder. The frequencies of the vocoder are exponentially spaced, so the bandpass filters have to increase in bandwidth for higher frequencies to cover the gaps. Larger bandwidth means lower Q, and lower Q reduces the ring time in the filter's impulse response. The result is that low frequencies ring longer when the vocoder is pinged and high frequencies ring shorter. Mix them all together, and you have an impulse response resembling a chirp.
Self-vocoding with exponentially spaced bands is clever, but it isn't the only way to create this effect. One option is to eliminate the vocoding part and use only the exponentially spaced bandpass filters, like an old-school filter bank. This sounds just like self-vocoding but requires fewer bandpass filters to work. In my experiments, I found that putting the resulting signal through nonlinear distortion is necessary to bring out the moisture property.
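A quick way to convince yourself of the chirp explanation is to ping a bank of exponentially spaced bandpass filters with an impulse. This scipy sketch uses a band count, frequency range, and bandwidth that are my guesses rather than anything taken from Ableton's vocoder.

import numpy as np
from scipy.signal import butter, lfilter

sample_rate = 48000
impulse = np.zeros(sample_rate // 10)
impulse[0] = 1.0

centers = 80.0 * 2.0 ** np.linspace(0, 7, 24)     # 24 bands, ~80 Hz to ~10 kHz
response = np.zeros_like(impulse)
for fc in centers:
    b, a = butter(2, [fc / 1.3, fc * 1.3], btype="bandpass", fs=sample_rate)
    response += lfilter(b, a, impulse)
# Low bands ring long, high bands ring short: summed, the impulse response is a falling chirp.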
A more direct approach is to use wavefolding on a curved signal. The slope of the input signal controls the rate at which it scrubs through the wavefolding function, and thus controls the frequency of the resulting triangle wave. By modulating the slope from high in absolute value down to zero, a triangle wave descending in frequency is created. This is best explained visually:
And here's how it sounds:
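Here's a small Python sketch of the wavefolding idea (an illustration only, not the exact patch): fold a parabolic ramp whose slope decays to zero, and you get a triangle wave whose frequency falls to zero, i.e. a downward chirp.

```python
import numpy as np

sr = 44100
dur = 0.05                                        # one "chirp" lasting 50 ms
t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)

drive = 30.0                                      # how many folds the ramp passes through
x = drive * (1.0 - t / dur) ** 2                  # curved input: slope starts large, decays to zero

# Wavefolder: reflect the input through a unit-period triangle function.
folded = 2.0 * np.abs(x - np.round(x)) - 0.5

# The instantaneous frequency of 'folded' is |dx/dt| cycles per second,
# which here falls linearly from about 1.2 kHz to 0: a little descending chirp.
```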
Announcing Canvas
Canvas (working title) is a visual additive synthesizer for Windows, macOS, and Linux where you can create sound by drawing an image. This is accomplished with a bank of 239 oscillators spaced at quarter tones, with stereo amplitudes mapped to the red and blue channels of the image. You can import images and sonify them, you can import sounds and convert their spectrograms to images, and you can apply several image filters like reverb, tremolo, and chorus. Check out the demo:
Canvas is directly inspired by the Image Synth from U&I Software's MetaSynth. When I first heard of MetaSynth, I wrote it off as a gimmick that could only produce sweeps and whooshes. It wasn't until I heard Benn Jordan's recent demo and learned of its powerful set of image filters that I was immediately sold on the approach. I decided to build an alternative that's just featureful enough to yield musical results, a sort of MS Paint to MetaSynth's Photoshop.
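As a rough sketch of the image-to-sound mapping described above (a simplified reconstruction from the description, not Canvas's actual source; the base frequency and the row orientation are assumptions):

```python
import numpy as np

def render_image(img, sr=44100, duration=4.0, f0=20.0):
    """img: float array of shape (239, width, 3), values in [0, 1].
    Row i drives an oscillator i quarter tones above f0; the red channel sets
    the left amplitude, the blue channel the right; columns are time."""
    n_osc, width, _ = img.shape
    n = int(sr * duration)
    t = np.arange(n) / sr
    cols = np.minimum(np.arange(n) * width // n, width - 1)   # map output samples to image columns
    out = np.zeros((n, 2))
    for i in range(n_osc):
        freq = f0 * 2.0 ** (i / 24.0)                         # quarter-tone spacing
        osc = np.sin(2 * np.pi * freq * t)
        out[:, 0] += img[i, cols, 0] * osc                    # red -> left
        out[:, 1] += img[i, cols, 2] * osc                    # blue -> right
    return out / n_osc

# A single bright row produces one steady quarter-tone partial.
demo = np.zeros((239, 64, 3))
demo[120, :, 0] = demo[120, :, 2] = 1.0
stereo = render_image(demo, duration=0.5)
```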
Canvas has a lot of rough edges, and currently requires building from scratch if using Linux or macOS. Nevertheless, I hope it is of some interest to the music tech community.
State board chapter 12
Inorganic chemistry is the study of substances that do not contain the element carbon, but may contain which element?
A substance that cannot be broken down into simpler substances without a loss of identity is
An element
Chemically combining two or more atoms in definite proportion forms
A molecule
A(n) ____________ is a stable physical mixture of two or more substances in a solvent
A(n) __________ is a substance dissolved into a solution
Solute
Liquids that are not capable of being mixed together to form stable solutions are considered
Immiscible
Unstable physical mixtures of undissolved particles in a liquid are
An unstable physical mixture of two or more immiscible substances plus a special ingredient is
An emulsion
A substance that allows oil and water to mix or emulsify is
A surfactant
The tail of a surfactant molecule is oil-loving, or
Lipophilic
An atom or molecule that carries an electrical charge is called an
Alpha hydroxy acids (AHAs) are derived from ___________ and used in a salon to exfoliate the skin and to help adjust the pH of certain products
Chemical reactions that release a significant amount of heat under certain circumstances are
Exothermic
A substance that has a pH below 7.0 is considered to be
Alkanolamines are often used in place of ammonia because they
Produce less odor
Which of these is not composed of organic chemicals
Elemental molecules contain two or more ___________ of the same element in definite proportions
Vapor is _____________ that has evaporated into a gas-like state
A liquid
An oxidizing agent is a substance that releases
A pure substance is a chemical combination of matter in _________ proportions
The glitter in nail polish that can separate from the polish is an example of
A Suspension
Water-in-oil emulsions feel ___________ than oil-in-water emulsions
Greasier
The ingredient used to raise the pH in hair products to allow the solution to penetrate the hair shaft is
Volatile organic compounds contain _____________ and evaporate very easily
The chemical reaction that combines a substance with oxygen to produce an oxide is
The term logarithm means multiples of
A chemical reaction in which oxidation and reduction take place at the same time is
Any substance that occupies space and has mass is
Characteristics that can only be determined by a chemical reaction and a chemical change in the substance are
Chemical combination of matter in definite proportion is a
Pure substance
A physical combination of matter in any proportion is a
Physical mixture
The ____________ is the smallest chemical component of an element
Rapid oxidation of a substance, accompanied by the production of heat and light, is
Characteristics that can be determined without a chemical reaction and that do not involve a chemical change in the substance are
Shape optimization
We now consider the problem of finding the shape of singular points from shape sensitivity; that is, for a given cost function $F$, find a mapping $\varphi ^O\in W^{1,\infty }(\Omega ;\mathbb{R}^d)$ such that $F(u^{\varphi ^O}) \lt F(u)$, where $u$ and $u^{\varphi ^O}$ are the solutions of problems (2.1) and (2.25a) with $\varphi _t=x+t\mu ^O$, $\varphi ^O=\varphi _{\epsilon ^O}$, $\mu ^O\in W^{1,\infty }(\Omega ;\mathbb{R}^d)$, and a number $\epsilon ^O \gt 0$. In our shape optimization there are cases in which $\varphi ^O(\Omega )=\Omega $ while the singular set still moves, such as mixed boundary conditions with $\varphi ^O(\Gamma _D)\neq \Gamma _D$. Expanding the cost function $F$ of $u^{\varphi _t}$, $\varphi _t=x+t\mu $, $\mu \in W^{1,\infty }(\Omega ;\mathbb{R}^d)$, with respect to $t$, we derive $$ F(u^{\varphi _t})=F(u)+t\,dF(u)[\mu ]+o(t),\qquad dF(u)[\mu ] =\left .\frac{d}{dt}F(u^{\varphi _t})\right |_{t=0}.$$

The shape derivative $dF(u)[\mu ]$ is usually represented by the boundary expression \begin{equation} dF(u)[\mu ]=\int _{\partial \Omega }g_{\Gamma }(\mu \cdot n)\, ds \tag{3.13} \end{equation} with an integrable function $g_{\Gamma }$ defined on $\Gamma $, provided the boundary $\partial \Omega $ is smooth [Theorem 2.27, Sok92]. The advantage of the boundary expression (3.13) is that a descent direction $-g_{\Gamma }n$ is readily available. However, a numerical instability appears when the nodes on the boundary $\partial \Omega $ are moved directly using the shape gradient $-g_{\Gamma }n$. There are two ways to eliminate such instability. One is to treat it as a problem of numerical stability with a finite-dimensional design space, called the parametric method (see e.g. [Sa99]). The other is to find general shapes from the sensitivity of a shape function with respect to an arbitrary perturbation of these shapes, called the nonparametric method, in which (before discretization) the design space is infinite-dimensional.

In Japan there is a well-known nonparametric method called the H1-gradient method (originally called the "traction method") [Az94, A-W96, Az20]: find $\mu ^O$ by solving the auxiliary variational problem \begin{equation} b(\mu ^O,\eta ) = -dF(u)[\eta ] \quad \forall \eta \in M(\Omega ) \tag{3.14} \end{equation} where $b(\cdot ,\cdot )$ is a coercive bilinear form, \begin{equation} b(\eta ,\eta )\ge \alpha _b\|\eta \|_{1,\Omega }^2 \quad \forall \eta \in M(\Omega ) \tag{3.15} \end{equation} with a constant $\alpha _b \gt 0$, and $M(\Omega )$ is a suitable subspace of $H^1(\Omega ;\mathbb{R}^d)$. Azegami [Az94, A-W96] chose for $b(\cdot ,\cdot )$ the bilinear form of linear elasticity. A similar nonparametric method is used by French researchers; for example, Allaire [A-P06, Al07] uses the bilinear form $b(\mu ,\eta ) =\int _{\Omega }\{\nabla \mu :\nabla \eta +\mu \cdot \eta \}\, dx$, and we also call this the H1-gradient method here.

By replacing $dF(u)[\mu ]$ in (3.14) with the shape sensitivities given by the GJ-integral, we have, for any $\eta \in M(\Omega )$, when the conditions in Corollary 2.13 are satisfied, \begin{equation} b(\mu ^O,\eta )= \begin{cases} R_{\Omega }(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{R_{\Omega }(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} & \textrm{(mean compliance)}\\ -\delta R_{\Omega }(u,\eta )[u_g] -\int _{\partial \Omega } (fu_g+\widehat{g}(u)-\nabla _z\widehat{g}(u)u)(\eta \cdot n)\,ds & \left (F(u)=\int _{\Omega }\widehat{g}(u)\,dx\right ) \end{cases} \tag{3.16} \end{equation} where $u_g$ is the solution of problem (3.8).
Here we notice that the right-hand side of (3.16) takes a finite value for every weak solution of (2.1). The solution $\mu ^O$ of (3.16) is unique. Putting $\varphi ^O_t=x+t\mu ^O$, we have \begin{eqnarray} F(u^{\varphi ^O_t})&=&F(u)+t\left .\frac{d}{dt}F(u^{\varphi ^O_t})\right |_{t=0}+o(t)\notag \\ &=&F(u)-t\,b(\mu ^O,\mu ^O)+o(t)\notag \\ &\le &F(u)-t\,\alpha _b\|\mu ^O\|_{1,2,\Omega }^2+o(t) \tag{3.17} \end{eqnarray} If $\mu ^O\neq 0$, we can take an appropriate number $\epsilon ^O$ such that $F(u^{\varphi _t^O}) \lt F(u)$ for $0 \lt t\le \epsilon ^O$, so that $\varphi ^O=x+\epsilon ^O\mu ^O$ gives an improvement of the singular points with respect to the cost function $F$.

We already indicated that $[\eta \mapsto R_{\Omega }(u,\eta )]$ is continuous on $W^{1,\infty }(\Omega ;\mathbb{R}^d)$, but in the H1-gradient method $[\eta \mapsto R_{\Omega }(u,\eta )]$ needs to be continuous on $H^1(\Omega ;\mathbb{R}^d)$. If there is some regularity such as $u\in H^s(\Omega ;\mathbb{R}^m)$, $s \gt 1$, then we can extend $[\eta \mapsto R_{\Omega }(u,\eta )]$ to a continuous functional on $H^1(\Omega ;\mathbb{R}^m)$. In numerical calculations of (3.16), for example by FEM, (3.16) is well defined if $R_{\Omega }(u_h,\mu )$ gives a good approximation of $R_{\Omega }(u,\mu )$ for an FE-approximation $u_h$ of $u$.

In the interface problem (2.51), the H1-gradient method becomes as follows when the conditions in Corollary 2.18 are satisfied: \begin{equation} b(\mu ^O,\eta )= \begin{cases} \sum _{\kappa =1}^K R_{\Omega _{\kappa }}(u,\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{\sum _{\kappa =1}^KR_{\Omega _{\kappa }}(u_{\kappa },\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} &\textrm{(mean compliance)}\\ -\sum _{\kappa =1}^K\delta R_{\Omega _{\kappa }}(u,\eta )[u_g] -\int _{\partial \Omega } (fu_g+\widehat{g}(u)-\nabla _z\widehat{g}(u)u)(\eta \cdot n)\,ds & \left (F=\int _{\Omega }\widehat{g}(u)\,dx\right ) \end{cases} \tag{3.18} \end{equation} The H1-gradient of an eigenvalue, when the conditions in Theorem 3.3 are satisfied, is \begin{equation} b(\mu ^O,\eta )=2R_{\Omega }^E(u_{\lambda },\eta )+\lambda \int _{\partial \Omega }u_{\lambda }^2(\eta \cdot n)\,ds \tag{3.19} \end{equation} For the shape optimization of energy and mean compliance under the Stokes problem, the H1-gradient becomes \begin{equation} b(\mu ^O,\eta )= \begin{cases} R_{\Omega }^S((u,p),\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds & \textrm{(energy)}\\ -2\left \{R_{\Omega }^S((u,p),\eta )+\int _{\partial \Omega }fu(\eta \cdot n)\,ds\right \} & \textrm{(mean compliance)} \end{cases} \tag{3.20} \end{equation}
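To see the regularizing effect of (3.14) in the simplest possible setting, the following schematic 1-D Python sketch compares the raw gradient with the H1-gradient obtained from a finite-difference discretization of $b(\mu ,\eta )=\int (\mu '\eta '+\mu \eta )\,dx$. It is only a toy analogue of the idea, not an implementation of the methods of [Az94, A-W96] or [A-P06].

```python
import numpy as np

# 1-D analogue of the H1-gradient method: instead of moving nodes by the raw
# (possibly rough) shape gradient -g, solve b(mu, eta) = -<g, eta> with
# b(mu, eta) = int (mu' eta' + mu eta) dx, discretized by finite differences.
n = 201
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
g = np.sin(2 * np.pi * x) + 0.5 * np.random.default_rng(0).standard_normal(n)  # rough sensitivity data

# Matrix of the operator (-d^2/dx^2 + I) with one-sided (Neumann-type) ends.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 / h**2 + 1.0
    if i > 0:
        A[i, i - 1] = -1.0 / h**2
    if i < n - 1:
        A[i, i + 1] = -1.0 / h**2
A[0, 0] = A[-1, -1] = 1.0 / h**2 + 1.0

mu_raw = -g                        # direct use of the gradient (unstable when applied to boundary nodes)
mu_h1 = np.linalg.solve(A, -g)     # H1-gradient: a smoothed descent direction

roughness = lambda v: np.abs(np.diff(v / np.abs(v).max())).max()
print(roughness(mu_raw), roughness(mu_h1))   # the H1 direction is far smoother from node to node
```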
[Adams] R.A.Adams, Sobolev spaces, Academic Press, 1975.
[A-P06] G. Allaire and O. Pantz, Structural optimization with FreeFem++, Struct. Multidiscip. Opt, 32 (2006), 173--181.
[Al07] G. Allaire, Conception optimale de structures, Springer, 2007.
[Az94] H. Azegami, Solution to domain optimization problems, Trans. Japan Soc. Mech. Engrs. Series A, 60, No.574 (1994), 1479--1486. (in Japanese)
[A-W96] H. Azegami and Z. Wu, Domain optimization analysis in linear elastic problems: Approach using traction method, JSME Inter. J. Series A, 39 (1996), 272--278.
[Az17] H. Azegami. Solution of shape optimization problem and its application to product design, Mathematical Analysis of Continuum Mechanics and Industrial Applications, Springer, 2017, 83--98.
[B-S04] M.P. Bendsøe and O. Sigmund, Topology optimization: theory, methods, and applications, Springer, 2004.
[Bu04] H.D. Bui, Fracture mechanics -- Inverse problems and solutions, Springer, 2006.
[Ch67] G.P. Cherepanov, On crack propagation in continuous media, Prikl. Math. Mekh., 31 (1967), 476--493.
[Cir88] P.G. Ciarlet, Mathematical elasticity: Three-dimensional elasticity, North-Holland, 1988.
[Co85] R. Correa and A. Seeger, Directional derivative of a minimax function. Nonlinear Anal., 9(1985), 13--22.
[D-Z88] M.C. Delfour and J.-P. Zolésio, Shape sensitivity analysis via min max differentiability, SIAM J. Control and Optim., 26(1988), 834--862.
[D-D81] Ph. Destuynder and M. Djaoua, Sur une interprétation de l'intégrale de Rice en théorie de la rupture fragile. Math. Meth. in Appl. Sci., 3 (1981), 70--87.
[E-G04] A. Em and J.-L. Guermond, Theory and practice of finite elements, Springer, 2004.
[Es56] J.D. Eshelby, The Continuum theory of lattice defects, Solid State Physics, 3 (1956), 79--144.
[F-O78] D. Fujiwara and S. Ozawa, The Hadamard variational formula for the Green functions of some normal elliptic boundary value problems, Proc. Japan Acad., 54 (1978), 215--220.
[G-S52] P.R. Garabedian and M. Schiffer, Convexity of domain functionals, J.Anal.Math., 2 (1952), 281--368.
[Gr21] A.A. Griffith, The phenomena of rupture and flow in solids, Phil. Trans. Roy. Soc. London, Series A 221 (1921), 163--198.
[Gr24] A.A. Griffith, The theory of rupture, Proc. 1st.Intern. Congr. Appl. Mech., Delft (1924) 55--63.
[Gr85] P. Grisvard, Elliptic problems in nonsmooth domains, Pitman, 1985.
[Gr92] P. Grisvard, Singularities in boundary value problems, Springer, 1992.
[Had68] J. Hadamard, Mémoire sur un problème d'analyse relatif à l'équilibre des plaques élastiques encastrées, Mémoire des savants étragers, 33 (1907), 515--629.
[Hau86] E.J. Haug, K.K. Choi and V. Komkov, Design sensitivity analysis of structural systems, Academic Press, 1986.
[ffempp] F. Hecht, New development in FreeFem++, J. Numer. Math., 20 (2012), 251--265. (FreeFem++ URL: http://www.freefem.org)
[Kato] T. Kato, Perturbation theory for linear operators, Springer, 1980.
[K-W06] M. Kimura and I. Wakano, New mathematical approach to the energy release rate in crack extension, Trans. Japan Soc. Indust. Appl. Math., 16 (2006), 345--358. (in Japanese)
[K-W11] M. Kimura and I. Wakano, Shape derivative of potential energy and energy release rate in fracture mechanics, J. Math-for-industry, 3A (2011), 21--31.
[Kne05] D. Knees, Regularity results for quasilinear elliptic systems of power-law growth in nonsmooth domains: boundary, transmission and crack problems. PhD thesis, Universität Stuttgart, 2005. http://elib.uni-stuttgart.de/opus/volltexte/2005/2191/.
[Ko06] V.A. Kovtunenko, Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration, IMA Jour. Appl. Math. 71 (2006), 635--657.
[K-O18] V.A. Kovtunenko and K. Ohtsuka, Shape differentiability of Lagrangians and application to stokes problem, SIAM J. Control Optim. 56 (2018), 3668--3684.
[M-P01] B. Mohammadi and O. Pironneau, Applied shape optimization for fluids. Oxford University Press, 2001.
[Na94] S. Nazarov and B.A. Plamenevsky, Elliptic problems in domains with piecewise smooth boundaries, de Gruyter Expositions in Mathematics 13, Walter de Gruyter & Co., 1994.
[Nec67] J. Nečas, Direct methods in the theory of elliptic equations, Springer, 2012. Translated from "Méthodes directes en théorie des équations elliptiques", Masson, 1967.
[Noe18] E. Noether, Invariante variationsprobleme, göttinger nachrichten, Mathematisch-Physikalische Klasse (1918), 235--257.
[N-S13] A.A. Novotny and J. Sokolowski, Topological derivatives in shape optimization, Springer, 2013.
[Oh81] K. Ohtsuka, Generalized J-integral and three dimensional fracture mechanics I, Hiroshima Math. J., 11(1981), 21--52.
[Oh85] K. Ohtsuka, Generalized J-integral and its applications. I. -- Basic theory, Japan J. Appl. Math., 2 (1985), 329--350.
[O-K00] K. Ohtsuka and A. Khludnev, Generalized J-integral method for sensitivity analysis of static shape design, Control & Cybernetics, 29 (2000), 513--533.
[Oh02] K. Ohtsuka, Comparison of criteria on the direction of crack extension, J. Comput. Appl. Math., 149 (2002), 335--339.
[Oh02-2] K. Ohtsuka, Theoretical and numerical analysis on 3-dimensional brittle fracture, Mathematical Modeling and Numerical Simulation in Continuum Mechanics, Springer, 2002, 233--251.
[Oh09] K. Ohtsuka, Criterion for stable/unstable quasi-static crack extension by extended griffith energy balance theory, Theor. Appl. Mech. Japan, 57 (2009), 25--32.
[Oh12] K. Ohtsuka, Shape optimization for partial differential equations/system with mixed boundary conditions, RIMS Kôkyûroku 1791 (2012), 172--181.
[OT-K12] K. Ohtsuka and M. Kimura, Differentiability of potential energies with a parameter and shape sensitivity analysis for nonlinear case: the p-Poisson problem, Japan J. Indust. Appl. Math., 29 (2012), 23--35.
[Oh14] K. Ohtsuka and T. Takaishi, Finite element anaysis using mathematical programming language FreeFem++, Kyoritsu Shuppan, 2014. (in Japanese)
[Oh17] K. Ohtsuka, Shape optimization by GJ-integral: Localization method for composite material, Mathematical Analysis of Continuum Mechanics and Industrial Applications, Springer, 2017, 73--109.
[Oh18] K. Ohtsuka, Shape optimization by Generalized J-integral in Poisson's equation with a mixed boundary condition, Mathematical Analysis of Continuum Mechanics and Industrial Applications II, Springer, 2018, 73--83.
[Pr10] A.N. Pressley, Elementary differential geometry, Springer, 2010.
[Ri68] J.R. Rice, A path-independent integral and the approximate analysis of strain concentration by notches and cracks, J. Appl. Mech., 35(1968), 379--386.
[Ri68-2] J.R. Rice, Mathematical analysis in the mechanics of fracture, Fracture Volume II, Academic Press, 1968, 191--311.
[Pi84] O. Pironneau, Optimal shape design for elliptic systems, Springer-Verlag, 1984.
[Sa99] J.A. Samareh, A survey of shape parameterization techniques, NASA Report CP-1999-209136 (1999), 333--343.
[Sc91] B.-W. Schulze, Pseudo-differential operators on manifolds with singularities, North-Holland, 1991.
[Sok92] J. Sokolowski and J.-P. Zolesio, Introduction to shape optimization, Springer, 1992.
[St14] K. Sturm, On shape optimization with non-linear partial differential equations, Doctoral thesis, Technische Universität Berlin, 2014. https://d-nb.info/106856959X/34
[Sumi] Y. Sumi, Mathematical and computational analyses of cracking formation, Springer, 2014.
[Zei/2B] E. Zeidler. Nonlinear functional analysis and its applications II/B, Springer, 1990.
[Z-S73] O.C. Zienkiewicz and J.S. Campbell, Shape optimization and sequential linear programming, Optimum Structural Design, Wiley, 1973, 109--126.
©Kohji Ohtsuka, powered by MaKR (2022/9/27)
Probability measure implies quantum mechanics?
There is a Hilbert space H (probably of dimension 3 or more, as in Gleason's theorem?) equipped with some logical apparatus (the lattice L?).
Correct, and the lattice $L(H)$ is that of orthogonal projectors/closed subspaces of a separable complex Hilbert space $H$. As a partially ordered set, the partial ordering relation $P\leq Q$ is the inclusion of the corresponding closed subspaces: $P(H) \subset Q(H)$.
As a consequence $P \vee Q := \sup\{P,Q\}$ is the projector onto the closure of the sum of $P(H)$ and $Q(H)$ and $P\wedge Q := \inf\{P,Q\}$ is the projector onto the intersection of the said closed subspaces.
This lattice turns out to be orthomodular, bounded, atomic, satisfying the covering law, separable, ($\sigma$-)complete.
You also need not assume that the lattice of elementary propositions of a quantum system is $L(H)$ from scratch, but you can prove it, assuming some general hypotheses (those I wrote above together with a few further technical requirements). However what you eventually find is that the Hilbert space can be real, complex or quaternionic. This result was obtained by Solèr in 1995.
We have a probability measure that satisfies the Kolmogorov axioms (including countable additivity, but without the connotations of boolean logic).
Correct. The lattice is (orthocomplemented and) orthomodular ($A= B \vee (A \wedge B^\perp)$ if $B\leq A$) instead of (orthocomplemented and) Boolean ($\vee$ and $\wedge$ are mutually distributive).
However, the story is much longer. The elements of $L(H)$ are interpreted as the elementary propositions/observables of a quantum system, admitting only the outcomes YES and NO under measurement.
In an orthomodular lattice, two elements $P,Q$ are said to commute if the smallest sublattice including both them is Boolean.
It is possible to prove that, for the lattice of orthogonal projectors $L(H)$, a pair of elements $P$ and $Q$ commute if and only if they commute as operators: $PQ=QP$.
A posteriori, this is consistent with the idea that these elementary observables can be measured simultaneously.
If $P$ and $Q$ in $L(H)$ commute, it turns out that $$P\wedge Q = PQ =QP\tag{*}$$ and $$P\vee Q = P+Q-PQ\:.\tag{**}$$
A crucial point is the following one. Within a Boolean sublattice (i.e., one made of mutually commuting elements), $\vee$ and $\wedge$ can be equipped with the standard logical meanings of OR and AND, respectively. The orthogonal complement $P^\perp = I-P$ corresponds to the negation NOT $P$.
This is a way to partially recover classical logic from quantum logic.
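A concrete finite-dimensional check of (*) and (**) (a numerical illustration only, not part of the argument): take two commuting orthogonal projectors on $\mathbb C^3$ and verify that $PQ$ projects onto the intersection of the ranges and $P+Q-PQ$ onto their sum.

```python
import numpy as np

P = np.diag([1.0, 1.0, 0.0])      # projects onto span{e1, e2}
Q = np.diag([0.0, 1.0, 1.0])      # projects onto span{e2, e3}
assert np.allclose(P @ Q, Q @ P)  # they commute

meet = P @ Q                      # candidate for P AND Q
join = P + Q - P @ Q              # candidate for P OR Q

assert np.allclose(meet, np.diag([0.0, 1.0, 0.0]))   # projector onto span{e2}
assert np.allclose(join, np.eye(3))                  # projector onto the whole space
for R in (meet, join):            # both are again orthogonal projectors
    assert np.allclose(R @ R, R) and np.allclose(R, R.conj().T)
```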
Do we also need to assume some form of the law of large numbers?
Actually, at least when you make measurements of observables, you always reduce to a Boolean subalgebra where the probability measure becomes a standard $\sigma$-additive measure on a $\sigma$-algebra, and here you can assume the standard results on the relation between probabilities and frequencies.
Is the following something like the correct list of results?
The probability measure can be described by a density matrix (Gleason's theorem).
Yes, provided the Hilbert space is separable with dimension $\neq 2$.
In particular, the extremal elements of the convex set of Gleason probability measures (the probability measures which cannot be decomposed into non-trivial convex combinations) turn out to be of the form $|\psi\rangle \langle \psi|$ for every possible $\psi\in H$ with unit norm. This way, extremal measures coincide with pure states, i.e., unit vectors up to phases.
Observables must be represented by self-adjoint operators.
Yes, this is straightforward to prove if one starts by assuming that an observable $A$ is a collection $\{P^{(A)}(E)\}_{E \in B(\mathbb R)}$ of elements of the lattice $L(H)$, that is, projectors $P(E)$ where $E\subset \mathbb R$ is any real Borel set.
The physical meaning of $P^{(A)}(E)$ is "the outcome of the measurement of $A$ lies in (or is) $E$".
Evidently $P^{(A)}(E)$ and $P^{(A)}(F)$ commute and, giving the standard meaning to $\wedge$ (= AND), we have from (*) that $$P^{(A)}(E) P^{(A)}(F) = P^{(A)}(E)\wedge P^{(A)}(F) = P^{(A)}(E\cap F)\:.\tag{1}$$
Using completeness, it is not difficult to justify also the property $$\vee_i P^{(A)}(E_i) = P^{(A)}(\cup_i E_i)$$ where the $E_i$ are a finite or countable class of pairwise disjoint Borel sets. This requirement, making in particular use of (**), is mathematically equivalent to $$\sum_i P^{(A)}(E_i) = P^{(A)}(\cup_i E_i)\tag{2}$$ where the $E_i$ are a finite or countable class of pairwise disjoint Borel sets and the sum is computed in the strong operator topology.
Finally since some outcome must be measured in $\mathbb R$, we conclude that $$P^{(A)}(\mathbb R)=I\tag{3}\:,$$ because the trivial projector $I \in L(H)$ satisfies $\mu(I)=1$ for every Gleason state.
Properties (1), (2) and (3) say that $\{P^{(A)}(E)\}_{E \in B(\mathbb R)}$ is a projection valued measure (PVM), so that the self-adjoint operator $$A = \int_{\mathbb R} \lambda \, dP^{(A)}(\lambda) $$ exists.
The spectral theorem proves that the correspondence between observables and self-adjoint operators is one-to-one.
Given a pure state represented by the unit vector up to phases $\psi$ and a PVM $\{P^{(A)}(E)\}_{E\in B(\mathbb R)}$ describing the observable/self-adjoint operator $A$, the map $$B({\mathbb R}) \ni E \mapsto \mu^{(A)}_\psi(E) := tr(|\psi\rangle \langle \psi| P^{(A)}(E)) = \langle \psi|P^{(A)}(E) \psi\rangle$$ is a standard probability measure over $\sigma(A)$, and standard results of QM arise like this ($\psi$ is supposed to belong to the domain of $A$) $$\langle \psi |A \psi \rangle = \int_{\sigma(A)}\lambda \, d\mu^{(A)}_\psi(\lambda)\:,$$ justifying the interpretation of the left-hand side as expectation value of $A$ in the state represented by $\psi$, and so on.
It also turns out that the support of a PVM coincides with the spectrum $\sigma(A)$ of the associated observable.
The elements $P$ of $L(H)$ are self-adjoint operators and thus the picture is consistent: $P$ is an elementary observable admitting only two values $0$ (NO) and $1$ (YES). In fact $\{0,1\} = \sigma(P)$ unless considering the two trivial cases $P=0$ (the contradiction), where $\sigma(P)= \{0\}$, and $P=I$ (the tautology), where $\sigma(P)= \{1\}$.
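All of this is easy to check numerically in finite dimension. The sketch below (an illustration only) builds the PVM of a Hermitian matrix from its eigenprojectors and verifies the spectral decomposition together with the resulting probabilities and expectation value.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2                      # a Hermitian "observable" on C^4

evals, evecs = np.linalg.eigh(A)
# PVM: one orthogonal projector per (here non-degenerate) eigenvalue.
P = [np.outer(evecs[:, k], evecs[:, k].conj()) for k in range(4)]

assert np.allclose(sum(P), np.eye(4))                                # P(R) = I
assert np.allclose(sum(l * Pk for l, Pk in zip(evals, P)), A)        # A = sum of lambda P(lambda)

psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
mu = np.array([np.vdot(psi, Pk @ psi).real for Pk in P])             # probabilities of the outcomes
assert np.isclose(mu.sum(), 1.0) and np.all(mu >= -1e-12)
assert np.isclose(mu @ evals, np.vdot(psi, A @ psi).real)            # expectation value formula
```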
Time evolution must be unitary.
Here one has to introduce the notion of symmetry and continuous symmetry.
There are at least 3 possibilities which are equivalent on $L(H)$, one is the well-known Wigner's theorem. The most natural one, in this picture, is however that due to Kadison (one of the two possible versions): a symmetry can be defined as an isomorphism of the lattice $L(H)$, $h: L(H) \to L(H)$.
It turns out that (Kadison's theorem) isomorphisms are all of the form $$ L(H)\ni P \to h(P) = UPU^{-1}$$ for some unitary or antiunitary operator $U$, defined up to a phase, and depending on the isomorphism $h$.
Temporal homogeneity means that there is no preferred origin of time and all time instants are physically equivalent.
So, in the presence of time homogeneity, there must be a relation between physics at time $0$ and physics at time $t$ preserving physical structures. Time evolution form $0$ to $t$ must therefore be implemented by means of an isomorphism $h_t$ of $L(H)$.
Since no origin of time exists, it is also natural to assume that $h_t\circ h_s = h_{t+s}$.
It is therefore natural to assume that, in the presence of temporal homogeneity, time evolution is represented by a one-parameter group of such automorphisms ${\mathbb R} \ni t \mapsto h_t$. (One-parameter group means $h_t\circ h_s = h_{t+s}$ and $h_0= id$.)
It is also natural assuming a continuity hypothesis related to possible measurements and states:
$${\mathbb R} \ni t \mapsto \mu(h_t(P))$$
is continuous for every $P\in L(H)$ and every Gleason state $\mu$.
Notice that Kadison's theorem associates a unitary $U_t$ to every $h_t$ up to phases, so that there is no reason, a priori, to have $U_tU_s = U_{t+s}$, since phases depending on $s$ and $t$ may show up.
Even if one is clever enough to fix the phases so as to obtain the composition rule of a one-parameter group of unitary operators, $U_tU_s = U_{t+s}$ and $U_0=I$, there is no a priori reason to find a continuous map $t \mapsto U_t$ in some natural operator topology.
Actually, under the said hypotheses on $\{h_t\}_{t\in \mathbb R}$, it is possible to prove (the simplest example of application of Bargmann's theorem, since the second cohomology group of $\mathbb R$ is trivial) that the phases in the correspondence $h_t \to U_t$ via Kadison's theorem can be unambiguously accommodated in order that $h_t(P) = U_t P U_t^{-1}$ where $$\mathbb R \ni t \mapsto U_t$$ is a strongly continuous one-parameter group of unitary operators.
Stone's theorem immediately implies that $U_t = e^{-itH}$ for some self-adjoint operator $H$ (defined up to an additive constant in view of arbitrariness of the phase of $U_t$).
This procedure, extended to other one-parameter groups of unitary operators $e^{-isA}$ describing continuous symmetries, gives rise to the well-known quantum version of Noether's theorem. The continuous symmetry preserves time evolution, i.e., $$e^{-isA} e^{-itH}= e^{-itH}e^{-isA}$$ for all $t,s \in \mathbb R$, if and only if the observable $A$ generating the continuous symmetry is a constant of motion: $$e^{itH}Ae^{-itH}=A\:.$$
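As a finite-dimensional sanity check of this last equivalence (again, only an illustration): if $[A,H]=0$ then $e^{-isA}$ commutes with $e^{-itH}$ and $A$ is conserved.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))
H = U @ np.diag([0.0, 1.0, 2.0, 5.0]) @ U.conj().T   # Hamiltonian
A = U @ np.diag([1.0, 1.0, -1.0, 3.0]) @ U.conj().T  # observable diagonal in the same basis
assert np.allclose(A @ H, H @ A)                     # [A, H] = 0

t, s = 0.7, 1.3
Ut, Vs = expm(-1j * t * H), expm(-1j * s * A)
assert np.allclose(Vs @ Ut, Ut @ Vs)                             # symmetry commutes with evolution
assert np.allclose(expm(1j * t * H) @ A @ expm(-1j * t * H), A)  # A is a constant of motion
```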
Genome Biology
Distance-dependent inhibition of translation initiation by downstream out-of-frame AUGs is consistent with a Brownian ratchet process of ribosome scanning
Ke Li, Jinhui Kong, Shuo Zhang, Tong Zhao & Wenfeng Qian (ORCID: 0000-0001-6875-0842)
Genome Biology volume 23, Article number: 254 (2022)
Eukaryotic ribosomes are widely presumed to scan mRNA for the AUG codon to initiate translation in a strictly 5′–3′ movement (i.e., strictly unidirectional scanning model), so that ribosomes initiate translation exclusively at the 5′ proximal AUG codon (i.e., the first-AUG rule).
We generate 13,437 yeast variants, each with an ATG triplet placed downstream (dATGs) of the annotated ATG (aATG) codon of a green fluorescent protein. We find that out-of-frame dATGs can inhibit translation at the aATG, but with diminishing strength over increasing distance between aATG and dATG, undetectable beyond ~17 nt. This phenomenon is best explained by a Brownian ratchet mechanism of ribosome scanning, in which the ribosome uses small-amplitude 5′–3′ and 3′–5′ oscillations with a net 5′–3′ movement to scan the AUG codon, thereby leading to competition for translation initiation between aAUG and a proximal dAUG. This scanning model further predicts that the inhibitory effect induced by an out-of-frame upstream AUG triplet (uAUG) will diminish as uAUG approaches aAUG, which is indeed observed among the 15,586 uATG variants generated in this study. Computational simulations suggest that each triplet is scanned back and forth approximately ten times until the ribosome eventually migrates to downstream regions. Moreover, this scanning process could constrain the evolution of sequences downstream of the aATG to minimize proximal out-of-frame dATG triplets in yeast and humans.
Collectively, our findings uncover the basic process by which eukaryotic ribosomes scan for initiation codons, and how this process could shape eukaryotic genome evolution.
To synthesize functional proteins and maintain protein homeostasis, the genetic information carried by messenger RNAs (mRNAs) must be faithfully transmitted to proteins [1]. In particular, the recognition of the initiation codon of the canonical open reading frame (ORF) by ribosomes is crucial to obtaining functional proteins. While it is well-established that most translation starts at the AUG codon [2, 3], AUG triplets can occur with an approximate frequency of every 43 nucleotides (i.e., 64 nt), presenting a serious challenge for efficient ribosomal recognition of the AUG codon corresponding to the canonical ORF in a given mRNA.
Eukaryotic cells are known to tackle the challenge by using a "scanning" mechanism [4, 5] based on the 43S preinitiation complex (PIC), comprised of a 40S ribosomal subunit, several eukaryotic initiation factors (eIFs), methionyl initiator transfer RNA (Met-tRNAi), and guanosine triphosphate [6,7,8,9,10]. PIC scanning starts with attachment to the 5′-cap of an mRNA, after which the PIC migrates along the mRNA 1 nt at a time searching for the AUG codon: successive triplets enter the P-site of the 40S ribosomal subunit, where they are inspected for complementarity to the Met-tRNAi anticodon [4, 5, 11,12,13].
According to current understanding, the PIC remains tethered to the eukaryotic mRNA (without jumping) during scanning [5, 14]. This working model is supported by evidence showing that the most upstream (i.e., 5′) AUG triplet is preferentially used as the primary initiation codon [11, 15, 16]; insertion of an additional AUG triplet upstream (uAUG) of the annotated AUG triplet (aAUG, the initiation codon of the canonical ORF) can prevent translation initiation at the aAUG [15, 17,18,19,20,21,22]. Further, the insertion of a strong mRNA secondary structure between the 5′-cap and the aAUG can also prevent translation initiation [23, 24].
Since the PIC starts scanning at the 5′-cap of an mRNA and the aAUG codon is somewhere downstream (i.e., 3′), PIC scanning results in a net 5′–3′ ribosomal movement [6, 25]. Currently, there are two competing models to explain the directionality of individual scanning steps (i.e., 1 nt each step). The more established model that is commonly described in textbooks is the strictly unidirectional scanning model [2, 3, 15, 16], which posits that the PIC scans exclusively in the 5′–3′ direction (Fig. 1A) possibly governed by an RNA helicase constantly fed by adenosine triphosphate (ATP). Sometimes the PIC misses an AUG triplet, an event termed "leaky scanning," and will continue to scan the mRNA further downstream, which can enable access to downstream AUGs (dAUG) by the PIC. The scanning process proceeds until an AUG triplet is recognized [12, 14, 15].
Testing PIC scanning models using thousands of dATG variants. A Predictions of the strictly unidirectional model and the Brownian ratchet scanning model on translational efficiency when an AUG is inserted downstream of the aAUG. B High-throughput construction of dATG variants with doped nucleotides and detection of GFP intensity en masse via FACS-seq. C Nomenclature of Solo and Duo variants. The aATGs, in-frame dATGs, and out-of-frame dATGs are shown in green, orange, and blue, respectively. Each dot in the sequences represents a nucleotide that cannot form an ATG triplet or an in-frame stop codon. D Boxplot shows GFP intensities (normalized by dTomato intensity, here and elsewhere in this study when presenting FACS-seq data) for Solo and Duo variants. P values were given by the Mann-Whitney U tests. E The average GFP intensities (dots) and the 95% confidence intervals (error bars) of Duo variants. The orange dashed line represents the average GFP intensity of all Duo variants with an in-frame dATG, and the blue curve represents the local regression line generated by R function "geom_smooth" (span = 1) for the Duo variants with an out-of-frame dATG. Duo variants with the dATG at positions from +4 to +6 (shown in gray) had fixed nucleotides at the −3 position of the dATG (due to the aATG), and therefore, were not used to fit the local regression line. F A small-scale experiment that introduced dATGs by synonymous mutations and strictly controlled the flanking sequences of the aATG and dATG. The GFP/dTomato fluorescence ratio estimated for each replicate is shown in a dot and the average value is shown in the red line. The GFP/dTomato ratios were normalized to the variant lacking additional dATG, which were 0.41, 0.64, 0.93, and 0.94 for the variants with dATGs inserted at the +8, +14, +20, and +26 positions, respectively. P values were given by t-tests
By contrast, an alternative model—the Brownian ratchet scanning model—was speculated [25, 26], based on the observation that all reported movement of particles with similar size of ribosomes involves Brownian motion [25, 27]. The Brownian ratchet scanning model proposes that the PIC can migrate along an mRNA in both 5′–3′ and 3′–5′ directions governed by Brownian motion and that the oscillation is directionally rectified through a ratchet-and-pawl mechanism: a "pawl" (i.e., possibly RNA-binding proteins) is occasionally placed on the mRNA at the trailing side of the PIC, restricting the 3′–5′ movement of the PIC beyond the pawl (Fig. 1A). In this model, even if an aAUG is missed by the PIC, it may be recognized in a second or subsequent scan as the PIC oscillates back and forth; a proximal dAUG, if present, can retain PICs that miss the aAUG, reducing the chance for a second (or more) inspection of the aAUG. In other words, while the strictly unidirectional scanning model relies on strictly sequential (5′–3′) decision-making involving more than one AUG in translation initiation, in the Brownian ratchet model initiation decisions are competitive between closely spaced AUGs.
The fundamental difference between the two models is whether the PIC can frequently move in the 3′–5′ direction, which can be experimentally determined by insertion of out-of-frame dATGs. The strictly unidirectional scanning model predicts that dATGs will have no effect on translation initiation of the canonical ORF, whereas the Brownian ratchet scanning model predicts that a dAUG can exhibit an inhibitory effect on initiation at the aAUG due to competition as the translation initiation site (Fig. 1A). Consistent with the strictly unidirectional model, previous work has shown that the first-AUG codon can exclusively serve as the site of translation initiation even when a second AUG is located within just a few nucleotides downstream [28], which is known as the first-AUG rule. However, other studies with genetically modified overlapping bicistronic mRNAs from Turnip yellow mosaic virus and Influenza virus B [29, 30] have revealed that the initiation frequency of an upstream ORF can be reduced by the presence of a proximal, downstream, and overlapping ORF, thus implying the presence of 3′–5′ PIC scanning at least in some sequence context [6, 12, 29]. These findings indeed raised many questions, and led us to investigate whether such 3′–5′ scanning observed in bicistronic viral mRNAs could also occur endogenously in monocistronic eukaryotic mRNAs.
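To make the contrast between the two models concrete, the following toy Monte Carlo sketch (a deliberately simplified illustration with arbitrary parameter values, not the simulation code used later in this study) lets a PIC take single-nucleotide steps, recognize any AUG it occupies with probability 1 - leakage, and, in the ratchet mode, occasionally place a pawl that forbids sliding back past the current position. Setting the forward-step probability to 1 recovers strictly unidirectional scanning, in which a dAUG cannot affect initiation at the aAUG; with oscillation and pawls, a proximal dAUG captures part of the PICs that leak past the aAUG, and the effect fades as the dAUG moves farther downstream.

```python
import numpy as np

def scan(aaug, daug, leak=0.5, p_fwd=0.6, p_pawl=0.05, length=150, n_pic=5000, seed=0):
    """Fraction of PICs initiating at the annotated AUG (aaug) or a downstream AUG (daug).
    p_fwd = 1.0 and p_pawl = 0.0 give strictly unidirectional scanning."""
    rng = np.random.default_rng(seed)
    hits = {"aAUG": 0, "dAUG": 0, "none": 0}
    for _ in range(n_pic):
        pos, pawl = 0, 0
        while pos < length:
            if pos in (aaug, daug) and rng.random() > leak:      # AUG recognized on this visit
                hits["aAUG" if pos == aaug else "dAUG"] += 1
                break
            if rng.random() < p_pawl:
                pawl = pos                                       # ratchet: no sliding back past here
            pos += 1 if rng.random() < p_fwd else -1
            pos = max(pos, pawl)
        else:
            hits["none"] += 1                                    # leaked past both AUGs
    return {k: round(v / n_pic, 3) for k, v in hits.items()}

print(scan(aaug=50, daug=55))                         # proximal dAUG: fewer initiations at the aAUG
print(scan(aaug=50, daug=80))                         # distal dAUG: the competition largely disappears
print(scan(aaug=50, daug=55, p_fwd=1.0, p_pawl=0.0))  # strict 5'-3' scanning: aAUG share set by leakage alone
```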
Here, to compare the strictly unidirectional vs. Brownian ratchet scanning models, we generated thousands of green fluorescent protein (GFP) reporter gene sequence variants, each containing an out-of-frame ATG downstream of the ATG corresponding to the canonical ORF. We measured the fluorescence intensity of each variant, then performed computational simulations and estimated the leakage rate of each scan, as well as the number of scans for each triplet, before the ribosome eventually migrated to downstream regions of the mRNA. Our results reveal several general rules governing ribosomal scanning and enhance our understanding of how point mutations that introduce dATGs can lead to dysregulation of gene expression in human cells.
Generation of thousands of dATG yeast variants
Previous systems studies have observed reduced protein abundance upon the addition of an out-of-frame uATG [17,18,19], indicating that PICs scan continuously along the mRNA in the 5′–3′ direction [4, 5, 15, 16]. By the same logic, a reduction in GFP intensity following the addition of an out-of-frame dATG will indicate that ribosomes can also scan in the 3′–5′ direction (Fig. 1A). In this study, we investigated the occurrence and prevalence of 3′–5′ PIC movement by inserting ATGs downstream of the aATG of a GFP reporter and then detecting the impacts on GFP synthesis (i.e., through differences in GFP intensity). To avoid reaching conclusions that are caused by some specific flanking sequences (i.e., confounding factors), as in viruses that may use specific sequences to regulate PIC scanning for overlapping ORFs in their bicistronic mRNAs [29, 30], we generated a large number of sequence variants, each with a dATG inserted in various sequence contexts (Fig. 1B).
Specifically, we introduced dATGs by chemically synthesizing a 39-nt DNA oligo, with six upstream and thirty downstream doped (i.e., random) nucleotides (N = 25% A + 25% T + 25% G + 25% C) around a fixed ATG triplet (designated as the aATG, Fig. 1B, Additional file 1: Fig. S1A). ATG triplets (either in-frame or out-of-frame) could then form randomly within the 30-nt downstream region of each individual construct, ultimately resulting in a randomly sampled variant library (from a huge number of all possible variants) containing dATGs at each successive position downstream of the aATG in various sequence contexts. To increase the fraction of dATG-containing variants, we further synthesized 28 additional DNA oligos, each with a dATG fixed at one of the 28 possible downstream triplet positions (Fig. 1B, Additional file 1: Fig. S1A). We fused these DNA oligos with the full-length GFP sequence (with its initiation codon omitted, Additional file 1: Fig. S1A), and integrated the fusion constructs individually into the same locus in Chromosome II of the yeast genome. We also inserted dTomato, encoding a red fluorescent protein, into a nearby genomic region to normalize GFP intensity (Fig. 1B, Additional file 1: Fig. S1A).
We measured the GFP intensities en masse through fluorescence-activated cell sorting (FACS)-seq for individual variants, as described in a previous study [31]. Briefly, we sorted yeast cells into eight bins according to GFP intensity (here and elsewhere in this study, normalized by dTomato intensity). Based on the variant frequencies in high-throughput sequencing reads of the eight bins, and the median GFP intensity and the number of cells belonging to each bin, we calculated the GFP intensity for each variant as the weighted average GFP intensity across the eight bins (Additional file 1: Fig. S1A).
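In other words, each variant's intensity was a cell-number-weighted mean of the per-bin median intensities, along the lines of the short sketch below (a schematic of the calculation only; all numbers shown are hypothetical).

```python
import numpy as np

# Per-bin summary statistics from the sort (hypothetical values for illustration).
median_gfp = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8])      # median GFP/dTomato per bin
cells_in_bin = np.array([2e5, 3e5, 5e5, 8e5, 8e5, 5e5, 3e5, 2e5])     # cells sorted into each bin
total_reads = np.array([1e6, 1.2e6, 2e6, 3e6, 3e6, 2e6, 1.2e6, 1e6])  # all sequencing reads per bin

def variant_intensity(reads_per_bin):
    """Weighted-average GFP intensity of one variant from its read counts in the eight bins."""
    est_cells = reads_per_bin / total_reads * cells_in_bin   # estimated cells of this variant per bin
    return np.sum(est_cells * median_gfp) / est_cells.sum()

print(variant_intensity(np.array([5.0, 10, 40, 120, 260, 150, 30, 5])))
```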
To verify the accuracy of GFP intensities measured en masse, we randomly isolated 20 clones from the yeast library, and individually measured their GFP intensities by flow cytometry. There was good consistency between the GFP intensities measured en masse and individually (Pearson's correlation coefficient r = 0.99, P = 1 × 10−19, Additional file 1: Fig. S1B). We measured GFP intensity of the yeast variants in two biological replicates, and the values were highly correlated for 18,950 variants shared between both experiments (Pearson's correlation coefficient r = 0.99, P < 2.2 × 10−16, Additional file 1: Fig. S1C). Consequently, we pooled dATG variants from both replicates in subsequent data analyses (Additional file 1: Fig. S1D, the average GFP intensity was used for variants shared by the two replicates), if not otherwise specified.
We performed two positive control analyses to examine the data quality. First, the GFP intensity of variants with in-frame stop codons formed in the 30-nt region downstream of aATG was lower than that of variants without in-frame stop codons (P < 2.2 × 10−16, Mann-Whitney U test; Additional file 1: Fig. S1E). Second, the variants containing in-frame uATGs showed elevated GFP intensity compared to variants without uATGs (P < 2.2 × 10−16, Mann-Whitney U test; Additional file 1: Fig. S1F), most likely because the second in-frame AUGs could function as an auxiliary initiation site for GFP translation [32]. In contrast, the variants containing out-of-frame uATGs showed reduced GFP intensity (P < 2.2 × 10−16, Mann-Whitney U test; Additional file 1: Fig. S1F), likely because they can prevent translation in the reading frame of GFP. These observations bolstered our confidence to compare GFP intensities among the dATG variants in our study. Note that we excluded the variants containing in-frame stop codons or uATGs from the subsequent analyses, to avoid their potential impacts on GFP intensity (remaining variants n = 21,598, Additional file 1: Fig. S1D).
Both seminal studies analyzing the consensus sequence across genes [15, 33,34,35] and the recent structural analysis of the late-stage 48S initiation complexes [36] led to the hypothesis that some flanking sequences could facilitate translation initiation (known as the Kozak sequence). To determine if the sequences flanking the aAUGs exerted any detectable influence on the GFP intensities measured in our yeast library, we grouped the 1805 variants that had only one ATG (i.e., the designed aATG) in the 39-nt region, according to the nucleotide type at each position and estimated the average GFP intensity for each of the four variant groups at each position (Additional file 1: Fig. S2A). Briefly, placing different nucleotides at the −3 position (relative to the A[+1] in the aATG codon) led to the highest variation in GFP intensity compared to variation related to different nucleotides at other positions (from −6 to +15, Additional file 1: Fig. S2A). At the −3 position, "A" conferred the highest GFP intensity, followed by G, C, and finally T. This observation is qualitatively consistent with the prevalence of A at the −3 position among 96 yeast genes investigated in a previous study [33] or among the 500 genes with the highest protein synthesis rate in the yeast genome (Additional file 1: Fig. S2B). For simplicity, we hereafter refer to the ATG context using the nucleotide at the −3 position; in the order from "strong" to "weak" are the A, G, C, and T contexts. The observed differences in the strength of the sequence context are likely related to the frequency of leaky scanning, according to previous studies [15].
Frame- and distance-dependent translational inhibition by dAUGs
Prior to measuring the effects of dAUGs on GFP intensity, we considered the variation in the number, position, and context of ATGs among the variants in the yeast library, to establish a standardized and clear nomenclature for these variants. Some variants had only one ATG in the 39-nt region (i.e., the designed aATG) and were therefore denoted as "Solo" variants. Some variants had one additional ATG in the 30-nt downstream region (i.e., the dATG) and were thus designated as "Duo" variants. In addition, the names of variants include the position and context of the aATG and dATG (if present). For example, Duo(1N, 4A) represents variants with two ATGs: the aATG having any one of the four nucleotides (N) at the −3 position and a dATG at the +4 position with an A in its −3 position (Fig. 1C). We subsequently focused on the analysis of 1805 Solo and 13,437 Duo variants.
In our design, dATGs were introduced at a total of 28 positions, among which ten were in-frame and 18 out-of-frame, relative to the GFP reading frame (Fig. 1C, Additional file 1: Fig. S1A). To investigate whether out-of-frame dAUGs can inhibit translation initiation from the aAUG, we grouped the Duo variants according to the reading frames of their dATGs. The results showed that the Duo variants containing in-frame dATGs showed elevated GFP intensity compared to Solo variants (P < 2.2 × 10−16, Mann-Whitney U test; Fig. 1D), as variants containing in-frame uATGs (Additional file 1: Fig. S1F). In sharp contrast, Duo variants harboring an out-of-frame dATG showed reduced GFP intensity compared to Solo variants (P = 3.9 × 10−5, Mann-Whitney U test, Fig. 1D), strongly suggesting that out-of-frame dAUGs can inhibit translation initiation at the aAUG in a frame-dependent manner. The fraction of reduction in GFP intensity for Duo variants relative to Solo variants is termed as the "inhibitory effect" subsequently.
To test if these inhibitory effects of out-of-frame dAUGs were dependent on the distance between aATG and dATG, we grouped the Duo variants according to the position of their dATG and then estimated the average GFP intensity for each group. The inhibitory effect gradually declined with increasing aATG-dATG distance (Fig. 1E, Additional file 1: Fig. S2C), and no inhibitory effects were evident at aATG-dATG distances of ~17 nt or greater (Fig. 1E, Additional file 1: Fig. S2C). These observations indicated that translation initiation decisions involving two proximal, potential AUGs were not strictly sequential, but competitive. Note that the placement of dATGs at various positions did not significantly alter the synonymous codon usage or the formation of mRNA secondary structure in the 30-nt variable sequence downstream of the aAUG (Additional file 1: Fig. S3), two factors known to affect translation initiation or elongation, and therefore, protein synthesis [37, 38].
We then performed an additional, small-scale experiment that strictly controlled the flanking sequence to further characterize the distance-dependent inhibitory effect of dAUGs. Specifically, we introduced an out-of-frame dATG at +8, +14, +20, or +26 positions downstream of the aATG (Fig. 1F). To exclude any potential impacts of the peptide sequence on GFP intensity, we used only synonymous mutations to introduce these out-of-frame dATGs. The results showed that proximal out-of-frame dATGs indeed reduced GFP intensity, while increases in distance between the two ATGs resulted in a gradual increase in GFP intensity. Beyond 20 nt, the negative impacts on translation initiation were no longer detectable (Fig. 1F). Collectively, these results established that out-of-frame dATGs could inhibit GFP synthesis and that these inhibitory effects decreased with increasing distance from the aATG.
Context-dependent translational inhibition by dAUGs
The frame- and distance-dependent inhibitory effects of dAUG suggested that ribosomes could sometimes scan in the 3′–5′ direction, which was compatible with the Brownian ratchet scanning process wherein PICs oscillate in both 5′–3′ and 3′–5′ directions, scanning each successive triplet multiple times. An aAUG that is not recognized by the PIC in the first scan may be recognized in a subsequent scan. When a dAUG is inserted near the aAUG, a PIC that misses the aAUG may be instead retained by that nearby dAUG if it is recognized, thereby reducing the likelihood that a PIC will oscillate 3′–5′ and recognize the aAUG. As the aAUG-dAUG distance increases, there is an increased probability that a given PIC will turn to the 3′–5′ direction before encountering a dAUG, explaining why the inhibitory effect of out-of-frame dAUGs diminishes as the dAUG becomes farther.
The Brownian ratchet scanning model further predicted that the aAUG-dAUG competition depended on the leaky scanning at the aAUG. To test if the observed inhibitory effect of proximal out-of-frame dAUGs is indeed related to the leaky scanning at the aAUG, we divided the Duo variants into four groups based on their aATG −3 context. We found that the inhibitory effect of dATGs was greater when the aATG was in a weaker context (i.e., higher leakage rate, Fig. 2A, Additional file 1: Fig. S2D), which indicated that leaky scanning at aAUGs contributed to dAUG inhibition of translation initiation. To then determine whether these inhibitory effects were due to translation initiation at the dAUG, we also divided the Duo variants into four groups according to their dATG −3 context. We found that the inhibitory effect was greater when the dATG was in a stronger context, indicating the competition of translation initiation between the two AUGs (Fig. 2B, Additional file 1: Fig. S2D).
Context-dependent inhibitory effects on protein synthesis by proximal out-of-frame dATGs. A, B The average GFP intensities (dots) and the 95% confidence intervals (error bars) of Duo variants, grouped by the sequence contexts of the aATG or dATG (letters in green or red, respectively). The number of Duo variants drawn in each panel (n) is shown in its top-right corner (two biological replicates combined). The numbers of A, G, C, and T context Solo variants are 341, 383, 719, and 362, respectively. C The dual-frame reporter experiment to confirm the competition between aAUG and dAUG as the translation initiation site. The reporter is composed of a modified GFP gene in frame 0 (six stop codons were mutated in the +1 frame of its coding sequence, labeled as GFP*), sequences encoding a 2A self-cleaving peptide in frame +1, and a dTomato gene in frame +1. The fluorescence intensities of each dual-frame construct were normalized by the respective control construct as shown at the bottom. The normalized fluorescence intensities of individual replicates are shown by red or green dots and the mean is shown by the black line. P values were given by t-tests
To confirm this apparent competition between aAUGs and dAUGs for translation initiation, we performed an experiment using a reporter construct carrying two fluorescent proteins, GFP and dTomato, encoded in different reading frames (hereafter referred to as a dual-frame reporter). In this reporter, GFP was translated from an aAUG in a weak context, and dTomato was translated from a proximal out-of-frame dAUG (+8 position, Fig. 2C). Furthermore, six "frame +1" stop codons were removed from the GFP coding sequence (mainly via synonymous mutations, see "Methods") to avoid premature termination during dTomato translation. Placing the dATG in two different contexts, we measured both green and red fluorescence intensities with flow cytometry. We observed that dTomato intensity increased with increasing strength of dATG context (i.e., lower leakage rate) while GFP intensity was substantially reduced (Fig. 2C). Meanwhile, the mRNA levels did not significantly vary (Additional file 1: Fig. S4). These results confirmed that translation initiation decisions between two closely spaced AUGs were determined in a competitive manner.
Proximal out-of-frame dAUGs lead to reduced mRNA levels via nonsense-mediated mRNA decay (NMD)
Our findings above thus suggested that proximal out-of-frame dAUGs could compete with aAUG for translation initiation. Since out-of-frame termination codons are abundant in the GFP coding sequence (see "Methods"), we predicted that if translation indeed initiated at a proximal out-of-frame dAUG, a long distance should remain between its (also out-of-frame) termination codon and the poly(A) tail, a signal for mRNA degradation by the NMD pathway [39,40,41]. To test if the insertion of proximal out-of-frame dAUGs can result in lower GFP mRNA stability, we measured the mRNA levels en masse for each variant in the library, as described in previous work [31]. Briefly, we used Illumina sequencing to determine the mRNA levels of each variant, which was normalized by the number of cells for each variant (as reflected by its fraction of sequencing reads in the DNA-seq, Fig. 3A). Since the mRNA levels of dATG variants were highly correlated between two biological replicates (Pearson's correlation coefficient r = 0.86, P < 2.2 × 10−16, Additional file 1: Fig. S5A), we pooled dATG variants from both replicates in subsequent data analyses. We grouped the Duo variants according to the position of their dATGs, as well as by the aATG and dATG contexts. The results showed that mRNA levels were lower in the Duo variants when the out-of-frame dATG was closer to the aATG, particularly when the aATG resided in a weaker context (Additional file 1: Fig. S5B) and dATG resided in a stronger context (Fig. 3B), suggesting competition for translation initiation between closely spaced AUGs.
The reduction in the mRNA level caused by proximal out-of-frame dATGs, via the NMD pathway. A The experimental procedure for high-throughput determination of the mRNA levels for individual variants in the dATG library. B, C The average mRNA levels (dots) and the 95% confidence intervals (error bars) of Duo variants, in the background of hoΔ (B) and upf1Δ (C), grouped by the sequence contexts of the dATG (letters in red). Data from the two biological replicates were combined. The numbers of N-context Solo variants are 1805 (hoΔ) and 1989 (upf1Δ)
To then determine whether the reduction in the mRNA level was caused by NMD activity, we knocked out UPF1, the gene encoding an RNA helicase required for initiating NMD in eukaryotes [42, 43], and created a new yeast library containing a total of 15,256 variants in the background of upf1Δ (Fig. 3A). Note that in an effort to control for the potential cellular effects of the selective marker used for knocking out UPF1, a yeast strain with a pseudogene (HO) deleted using the same selective marker was used as the wild type for yeast library construction throughout this study. We measured the mRNA levels of these variants and found that the reduction in mRNA levels we previously observed in Duo variants with proximal out-of-frame dAUGs was nearly abolished in the absence of UPF1 (Fig. 3C, Additional file 1: Fig. S5C). These observations are consistent with the idea that the NMD pathway, activated by translation initiation at out-of-frame dAUGs, reinforces at the mRNA level the inhibitory effect that proximal dAUGs exert at the translational level.
To exclude the possibility that the distance-dependent inhibitory effect of out-of-frame dATGs is associated with variation in the activation efficiency of NMD, which has been reported to depend on the position of the premature stop codon [41], we further computationally excluded variants that contained out-of-frame stop codons in the variable region in the same reading frame as the corresponding dATG. After this exclusion, all Duo variants containing a frame +1 (or +2) dAUG would terminate translation at the same location in the coding sequence of GFP (+56 or +60, see "Methods"). The NMD activity induced by proximal out-of-frame dATGs was still observed (Additional file 1: Fig. S5D), excluding variation in NMD efficiency among dATG variants as a confounding factor.
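The computational exclusion described above can be sketched as a simple sequence filter. This is an illustrative reconstruction, not the authors' exact code: given a variant's variable-region sequence and the position of its inserted dATG, it checks whether any stop codon occurs in the dATG's reading frame before the end of the variable region; the function and example are hypothetical.

```python
# Sketch of the variant filter described above: drop Duo variants that
# contain a stop codon in the same reading frame as the inserted dATG
# within the variable region (illustrative, not the authors' pipeline).
STOP_CODONS = {"TAA", "TAG", "TGA"}

def has_in_frame_stop(variable_region: str, datg_pos0: int) -> bool:
    """datg_pos0: 0-based index of the dATG's 'A' within variable_region."""
    for i in range(datg_pos0 + 3, len(variable_region) - 2, 3):
        if variable_region[i:i + 3] in STOP_CODONS:
            return True
    return False

# Example: a 30-nt variable region with a dATG starting at index 7;
# a TAA lies downstream in the dATG's frame, so the variant is excluded.
region = "GCAACCGATGACCTAAGCAAGCAAGCAAGC"
print(has_in_frame_stop(region, region.find("ATG")))  # True -> excluded
```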
To examine if the inhibitory effects of proximal out-of-frame dAUGs can be detected without the impact of NMD-related variation in mRNA stability, we used FACS-seq to measure GFP intensities in the genetic background of upf1Δ (Additional file 1: Fig. S6A). Despite full rescue at the mRNA level (Fig. 3C), NMD inactivation via UPF1 deletion did not fully restore GFP intensity in these out-of-frame dATG variants (Additional file 1: Fig. S6B, C). These findings were further confirmed in small-scale experiments using the same dATG constructs as those shown in Fig. 1F (Additional file 1: Fig. S6D). Taken together, the inhibitory effects of proximal out-of-frame dAUGs persisted even when controlling for the impact of NMD-related variation in mRNA stability, indicating direct competition between an aAUG and its proximal dAUG for translation initiation on the transcripts that have escaped NMD.
To our surprise, we noticed that dATGs in frames +1 and +2 exhibited slightly different inhibitory effects in both the hoΔ and upf1Δ backgrounds (Additional file 1: Fig. S2D and Fig. S6C). This difference was not observed at the mRNA level (Fig. 3B, C, Additional file 1: Fig. S5B, C), suggesting that it was unlikely to be caused by a difference in translation initiation between dAUGs in these two frames. Instead, we hypothesized that this phenomenon was related to specific amino acids encoded in frame 0, given that dATGs in frames +1 and +2 lead to the overrepresentation of different amino acids in the N-terminus of the GFP reporter. To reduce the possible effects of sequence variation in the N-terminal peptide on GFP folding and fluorescence, we inserted a DNA sequence encoding a 2A self-cleaving peptide [44] upstream of the GFP coding sequence (Additional file 1: Fig. S7A). We controlled the sequence context of both the aATG and the dATG, generated 3402 Solo variants and 32,140 Duo variants, and performed the FACS-seq and en masse RNA-seq experiments on this 2A-inserted dATG library (Additional file 1: Fig. S7B, C). The difference in GFP intensity between frame +1 and frame +2 dATGs was no longer detectable, and GFP intensity still increased with the aAUG-dAUG distance (Additional file 1: Fig. S7B). These observations further confirmed the inhibitory effect of proximal out-of-frame dAUGs.
Distance-dependent translational inhibition by uAUGs
In general, proximity to the 5′-cap grants an AUG triplet an advantage in the competition to initiate translation since it is scanned first [6, 15]. Consistent with this, it has been widely reported that out-of-frame uAUGs can inhibit translation at the aAUG because the uAUG can retain a proportion of PICs that would otherwise initiate translation at the aAUG [15, 19, 20]. Given our results showing competition for initiation between a closely spaced aAUG-dAUG pair, we further predicted that a closely spaced uAUG-aAUG pair would also compete for translation initiation. That is, when a uAUG is near the aAUG, a PIC that misses the uAUG (due to leaky scanning) may be retained by the nearby aAUG, thereby reducing the likelihood that the PIC will oscillate 3′–5′ and recognize the uAUG. Therefore, the Brownian ratchet scanning model further predicted that the inhibitory effect of an out-of-frame uAUG should diminish with decreasing uAUG-aAUG distance (Fig. 4A).
The distance-dependent inhibitory effect of out-of-frame uAUGs on translation initiation at the aAUG. A Predictions of the Brownian ratchet scanning model on translational initiation at the aAUG, when an out-of-frame uAUG is introduced at various positions. B Solo and Duo variants in the uATG library. The aATGs, in-frame uATGs, and out-of-frame uATGs are shown in green, orange, and purple, respectively. Each dot in the sequences represents a nucleotide that cannot form an ATG triplet or an in-frame stop codon for the uATG (if present). C The average GFP intensities (dots) and the 95% confidence intervals (error bars) of Duo variants with the uATG placed at positions ranging from −30 to −3 in the hoΔ (left panel) and the upf1Δ (right panel) backgrounds. The orange dashed line represents the average GFP intensity of all Duo variants with an in-frame uATG, and the purple curve represents the local regression line (span = 1) for the Duo variants with an out-of-frame uATG. Duo variants with the uATG at positions from −5 to −3 had fixed nucleotides at the −3 position of the aATG (due to the uATG) and Duo variants with the uATG at positions from −30 to −28 had fixed nucleotides at the −3 position of the uATG (due to the upstream flanking sequence). These dots are shown in gray and were not used to fit the local regression line. D A small-scale experiment that strictly controlled the flanking sequence of uATG and aATG. The GFP/dTomato fluorescence ratios were normalized to the variant lacking additional uATG. P values were given by t-tests
To test if the inhibitory effects of an out-of-frame uAUG indeed depend on its distance to the aAUG, we synthesized a uATG variant library (Fig. 4B) similar to the dATG variant library. Specifically, we introduced uATGs by chemically synthesizing a 30-nt DNA oligo with doped nucleotides (N) upstream of a fixed aATG triplet. To increase the proportion of variants carrying a uATG, we synthesized 28 additional DNA oligos, each with a uATG fixed at one of the 28 possible upstream triplet positions. We fused these DNA oligos with the full-length GFP sequence and integrated the fusion constructs individually into the yeast genome. GFP intensity and mRNA level of individual variants were then measured by FACS-seq and en masse RNA sequencing of the variable region, respectively, following the same protocol as used for the dATG library. GFP intensity and mRNA level were quantified in two biological replicates, and since the values were highly correlated between replicates (Additional file 1: Fig. S8), the data from both replicates were pooled in subsequent analyses.
The 3112 variants containing stop codons in the frame of the uATG and at a position upstream of the aATG were excluded from the subsequent analyses to avoid the potential impact of translation reinitiation (i.e., the ability of some short upstream open reading frames to retain the 40S subunit on the mRNA after termination and then reinitiate translation at a downstream AUG). We confirmed that the 6553 Duo variants containing in-frame uATGs indeed showed higher GFP intensities than the 2872 Solo variants, whereas the 9033 Duo variants containing out-of-frame uATGs indeed had lower GFP intensities (Fig. 4C, Additional file 1: Fig. S9). These results led us to further examine the impacts of uATG position relative to the aATG, as well as uATG sequence context, on GFP intensities among the Duo variants.
To this end, Duo variants were grouped according to the position of their inserted uATGs, in a manner similar to that used for grouping dATGs in Fig. 1E. The results showed that GFP intensities increased with decreasing uATG-aATG distance, a trend which was especially apparent when the distance between the two ATGs was relatively small (Fig. 4C). We also observed that the inhibitory effect of a proximal, out-of-frame uATG was reduced in the variants harboring the aATG in a strong context (Additional file 1: Fig. S9A) or with a uATG in a weak context (Additional file 1: Fig. S9B). We then performed an additional, small-scale experiment in which the flanking sequence was strictly controlled in order to further characterize the distance-dependent inhibitory effects of uAUGs. Specifically, we introduced an out-of-frame uATG in a weak context (with a T in the −3 position) at positions −25, −19, −13, or −7 upstream of the aATG in a strong context (with an A in the −3 position, Fig. 4D). We observed that decreasing distance between the two ATGs indeed resulted in a gradual increase in GFP intensity (Fig. 4D). Taken together, these results showing distance- and context-dependent inhibitory effects by out-of-frame uATGs suggested that aAUGs compete with proximal uAUGs to initiate translation.
Translation initiation at out-of-frame uAUGs would result in the activation of the NMD pathway. Therefore, if the reduced inhibitory effect of proximal out-of-frame uAUGs did result from competition for translation initiation between the aAUG and a proximal uAUG, we predicted that the GFP mRNA level should increase with decreasing uAUG-aAUG distance. En masse quantification of mRNA levels for the out-of-frame uATG variants in the hoΔ background revealed that the GFP mRNA level was higher in the variants with a smaller uATG-aATG distance, a weaker uATG context, and/or a stronger aATG context (Additional file 1: Fig. S9C, D), and upon UPF1 deletion, the GFP transcripts of Duo variants carrying an out-of-frame uATG were restored to levels comparable with those of Solo variants (Additional file 1: Fig. S10A, B), regardless of the uATG-aATG distance and the sequence context. Similar to our observations for the dATG variants, NMD inactivation also did not result in a full restoration of GFP intensity in the out-of-frame uATG variants of the uATG library (Additional file 1: Fig. S10C, D) or of the small-scale experiment (Additional file 1: Fig. S10E). These results thus indicated that the distance- and context-dependent inhibitory effect of out-of-frame uAUGs was indeed a consequence of competition for translation initiation between a uAUG-aAUG pair.
Computational modeling reveals that each successive triplet is on average scanned by the PIC approximately ten times
The competition for translation initiation we observed between closely spaced AUGs (either between an aAUG-dAUG pair or between a uAUG-aAUG pair) is qualitatively consistent with a scanning process in which the PIC is tethered to the mRNA and progresses toward the 3′ end under a Brownian ratchet mechanism, and is inconsistent with a strictly unidirectional scanning process. It is worth noting that this observation would also be qualitatively compatible with other scanning models as long as PIC movement in both the 5′–3′ and 3′–5′ directions is invoked. For example, some researchers have proposed that the PIC can move to the initiation codon via ATP-independent PIC "diffusion" along the mRNA [10, 45, 46]. Notably, the quantification of GFP intensity we conducted for thousands of variants in this study provided us with an opportunity to estimate the parameters of PIC scanning, such as the number of scans of each triplet, the frequency with which a pawl (i.e., the 5′-block) is placed along the mRNA, and the efficiency of AUG recognition by the PIC. If the frequency of pawl placement is estimated to be zero, the diffusion model is supported; conversely, the Brownian ratchet model is supported if this frequency is greater than zero.
To this end, we simulated the scanning process using a modified random walk model, in which PIC movement consists of a succession of random steps over discrete positions along the one-dimensional space of a linear mRNA (Fig. 5A). We specified the following parameters in our random walk model. During scanning, the 13th–15th positions of a PIC-binding mRNA fragment constitute the P-site [47], where the inspection for complementarity to the Met-tRNAi anticodon occurs. The PIC started out at the 5′-cap and took 1 nt per step in either the 5′–3′ or 3′–5′ direction, with equal probability (i.e., 50% each). However, the PIC could not move further upstream if its 5′-trailing side hit a pawl or the complex formed by the 5′-cap and its binding protein eIF4E. A pawl was stochastically placed along the mRNA at the 5′-trailing side of the PIC (depending on the PIC location at the time) with probability p.Pawl (Fig. 5A). When an AUG entered the P-site of the PIC, with probability p.Leakage the AUG was not recognized by the PIC, and with probability (1 − p.Leakage) the AUG was recognized and translation was initiated. Note that in our model AUG triplets could be recognized during either 5′–3′ or 3′–5′ PIC movement. Considering that the NMD pathway can reduce mRNA levels and consequently amplify the impact of translation initiation at out-of-frame dAUGs on protein abundance, we used the parameter p.NMD to determine the probability of activating NMD when an out-of-frame dAUG is recognized by the PIC (Fig. 5A).
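A minimal sketch of such a random-walk simulation is shown below. It assumes a simplified mRNA layout (one aAUG and one out-of-frame dAUG at user-chosen positions) and the three probabilities named above; the P-site offset and the eIF4E-cap boundary are handled only schematically, so this illustrates the model's logic rather than reproducing the authors' exact simulation code.

```python
import random

def simulate_scan(aaug_pos, daug_pos, p_pawl=0.001, p_leakage=0.77,
                  p_nmd=0.62, mrna_len=400, max_steps=200_000):
    """Return 'aAUG', 'dAUG_NMD', 'dAUG', or 'runoff' for one PIC.

    Positions are nucleotide indices of the A of each AUG. The PIC position
    schematically tracks the first nucleotide of the P-site; the pawl (or
    the cap complex) sets a moving 5' boundary the PIC cannot retreat past.
    """
    pos, boundary = 0, 0
    for _ in range(max_steps):
        # Stochastically place a pawl just behind the PIC's 5'-trailing side
        if random.random() < p_pawl:
            boundary = max(boundary, pos)
        # Unbiased 1-nt step, reflected at the 5' boundary and the 3' end
        step = 1 if random.random() < 0.5 else -1
        pos = min(max(pos + step, boundary), mrna_len - 1)
        # Inspect the triplet currently in the P-site
        if pos in (aaug_pos, daug_pos) and random.random() > p_leakage:
            if pos == aaug_pos:
                return "aAUG"
            return "dAUG_NMD" if random.random() < p_nmd else "dAUG"
    return "runoff"

# GFP is produced only when initiation occurs at the aAUG; transcripts
# flagged 'dAUG_NMD' would additionally be degraded, lowering mRNA levels.
outcomes = [simulate_scan(aaug_pos=50, daug_pos=57) for _ in range(1000)]
print({k: outcomes.count(k) for k in set(outcomes)})
```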
Estimation of parameters in the Brownian ratchet scanning model by MCMC algorithms. A A flowchart illustrating decisions involving PIC movement, translation initiation (or leakage), placement of a "pawl", and activation of the NMD pathway. Although it has been reported that PICs loaded on the mRNA can move further upstream (5′) to scan AUGs on mRNAs with extremely short 5′-UTRs, this scenario was not considered in our simulation because the features of the steric hindrance between the eIF4E-cap complex and the mRNA-binding channel of the PIC are not yet fully known. B Two PIC trajectories exemplify the simulated PIC scanning process along the mRNA. According to Archer et al. [47], the 13th–15th nucleotides of a PIC-binding mRNA fragment are inspected for complementarity to the Met-tRNAi anticodon. Therefore, the 13th nucleotide of a PIC-binding mRNA fragment is used to plot the PIC position. C Trace plots show changes in the values of p.Pawl, p.Leakage, and p.NMD in an MCMC chain. Green dots mark the MCMC iterations that accepted a new parameter value due to a reduction in the RSS. D The observed GFP intensities of out-of-frame dATG variants in the yeast experiments (two replicates combined) and the simulated GFP intensities using one of the 30 sets of parameters optimized by the MCMC algorithms. The results for all 30 sets of optimized parameters are shown in Additional file 1: Fig. S12. E Two-dimensional density plot shows the distribution of the outcome values of p.Pawl and p.Leakage among the 30 MCMC chains (shown by dots). Two additional density plots involving p.NMD are shown in Additional file 1: Fig. S11C. F, G Estimated parameters and standard errors (SE) in the Brownian ratchet scanning model
We employed a Markov Chain Monte Carlo (MCMC) algorithm [48, 49] to calculate numerical approximations for the probability parameters in the Brownian ratchet scanning model. To compare with the experimental measurements of GFP intensity (Fig. 1E), we simulated the Brownian ratchet scanning process for 25 Duo variants with N-context dAUGs (representing an "average" dAUG) positioned between +7 and +31. To identify suitable starting values for the MCMC sampler, we first tested 1000 parameter sets for p.Pawl, p.Leakage, and p.NMD (ten values per parameter, ranging from 0.001 to 0.8; Additional file 1: Fig. S11A). For each parameter set, we generated 100 simulations of the PIC scanning process for each Duo variant (two examples are shown in Fig. 5B) and estimated GFP intensity based on the number of simulations in which translation was initiated at the aAUG. We identified the ten parameter sets that showed the smallest residual sum of squares (RSS) for the 25 Duo variants and initiated the MCMC simulation using the median value of each parameter among these ten parameter sets: p.Pawl = 0.001, p.Leakage = 0.75, and p.NMD = 0.55 (Additional file 1: Fig. S11B).
We then ran the MCMC algorithm for 300 iterations, in which we sequentially replaced each of the three probability parameters with a random number generated from a uniform distribution (see "Methods"). For each iteration, we calculated the RSS from the simulated and observed GFP intensities in our yeast library, and used the RSS as a proxy to optimize the parameters. If the RSS decreased, the previous set of parameters was replaced by the new parameters, whereas if the RSS increased, the previous set of parameters remained unchanged (Fig. 5C).
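The optimization loop described above can be sketched as follows. This is a simplified greedy accept/reject scheme over the three probabilities; it assumes a user-supplied function (here `simulate_gfp`, hypothetical) that returns simulated GFP intensities for the 25 dAUG positions, plus the observed intensities `gfp_obs`. The proposal range and iteration counts follow the description only approximately and are our own assumptions.

```python
import random

def rss(simulated, observed):
    """Residual sum of squares between simulated and observed GFP intensities."""
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def optimize(simulate_gfp, gfp_obs, n_iter=300):
    """Greedy accept/reject search over (p.Pawl, p.Leakage, p.NMD).

    simulate_gfp(params) -> list of simulated GFP intensities, one per
    dAUG position (a hypothetical wrapper around the scanning simulation).
    """
    params = {"p.Pawl": 0.001, "p.Leakage": 0.75, "p.NMD": 0.55}  # grid-search start
    best = rss(simulate_gfp(params), gfp_obs)
    for _ in range(n_iter):
        for name in params:                 # update one parameter at a time
            proposal = dict(params)
            proposal[name] = random.uniform(0.0, 1.0)
            score = rss(simulate_gfp(proposal), gfp_obs)
            if score < best:                # keep the proposal only if RSS drops
                params, best = proposal, score
    return params, best
```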
The parameters reached the stationary distribution by the end of the 300 iterations (Fig. 5C), and the GFP levels observed in the experiments were largely recapitulated by our simulated ratchet-and-pawl mechanism of PIC scanning (Fig. 5D). To obtain a reliable estimation of the parameters, we repeated the MCMC procedure 30 times and found that the estimated parameters were robust after 300 iterations in all chains (Fig. 5E, Additional file 1: Fig. S11C, Fig. S12). The average parameter values that resulted in the smallest RSS were as follows: the probability of adding a pawl to the mRNA was ~1 out of 1000 PIC steps; the average leakage rate for every single scan was 77%; and the NMD rate was 62% (Fig. 5F). Based on these values, we estimated that, on average, each triplet was scanned approximately ten times by the PIC (95% confidence interval: 6–14), resulting in a net leakage rate of 8% for a single AUG triplet (i.e., on average 8% of PICs eventually miss an AUG after multiple scans).
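As a rough consistency check (our own back-of-envelope calculation, which assumes that the ~10 scans of a triplet are independent recognition attempts, an assumption the full model does not require), the per-scan and net leakage rates are linked by net ≈ per-scan raised to the number of scans:

```python
# Back-of-envelope check under the independence assumption:
per_scan_leakage = 0.77
n_scans = 10
print(per_scan_leakage ** n_scans)  # ~0.073, close to the reported ~8% net leakage
```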
So far, our simulations used an N-context dAUG and neglected differences in the leakage rate among the sequence contexts of ATG triplets. To individually estimate p.Leakage for ATGs in the A, G, C, or T context, we fixed the values of p.Pawl (= 0.001) and p.NMD (= 0.62) and optimized the context-specific p.Leakage by running the MCMC algorithm for another 100 iterations, based on the RSS estimated from the GFP intensities of variants Solo(1A), Solo(1G), Solo(1C), and Solo(1T) observed in our experiments (Fig. 2A). The average p.Leakage values among the 30 MCMC chains were 0.49, 0.68, 0.85, and 0.92 (Fig. 5G, Additional file 1: Fig. S11D), corresponding to net leakage rates of 0.02, 0.08, 0.12, and 0.21 for ATGs in the A, G, C, or T context, respectively (Fig. 5G).
The Brownian ratchet scanning model and the ATP-independent diffusion model can be distinguished by determining whether the probability of adding a pawl (p.Pawl) is equal to zero. In our MCMC analyses, p.Pawl was estimated to be significantly greater than zero (0.10%, with a standard error of 0.02%, Fig. 5F), supporting a Brownian ratchet model for PIC scanning rather than a diffusion model. Note that a linear relationship between the length of the 5′-untranslated region (5′-UTR) and the time required to produce the first round of translation products was reported in previous studies [26, 50]; this relationship is also consistent with the Brownian ratchet scanning model rather than a diffusion model, which predicts a quadratic relationship.
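To illustrate the scaling argument, here is a toy comparison (our own illustration with assumed parameters, not taken from the paper): the mean first-passage time of a reflected, unbiased ±1 walk to position L grows roughly quadratically with L, whereas adding stochastic pawl placement makes it grow roughly linearly once L exceeds the typical excursion between pawl placements.

```python
import random

def first_passage_steps(target, p_pawl):
    """Steps for a 1-nt random walk (reflecting 5' boundary) to first reach target."""
    pos, boundary, steps = 0, 0, 0
    while pos < target:
        if p_pawl and random.random() < p_pawl:
            boundary = max(boundary, pos)
        pos += 1 if random.random() < 0.5 else -1
        pos = max(pos, boundary)
        steps += 1
    return steps

def mean_fpt(target, p_pawl, trials=300):
    return sum(first_passage_steps(target, p_pawl) for _ in range(trials)) / trials

for L in (20, 40, 80):
    print(L, round(mean_fpt(L, 0.0)), round(mean_fpt(L, 0.02)))
# Without pawls the mean time roughly quadruples as L doubles (diffusive, ~L^2);
# with pawls (here 1 per ~50 steps) it roughly doubles (ratchet-like, ~L).
```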
Depletion of proximal out-of-frame dATGs in yeast and human genomes
Given the reduced translation efficiency of canonical ORFs and the possibility of enhanced synthesis of potentially cytotoxic peptides, we predicted that proximal out-of-frame dATGs would be generally deleterious. Therefore, we sought to test if proximal out-of-frame dATGs have been purged from the yeast genome by purifying selection. To this end, we counted the number of genes with ATGs at various positions downstream of the aATG across the yeast genome (Fig. 6A). The results showed that the number of out-of-frame dATGs increased gradually with distance from the aATG. The trend was statistically more significant in frame +1, probably because ~80% of out-of-frame dATGs are located in frame +1 due to the preferred usage of some amino acids or codons in frame 0. Moreover, the paucity of frame +1 proximal dATGs was particularly apparent for dATGs in stronger contexts, suggesting that this paucity is related to translation at dAUGs (Fig. 6B; for results of frames 0 and +2, see Additional file 1: Fig. S13).
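The genome-wide count described above can be sketched as follows. This is an illustrative scan, assuming a dictionary of CDS sequences whose 5′ ends are aligned to the aATG; the gene names, window size, and toy sequences are hypothetical.

```python
from collections import Counter

def count_frame1_datgs(cds_by_gene, max_pos=100):
    """Count genes with a frame +1 dATG starting at each downstream position.

    cds_by_gene: dict mapping gene name -> CDS string starting with the aATG.
    Positions are reported relative to the A of the aATG (A = +1), so frame +1
    dATGs start at positions +5, +8, +11, ... (0-based indices 4, 7, 10, ...).
    """
    counts = Counter()
    for cds in cds_by_gene.values():
        for i in range(4, min(max_pos, len(cds) - 2), 3):   # frame +1 offsets
            if cds[i:i + 3] == "ATG":
                counts[i + 1] += 1    # convert 0-based index to +position
    return counts

# Toy example with two short "genes" (illustrative sequences only)
toy = {"gene1": "ATGGATGCCAAATTTGGG", "gene2": "ATGAAACCCTATGTTTAA"}
print(sorted(count_frame1_datgs(toy).items()))   # [(5, 1), (11, 1)]
```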
The depletion of proximal out-of-frame dATGs in eukaryotic genomes. A Meta-gene analysis shows the number of genes that harbor proximal dATGs at individual positions in the yeast genome. The curves represent the local regression line (span = 2). Spearman's correlation coefficients (ρ) and corresponding P values are shown. B–E Numbers of genes that harbor frame +1 dATGs at individual positions in the genomes of yeast (B), humans (C), Escherichia coli (D), and Bacillus subtilis (E). E. coli and B. subtilis are used as negative controls as no ribosomal scanning mechanism is required for translation initiation in prokaryotes. F The dual-luciferase reporter experiment to test the distance-dependent inhibitory effect of dAUGs in HeLa cells. The values of individual replicates are shown by black dots and the average values are shown by red lines. The firefly/Renilla activity ratios were normalized to the variant lacking additional dATG, which were 0.33, 0.74, and 0.92 for the variants with dATGs inserted at the positions +8, +14, and +20, respectively. mRNA levels were measured with quantitative PCR and normalized to the variant lacking additional dATG. P values were given by t-tests
The detrimental impacts of proximal out-of-frame dATGs (e.g., synthesis of toxic peptides) should scale with the gene expression level. Therefore, mutations that generate proximal out-of-frame dATGs should be subject to stronger purifying selection in more highly expressed genes, leading to fewer proximal out-of-frame dATGs in these genes. Our sequence analyses showed that the 2000 genes with the highest expression levels (or transcription rates) in the yeast genome indeed harbored fewer proximal out-of-frame dATGs than the 2000 genes with the lowest expression levels (or transcription rates, Additional file 1: Fig. S14A, B). Collectively, the paucity of proximal out-of-frame dATGs, especially those in stronger contexts and in more highly expressed genes, suggested that purifying selection against proximal out-of-frame dATGs has played a role in yeast genome evolution.
To test if proximal out-of-frame dATGs were also purged from other eukaryotic genomes, we similarly counted the number of frame +1 dATGs at various positions in the human genome. As in our analysis of the yeast genome, we found that the number of frame +1 dATGs increased with distance from the aATG, consistent with the observation in a previous study [51]. The trend was more pronounced for dATGs in a stronger context (Fig. 6C) and in more broadly expressed genes (Additional file 1: Fig. S14C). As a negative control, prokaryotes, which do not use the scanning mechanism to locate the initiation codon [52, 53], did not show a depletion of proximal out-of-frame dATGs (Fig. 6D, E). These observations implied that the Brownian ratchet scanning process has broadly shaped the evolution of eukaryotic genomes.
Although the translation machinery of yeast and humans is largely identical, some differences have been reported in the components of eIFs [6]. To test whether proximal out-of-frame dAUGs can indeed inhibit translation initiation at the aAUG in humans, we constructed a firefly and Renilla dual-luciferase reporter system and designed three additional variants, each with a frame +1 ATG introduced at a different location (+8, +14, or +20) downstream of the firefly luciferase aATG, using synonymous mutations (Fig. 6F). We transfected the reporters individually into HeLa cells and measured the Renilla-normalized firefly luciferase activity and mRNA level. In agreement with our findings in yeast, the results showed that proximal out-of-frame dATGs reduced firefly luciferase activity, partly owing to the reduced mRNA level (likely caused by the NMD pathway activated by translation initiation at the out-of-frame dATG). Moreover, increasing the distance between the two ATGs resulted in a gradual increase in firefly luciferase activity, which became indistinguishable from the wild type when the dATG was located 20 nt downstream of the aATG (Fig. 6F). These results confirmed that the inhibitory effect of proximal out-of-frame dATGs on protein synthesis is conserved between yeast and humans.
Brownian ratchet scanning process provides an explanation for the observed competition for translation initiation between closely spaced AUGs
The massive GFP intensity data generated in this study show a context- and distance-dependent inhibitory effect of out-of-frame dAUGs on the translation of the canonical ORF (Figs. 1, 2, and 3). Furthermore, although it is well known that an out-of-frame uAUG can inhibit translation of the canonical ORF, our data show that this inhibitory effect decreases as the uAUG approaches the aAUG (Fig. 4). These observations indicate that competition for translation initiation exists between closely spaced AUGs, which undermines the first-AUG rule. Our computational modeling based on the experimental data then shows that the ribosome scanning process is better described as small-amplitude 5′–3′ and 3′–5′ oscillations that result in a net 5′–3′ movement (i.e., a Brownian ratchet mechanism) rather than the conventional picture of strictly unidirectional ribosome scanning (Fig. 5). We further show how such a scanning mechanism can influence the evolution of yeast and human genomes (Fig. 6).
Ribosome movement in the 3′–5′ direction has been reported for viral sequences. For example, Matsuda and Dreher modified the sequence of overlapping bicistronic mRNAs from the Turnip yellow mosaic virus and generated variants that contained two AUGs separated by various distances [29]. They showed competition for translation initiation between closely spaced AUGs, which was summarized as evidence suggesting "limited relaxation over distances of a few nucleotides in the reverse direction" during PIC scanning [6, 12]. Similarly, a recent study reported that the small ribosomal subunit recruited by an internal ribosome entry site from poliovirus could initiate translation of an upstream ORF, suggesting that the small ribosomal subunit had scanned in the 3′–5′ direction [54]. However, it remains unclear whether PIC movement in the 3′–5′ direction occurs only on a few specific (e.g., virus-related) sequences.
The massive GFP intensity data obtained in our experiments enabled us to exclude the possibility that the 3′–5′ PIC movement previously reported with a handful of variants was caused by specific flanking sequences, as well as to exclude the effects of confounding factors (e.g., codon usage or mRNA secondary structure) on the rate of translation initiation. Furthermore, the massive GFP intensity data also provided us with an opportunity to model the dynamics of ribosome scanning computationally and, in particular, to investigate how net 5′–3′ PIC progression can be achieved from small-amplitude 5′–3′ and 3′–5′ ribosome oscillations.
Our computational modeling of the massive GFP intensity data revealed an average leakage rate of 77% for each single scan of an AUG triplet by the PIC, with the net leakage rate of 8% achieved through multiple scans of the same triplet. A net leakage rate of 8% may ostensibly seem high considering the volume and variety of functional proteins that require accurate translation to maintain cellular function. However, we propose that this leakage rate is reasonable for two reasons. First, the addition of a second in-frame initiation codon as an auxiliary initiation site significantly elevates GFP levels (orange box in Fig. 1D), indicating that a substantial fraction of PICs appear to miss the aAUG and are instead captured by an in-frame, downstream AUG [32]. Second, canonical ORFs are sometimes efficiently translated in genes bearing out-of-frame uATGs, which indicates that a significant proportion of uAUGs are not recognized by PICs.
The 77% average leakage rate we calculated for each single scan of an AUG triplet may indeed seem unexpectedly high. However, this high rate is intuitively consistent with our data showing that a dAUG placed at the +6 position (which has a G at its −3 position) can reduce GFP intensity by ~31% in the upf1Δ background (i.e., ~31% of PICs that could have initiated at the aAUG were retained by a dAUG 5 nt away, Additional file 1: Fig. S6B). Furthermore, when the aATG is in a weak context (with a T at its −3 position), the addition of a dATG at the +6 position even causes a ~50% reduction in GFP intensity in upf1Δ cells (Additional file 1: Fig. S6C). Similarly, Gu et al. reported that 50% of PICs were relocated to a dAUG placed at the +8 position (which has an A at its −3 position), as detected by toe-printing assays [54]. All these observations suggest a substantial single-scan leakage rate.
Computational modeling of the massive GFP intensity data suggested that PIC movement in the 3′–5′ direction is unlikely to be the occasional relaxation described in previous reviews [6, 12]; rather, it is widespread, with each triplet on average scanned back and forth by the PIC roughly a dozen times. While this number of scans may ostensibly seem high, we propose that it is consistent with our experimental observations. Specifically, we showed that a uAUG placed at the −7 position reduced GFP intensity by only ~42% in the upf1Δ background (Fig. 4C, right panel; here both the uAUG and the aAUG are in the N-context), clearly violating the first-AUG rule when two AUGs are sufficiently closely spaced. Similarly, a dAUG placed at the +9 position reduced GFP intensity by ~22% in upf1Δ cells (Additional file 1: Fig. S6B). These phenomena cannot be well explained by occasional 3′–5′ PIC scanning but are expected when each triplet is scanned back and forth many times, so that the first AUG has only a marginal advantage for translation initiation by being scanned first by the PIC.
Using an MCMC algorithm, we quantitatively investigated the Brownian ratchet scanning process by estimating the parameters that best fit the observed GFP intensity data in our dATG variant library. Note that although not highlighted in the "Results", we also performed additional analyses to confirm the accuracy of parameter estimation. For example, we simulated the Brownian ratchet scanning process using the context-dependent p.Leakage (estimated from the Solo variants, Fig. 5G) and confirmed that the GFP intensities could be successfully recapitulated for specific aATG/dATG context combinations (Additional file 1: Fig. S11E). We also predicted GFP intensities without NMD activity (by setting p.NMD to zero) and confirmed that the GFP intensities in the upf1Δ background were largely recapitulated (Additional file 1: Fig. S11F). Moreover, we repeated the MCMC algorithm using the GFP intensity data of the uATG library (Additional file 1: Fig. S11G) and estimated the number of scans for each triplet to be 19.8, with a 95% confidence interval of 9–30 (Additional file 1: Fig. S11H). This range overlapped with the 95% confidence interval estimated from the dATG library (6–14). Cross-validation using these different datasets confirmed the robustness of our computational modeling of the PIC scanning process.
Previous studies showed that the PIC scans at a rate of ∼6–8 nt/s in eukaryotic cell lysates, based on the relationship between 5′-UTR length and the time required to produce the first round of translation products [50] (a similar estimate of ~10 nt/s was obtained independently by another group [26]; see also [55] for a different estimate). However, it is puzzling that this estimated scanning rate is only approximately twice the rate of translation elongation (~3–4.5 nt/s) measured under the same experimental conditions [50]. Note that PIC scanning only inspects for complementarity to the Met-tRNAi anticodon, whereas translation elongation includes several complicated steps such as tRNA selection (which itself includes multiple rounds of inspection of codon-anticodon complementarity), peptide bond formation, and translocation [56, 57]. We thus propose that the PIC scanning rates measured in previous studies represent the "net" scanning rate and can be better understood in the framework in which each triplet is scanned back and forth: the actual scanning rate over individual nucleotides is approximately ten times faster than the net scanning rate, or ~60–100 nt/s.
The relationship between the inhibitory effect of proximal out-of-frame dAUG and NMD activity
One of the major observations of this study is the inhibitory effects of a proximal out-of-frame dAUG on protein production (Figs. 1 and 2), which was used as evidence to support 3′–5′ PIC movement (i.e., the basis for the Brownian ratchet model). This apparent inhibitory effect has two potential sources: (i) the reduction in translation initiation rate at the aAUG because translation initiation at the dAUG consumes PICs and (ii) the elevated NMD activity induced by translation initiation at the out-of-frame dAUG. Both effects are the direct consequences of the competition for translation initiation between two closely spaced AUGs, and both contribute to the (total) inhibitory effects of proximal out-of-frame dAUGs on protein production.
We proposed that elevated NMD activity could be used to assess translation initiation at out-of-frame dAUGs. Note that gene transcripts may be subject to NMD for various reasons (e.g., transcription errors, mRNA damage, and/or translation initiation at a distal out-of-frame dATG in the GFP reporter sequence), which together account for the basal NMD activity of GFP mRNA in the Solo variants. In this study, the basal NMD activity was controlled for because we compared the expression levels of Duo variants to those of Solo variants. Our data showed that NMD activity increased when proximal out-of-frame dAUGs were inserted (Fig. 3), a phenomenon that cannot be fully explained by a strictly unidirectional model considering only leaky scanning, which predicts that NMD activity should be independent of the aAUG-dAUG distance. Furthermore, the distance-dependent inhibitory effects of both out-of-frame dAUGs and uAUGs can be detected without the impact of NMD-related variation in mRNA stability (Additional file 1: Fig. S6B, C and S10C, D), indicating that mechanisms acting at the level of translation contribute substantially to the effects of proximal out-of-frame AUGs on protein production.
The observed inhibitory effect of proximal out-of-frame dAUGs cannot be explained by the steric hindrance effects
It is also possible in principle that the observed inhibitory effects of dAUGs could be explained, at least in part, under the strictly 5′–3′ unidirectional scanning model by steric hindrance effects. That is, a PIC occupying a nearby dAUG while awaiting translation initiation could reduce the accessibility of the aAUG to a 5′-trailing PIC [58], thereby reducing the translation initiation rate at the aAUG. Note that this explanation assumes sequential 5′–3′ decision-making by the PIC and therefore predicts that the translation initiation rate at the dAUG is independent of the aAUG-dAUG distance. Since NMD activity reflects the translation initiation rate at out-of-frame dAUGs, this explanation further predicts that the mRNA level of Duo variants should not be affected by the aAUG-dAUG distance. Our data showed that the GFP mRNA level also gradually declined with decreasing aAUG-dAUG distance (Fig. 3), an observation that is not well explained by the strictly unidirectional scanning model (even considering possible steric hindrance effects) but is compatible with the Brownian ratchet scanning model.
Furthermore, the Brownian ratchet model predicts that a closely spaced uAUG-aAUG pair would also compete for translation initiation, and therefore, the inhibitory effect by an out-of-frame uAUG should diminish with decreasing uAUG-aAUG distance (Fig. 4A). On the contrary, the strictly unidirectional model (and considering possible steric hindrance effects) predicts that the inhibitory effect of uAUGs is independent of the uAUG-aAUG distance because of sequential 5′–3′ decision-making by the PIC. Our data showed that GFP intensity increased in response to decreasing uAUG-aAUG distance (Fig. 4C, Additional file 1: Fig. S9A, B), an observation that is not well explained by the model involving the strictly unidirectional PIC movement (even considering possible steric hindrance effects) but fits well with the Brownian ratchet scanning model. Also note that the distance-dependent inhibitory effect of uAUGs persisted in upf1Δ cells (Fig. 4C, Additional file 1: Fig. S10C, D), indicating that the phenomenon cannot be fully explained by the variation in NMD activity associated with the efficiency of translation initiation at the out-of-frame uAUG.
Previous findings explainable by the Brownian ratchet scanning process
Some seminal experiments showed that immediately post-termination, ribosomes can scan in the 3′–5′ direction and thereby reinitiate translation at a nearby AUG triplet upstream of the stop codon [59,60,61,62]. This finding suggests that the 40S ribosomal subunit has an intrinsic capability to migrate in both the 5′–3′ and 3′–5′ directions along unstructured mRNAs [10, 59, 63]. However, these observations differ from our findings reported here, since it remains unknown if the 3′–5′ movement described in previous studies only occurs before the post-termination ribosomes have recruited sufficient eIFs required for the "normal" scanning process that starts at the 5′-cap [59, 64].
Furthermore, it was reported that when guanosine triphosphate hydrolysis does not occur in time, the small ribosomal subunit that has successfully recognized an AUG codon can resume "sliding" to search for AUGs [65]. While a proximal dAUG could be efficiently recognized in this sliding process, more distal dAUGs were recognized less efficiently, as detected by the toe-printing assay [65], which is explainable if the ribosome uses small-amplitude 5′–3′ and 3′–5′ oscillations to search for AUGs. These observations again support the idea that scanning or sliding in both the 5′–3′ and 3′–5′ directions is an intrinsic capability of the small ribosomal subunit.
Previous studies reported that 80S ribosome pauses could trigger the stacking of ribosomes and promote translation initiation [66, 67]. This phenomenon is also explainable under the Brownian ratchet model: paused ribosomes block the 5′–3′ progression of PICs, which can elevate the probability of 3′–5′ PIC movement, thereby increasing the number of scans of an AUG by the PIC. In contrast, this phenomenon is not easily explained by a strictly unidirectional scanning model in which each triplet is scanned only once. Note that the distance between the pausing site and the AUG triplet reported in these studies (43 and 144 nt, respectively) was much greater than the distance between AUGs competing for initiation as reported in our study (< 17 nt), and therefore reflects a different facet of the Brownian ratchet scanning process (i.e., when PIC movement in the 5′–3′ direction is restricted). Similarly, the Brownian ratchet model can also help explain the observation in a human cell line that translation initiation at uAUGs substantially increased upon treatment with 3 μM Rocaglamide A. Rocaglamide A specifically increases the binding affinity of eIF4A for polypurine RNA sequences, which likely blocks 5′–3′ PIC movement and elevates the probability of 3′–5′ movement, thereby increasing the probability that a uAUG is recognized by the PIC. Consistently, the observed translation initiation sites were often ~24 nt upstream of the Rocaglamide A binding sites [68].
It remains controversial how eIF4E-bound mRNA initially enters the mRNA-binding channel of the small ribosomal subunit. A "slotting" model proposes that eIF4E-bound mRNAs "slot" directly into the mRNA-binding channel, whereas a "threading" model hypothesizes that mRNAs "thread" into the mRNA-binding channel after disruption of the eIF4E-cap interaction. The "slotting" model was supported by the exit-tunnel location of eIF4E relative to the translation initiation complex in a structural analysis [69] and by the observation of 5′-UTR length-dependent translation inhibition upon tethering eIF4E to the 5′-cap [54]. The "slotting" model would further predict the existence of a translation "blind spot" (i.e., AUG triplets sufficiently close to the 5′-cap cannot be recognized owing to the steric hindrance between the eIF4E-cap complex and the small ribosomal subunit) under a strictly unidirectional scanning model [70], and this prediction contradicts the observations that eukaryotic mRNAs with extremely short 5′-UTRs can still initiate translation [54, 70, 71]. Nevertheless, this apparent contradiction can be resolved by considering the Brownian ratchet scanning process, because AUGs close to the 5′-cap can still be scanned through 3′–5′ PIC movement after the eIF4E-bound mRNA is "slotted" into the mRNA-binding channel.
Our findings in this study suggest at least three major directions for future experimental exploration. First, it would be of great value to confirm or refute the Brownian ratchet model by tracing the movement of a single PIC along an mRNA with super-resolution light microscopy or optical tweezers, in real time and at single-nucleotide resolution, in an effort to observe small-amplitude 5′–3′ and 3′–5′ oscillations with a net 5′–3′ movement. Second, quantification of ATP consumption during PIC scanning along unstructured mRNAs would also help estimate the parameters involved in the Brownian ratchet scanning model, such as the frequency of pawl placement onto mRNAs [25, 72]. Third, the identification of eukaryotic initiation factors that participate in the Brownian ratchet scanning mechanism (e.g., the protein identity of the "pawl") could also offer insight into the mechanism by which directional PIC movement can be achieved from PIC diffusion in both the 5′–3′ and 3′–5′ directions [25]. A recent study estimated a net scanning rate of ∼100 nt/s in a reconstituted translation system [55]; this rate is an order of magnitude faster than the net scanning rates reported based on cell lysates in two previous studies [26, 50] and is similar to the scanning rate over individual nucleotides estimated in our study. We suspect that PICs migrate faster in the reconstituted system owing to the lack or excess of some translation-related factors, so that PICs do not change direction as often as in cells. Therefore, additional investigation into the compositions of such reconstituted translation systems could shed light on the mechanisms regulating the Brownian ratchet scanning process. It is also possible that 5′-trailing PICs may serve as a pawl that prevents backward scanning of 3′-leading PICs, since several previous studies reported that multiple PICs could scan simultaneously on the same 5′-UTR [47, 73, 74] (i.e., the 5′-cap becomes unattached to the scanning PIC after recruitment of the small ribosomal subunit, also known as the "cap-severed" model).
In addition, our findings have potential medical implications. Specifically, the common presumption among experimental biologists and medical scientists that disease-associated, translation-defect mutations are likely to involve uAUGs [22, 75] should be expanded to include the inhibitory effects of proximal out-of-frame dAUGs on the translation of canonical ORFs. This insight will aid computational predictions of disease-causing mutations from whole genome/exome sequencing data in the future.
Proximal out-of-frame dAUGs can reduce protein production from canonical ORFs in a context-dependent manner. The inhibitory effect of out-of-frame uAUGs diminishes as the uAUG-aAUG distance decreases. These phenomena violate the first-AUG rule and indicate competition for translation initiation between closely spaced AUGs. The massive GFP intensity data measured in this study are quantitatively consistent with the Brownian ratchet model of PIC scanning rather than a strictly unidirectional scanning model. Proximal out-of-frame dATGs have been purged from eukaryotic genomes during evolution by purifying selection.
Construction of the yeast variant library
We constructed the yeast dATG library as described in a previous study [31]. Specifically, we first constructed a yeast strain (BY4742-dTomato, MATα his3Δ1 leu2Δ0 lys2Δ0 ura3Δ0 gal7Δ0::dTomato-hphMX) which expressed dTomato from the GAL7 promoter in the background of BY4742 [76], using a recombination-mediated polymerase chain reaction (PCR)-directed allele replacement method (primers provided in Additional file 2: Table S1). The dTomato expression was later used to normalize GFP intensity, which in principle could be affected by cell-to-cell variation in galactose induction and cell-cycle status [77]. We selected the transformants on the 1% yeast extract–2% peptone–2% dextrose (YPD) solid medium with 200 μg/ml hygromycin B (Amresco, Cat#97064–454). We performed PCR on the extracted genomic DNA of the yeast transformants to verify the successful GAL7 deletion and dTomato integration (here and hereafter whenever genetic manipulation was performed).
To construct the dATG yeast library in the background of upf1Δ, we further deleted UPF1 in the background of BY4742-dTomato using recombination-mediated PCR-directed allele replacement method (BY4742-dTomato-upf1Δ, primers provided in Additional file 2: Table S1). To this end, we amplified the natMX cassette from plasmid PAG25 (Addgene, Cat#35121), transformed the PCR product into BY4742-dTomato, and selected the transformants on the YPD solid medium with 100 μg/ml nourseothricin (Amresco, Cat#6021-878). To control for the potential cellular effects of natMX expression when GFP expression in the background of upf1Δ was compared with that in the wild type, we replaced the gene encoding homothallic switching endonuclease, HO (a pseudogene in the BY4742 background), using natMX [78]. The resultant yeast strain (i.e., BY4742-dTomato-hoΔ) was used as the wild type in this study.
We chemically synthesized 29 oligos in which specific positions were doped nucleotides (Fig. 1B, Additional file 1: Fig. S1A, Additional file 2: Table S2); 28 of them contained an ATG designed at a particular position downstream of the aATG. We mixed these oligos and fused them with the GAL1 promoter, the full-length GFP coding sequence (CDS, with the initiation codon of GFP omitted, including a ten amino-acid "linker" sequence in its N-terminus), the ADH1 terminator, URA3MX, and the GAL1 terminator (in the order shown in Additional file 1: Fig. S1A), using fusion PCR and the GeneArt Seamless Cloning and Assembly Kit (Thermo Fisher, Cat#A14606). The resultant sequence surrounding the synthetic oligo is CTT TAA CGT CAA GGA GAA AAA (NNN NNN ATG NNN NNN NNN NNN NNN NNN NNN NNN NNN NNN) GCA GGT CGA CGG ATC CCC GGG Tta aTt aaC AGt aaA GGA GAA GAA CTT TTC ACT GGA GTT GTC CCA ATT CTT GTt gaA Tta gAT GGt gaT GTt aaT GGG CAC AAA TTT, where out-of-frame dATGs and out-of-frame stop codons are underlined and shown in lowercase letters, respectively. Thirty nucleotides encoding the "linker" sequence are italicized.
To construct the dATG yeast library, we transformed the PCR product to replace the coding sequence of GAL1 in the BY4742-dTomato-hoΔ strain (Additional file 1: Fig. S1A). Specifically, the GAL1 promoter (500 nt) and GAL1 terminator (500 nt) were used as long homologous sequences to allow efficient integration of the PCR product into the yeast nuclear genome [31]. We selected successful transformants in the synthetic complete medium (dextrose as the carbon source) with uracil dropped out. We collected a total of ~50,000 yeast transformants, which most likely contained various sequences surrounding the aATG of GFP, due to the huge number of possible sequences that could be generated from the synthesized oligos (4³⁶ ≈ 5 × 10²¹). We similarly constructed the dATG yeast library in the background of BY4742-dTomato-upf1Δ, and collected a total of ~60,000 yeast transformants.
We also constructed a 2A-inserted dATG yeast library to eliminate the potential impacts of nonsynonymous substitutions introduced by the doped nucleotide in the N-terminus of the GFP reporter. To this end, we inserted a DNA sequence encoding a 2A self-cleaving peptide [44] (GGT TCT GGT GGT GCT ACT AAT TTT TCT TTG TTG AAA TTG GCT GGT GAT GTT GAA TTG AAT CCA GGT CCA) between the "linker" sequence and the GFP CDS. The construction procedure for the 2A-inserted dATG variant library is otherwise similar to the protocol aforementioned for the dATG variant library, except that the oligos synthesized to introduce dATGs in this 2A-inserted dATG yeast library had a fixed trinucleotide sequence upstream of the designated aATG (TTT, "weak" context) and a fixed trinucleotide sequence upstream of each dATG (AAA, "strong" context, whenever not overlapped with the aATG, see Additional file 2: Table S2 for details).
The uATG yeast library was similarly constructed in both hoΔ and upf1Δ backgrounds (Fig. 4B), with the sequence structure CTT TAA CGT CAA GGA GAA AAA TTT (NNN NNN NNN NNN NNN NNN NNN NNN NNN NNN) atg GCA GGT CGA CGG ATC CCC GGG TTA ATT AAC AGT AAA GGA, where the designated aATG is shown in lowercase letters (see Additional file 2: Table S2 for details).
FACS-coupled high-throughput sequencing (FACS-seq)
We gauged GFP intensities for each of the thousands of yeast variants using a high-throughput strategy, FACS-seq, as described in a previous study [31]. Specifically, we pooled yeast variants that contained various sequences surrounding the aATG of GFP and cultured them in the liquid medium (YPGEG) that contained 1% yeast extract, 2% peptone, 2% glycerol and 2% ethanol (both served as the carbon source), and 2% galactose (to induce the expression of GFP and dTomato). We harvested yeast cells after 18 h, during which the optical density at 660 nm increased from ~0.1 to ~0.7. We flash-froze half of the harvested cells in liquid nitrogen for total RNA and DNA extraction (to perform RNA-seq and DNA-seq as described below) and re-suspended the other half in 1× phosphate-buffered saline for FACS-seq.
We sorted yeast cells into eight bins using Aria III cytometer (BD Biosciences) based on the intensity ratio of GFP and dTomato fluorescence, which were excited by 488- and 561-nm lasers and were detected using 530/30- and 610/20-nm filters, respectively. We recorded the median GFP/dTomato intensity ratio for each bin, as well as the proportion of cells belonging to the "gate" of each bin (Additional file 1: Fig. S1A). We collected at least 20,000 yeast cells for each bin and cultured them individually in YPD overnight at 30°C to amplify the cell population for easier DNA extraction. Since GFP was not expressed in YPD (therefore should confer limited fitness cost), the relative fraction of yeast variants in each bin should be largely maintained during this amplification.
We extracted the genomic DNA from yeast cells of each bin and performed two rounds of PCR amplification on the variable region (e.g., 6-nt upstream and 30-nt downstream of the GFP aATG for the dATG library) to construct the Illumina sequencing libraries. Taking the dATG library as an example, in the first-round PCR, a pair of primers, one identical to the 21-nt sequence upstream of the aATG (positions −35 to −15, relative to the A[+1] of the aATG) and the other identical to the reverse complement of the 20-nt sequence downstream of the aATG (positions +45 to +64), was used to amplify the variable region (primer sequences for other yeast libraries are provided in Additional file 2: Table S1). Meanwhile, we introduced 19-nt or 21-nt sequences identical to the 3′-end of the P5 or P7 adaptor, respectively, a 12-nt stretch of random nucleotides (NNNNNNNNNNNN, designed to avoid difficulty in base calling of Illumina sequencing when sequencing the "constant" region), and a 6-nt bin-specific barcode to the ends of the PCR product (Additional file 1: Fig. S1A, Additional file 2: Tables S1, S3). In the second-round PCR, the full-length P5 and P7 adaptors as well as the sequencing indices were added to the ends. The PCR products were then subjected to Illumina sequencing (NovaSeq 6000 platform, in the PE150 mode).
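Per-variant GFP intensities are then typically estimated from each variant's read distribution across the eight bins. The sketch below mirrors the approach of ref. [31] only schematically and under our own assumption that a variant's intensity is a weighted average of the recorded bin medians, with weights given by its estimated cell fraction in each bin; all variable names and numbers are illustrative.

```python
def estimate_intensity(read_fraction_in_bin, bin_medians, bin_cell_fractions):
    """Weighted average of bin medians for one variant.

    read_fraction_in_bin[b]: this variant's fraction of all reads in bin b
    bin_medians[b]:          median GFP/dTomato ratio recorded for bin b
    bin_cell_fractions[b]:   fraction of sorted cells that fell into bin b
    """
    # Estimated cells of this variant in bin b ~ read fraction x bin size
    weights = [f * c for f, c in zip(read_fraction_in_bin, bin_cell_fractions)]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, bin_medians)) / total

# Toy example with eight bins (values are illustrative only)
medians = [0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8]
cell_frac = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]
variant_frac = [0.0, 0.0, 0.01, 0.03, 0.05, 0.02, 0.0, 0.0]
print(round(estimate_intensity(variant_frac, medians, cell_frac), 3))
```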
Small-scale validation of fluorescence intensities using flow cytometer
We randomly isolated 20 yeast variants from the yeast dATG library from individual colonies on the solid medium, and sequenced the variable region for each variant by Sanger sequencing. We induced the expression of the fluorescent proteins in the liquid YPGEG medium for individual strains and harvested yeast cells in the mid-log phase. For each strain, we measured the GFP and dTomato fluorescence intensities by Aria III cytometer using the same settings as in the FACS-seq experiment.
RNA-seq and DNA-seq for the yeast variant library
We extracted the total RNA from the harvested cells of the yeast library (cultured in the YPGEG liquid medium for 18 h) and performed reverse transcription using the GoScriptTM Reverse Transcription System (Promega, Cat#A5001). We built the Illumina sequencing library by two-round PCR amplification of the variable region (similarly to FACS-seq, primers provided in Additional file 2: Table S1). Illumina sequencing was performed on the NovaSeq 6000 platform under the PE150 mode. To control for the variation in the cell number among the yeast variants and the potential bias in Illumina sequencing, we also extracted the total genomic DNA from the harvested cells and PCR-amplified the variable region for Illumina sequencing (primer sequences provided in Additional file 2: Table S1).
Two biological replicates were performed for each of the dATG library, the 2A-inserted dATG library, and the uATG library by independently inducing GFP and dTomato expression. In each replicate, we performed RNA-seq and DNA-seq on the same group of harvested cells used for the FACS-seq experiments.
Dual-fluorescence reporter assay in yeast
We determined the distance-dependent inhibitory effect of out-of-frame dAUGs on translation initiation at the aAUG in small-scale experiments, using a dual-fluorescence reporter assay. To this end, we first constructed TEF promoter-GFP CDS-CYC1 terminator-TDH3 promoter-dTomato CDS-ADH1 terminator-URA3MX in the background of pUC57 plasmid (GenBank: Y14837.1) using the GeneArt Seamless Cloning and Assembly Kit (sequence shown in Additional file 2: Table S4). Then, with this plasmid as the template, we introduced an out-of-frame dATG at the position +8, +14, +20, or +26 by inserting a 27-nt sequence downstream of the aATG of GFP using fusion PCR (primer sequences provided in Additional file 2: Table S1). A control variant lacking additional dATG was also constructed.
We inserted each of the five variants into the BY4742 genome by replacing the endogenous HO locus in Chromosome IV, using recombination-mediated PCR-directed allele replacement method (59-nt homologous sequences in both ends, primer sequences are provided in Additional file 2: Table S1). We harvested yeast cells in the mid-log phase and used AccuriTM C6 cytometer (BD Biosciences) to measure the GFP and dTomato fluorescence (excited by 473- and 552-nm lasers and detected with 530/30- and 610/20 nm filters, respectively). The reported GFP fluorescence intensity was normalized by the dTomato fluorescence intensity. We similarly performed the experiments for these five variants in the upf1Δ background.
We constructed four uATG variants, each with a uATG inserted at position −25, −19, −13, or −7, and one control variant without a uATG, in the backgrounds of hoΔ and upf1Δ, and performed the dual-fluorescence reporter assay. Primer sequences used for fusion PCR are provided in Additional file 2: Table S1.
Dual-frame reporter assay in yeast
To detect the translational competition between two closely spaced AUGs, we designed a dual-frame reporter, in which GFP (frame 0) and dTomato (frame +1) were encoded in the same transcript expressed from the TDH3 promoter (Fig. 2C). To avoid truncated proteins in frame +1, we removed all six "frame +1" stop codons in the GFP CDS (five of them via synonymous mutations). A "frame +1" stop codon residing within ATG AAA (encoding Met-Lys in GFP) could not be removed via synonymous mutation, so we replaced Met with its most exchangeable amino acid according to the BLOSUM matrix, Leu, resulting in the sequence "CTT AAA." To minimize the influence of the long N-terminal peptide (encoded by frame +1 of the GFP CDS) on the protein folding of dTomato, we inserted a 2A self-cleaving peptide in frame +1 immediately upstream of the dTomato CDS. The dual-frame reporter DNA was synthesized by BGI Tech (sequence shown in Additional file 2: Table S4), based on which we generated two dual-frame constructs with a 3-nt difference in the sequence upstream of the dTomato ATG and four control constructs lacking either the GFP ATG or the dTomato ATG (Fig. 2C).
We inserted the dual-frame reporter constructs into the yeast genome, replacing the endogenous HO locus in BY4742, by the recombination-mediated, PCR-directed allele replacement method (59-nt homologous sequences at both ends; primer sequences are provided in Additional file 2: Table S1). For each yeast strain, we harvested cells in the mid-log phase and measured the GFP and dTomato fluorescence using the Accuri™ C6 cytometer and the mRNA levels using the Bio-Rad CFX384 Touch real-time PCR detection system (PCR primers are provided in Additional file 2: Table S1).
Dual-luciferase assay in HeLa cells
HeLa cells were cultured in Dulbecco's modified Eagle's medium containing 10% fetal calf serum and 2 mM L-glutamine at 37°C in a 5% CO2 incubator. To detect the inhibitory effect of proximal out-of-frame dAUGs on translation initiation at the aAUG in HeLa cells, we performed a dual-luciferase assay based on modified pmirGLO plasmids (Promega, Cat#E1330), in which the firefly and Renilla luciferases are expressed from the PGK and SV40 promoters, respectively. Specifically, using site-directed mutagenesis, we modified the pmirGLO plasmid by introducing a 6-nt sequence (AATTTT, a weak context) immediately upstream of the firefly luciferase ATG to increase its leakage rate. We then designed synonymous mutations to generate four 21-nt sequences that encode the same amino acid sequence; one sequence lacked proximal out-of-frame dATGs and the other three each contained a proximal out-of-frame dATG at the +8, +14, or +20 position relative to the aATG (Fig. 6F).
We introduced each of the four 21-nt sequences immediately downstream of the firefly luciferase ATG in the plasmid using site-directed mutagenesis, and individually transfected the four modified plasmids into HeLa cells using Lipofectamine™ 2000 (Thermo Fisher, Cat#11668030). We determined the luciferase activities in 96-well microtiter plates 48 h after transfection, using a commercial dual-luciferase assay kit (Promega, Cat#E1910) following the manufacturer's protocol. Briefly, we lysed the HeLa cells in 500 μL of passive lysis buffer and mixed 20 μL of the lysate with 100 μL of firefly luciferase substrate. We first measured the firefly luciferase activity using the Synergy HTX multi-mode microplate reader (BioTek). We then added 100 μL of Stop-and-Glo reagent to the solution and measured the Renilla luciferase activity on the same instrument. mRNA levels were determined using the Bio-Rad CFX384 Touch real-time PCR detection system (PCR primers are provided in Additional file 2: Table S1).
Quantification of GFP and mRNA levels for individual variants in the yeast libraries
For the dATG variant library, the "read 1" of a read pair from the DNA-seq data should follow the pattern N(12)-barcode (6 nt)-CCTCTATACTTTAACGTCAAGGAGAAAAA-N(6)-ATG-N(30)-GCAGGTCGACGGATCCCCGGGTTAATTAACA-barcode (6 nt)-N(12)-P7. Note that the P7 adaptor was also sequenced downstream of the insert because the insert of the Illumina sequencing libraries generated in this study was 135 nt long, shorter than a sequencing read (150 nt). For the same reason, the reverse complements of the barcodes and of the variable region were sequenced a second time in "read 2", in which part of the P5 adaptor was also sequenced. For each read, we extracted the barcodes (6 nt upstream and 6 nt downstream) as well as the 36-nt variable sequence surrounding the ATG using pattern matching. We discarded the whole read pair in the following three scenarios: (1) if either read of a read pair could not be matched to the pattern, (2) if any of the four barcodes extracted from a read pair differed from the barcodes that were introduced during library preparation for a particular sample, or (3) if the read 1 sequence and the reverse complement of the read 2 sequence were not identical in the variable region. We then classified read pairs into biological replicates according to the barcode sequence and grouped read pairs into variants according to the sequence of the variable region. The sequencing data from the RNA-seq and FACS-seq libraries were analyzed in the same way. The numbers of read pairs that passed the three criteria and the numbers of identified variants are summarized in Additional file 2: Table S5.
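For illustration, the read-pair filtering described above can be expressed in a short script. The following is a minimal sketch rather than the code used for the study: the constant flanking sequences are taken from the read-1 pattern above, while the function names, the use of regular expressions, and the example barcode are our own illustrative choices.

```python
import re

# Constant flanks of the dATG library, taken from the read-1 pattern described above.
LEFT = "CCTCTATACTTTAACGTCAAGGAGAAAAA"
RIGHT = "GCAGGTCGACGGATCCCCGGGTTAATTAACA"

# N(12) - barcode(6) - LEFT - N(6) - ATG - N(30) - RIGHT - barcode(6) - ...
PATTERN = re.compile(
    "[ACGTN]{12}"
    "([ACGTN]{6})" + LEFT +            # group 1: 5'-side bin barcode
    "([ACGTN]{6})ATG([ACGTN]{30})" +   # groups 2-3: 36-nt variable region around the fixed ATG
    RIGHT + "([ACGTN]{6})"             # group 4: 3'-side bin barcode
)

def revcomp(seq):
    return seq.translate(str.maketrans("ACGTN", "TGCAN"))[::-1]

def parse_pair(read1, read2, sample_barcodes):
    """Return the 36-nt variable sequence of a read pair, or None if any filter fails."""
    m1 = PATTERN.search(read1)
    m2 = PATTERN.search(revcomp(read2))  # read 2 covers the same insert in reverse complement
    if m1 is None or m2 is None:
        return None                      # (1) pattern mismatch
    if not {m1.group(1), m1.group(4), m2.group(1), m2.group(4)} <= sample_barcodes:
        return None                      # (2) unexpected barcode
    if (m1.group(2), m1.group(3)) != (m2.group(2), m2.group(3)):
        return None                      # (3) read 1 and read 2 disagree in the variable region
    return m1.group(2) + m1.group(3)

# Usage sketch (FASTQ iteration omitted): tally read pairs per variant, e.g.
#   variant = parse_pair(r1, r2, sample_barcodes={"AAGGCT"})
#   if variant is not None:
#       counts[variant] += 1
```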
Some sequences in the variable regions were not detected in all three libraries (FACS-seq, RNA-seq, and DNA-seq), suggesting that they potentially originated from PCR amplification errors during Illumina sequencing library preparation. We therefore discarded the variants that did not appear in all three libraries. Furthermore, the frequencies of some dATG variants were too low in the DNA-seq (number of read pairs ≤8) and FACS-seq (all read pairs from the eight bins combined ≤64) libraries. To be conservative, we also discarded these variants (the remaining variant numbers are shown in the Venn diagram of Additional file 1: Fig. S1D). Additional filtering criteria are listed in Additional file 1: Fig. S1D. In particular, variants containing in-frame stop codons in the 30-nt downstream region showed lower GFP intensities (Additional file 1: Fig. S1E), as they are potential NMD substrates; we removed these variants from the subsequent analyses. We also discarded variants containing a uATG because of its potential impact on translation initiation (Additional file 1: Fig. S1F).
Following a previous study [31], the dTomato-normalized GFP intensity of each yeast variant (GFP_j) was calculated as the average GFP/dTomato intensity ratio among the eight bins, weighted by the proportions of its cells distributed in the eight bins (Additional file 1: Fig. S1A, Additional file 2: Table S6). The weight of variant j in bin i was estimated from n_ij × P_i, where n_ij was the fraction of read pairs for variant j among all read pairs in bin i, and P_i was the proportion of cells belonging to the "gate" of bin i as recorded by the flow cytometer. The GFP level of variant j was calculated from the formula:
$${GFP}_j=\frac{\sum_{i=1}^8{G}_i\times {n}_{ij}\times {P}_i}{\sum_{i=1}^8\ {n}_{ij}\times {P}_i}$$
where G_i was the median GFP/dTomato ratio of the collected yeast cells in bin i, as estimated by the flow cytometer. The GFP intensity of each dATG variant is provided in Additional file 2: Table S7.
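As a concrete illustration, the weighted average above can be computed in a few lines of Python. This is a minimal sketch; the bin medians G_i, gate proportions P_i, and per-bin read counts below are placeholder numbers, not measured values.

```python
import numpy as np

def gfp_level(reads_j, reads_total, G, P):
    """Weighted GFP/dTomato ratio of one variant across the eight bins.

    reads_j      -- read pairs assigned to variant j in each bin (length 8)
    reads_total  -- total read pairs in each bin (length 8)
    G            -- median GFP/dTomato ratio of each bin (from the flow cytometer)
    P            -- proportion of cells falling into each bin's gate
    """
    n = np.asarray(reads_j, float) / np.asarray(reads_total, float)  # n_ij
    w = n * np.asarray(P, float)                                      # n_ij * P_i
    return float(np.sum(np.asarray(G, float) * w) / np.sum(w))

# Placeholder example with eight bins:
G = [0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
P = [0.10, 0.12, 0.13, 0.15, 0.15, 0.13, 0.12, 0.10]
print(gfp_level([12, 30, 80, 150, 90, 40, 10, 3], [1e5] * 8, G, P))
```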
We estimated the mRNA level of each dATG variant from the ratio of its read-pair frequencies in the RNA-seq and DNA-seq libraries. Specifically, the read-pair frequency of variant i in the RNA-seq library (R_i) or the DNA-seq library (D_i) was calculated as the fraction of read pairs derived from variant i among the total read pairs in the RNA-seq or DNA-seq library, respectively. The mRNA level (abundance per cell) of variant i was then estimated from the R_i/D_i ratio. The mRNA level of each dATG variant is provided in Additional file 2: Table S7.
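The corresponding mRNA estimate is a simple frequency ratio; the sketch below assumes that read-pair counts per variant have already been tallied from the RNA-seq and DNA-seq libraries (the counts shown are placeholders).

```python
def mrna_levels(rna_counts, dna_counts):
    """Per-variant mRNA level R_i / D_i from read-pair counts (dicts keyed by variant)."""
    rna_total = sum(rna_counts.values())
    dna_total = sum(dna_counts.values())
    levels = {}
    for variant, d in dna_counts.items():
        r = rna_counts.get(variant, 0)
        if d > 0:
            levels[variant] = (r / rna_total) / (d / dna_total)
    return levels

# Placeholder counts for two variants:
print(mrna_levels({"variant_A": 240, "variant_B": 60},
                  {"variant_A": 100, "variant_B": 100}))
```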
For the 2A-inserted dATG yeast library, the sequencing data should follow the pattern N(12)-barcode (6 nt)-CCTCTATACTTTAACGTCAAGGAGAAAAAAATTTT-ATG-N(30)-GCAGGTCGACGGATCCCCGGGTTAATTAACA-barcode (6 nt)-N(12)-P7. For this library, we further discarded Duo variants with a dATG located at the +4, +5, or +6 position (relative to the aATG) to ensure that all dATGs were in a strong context (AAA at positions −3 to −1 relative to the dATG). For the uATG yeast library, the "read 1" of a read pair should follow the pattern N(12)-barcode (6 nt)-CCTCTATACTTTAACGTCAAGGAGAAAAATTT-N(30)-ATG-GCAGGTCGACGGATCCCCGGGTTAATTAACA-barcode (6 nt)-N(12)-P7. Subsequent analysis procedures for these two libraries were identical to those used for the dATG yeast library. The GFP intensity and mRNA level of each variant in the 2A-inserted dATG library and in the uATG library are provided in Additional file 2: Tables S8 and S9, respectively.
Estimation of minimum free energy (MFE) and codon adaptation index (CAI)
The mRNA secondary structure immediately downstream of the ATG has been reported to regulate translation initiation/elongation [37, 79, 80]. Since the intrinsic propensity of an RNA sequence to form secondary structure can be inferred from its MFE, we estimated the MFE of the 30-nt region downstream of the aATG for each variant in our dATG library using the RNAfold program (http://rna.tbi.univie.ac.at/; command: RNAfold -d2 --noLP --noPS) from the ViennaRNA package [81].
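The call can be scripted as below; this is a minimal sketch that assumes the RNAfold executable from the ViennaRNA package is available on the PATH, and the 30-nt example sequence is arbitrary.

```python
import re
import subprocess

def mfe_30nt(downstream_30nt):
    """Minimum free energy (kcal/mol) of a 30-nt sequence via RNAfold -d2 --noLP --noPS."""
    out = subprocess.run(
        ["RNAfold", "-d2", "--noLP", "--noPS"],
        input=downstream_30nt + "\n",
        capture_output=True, text=True, check=True,
    ).stdout
    # RNAfold prints the sequence, then the structure followed by "( -x.yz)";
    # grab the number in the final parentheses.
    return float(re.search(r"\(\s*(-?\d+\.\d+)\)\s*$", out.strip()).group(1))

print(mfe_30nt("AUGGCUAGCAAAGGAGAAGAACUUUUCACU"))  # arbitrary 30-nt example
```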
Synonymous codon usage has also been reported to regulate the protein synthesis rate [38]. We therefore calculated the CAI of the 30-nt (10-codon) region downstream of the aATG for each dATG variant, following the computational procedure described in previous studies [82, 83].
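The CAI is the geometric mean of the relative adaptiveness values of the codons in the region. The sketch below is illustrative only: the weight table is a placeholder, whereas the actual weights must be derived from a reference set of highly expressed yeast genes as in the cited procedure [82, 83].

```python
import math

def cai(cds_30nt, weights):
    """Codon adaptation index of a 10-codon region: geometric mean of the relative
    adaptiveness value of each codon (weights here are placeholders)."""
    codons = [cds_30nt[i:i + 3] for i in range(0, 30, 3)]
    return math.exp(sum(math.log(weights[c]) for c in codons) / len(codons))

# Placeholder weights covering only the codons of the example sequence:
w = {"GCT": 1.0, "AGC": 0.4, "AAA": 1.0, "GGA": 0.6, "GAA": 1.0,
     "GAG": 0.5, "CTT": 0.3, "TTC": 1.0, "ACT": 0.8, "TCA": 0.4}
print(cai("GCTAGCAAAGGAGAAGAGCTTTTCACTTCA", w))
```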
Simulation of Brownian ratchet scanning process using a random walk model
We simulated the ratchet-and-pawl mechanism using a modified random walk model [84]. Based on a previous study [47], the triplet at the 13th–15th positions of the PIC-bound mRNA fragment is inspected for complementarity to the Met-tRNAi anticodon. The PIC starts with its 5′-trailing side at the 5′-cap and moves 1 nt per step along the mRNA. A pawl is stochastically placed onto the mRNA at the 5′-trailing side of the PIC with probability p.Pawl, and the dissociation of the pawl from the mRNA is sufficiently slow that it is not considered in the model. The PIC moves with equal probabilities in the 5′–3′ and 3′–5′ directions (each 50%) unless its 5′-trailing side hits the eIF4E-cap or a pawl, in which case the PIC moves in the 5′–3′ direction with 100% probability. When an AUG is located within the 13th–15th positions covered by the PIC, the PIC recognizes the AUG with probability (1 − p.Leakage) or misses it with probability p.Leakage; in the latter case, the PIC continues scanning. Note that in our simulation we assume that AUG triplets can be recognized when the PIC moves in either the 5′–3′ or the 3′–5′ direction. The PIC may sometimes recognize an out-of-frame AUG, which activates the NMD pathway and mRNA degradation with probability p.NMD.
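A single scanning trajectory under these rules can be sketched as follows. This is a minimal illustration rather than the simulation code used for the study: the 50-nt leader and trailer, the inspection window 12 nt downstream of the PIC's trailing edge, and the treatment of the pawl as a reflecting barrier follow the description above, while the boundary convention at the 3′ end and the lumping of non-productive, non-NMD outcomes into a single category are our simplifying assumptions.

```python
import random

def scan_once(atg_positions, atg_frames, p_pawl, p_leak, p_nmd,
              leader=50, trailer=50, rng=random):
    """One PIC trajectory; the aATG starts at 0-based position `leader`.

    atg_positions -- 0-based positions of all AUGs (aATG first, then any dAUG)
    atg_frames    -- frame of each AUG relative to the aATG (0 = in frame)
    Returns 'aATG', 'in_frame_dATG', 'nmd', or 'other' (no productive initiation, no NMD).
    """
    pos = 0                                # 5'-trailing edge of the PIC; starts at the cap
    last = leader + 3 + trailer - 15       # assumed last trailing-edge position before 3' runoff
    pawls = set()
    while True:
        codon = pos + 12                   # triplet at the 13th-15th nt of the PIC footprint
        if codon in atg_positions and rng.random() > p_leak:
            frame = atg_frames[atg_positions.index(codon)]
            if frame == 0:
                return 'aATG' if codon == leader else 'in_frame_dATG'
            return 'nmd' if rng.random() < p_nmd else 'other'
        if rng.random() < p_pawl:
            pawls.add(pos)                 # pawl deposited at the trailing edge; never removed
        blocked = (pos == 0) or (pos in pawls)
        pos += 1 if (blocked or rng.random() < 0.5) else -1
        if pos > last:                     # scanned past the modeled 3' end
            return 'nmd' if rng.random() < p_nmd else 'other'
```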
We simulated the PIC scanning process on one Solo and 25 Duo variants. The Solo variant used for the computational simulation contained 50-nt sequences upstream and downstream of the aATG. The distance between the two ATGs in the 25 Duo variants varied within the range of 6–30 nt. When the PIC moved beyond the 50-nt downstream region (i.e., beyond the 3′-end), NMD was also activated with probability p.NMD, as the PIC might encounter out-of-frame AUGs further downstream (the next three ATG triplets downstream of the variable region of GFP used in our experiments are all out-of-frame). We simulated the PIC scanning process 100 times for each variant (the Solo or one of the 25 Duos) and calculated the fraction of successful translation initiation events at the aATG. The protein expression level of a variant was estimated from the product of this fraction and the proportion of mRNA that did not activate the NMD pathway. We then compared the protein expression levels from our simulation (Fig. 5D) with the GFP intensities measured in the experiments (variants of the two replicates combined) by calculating the residual sum of squares (RSS). Note that translation initiation at the in-frame dAUGs in Duo variants would synthesize functional proteins and would not activate NMD (Fig. 5A).
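Continuing the sketch above (and assuming scan_once is defined as in the previous block), the variant-level protein estimate and the RSS against the observations can be written as follows; the observed GFP values used here are placeholders, and the bookkeeping of outcomes follows our reading of the description above.

```python
import random

def protein_level(dATG_offset, p_pawl, p_leak, p_nmd, n_sims=100, rng=random):
    """Simulated expression of a Solo (dATG_offset=None) or Duo variant;
    dATG_offset is the distance (nt) of the dATG from the aATG."""
    leader = 50
    positions, frames = [leader], [0]
    if dATG_offset is not None:
        positions.append(leader + dATG_offset)
        frames.append(dATG_offset % 3)
    outcomes = [scan_once(positions, frames, p_pawl, p_leak, p_nmd, rng=rng)
                for _ in range(n_sims)]
    frac_aATG = outcomes.count('aATG') / n_sims          # initiation at the aATG
    frac_no_nmd = 1.0 - outcomes.count('nmd') / n_sims   # transcripts escaping NMD
    return frac_aATG * frac_no_nmd

def rss(params, observed):
    """Residual sum of squares between simulated and observed Solo-normalized GFP."""
    p_pawl, p_leak, p_nmd = params
    solo = protein_level(None, p_pawl, p_leak, p_nmd)
    return sum((protein_level(d, p_pawl, p_leak, p_nmd) / max(solo, 1e-9) - g) ** 2
               for d, g in observed.items())

observed_gfp = {8: 0.35, 14: 0.55, 20: 0.75, 26: 0.90}   # placeholder, Solo-normalized
print(rss((0.001, 0.5, 0.6), observed_gfp))
```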
Estimation of p.Pawl, p.Leakage, and p.NMD by the MCMC algorithms
To determine the starting parameters for the MCMC algorithms, we screened the parameter space of p.Pawl, p.Leakage, and p.NMD. Specifically, we set each of the probability parameters to one of the values 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, and 0.8 (Additional file 1: Fig. S11A), giving (10³ =) 1000 combinations of parameter values. Each parameter set was used to simulate the Brownian ratchet scanning process, and the RSS between the simulated protein expression levels and the observed GFP intensities in the dATG yeast library was then calculated. Note that the protein expression levels of the Duo variants were normalized by that of the Solo variant in both the simulations and the experiments, so that the protein levels were directly comparable.
To estimate p.Pawl, p.Leakage, and p.NMD from the GFP intensities of the dATG variants, three hundred iterations of the MCMC simulation were performed. In each iteration, one of the three parameters was changed, cycling through them in the order p.Pawl, p.Leakage, and p.NMD. We set the sampling window widths for the three parameters to 0.0004, 0.2, and 0.5, respectively, approximately two times the standard deviations among the ten parameter sets obtained in the initial screening (Additional file 1: Fig. S11B). A new parameter value was drawn from the uniform distribution over the window of the given width centered at the parameter value of the previous iteration. If a window boundary fell below 0 or above 1, it was set to 0 or 1, respectively, since all three parameters are probabilities. The new parameter set was then used to simulate the PIC scanning processes. If the RSS of the current iteration was smaller than that of the previous iteration, the previous set of parameters was replaced by the new one; if the RSS increased, the previous set of parameters was retained. After 300 iterations, the parameter set with the minimum RSS was recorded. The net leakage rate (i.e., the fraction of PICs that eventually miss an AUG after multiple scans) was estimated from the fraction of PICs that failed to initiate translation at the aATG in the Solo variant. Based on the relationship net leakage rate = (single-scan leakage rate)^(number of scans), the number of scans can be estimated as log(net leakage rate)/log(single-scan leakage rate). We repeated the simulation 30 times and obtained the distributions of the three parameters shown in Fig. 5E and Additional file 1: Fig. S11C. The average values and standard errors (SE) were estimated from the 30 sets of parameters optimized by the MCMC algorithms (Fig. 5F).
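The accept-only-if-the-RSS-decreases rule described above amounts to a greedy coordinate-wise search. The sketch below illustrates it, reusing the rss function from the previous block; the starting values are placeholders, whereas the window widths are those given in the text.

```python
import math
import random

def fit_parameters(observed, start=(0.001, 0.5, 0.6),
                   widths=(0.0004, 0.2, 0.5), n_iter=300, rng=random):
    """Coordinate-wise search that accepts a proposal only when the RSS decreases
    (assumes rss() from the previous sketch)."""
    params = list(start)
    current_rss = rss(tuple(params), observed)
    for it in range(n_iter):
        k = it % 3                                    # p.Pawl, p.Leakage, p.NMD in turn
        lo = max(0.0, params[k] - widths[k] / 2)
        hi = min(1.0, params[k] + widths[k] / 2)
        proposal = list(params)
        proposal[k] = rng.uniform(lo, hi)
        r = rss(tuple(proposal), observed)
        if r < current_rss:                           # keep only improvements
            params, current_rss = proposal, r
    return tuple(params), current_rss

def number_of_scans(net_leakage, single_scan_leakage):
    """Implied number of recognition attempts per AUG, from
    net leakage = (single-scan leakage)^(number of scans)."""
    return math.log(net_leakage) / math.log(single_scan_leakage)

# Usage sketch:
#   best_params, best_rss = fit_parameters(observed_gfp)
```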
To estimate p.Leakage for ATGs in the A, G, C, and T contexts individually, we ran the MCMC algorithms for 100 additional iterations, during which we estimated the RSS between the simulated protein expression levels and the GFP intensities measured for the Solo variants with A, G, C, or T at the −3 position in the dATG library. In these simulations, we fixed p.Pawl and p.NMD at the values acquired above (p.Pawl = 0.001 and p.NMD = 0.62). The MCMC algorithms were performed 30 times, and the average p.Leakage for each of the four ATG contexts was estimated as the average value over the 30 MCMC chains (Fig. 5G).
We also used the MCMC algorithms to estimate numerical approximations of the parameters in the Brownian ratchet scanning model from the GFP intensities of the uATG variants. The same procedures (including the initial parameter sets and sampling window widths) as in the analyses of the dATG library were used.
Detection of out-of-frame dAUGs in eukaryotic and prokaryotic genomes
The coding sequences of the budding yeast (Saccharomyces cerevisiae, R64-1-1) and human (Homo sapiens, GRCh38) genes were downloaded from the Ensembl database [85] using BioMart [86], and the sequence of the main transcript of each gene, as defined in the Ensembl database, was used for the subsequent analyses. A gene was discarded if its annotated initiation codon was not ATG or if its coding sequence was shorter than 45 nt. The numbers of genes used for the subsequent analyses were 6564 (yeast) and 19,496 (human). For each genome, we counted the genes harboring a dATG at each position within the 45-nt region downstream of the aATG.
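The position-wise count can be written as a short function. This is a minimal sketch; FASTA parsing is omitted, and the interpretations of the 45-nt window boundaries and of the length filter are our assumptions.

```python
from collections import Counter

def datg_position_counts(cds_sequences):
    """Count genes with a dATG whose A falls at each position within the 45 nt
    downstream of the annotated start codon (aATG A = position +1)."""
    counts = Counter()
    for cds in cds_sequences:
        cds = cds.upper()
        if not cds.startswith("ATG") or len(cds) < 48:   # assumed length filter
            continue
        for i in range(3, 46):                           # 0-based offsets of candidate dATG A's
            if cds[i:i + 3] == "ATG":
                counts[i + 1] += 1                       # 1-based position of the dATG's A
    return counts

# Toy example: one 48-nt CDS with a dATG at position +7.
print(datg_position_counts(["ATGGCTATGAAATTTGGGCCCAAATTTGGGCCCAAATTTGGGCCCAAA"]))
```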
As a negative control, we also retrieved the coding sequences of two prokaryotic genomes [87], Escherichia coli (GenBank: NZ_CP027599.1) and Bacillus subtilis (GenBank: NC_000964.3), where no scanning mechanism is required for recognition of the initiation codon. We applied the same set of criteria as described above for yeast and humans and identified 5113 (E. coli) and 3282 (B. subtilis) genes for the subsequent analyses.
The distribution of dAUGs in all three possible frames was also analyzed in subsets of genes defined by gene expression level (in yeast) or by the number of tissues expressing the gene (in humans). The yeast gene expression levels and transcription rates were retrieved from previous studies [88, 89], and the expression levels of each human transcript in 54 tissues were retrieved from the GTEx database (dbGaP accession number phs000424.v8.p2) on 09/13/2022. The number of tissues in which a human transcript is expressed (transcripts per million ≥ 1) was counted.
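Counting tissue breadth from such a table is straightforward; the sketch below uses a tiny placeholder table whose layout (transcripts as rows, tissues as columns, TPM values) is an assumption about how the GTEx matrix was organized.

```python
import pandas as pd

# Placeholder table: rows = transcripts, columns = tissues, values = TPM.
tpm = pd.DataFrame(
    {"liver": [0.2, 5.1], "brain": [3.0, 0.0], "lung": [1.2, 0.7]},
    index=["ENST0000000001", "ENST0000000002"],
)
n_tissues_expressed = (tpm >= 1).sum(axis=1)   # tissues with TPM >= 1 per transcript
print(n_tissues_expressed)
```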
The reference sequence and annotation of the Saccharomyces cerevisiae genome (R64-1-1) and the Homo sapiens genome (GRCh38.p13) were downloaded from the Ensembl database [90]. The reference sequence and gene annotation of the Escherichia coli genome (NZ_CP027599.1) and the Bacillus subtilis genome (NC_000964.3) were retrieved from the NCBI database [91]. The expression levels and transcription rates of yeast genes were retrieved from previous studies [88, 89]. The expression levels of human transcripts in 54 tissues were obtained from the GTEx Portal (dbGaP accession number phs000424.v8.p2) on 09/13/2022. The high-throughput sequencing data of ribosome-protected fragments were retrieved from a previous Ribo-seq study [92].
The high-throughput sequencing data of FACS-seq, RNA-seq, and DNA-seq have been deposited at Genome Sequence Archive [93] under the accession number CRA005456 [94]. Codes to analyze the data are available at Zenodo [95], under the terms of the Creative Commons Attribution 4.0 license.
Sonenberg N, Hinnebusch AG. Regulation of translation initiation in eukaryotes: mechanisms and biological targets. Cell. 2009;136:731–45.
Alberts B, Johnson A, Lewis J, Morgan D, Raff M, Roberts K, et al. Molecular biology of the cell: W.W. Norton; 2014.
Krebs JE, Goldstein ES, Kilpatrick ST. Lewin's GENES XII: Jones & Bartlett Learning; 2018.
Kozak M. The scanning model for translation: an update. J Cell Biol. 1989;108:229–41.
Kozak M. How do eucaryotic ribosomes select initiation regions in messenger RNA? Cell. 1978;15:1109–23.
Hinnebusch AG. The scanning mechanism of eukaryotic translation initiation. Annu Rev Biochem. 2014;83:779–812.
Merrick WC. Mechanism and regulation of eukaryotic protein synthesis. Microbiol Rev. 1992;56:291–315.
Pelletier J, Sonenberg N. The organizing principles of eukaryotic ribosome recruitment. Annu Rev Biochem. 2019;88:307–35.
Merrick WC, Pavitt GD. Protein synthesis initiation in eukaryotic cells. Cold Spring Harb Perspect Biol. 2018;10:a033092.
Pestova TV, Kolupaeva VG. The roles of individual eukaryotic translation initiation factors in ribosomal scanning and initiation codon selection. Genes Dev. 2002;16:2906–22.
Cigan AM, Feng L, Donahue TF. tRNAi(met) functions in directing the scanning ribosome to the start site of translation. Science. 1988;242:93–7.
Jackson RJ, Hellen CU, Pestova TV. The mechanism of eukaryotic translation initiation and principles of its regulation. Nat Rev Mol Cell Biol. 2010;11:113–27.
Saini AK, Nanda JS, Lorsch JR, Hinnebusch AG. Regulatory elements in eIF1A control the fidelity of start codon selection by modulating tRNA(i)(met) binding to the ribosome. Genes Dev. 2010;24:97–110.
Hinnebusch AG. Structural insights into the mechanism of scanning and start codon recognition in eukaryotic translation initiation. Trends Biochem Sci. 2017;42:589–611.
Kozak M. Pushing the limits of the scanning mechanism for initiation of translation. Gene. 2002;299:1–34.
Kozak M. Selection of initiation sites by eucaryotic ribosomes: effect of inserting AUG triplets upstream from the coding sequence for preproinsulin. Nucleic Acids Res. 1984;12:3873–93.
Noderer WL, Flockhart RJ, Bhaduri A, Diaz de Arce AJ, Zhang J, Khavari PA, et al. Quantitative analysis of mammalian translation initiation sites by FACS-seq. Mol Syst Biol. 2014;10:748.
Cuperus JT, Groves B, Kuchina A, Rosenberg AB, Jojic N, Fields S, et al. Deep learning of the regulatory grammar of yeast 5' untranslated regions from 500,000 random sequences. Genome Res. 2017;27:2015–24.
Dvir S, Velten L, Sharon E, Zeevi D, Carey LB, Weinberger A, et al. Deciphering the rules by which 5'-UTR sequences affect protein expression in yeast. Proc Natl Acad Sci U S A. 2013;110:E2792–801.
Johnstone TG, Bazzini AA, Giraldez AJ. Upstream ORFs are prevalent translational repressors in vertebrates. EMBO J. 2016;35:706–23.
Hinnebusch AG. Molecular mechanism of scanning and start codon selection in eukaryotes. Microbiol Mol Biol Rev. 2011;75:434–67.
Zhang H, Wang Y, Lu J. Function and evolution of upstream ORFs in eukaryotes. Trends Biochem Sci. 2019;44:782–94.
Hinnebusch AG, Ivanov IP, Sonenberg N. Translational control by 5'-untranslated regions of eukaryotic mRNAs. Science. 2016;352:1413–6.
Kozak M. Regulation of translation via mRNA structure in prokaryotes and eukaryotes. Gene. 2005;361:13–37.
Spirin AS. How does a scanning ribosomal particle move along the 5'-untranslated region of eukaryotic mRNA? Brownian ratchet model. Biochemistry. 2009;48:10688–92.
Berthelot K, Muldoon M, Rajkowitsch L, Hughes J, McCarthy JE. Dynamics and processivity of 40S ribosome scanning on mRNA in yeast. Mol Microbiol. 2004;51:987–1001.
Alekhina OM, Vassilenko KS. Translation initiation in eukaryotes: versatility of the scanning model. Biochemistry (Mosc). 2012;77:1465–77.
Kozak M. Adherence to the first-AUG rule when a second AUG codon follows closely upon the first. Proc Natl Acad Sci U S A. 1995;92:2662–6.
Matsuda D, Dreher TW. Close spacing of AUG initiation codons confers dicistronic character on a eukaryotic mRNA. RNA. 2006;12:1338–49.
Williams MA, Lamb RA. Effect of mutations and deletions in a bicistronic mRNA on the synthesis of influenza B virus NB and NA glycoproteins. J Virol. 1989;63:28–35.
Chen S, Li K, Cao W, Wang J, Zhao T, Huan Q, et al. Codon-resolution analysis reveals a direct and context-dependent impact of individual synonymous mutations on mRNA level. Mol Biol Evol. 2017;34:2944–58.
Benitez-Cantos MS, Yordanova MM, O'Connor PBF, Zhdanov AV, Kovalchuk SI, Papkovsky DB, et al. Translation initiation downstream from annotated start codons in human mRNAs coevolves with the Kozak context. Genome Res. 2020;30:974–84.
Hamilton R, Watanabe CK, de Boer HA. Compilation and comparison of the sequence context around the AUG startcodons in Saccharomyces cerevisiae mRNAs. Nucleic Acids Res. 1987;15:3581–93.
Kozak M. Point mutations define a sequence flanking the AUG initiator codon that modulates translation by eukaryotic ribosomes. Cell. 1986;44:283–92.
Kozak M. An analysis of 5'-noncoding sequences from 699 vertebrate messenger RNAs. Nucleic Acids Res. 1987;15:8125–48.
Simonetti A, Guca E, Bochler A, Kuhn L, Hashem Y. Structural insights into the mammalian late-stage initiation complexes. Cell Rep. 2020;31:107497.
Yang JR, Chen X, Zhang J. Codon-by-codon modulation of translational speed and accuracy via mRNA folding. PLoS Biol. 2014;12:e1001910.
Chu D, Kazana E, Bellanger N, Singh T, Tuite MF, von der Haar T. Translation elongation can control translation initiation on eukaryotic mRNAs. EMBO J. 2014;33:21–34.
Kurosaki T, Popp MW, Maquat LE. Quality and quantity control of gene expression by nonsense-mediated mRNA decay. Nat Rev Mol Cell Biol. 2019;20:406–20.
Losson R, Lacroute F. Interference of nonsense mutations with eukaryotic messenger RNA stability. Proc Natl Acad Sci U S A. 1979;76:5134–7.
Muhlrad D, Parker R. Aberrant mRNAs with extended 3' UTRs are substrates for rapid degradation by mRNA surveillance. RNA. 1999;5:1299–307.
He F, Li X, Spatrick P, Casillo R, Dong S, Jacobson A. Genome-wide analysis of mRNAs regulated by the nonsense-mediated and 5' to 3' mRNA decay pathways in yeast. Mol Cell. 2003;12:1439–52.
Leeds P, Peltz SW, Jacobson A, Culbertson MR. The product of the yeast UPF1 gene is required for rapid turnover of mRNAs containing a premature translational termination codon. Genes Dev. 1991;5:2303–14.
Souza-Moreira TM, Navarrete C, Chen X, Zanelli CF, Valentini SR, Furlan M, et al. Screening of 2A peptides for polycistronic gene expression in yeast. FEMS Yeast Res. 2018;18:foy036.
Agalarov S, Sakharov PA, Fattakhova D, Sogorin EA, Spirin AS. Internal translation initiation and eIF4F/ATP-independent scanning of mRNA by eukaryotic ribosomal particles. Sci Rep. 2014;4:4438.
Shirokikh NE, Spirin AS. Poly(A) leader of eukaryotic mRNA bypasses the dependence of translation on initiation factors. Proc Natl Acad Sci U S A. 2008;105:10738–43.
Archer SK, Shirokikh NE, Beilharz TH, Preiss T. Dynamics of ribosome scanning and recycling revealed by translation complex profiling. Nature. 2016;535:570–4.
Hastings WK. Monte Carlo sampling methods using Markov chains and their applications. Biometrika. 1970;57:97–109.
Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, Teller E. Equation of state calculations by fast computing machines. J Chem Phys. 1953;21:1087–92.
Vassilenko KS, Alekhina OM, Dmitriev SE, Shatsky IN, Spirin AS. Unidirectional constant rate motion of the ribosomal scanning particle during eukaryotic translation initiation. Nucleic Acids Res. 2011;39:5555–67.
Kochetov AV. AUG codons at the beginning of protein coding sequences are frequent in eukaryotic mRNAs with a suboptimal start codon context. Bioinformatics. 2005;21:837–40.
Kapp LD, Lorsch JR. The molecular mechanics of eukaryotic translation. Annu Rev Biochem. 2004;73:657–704.
Kozak M. Initiation of translation in prokaryotes and eukaryotes. Gene. 1999;234:187–208.
Gu Y, Mao Y, Jia L, Dong L, Qian SB. Bi-directional ribosome scanning controls the stringency of start codon selection. Nat Commun. 2021;12:6604.
Wang J, Shin BS, Alvarado C, Kim JR, Bohlen J, Dever TE, et al. Rapid 40S scanning and its regulation by mRNA structure during eukaryotic translation initiation. Cell. 2022;185:4474–87.
Zhao TL, Zhang S, Qian WF. Cis-regulatory mechanisms and biological effects of translation elongation. Yi Chuan. 2020;42:613–31.
Schuller AP, Green R. Roadblocks and resolutions in eukaryotic translation. Nat Rev Mol Cell Biol. 2018;19:526–41.
Zhao T, Chen YM, Li Y, Wang J, Chen S, Gao N, et al. Disome-seq reveals widespread ribosome collisions that promote cotranslational protein folding. Genome Biol. 2021;22:16.
Skabkin MA, Skabkina OV, Hellen CU, Pestova TV. Reinitiation and other unconventional posttermination events during eukaryotic translation. Mol Cell. 2013;51:249–64.
Peabody DS, Subramani S, Berg P. Effect of upstream reading frames on translation efficiency in simian virus 40 recombinants. Mol Cell Biol. 1986;6:2704–11.
Firth AE, Brierley I. Non-canonical translation in RNA viruses. J Gen Virol. 2012;93:1385–409.
Gould PS, Dyer NP, Croft W, Ott S, Easton AJ. Cellular mRNAs access second ORFs using a novel amino acid sequence-dependent coupled translation termination-reinitiation mechanism. RNA. 2014;20:373–81.
Kozak M. Migration of 40 S ribosomal subunits on messenger RNA when initiation is perturbed by lowering magnesium or adding drugs. J Biol Chem. 1979;254:4731–8.
Kozak M. Constraints on reinitiation of translation in mammals. Nucleic Acids Res. 2001;29:5226–32.
Terenin IM, Akulich KA, Andreev DE, Polyanskaya SA, Shatsky IN, Dmitriev SE. Sliding of a 43S ribosomal complex from the recognized AUG codon triggered by a delay in eIF2-bound GTP hydrolysis. Nucleic Acids Res. 2016;44:1882–93.
Ivanov IP, Shin BS, Loughran G, Tzani I, Young-Baird SK, Cao C, et al. Polyamine control of translation elongation regulates start site selection on antizyme inhibitor mRNA via ribosome queuing. Mol Cell. 2018;70:254–264 e256.
Dinesh-Kumar SP, Miller WA. Control of start codon choice on a plant viral RNA encoding overlapping genes. Plant Cell. 1993;5:679–92.
Iwasaki S, Floor SN, Ingolia NT. Rocaglates convert DEAD-box protein eIF4A into a sequence-selective translational repressor. Nature. 2016;534:558–61.
Brito Querido J, Sokabe M, Kraatz S, Gordiyenko Y, Skehel JM, Fraser CS, et al. Structure of a human 48S translational initiation complex. Science. 2020;369:1220–7.
Kumar P, Hellen CU, Pestova TV. Toward the mechanism of eIF4F-mediated ribosomal attachment to mammalian capped mRNAs. Genes Dev. 2016;30:1573–88.
Akulich KA, Andreev DE, Terenin IM, Smirnova VV, Anisimova AS, Makeeva DS, et al. Four translation initiation pathways employed by the leaderless mRNA in eukaryotes. Sci Rep. 2016;6:37905.
Pestova TV, Hellen CU. Ribosome recruitment and scanning: what's new? Trends Biochem Sci. 1999;24:85–7.
Shirokikh NE, Preiss T. Translation initiation by cap-dependent ribosome recruitment: recent insights and open questions. Wiley Interdiscip Rev RNA. 2018;9:e1473.
Shirokikh NE, Dutikova YS, Staroverova MA, Hannan RD, Preiss T. Migration of small ribosomal subunits on the 5' untranslated regions of capped messenger RNA. Int J Mol Sci. 2019;20:4464.
Calvo SE, Pagliarini DJ, Mootha VK. Upstream open reading frames cause widespread reduction of protein expression and are polymorphic among humans. Proc Natl Acad Sci U S A. 2009;106:7507–12.
Brachmann CB, Davies A, Cost GJ, Caputo E, Li J, Hieter P, et al. Designer deletion strains derived from Saccharomyces cerevisiae S288C: a useful set of strains and plasmids for PCR-mediated gene disruption and other applications. Yeast. 1998;14:115–32.
Chen Y, Li K, Chu X, Carey LB, Qian W. Synchronized replication of genes encoding the same protein complex in fast-proliferating cells. Genome Res. 2019;29:1929–38.
Wu S, Li K, Li Y, Zhao T, Li T, Yang YF, et al. Independent regulation of gene expression level and noise by histone modifications. PLoS Comput Biol. 2017;13:e1005585.
Kudla G, Murray AW, Tollervey D, Plotkin JB. Coding-sequence determinants of gene expression in Escherichia coli. Science. 2009;324:255–8.
Zu W, Zhang H, Lan X, Tan X. Genome-wide evolution analysis reveals low CpG contents of fast-evolving genes and identifies antiviral microRNAs. J Genet Genomics. 2020;47:49–60.
Lorenz R, Bernhart SH, Honer Zu Siederdissen C, Tafer H, Flamm C, Stadler PF, et al. ViennaRNA package 2.0. Algorithms Mol Biol. 2011;6:26.
Sharp PM, Li WH. The codon adaptation index--a measure of directional synonymous codon usage bias, and its potential applications. Nucleic Acids Res. 1987;15:1281–95.
Yang YF, Zhang X, Ma X, Zhao T, Sun Q, Huan Q, et al. Trans-splicing enhances translational efficiency in C. elegans. Genome Res. 2017;27:1525–35.
Codling EA, Plank MJ, Benhamou S. Random walk models in biology. J R Soc Interface. 2008;5:813–34.
Yates AD, Achuthan P, Akanni W, Allen J, Allen J, Alvarez-Jarreta J, et al. Ensembl 2020. Nucleic Acids Res. 2020;48:D682–8.
Kinsella RJ, Kahari A, Haider S, Zamora J, Proctor G, Spudich G, et al. Ensembl BioMarts: a hub for data retrieval across taxonomic space. Database (Oxford). 2011;2011:bar030.
NCBI Resource Coordinators. Database resources of the National Center for biotechnology information. Nucleic Acids Res. 2018;46:D8–D13.
Nagalakshmi U, Wang Z, Waern K, Shou C, Raha D, Gerstein M, et al. The transcriptional landscape of the yeast genome defined by RNA sequencing. Science. 2008;320:1344–9.
Pelechano V, Chavez S, Perez-Ortin JE. A complete set of nascent transcription rates for yeast genes. PLoS One. 2010;5:e15442.
Cunningham F, Allen JE, Allen J, Alvarez-Jarreta J, Amode MR, Armean IM, et al. Ensembl 2022. Nucleic Acids Res. 2022;50:D988–95.
Sayers EW, Bolton EE, Brister JR, Canese K, Chan J, Comeau DC, et al. Database resources of the national center for biotechnology information. Nucleic Acids Res. 2022;50:D20–6.
Ingolia NT, Ghaemmaghami S, Newman JR, Weissman JS. Genome-wide analysis in vivo of translation with nucleotide resolution using ribosome profiling. Science. 2009;324:218–23.
Wang Y, Song F, Zhu J, Zhang S, Yang Y, Chen T, et al. GSA: genome sequence archive. Genomics Proteomics Bioinformatics. 2017;15:14–8.
Li K, Kong J, Zhang S, Zhao T, Qian W. Distance-dependent inhibition of translation initiation by downstream out-of-frame AUGs is consistent with a Brownian ratchet process for ribosome scanning. Genome Sequence Archive. https://bigd.big.ac.cn/gsa/browse/CRA005456. 2022.
Li K, Kong J, Zhang S, Zhao T, Qian W. Distance-dependent inhibition of translation initiation by downstream out-of-frame AUGs is consistent with a Brownian ratchet process for ribosome scanning. Zenodo. 2022. https://doi.org/10.5281/zenodo.5781855.
We thank Dr. Xiaolei Su and Dr. Junjie Guo from Yale University, Dr. Lucas Carey from Ginkgo Bioworks, Dr. Mengyi Sun from Northwestern University, Dr. Yuanchao Xue from the Institute of Biophysics CAS, and Dr. Yuqiang Jiang, Dr. Zhuo Du, and Dr. Qiang Tu from the Institute of Genetics and Developmental Biology CAS for discussion. We thank Dr. Chaorui Duan and Dr. Zeyu Zhang from the Institute of Genetics and Developmental Biology CAS for technical support.
Review history
The review history is available as Additional file 3.
Tim Sands was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.
This work was supported by grants from the National Key Research and Development Program of China (2019YFA0508700) and the National Natural Science Foundation of China (31922014 and 32100443).
Ke Li, Jinhui Kong, and Shuo Zhang contributed equally to this work.
State Key Laboratory of Plant Genomics, Institute of Genetics and Developmental Biology, Innovation Academy for Seed Design, Chinese Academy of Sciences, Beijing, 100101, China
Ke Li, Jinhui Kong, Shuo Zhang & Wenfeng Qian
University of Chinese Academy of Sciences, Beijing, 100049, China
Jinhui Kong, Shuo Zhang & Wenfeng Qian
Institute of Microbiology, Chinese Academy of Sciences, Beijing, 100101, China
Tong Zhao
W.Q. and K.L. designed the study; K.L., S.Z., and T.Z. performed experiments; J.K. and S.Z. performed computational analyses; K.L., J.K., S.Z., and W.Q. wrote the manuscript. The author(s) read and approved the final manuscript.
Correspondence to Wenfeng Qian.
13059_2022_2829_MOESM1_ESM.docx
Additional file 1: Fig. S1. High-throughput quantification of GFP intensities by FACS-seq. Fig. S2. Frame- and context-dependent inhibitory effects on protein synthesis by proximal dAUGs. Fig. S3. Codon adaptation index (CAI) and minimum free energy (MFE) do not significantly vary among Duo variants that contain dATGs at different positions. Fig. S4. The mRNA levels of dual-frame reporters and their respective controls measured by quantitative PCR. Fig. S5. High-throughput measurement of mRNA levels for dATG variants. Fig. S6. GFP intensities of dATG variants in the background of upf1Δ. Fig. S7. GFP intensity and mRNA level of the dATG variants in the 2A-inserted library. Fig. S8. Scatter plots showing GFP intensity and mRNA level in two biological replicates for yeast libraries. Fig. S9. GFP intensity and mRNA level for uATG variants in the background of hoΔ. Fig. S10. GFP intensity and mRNA level for uATG variants in the background of upf1Δ. Fig. S11. The optimization of the parameters in the Brownian ratchet scanning model using the MCMC algorithms. Fig. S12. The observed GFP intensities of dATG variants in FACS-seq experiments and the simulated GFP intensities under the Brownian ratchet scanning model. Fig. S13. Numbers of genes that harbor frame 0 or frame +2 dATGs at individual positions downstream of the aATG in the yeast or human genomes. Fig. S14. Numbers of genes harboring out-of-frame dATG for highly/broadly and lowly/narrowly expressed genes.
13059_2022_2829_MOESM2_ESM.xlsx
Additional file 2: Table S1. Primers used in this study. Table S2. Doped nucleotide oligos used to construct yeast libraries. Table S3. Indices and barcodes used for Illumina sequencing library. Table S4. Sequences of the dual-fluorescence and dual-frame reporter. Table S5. Number of sequencing reads and variants in each sample. Table S6. Detailed information on individual bins in the FACS-seq experiment. Table S7. Detailed information on dATG variants in this study. Table S8. Detailed information on 2A-inserted dATG variants in this study. Table S9. Detailed information on uATG variants in this study.
Additional file 3. Peer review history.
Li, K., Kong, J., Zhang, S. et al. Distance-dependent inhibition of translation initiation by downstream out-of-frame AUGs is consistent with a Brownian ratchet process of ribosome scanning. Genome Biol 23, 254 (2022). https://doi.org/10.1186/s13059-022-02829-1
Translation initiation
Leaky scanning
Downstream AUG
Strictly unidirectional scanning model
Brownian ratchet scanning model
Preinitiation complex
Protein homeostasis
Removal of multiple-tip artifacts from scanning tunneling microscope images by crystallographic averaging
Jack C. Straton1,
Bill Moon1,2,
Taylor T. Bilyeu1 &
Peter Moeck1
Advanced Structural and Chemical Imaging, volume 1, Article number: 14 (2015)
Crystallographic image processing (CIP) techniques may be utilized in scanning probe microscopy (SPM) to glean information that has been obscured by signals from multiple probe tips. This may be of particular importance for scanning tunneling microscopy (STM) and requires images from samples that are periodic in two dimensions (2D). The image-forming current for double-tips in STM is derived with a slight modification of the independent-orbital approximation (IOA) to allow for two or more tips. Our analysis clarifies why crystallographic averaging works well in removing the effects of a blunt STM tip (that consists of multiple mini-tips) from recorded 2D periodic images and also outlines the limitations of this image-processing technique for certain spatial separations of STM double-tips. Simulations of multiple mini-tip effects in STM images (that ignore electron interference effects) may be understood as modeling multiple mini-tip (or tip shape) effects in images that were recorded with other types of SPMs as long as the lateral sample feature sizes to be imaged are much larger than the effective scanning probe tip sizes.
Scanning probe microscopy (SPM) images are often degraded by the effects of two (or more) protrusions on the probe tip (i.e., effective mini-tips on a blunt tip), as well as by sample tilt errors, image bow and drift, and stepping errors that occur while scanning the tip in two dimensions (2D) over the sample surface. Averaging methods have long been used to remove scanning errors. There are also well-established techniques for straightening out keystone-shaped images that result from sample tilt and image drift, and for the removal of image bow by z-flattening using least-squares higher-order polynomials to model this distortion [1–3]. Removing multiple-tip artifacts from SPM images has, however, only recently been accomplished through the adoption of crystallographic image processing (CIP) techniques [4–7], which one may consider a kind of crystallographic averaging, in reciprocal (Fourier) space, of the intensities of symmetry-related features in direct space.
The transmission electron crystallography community developed CIP to enable the extraction of structure factor amplitudes and phase angles from (parallel illumination) high-resolution phase contrast images of crystalline materials within the weak phase object approximation [8, 9]. It has also been used for the correction of these images for the effects of the phase contrast transfer function, two-fold astigmatism, sample tilt away from low-indexed zone axes, and beam tilt away from the optical axis of the microscope. The central ideas of this kind of 2D crystallographic symmetry averaging have also been applied to scanning transmission electron microscopy (STEM) in order to increase the signal-to-noise ratio of Z-contrast imaging [10].
In the context of SPM, CIP addresses multiple scanning probe tip imaging artifacts effectively. This is an application that is beyond its original conception by the electron crystallography community and also does not apply to Z-contrast STEM imaging.
Since one may define 2D image-based crystallography, independent of the source of the 2D patterns, as being concerned with categorizing, specifying, and quantifying 2D long-range ordered patterns [4], CIP is also an appropriate term for these procedures as applied to SPM images of 2D periodic objects.
In its simplest form, this process consists of applying a Fourier transform to the 2D digitized image (Fourier analysis), detecting the most likely plane symmetry in reciprocal space, enforcing this symmetry by averaging the symmetry-related Fourier coefficients to remove all kinds of degradations, and finally reconstructing the image by an inverse Fourier transform (Fourier synthesis into direct space). Irregularities in the 2D periodic array that is to be imaged, e.g. 2D periodic motif vacancies, are "averaged out" by CIP. For representative results, one should therefore aim for a ratio of regularly repeating features to irregularities of at least 50 (or, better, 100) to one.
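The steps listed above can be illustrated with a few lines of NumPy. This is a minimal sketch, not the CIP software used for the figures in this paper: it assumes a square image whose 4-fold axis sits at the array center and whose lattice rows are aligned with the array axes, and it enforces the point-group part of p4 by averaging the image with its three 90° rotations, which, by the rotation theorem, amounts to averaging the Fourier coefficients related by the 4-fold axis. Production CIP additionally refines the phase origin and works reflection by reflection in reciprocal space.

```python
import numpy as np

def enforce_p4(image):
    """Average a square image with its 90/180/270-degree rotations about the array
    center; for an image whose 4-fold axis lies there, this averages the Fourier
    coefficients related by the rotational part of plane group p4."""
    img = np.asarray(image, dtype=float)
    assert img.shape[0] == img.shape[1], "square image expected"
    return sum(np.rot90(img, k) for k in range(4)) / 4.0

def amplitude_spectrum(image):
    """Centered Fourier amplitude map, useful for inspecting which reflections survive."""
    return np.abs(np.fft.fftshift(np.fft.fft2(image)))

# Demo on a noisy p4 test pattern (16-pixel square lattice, 4-fold axis at the array center).
n = 256
c = (n - 1) / 2.0
x, y = np.meshgrid(np.arange(n) - c, np.arange(n) - c, indexing="ij")
b = 2 * np.pi / 16
clean = np.cos(b * x) + np.cos(b * y)
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=(n, n))
restored = enforce_p4(noisy)
print(np.abs(restored - clean).mean(), np.abs(noisy - clean).mean())
```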
By means of CIP, one can also extract the prevailing point spread function (footnote 1) of the SPM [4] and use it for the correction of subsequently recorded images [7]. One may refer to this function loosely as the "effective scanning probe tip" as it represents the convolution of the effects of the actual tip shape with all kinds of scanning and signal processing irregularities.
The symmetrizing is done in reciprocal space because of its computational efficiency. Since the Fourier coefficients are symmetrized, the CIP-processed images are also symmetrized to the chosen 2D space group. The 2D space groups are also known as plane symmetry groups and combine 2D translation symmetries with 2D point symmetries, see "Appendix A". We use the international (Hermann–Mauguin) notations for plane symmetry and 2D point symmetry groups [11] throughout the paper. Compared to CIP, conventional Fourier filtering [12] of 2D periodic images leads to translation averaging only. This means that the latter technique does not take advantage of the site symmetries in the plane groups (so that pure translation averaging will be up to 12 times less effective than CIP).
Consider, for example, the image shown in Fig. 1a, whose p4-symmetry is "symmetrically perfect" because we imposed this symmetry on an experimental "nearly-p4" STM image [6, 13] using CIP. (Crystallographic notations such as p4 and basic 2D crystallography are briefly discussed in "Appendix A".) Using Photoshop (footnote 2), we have artificially constructed in Fig. 1b an image somewhat akin to what one would see with three SPM tips shifted laterally and vertically with respect to each other, simultaneously scanning the same surface, with signals beating against each other.
Modeling the effects of a triple-SPM-tip on the image of a p4 source. a A 550 by 550 pixel image whose p4-symmetry is known by design. (Based on its "experimental counterparts" in refs. [6, 13], the area of this image corresponds to approximately 340 nm2). b A "hypothetical image" to model what a triple-SPM-tip would produce when imaging this "sample," constructed in Photoshop by overlaying two copies of the p4 image, shifting them, and setting the blend mode to Overlay (footnote 2), with the opacity reduced for each to model different heights for the three tips. (A small ~15 by 26 pixel wide margin of the unobscured image is seen in the upper–left-hand corner behind the overlain image). c Crystallographically averaged p4 plane symmetry reconstruction of a 512 by 512 pixel fully obscured portion of this "sample."
We note that both the unobscured image, Fig. 1a, and the obscured one, Fig. 1b, possess the same translation symmetry, which is that of the square 2D Bravais lattice. It was noted in Ref. [14] that subsequently recorded images from the same 2D periodic array that possess variations in the motif but the same translation symmetry are the hallmark of blunt scanning probe tips. While obscured images have typically been discarded in the past, CIP presents an alternative to recover information from them. Figure 1c shows the inverse-Fourier image reconstruction after p4 symmetry enforcement in reciprocal space (following the guidelines in "Appendix B") of the fully obscured portion of Fig. 1b. One sees a quite faithful reproduction (apart from a decrease in contrast) of the one-tip image, Fig. 1a, as the 2D point symmetry of the motif is restored to group 4.
In the case of images that were recorded with multiple mini-tips, the whole plane symmetry enforcing procedure can, by virtue of the Fourier shift theorem [15], be thought of as aligning the 2D periodic motifs of all independent SPM images from the multiple mini-tips on top of each other, thus enhancing the signal-to-noise ratio significantly when done correctly. Within this context, CIP can be understood as a "sharpening up" of the effective scanning probe tip.
The present work shows in detail why CIP works and builds upon prior work [4–7] that shows how it is done at a practical level. In order to show in detail why CIP works, we will modify a common approach for simplifying the details of the problem, the independent-orbital approximation (IOA) to allow for the beating of signals from multiple mini-tips in STM. That is, we explore how "scanning tunneling probe tip surface structures" add both linearly and quantum mechanically to the recorded signal in convolution with the features of the "sample surface structure".
Although the underlying physics of the IOA approach is specific to STM imaging, simulations of multiple-tip effects that ignore electron interference effects may be understood as modeling multiple mini-tip (or tip shape) effects in images that are recorded with other types of SPMs (where quantum mechanical interference effects can be safely ignored). It is well known that in STM imaging the nominal probe size is typically of the same (atomic or molecular) order of magnitude as the sample surface features that are to be imaged. For CIP to be applicable to images of 2D periodic arrays that were recorded with other types of SPMs (footnote 1), the effective probe size has to be much smaller than the lateral size of the features to be imaged. Although this requirement is trivial for any kind of meaningful imaging with SPMs (other than STMs, atomic or molecular resolution atomic force microscopes, and critical dimension SPMs (footnote 3)), it needs to be stated repeatedly as the literature abounds with conclusions that largely ignore it.
We first review the IOA, show how to modify it for two tips, and then trace back the resultant image to the salient details within its Fourier transform to show why CIP works. The changes wrought in the tunneling current by having two (or more) tips are outlined thereafter. The arrangements of multiple mini-tips in our analyses do not possess projected 3D point symmetries higher than 1, i.e. 360 degree rotations about arbitrary axes.
We begin with a treatment of double-tips since one may consider it a worst-case scenario of multiple tips, as will be illustrated later in the paper. We also examine the effect of double-tip height variations on the images and on the applicability of CIP.
In particular, we show that the 2D Fourier transform of the derived current resulting from two tips comprises the same Fourier coefficients as that from a single tip. The currents from the two tips differ by a phase term in reciprocal space [15], arising from the addition of complex numbers with different phases. These phase differences between the two contributors may reduce the amplitudes at a given reciprocal space point. CIP lessens this effect by averaging the Fourier coefficient amplitude and phase at such a point with the amplitudes and phases at symmetry-related points.
We show the wide range of double-tip separations that are amenable to CIP. There are, however, certain double-tip separations for which some of these phases take prominent Fourier coefficients to zero, thereby obscuring the current map to the extent that even CIP cannot improve it.
The independent-orbital approximation
We first sketch Chen's derivation [16] of an STM image for a surface structure having plane symmetry p4mm (and, thus, a square lattice) using the IOA, in which the total tunneling current is approximated by the sum of the tunneling currents from independent atomic states. (The difference between lattices and structures is clarified in "Appendix A".) Since a square lattice/structure combines two identical perpendicular one-dimensional lattices/structures, we find the total tunneling conductance to be of the form:
$$G\left( {x,y,z} \right) = \sum\limits_{n = - \infty }^{\infty } {\sum\limits_{m = - \infty }^{\infty } {g\left( {x - na,y - ma,z} \right)} } = \sum\limits_{h = - \infty }^{\infty } {\sum\limits_{k = - \infty }^{\infty } {} \tilde{G}_{hk} \left( z \right)} e^{ihbx + ikby} ,$$
where the conductance g(x − na, y − ma, z) of the atom at lattice site (n, m) is a function with periodicity a in both directions, yielding a discrete Fourier transform with identical reciprocal lattice vector lengths b = 2π/a and Fourier coefficients,
$$\tilde{G}_{hk} \left( z \right) = \frac{1}{{a^{2} }}\int\limits_{ - \infty }^{\infty } {dx\int\limits_{ - \infty }^{\infty } {\,dye^{{ - ib\left( {hx + ky} \right)}} g\left( {x,y,z} \right)} } .$$
Only the lowest five terms in the Fourier series contribute significantly to the STM image: due to the reflection symmetry of the conductance function g( r ), these are \({\tilde{G}_0} (z)\) and \(\tilde{G}_{ - 1,0} \left( z \right) = \tilde{G}_{1,0} \left( z \right) = \tilde{G}_{0, - 1} \left( z \right) = \tilde{G}_{0,1} \left( z \right) \equiv \tilde{G}_{1} \left( z \right)\). Then the total conductance function to this order is
$$G\left( {\mathbf{r}} \right) = \tilde{G}_{0} \left( z \right) + 2\tilde{G}_{1} \left( z \right)\,\left( {\rm cos\left( {bx} \right) + \rm cos\left( {by} \right)} \right) .$$
The topographic SPM image, due to a corrugation ∆z(r) altering a smooth surface and representing a structure, is related to the current image by [16]
$$\begin{aligned} \Delta z\left( {\mathbf{r}} \right) & = - \frac{{\Delta I\left( {\mathbf{r}} \right)}}{{\left( {\frac{{dI_{0} \left( z \right)}}{{dz}}} \right)}} \\ & = - \frac{{2\tilde{G}_{1} \left( {z_{0} } \right)}}{{\left( {\frac{{d\tilde{G}_{0} \left( {z_{0} } \right)}}{{dz_{0} }}} \right)}}\left( {{\rm cos}\left( {bx} \right) + {\rm cos}\left( {by} \right)} \right). \\ \end{aligned}$$
To calculate the required Fourier coefficients, Chen notes that the term with the highest power of r dominates the behavior of hydrogenic wavefunctions at low-energies (up to a few eV), so one can effectively approximate them with Slater orbitals [17, 18],
$$\psi_{nlm} \left( {r,\theta ,\varphi } \right) = Cr^{n - 1} e^{ - \lambda r} Y_{lm} \left( {\theta ,\varphi } \right)$$
where, unlike hydrogen eigenstates, the principal quantum number is n ≥ 0. Here \(Y_{lm} \left( {\theta ,\varphi } \right)\) is the standard spherical harmonic function. These are convenient also because they may be calculated by taking derivatives with respect to the orbital exponent λ (proportional to the square root of the energy of the state) and to z of \(\psi_{000} \equiv C{{e^{ - \lambda r} } \mathord{\left/ {\vphantom {{e^{ - \lambda r} } r}} \right. \kern-0pt} r}\) (also recognized as the Yukawa potential).
The conductance distribution for an s sample state and an s tip state is \(e^{ - 2\kappa r}\) (see Chen's Table 6.1 for other combinations, such as \(\cos^{2} \theta \, e^{ - 2\kappa r}\) if either the sample or the tip is a \(p_z\) state and the other is an s state). Then taking the derivative of an integral identity [19] gives,
$$\begin{aligned} \tilde{G}_{0}(z) & \equiv \tilde{G}_{00}(z) = \left. \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy \, e^{-ib(hx+ky)} e^{-2\kappa r} \right|_{h=k=0} \\ & = \left(-\frac{\partial}{2\,\partial\kappa}\right) \left. \int_{-\infty}^{\infty} dx \int_{-\infty}^{\infty} dy \, e^{-ib(hx+ky)} \frac{e^{-2\kappa r}}{r} \right|_{h=k=0} \\ & = \left(-\frac{\partial}{2\,\partial\kappa}\right) \left. \frac{2\pi e^{-\gamma z}}{\gamma} \right|_{h=k=0} \cong \frac{2\pi}{\gamma}\left(-\frac{\partial}{2\,\partial\kappa}\right) \left. e^{-\gamma z} \right|_{h=k=0} = \frac{\pi z}{\kappa}\, e^{-2\kappa z} \end{aligned}$$
and the similarly derived,
$$\tilde{G}_{1} \left( z \right) = \frac{4\pi \kappa z}{{\gamma^{2} }}e^{ - \gamma z} ,$$
$$\gamma^{2} = 4\kappa^{2} + b^{2} \left( {h^{2} + k^{2} } \right)_{h = 1,k = 0} .$$
So the topographic image is given by,
$$\varDelta z\left( {\mathbf{r}} \right) = \frac{16\kappa }{{\gamma^{2} }}e^{ - \beta z} \left( {\rm cos\left( {bx} \right) + \rm cos\left( {by} \right)} \right)$$
for an s sample state and an s tip state, where,
$$\beta = \gamma - 2\kappa .$$
If either the sample or the tip is a \(p_z\) state and the other is an s state, the topographic image is given by,
$$\varDelta z\left( {\mathbf{r}} \right) = \left( {\frac{\gamma }{2\kappa }} \right)^{2} \frac{16\kappa }{{\gamma^{2} }}e^{ - \beta z} \left( {\rm cos\left( {bx} \right) + \rm cos\left( {by} \right)} \right) ,$$
and so on, with the corrugation (real-space lattice/structure) multiplied by a z-dependent amplitude.
Two scanning probe tips
If one were imaging using an atomic state with two lobes aligned parallel to the x-axis, one could follow the procedure Chen outlines [20] in which, for a quantum mechanical \(p_x\) tip state, say, one takes "derivatives of the sample wave function at the nucleus of the apex atom of the tip" with respect to x to get the tunneling matrix elements. This results in the current images from each sample atom being doubled, as pictured in his 1987 paper [21].
In many cases, however, an STM tip having a pair of mini-tips—due to manufacturing error, damage to the tip, or the originally atomically sharp tip having picked up some material from the sample or the surrounding—is likely to have them separated by a much larger distance than the lobes of an atomic orbital. Indeed the separation distance will likely be of the same order as the inter-atomic or inter-molecular spacings of the sample.
In such a case, we can treat such a doubled tip as two well-spaced s tips (keeping our s sample), for example, and rely upon the reciprocity principle [22]: by "interchanging the tip state and the sample state, the conductance distribution [and hence the image is] unchanged". We saw above that a \(p_x\) tip state imaging a real-space structure would result in a current image having each sample atom (or molecule) doubled. One would get a similar looking current image using a single tip on a lattice/structure one has cloned, after shifting the second lattice/structure's origin along the x-axis by the distance between the lobes of the \(p_x\) tip. With a double-tip whose spacing is significantly larger, the same principle applies. We will see, however, that tip separations on a scale matching the sample lattice constant give the new possibility that the two currents will beat against each other.
As the pair of s tips (on a blunt scanning probe tip) is scanned over the surface, each tip would encounter the largest charge density in the x direction at different positions of the scanning head holding the two tips. If the tip separation w were precisely (an integer times) the periodicity of the real-space lattice/structure, the conduction signal would simply be twice as large and the topographic image would be unchanged, except in brightness, from what a single tip would yield. If, on the other hand, the tips were separated by any other distance, the two tips would register different tunneling charge densities at each position of the scanning head, and the pair of conduction signals would beat against each other, altering the registered topographic image.
For our single-tip on a cloned lattice/structure, we still have atoms that are independent of each other so that they do not shift position when new neighbors are slipped into the interstices by the duplication and shift process. This is a reasonable assumption if the spacing between atoms is (much) larger than the atomic extent.
The resulting topographic image would be given by,
$$\begin{aligned} \Delta z_{2}\left( {\mathbf{r}} \right) &= \frac{16\kappa}{\gamma^{2}}\, e^{-\beta z} \left( \cos\!\left(b\left[x+u\right]\right) + \cos\!\left(by\right) \right. \\ &\quad + \left. \cos\!\left(b\left[x-u\right]\right) + \cos\!\left(by\right) \right), \end{aligned}$$
where we have shifted the cloned lattice/structure by u = w/2 in the positive x direction and the original lattice/structure by u in the negative x direction, as that simplifies the Fourier transform we will consider in a moment. The resultant topographic images at various tip separations are shown in Fig. 2, and we indeed see increasing beating between the two signals as (b times) the tip half-separation approaches π/4, relative to the IOA p4mm surface wave functions having a period of 2π.
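Equation (12) is straightforward to evaluate numerically. The short sketch below (Python/NumPy) reproduces the qualitative behaviour of Fig. 2—the modulation along the x direction weakens as bu grows, i.e. the two current signals beat against each other; the values chosen for κ, b and z are merely illustrative assumptions and are not taken from the experiments.

```python
import numpy as np

def delta_z2(x, y, bu, b=1.0, kappa=1.0, z=1.0):
    """Two-tip topographic image of Eq. (12): superposition of the original
    lattice shifted by -u and its clone shifted by +u along the x-axis."""
    gamma = np.sqrt(4.0 * kappa**2 + b**2)   # gamma^2 = 4*kappa^2 + b^2 at (h, k) = (1, 0)
    beta = gamma - 2.0 * kappa
    u = bu / b
    amp = 16.0 * kappa / gamma**2 * np.exp(-beta * z)
    return amp * (np.cos(b * (x + u)) + np.cos(b * y)
                  + np.cos(b * (x - u)) + np.cos(b * y))

# One unit cell (period 2*pi for b = 1) at the separations used in Fig. 2
xs, ys = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, 128),
                     np.linspace(0.0, 2.0 * np.pi, 128))
for bu in (0.0, 0.6, 0.74, np.pi / 4.0 - 1e-3):
    img = delta_z2(xs, ys, bu)
    # The peak-to-valley modulation along x shrinks as the two signals beat
    print(f"bu = {bu:5.3f}: modulation along x = {np.ptp(img, axis=1).max():.3f}")
```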
Topographic images due to various tip separations. Superpositions of the two IOA current sources with (b times) an STM tip half-separation a bu = 0, b bu = 0.6, c bu = 0.74, and d bu = 0.77 = π/4 − ε units in the horizontal direction, relative to the IOA p4mm surface wave functions having a period of 2π. A unit cell is inset in each case
To see where this loss of periodicity in the horizontal direction is coming from, we take the Fourier transform of (12),
$$\begin{aligned} F\!\left[\Delta z_{2}\left( {\mathbf{r}} \right)\right] &= \frac{32\pi\kappa}{\gamma^{2}}\, e^{-\beta z} \Big( \cos(bu)\left( \delta(H-b) + \delta(H+b) \right)\delta(K) \\ &\quad + \delta(H)\left( \delta(K-b) + \delta(K+b) \right) \Big)\,. \end{aligned}$$
This transform confirms that the reciprocal lattice spacing is independent of the number of tips. This property is a necessary condition for CIP to reconstruct a corrected image in real space. Reciprocal lattice vectors {H, K} are marked in Fig. 3a. "Appendix B" mentions a recently developed procedure to detect unambiguously the underlying 2D Bravais lattice of a 2D periodic surface structure [23] that aids the detection of multiple-tip artifacts in SPM images [24].
Fourier components at these tip separations. Fourier transforms of IOA p4mm wave functions with (b times) an STM tip half-separation a bu = 0, b bu = 0.6, c bu = 0.74, and d bu = 0.77 = π/4 − ε units in the horizontal direction, relative to the IOA p4mm surface wave functions having a period of 2π. Reciprocal lattice vectors {1,0} and {0,1} (= {H,K}) are marked in (a)
This transform also reveals that the suppression of Fourier components in the horizontal direction of reciprocal space by the phase terms cos(n bu), seen in Fig. 3, is the cause of the significant change in the image registered by this model double STM tip in Fig. 2. In Fig. 3d, for π/4 − ε, this suppression becomes so severe that the character of the original image is entirely obscured for vanishing ε; see Fig. 2d.
Figure 4 shows the results of plane symmetry enforcements of the underlying p4mm symmetry for the superposition of IOA p4mm wave functions. This figure represents the final result of the CIP procedure on the images of Fig. 2. Even with significant suppression of spatial frequency information due to rather wide double-tip separations, CIP still is able to recover sufficiently reconstructed symmetrized "images" of the IOA p4mm wave functions, as seen for example in Fig. 4c, when compared with the single-tip image Fig. 2a.
Plane symmetry enforcement at these tip separations. Plane symmetry enforcement of the underlying p4mm symmetry for the superposition of IOA p4mm wave functions with (b times) an STM tip half-separation of Fig. 2. a bu = 0, b bu = 0.6, c bu = 0.74, and d bu = 0.77 = π/4 − ε units in the horizontal direction, relative to the IOA p4mm surface wave functions having a period of 2π
For bu = 0.77, Fig. 2d, we are beyond the limit at which one might confidently use CIP without a priori knowledge and/or an unambiguous determination of the underlying translation symmetry. With our prior knowledge of the underlying plane symmetry of the sample 2D periodic array, and/or with our recently developed geometric Akaike information criterion (AIC) for the unambiguous identification of 2D Bravais lattices [24] (see "Appendix B"), we can direct the popular CIP program CRISP [25] to produce a reconstruction, Fig. 4d, much more faithful to the IOA p4mm wave functions, Fig. 2a, than that contained in the two-tip image, Fig. 2d.
In the worst cases, e.g. for vanishing ε as extrapolated from Figs. 2d and 3d, when even CIP cannot reliably reconstruct the correct images, such images may be discarded.
Different heights for the two tips
We assumed a worst-case scenario in Eq. (12) in which the two tips were at precisely the same distance z above the surface structure. If one of the two tips is closer to the sample, its current will dominate the current from the higher tip, thereby exponentially reducing the obscuration of the image. In Sect. "Results", above, we represented a double-tip by a single tip above a cloned lattice/structure, having shifted the clone one way along the x-axis and the original the other way by the same amount. In modeling two tips at different heights in such an approach, one could also raise the cloned lattice/structure higher than the original to yield the exponential dominance of the current from that original lattice/structure.
Tsukada, Kobayashi, and Ohnishi [26] found a reduction in interference with tip-elevation angle in their calculations using an antibonding H2 orbital model for a tip on graphite. By the time they reached a 0.26 rad elevation difference, the interference was much reduced.
Let us examine an STM tip separation that caused severe image artifacts, bu = 0.77 = π/4 − ε. We see from Fig. 5b that when one tip is at z = 1 Å from the surface and the second is raised to z = 1.2 Å, the obscurations in the image are much reduced. When the second is raised to z = 1.5 Å in Fig. 5c, the current dominated by the closer tip is not distinguishable from a single tip, Fig. 3a. This result is in agreement with the textbook statement that the exponential decay of the tunneling current with height over the sample often ensures sufficiently clear images even if there is more than one scanning probe tip.
The effect of uneven tip height on Topographic images. Superpositions of the two IOA current sources with (b times) an STM tip half-separation bu = 0.77 = π/4 − ε units in the horizontal direction, relative to the IOA p4mm surface wave functions having a period of 2π. In a both tips are at the same height. In b one tip is 20 % higher from the surface than the other, and c 50 % higher
Multiple tips
The final case to explore is the effect of multiple tips on image obscuration. Consider the two-tip separation that is the most problematic, with bu = 0.77 = π/4 − ε units in the horizontal direction, Fig. 3d. Suppose we add a second pair of tips separated by, say, one-third of that value, or bu = 0.26. We see in Fig. 6b that this addition does ameliorate the obscuration. (One gets a similar result if one makes the second pair of tips nonsymmetrical with respect to the origin, so that one is at bu = 0.26 and the second at bu = −0.15.) In Fig. 6c we add a third pair of tips at one-fifth of the separation of the first pair, with bu = 0.15. One sees that with six rather than two tips, the resultant image is hardly distinguishable from a single tip, Fig. 2a.
The effect of tip multiplicity on Topographic images. a Superpositions of the two IOA current sources with (b times) an STM tip half-separation bu = 0.77 = π/4 − ε units in the horizontal direction, with b a second pair of tips separated by one-third of that value, or bu = 0.26, and c a third pair of tips separated by one-fifth of the separation of the first pair, with bu = 0.15
Thus we see that the double-tip case is indeed some kind of a worst case. Additional tips provide nonzero contributions to the reciprocal space amplitudes at spatial frequencies that would otherwise be completely suppressed. This facilitates the application of CIP to bring out even more underlying information in the "sample". So we expect that crystallographic averaging would work well in removing the effects of a blunt STM tip, consisting of multiple mini-tips.
Summary and conclusions
CIP may often be used to remove multiple-tip artifacts from SPM images. Alternatively, one can think of the application of CIP as being analogous to the "sharpening up" of a blunt tip to enhance the signal-to-noise level.
We have modified the independent-orbital approximation (IOA) to account for the beating of signals from two tips. Tracing back the resultant image to the salient details within its Fourier transform shows why CIP is effective. The tunneling currents from the two tips differ in a phase term in reciprocal space that may reduce the Fourier amplitudes (and hence, the real-space modulation) at a given reciprocal space point. We show that CIP lessens this effect by averaging the amplitude and phase at such a point with amplitudes and phases at symmetry-related points.
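As a rough illustration of this averaging step (and only of this step—full CIP also refines the phase origin and applies the symmetry-imposed phase relations), the sketch below averages Fourier-coefficient amplitudes over the reflections related by the 4mm point symmetry. The orbit definition and the simple arithmetic mean are assumptions made for illustration, not a description of how CRISP implements it.

```python
import numpy as np

def p4mm_orbit(h, k):
    """Reflections related to (h, k) by the 4mm point symmetry of the projection."""
    return {(h, k), (-h, -k), (-k, h), (k, -h), (h, -k), (-h, k), (k, h), (-k, -h)}

def symmetry_average(fc):
    """fc: dict mapping (h, k) -> complex Fourier coefficient.
    Returns a copy in which each amplitude is replaced by the mean amplitude
    over its symmetry orbit.  (Phases are left untouched here; a real CIP
    program also refines the phase origin and symmetrizes the phases.)"""
    out = {}
    for (h, k), value in fc.items():
        orbit = [abs(fc[idx]) for idx in p4mm_orbit(h, k) if idx in fc]
        out[(h, k)] = np.mean(orbit) * np.exp(1j * np.angle(value))
    return out

# Example: the (±1, 0) coefficients were suppressed by a double tip, (0, ±1) were not
fc = {(1, 0): 0.1 + 0j, (-1, 0): 0.1 + 0j, (0, 1): 1.0 + 0j, (0, -1): 1.0 + 0j}
print(abs(symmetry_average(fc)[(1, 0)]))   # restored towards the orbit mean of 0.55
```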
We have also shown that the existence of more than two tips at random separations will tend to ameliorate pair-wise destructive beating of signals at a given reciprocal space point, providing additional amplitude at that Fourier point to restore some real-space modulation. Finally, we have recovered textbook knowledge that tip height variations will ameliorate image degradations because of the exponential falloff of the signal with the tip-surface distance.
In particular, we have shown that the 2D Fourier transform of the derived tunneling current resulting from two tips comprises the same Fourier coefficients as that from a single tip. We show the wide range of double-tip separations that are amenable to CIP. There are, however, certain double-tip separations for which some of these phases take prominent Fourier coefficients to zero, thereby obscuring the current map to the extent that even CIP cannot improve it.
Reference [7] demonstrates, for example, the application of CIP to two 2D periodic images (that were recorded from the same commercial calibration sample with the same atomic force microscope) under (i) standard and (ii) non-standard imaging conditions, i.e. an open feedback loop. That calibration sample was designed to possess plane symmetry p4mm and its lateral 2D periodic feature sizes were one order of magnitude larger than the nominal probe sizes. (The horizontal sample feature size was approximately a tenth of the nominal probe sizes.) The effective scanning probe tips were de-convoluted from these images and the one that corresponded to the standard imaging conditions was less than half of the size of its non-standard imaging conditions counterpart.
One duplicate of the p4 image was pasted on top of the p4 image and then shifted 3 pixels to the right and 15 pixels down, out of 550 pixels and a second duplicate was shifted up 9 pixels and right 26 pixels. The three layers were then combined using Photoshop's overlay blend mode, the formulas for which are given at http://www.stackoverflow.com/questions/5825149/overlay-blend-mode-formula, with the opacity of the duplicate layers set at 70 and 30 %, respectively.
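For readers who wish to reproduce this construction without Photoshop, the following sketch applies the standard overlay blend formula with NumPy. The pixel shifts and opacities match those quoted above, while the wrap-around edge handling, the blending order and the placeholder image are assumptions made for illustration.

```python
import numpy as np

def overlay(base, blend):
    """Photoshop-style overlay blend: 2ab where a < 0.5, else 1 - 2(1-a)(1-b)."""
    return np.where(base < 0.5,
                    2.0 * base * blend,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - blend))

def blend_layer(base, layer, opacity):
    """Blend `layer` onto `base` in overlay mode at the given opacity."""
    return (1.0 - opacity) * base + opacity * overlay(base, layer)

def shift(img, down, right):
    """Shift an image by whole pixels (wrapping at the edges for simplicity)."""
    return np.roll(np.roll(img, down, axis=0), right, axis=1)

img = np.random.rand(550, 550)   # placeholder for the 550 x 550 pixel p4 image, scaled to [0, 1]
dup1 = shift(img, 15, 3)         # 3 px right, 15 px down
dup2 = shift(img, -9, 26)        # 9 px up, 26 px right
out = blend_layer(img, dup1, 0.70)
out = blend_layer(out, dup2, 0.30)
```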
Critical dimension SPMs were developed specifically for the assessment of narrow and deep trenches as well as steep and high walls either as transients in the building-up of integrated circuits or in micro- and nano-electromechanical systems.
Yurov, V.Y., Klimov, A.N.: Scanning tunneling microscope calibration and reconstruction of real image: drift and slope elimination. Rev. Sci. Instrum. 65(5), 1551–1557 (1994)
Edwards, H., McGlothlin, R.: Vertical metrology using scanning-probe microscopes: imaging distortions and measurement repeatability. J. Appl. Phys. 83(8), 3952–3971 (1998)
Tsaftaris, S.A., Zujovic, J., Katsaggelos, A.K.: Automated line flattening of atomic force microscopy images. In: Proceedings of the International Conference on Image Processing, 12–15 October 2008, pp. 2968–2971, San Diego, California (2008)
Moeck, P: Crystallographic image processing for scanning probe microscopy. In: Méndez-Vilas, A., Diaz, J. (eds.) Microscopy: Science Technology, Applications and Education, Formatex Microscopy Series, no 4, vol. 3, pp. 1951–1962 (2010). http://www.formatex.info/microscopy4/1951-1962.pdf
Moeck, P., Straton, J.C., Toader, M., Hietschold, M.: Crystallographic processing of scanning tunneling microscopy images of cobalt phthalocyanines on silver and graphite. Mater. Res. Soc. Symp. Proc. 1318, 149–154 (2011). doi:10.1557/opl.2011.278
Moeck, P., Straton, J.C., Hipps, K.W., Bilyeu, TT., Rabe, J-P., Mazur, U., Hietschold, M., Toader, M.: Crystallographic STM image processing of 2d periodic and highly symmetric molecule arrays. In: Proceedings 11th IEEE International Conference on Nanotechnology, pp. 891–896 (2011), doi: 10.1109/NANO.2011.6144508
Moon, B, Employment of Crystallographic Image Processing Techniques to Scanning Probe Microscopy Images of Two-Dimensional Periodic Objects, Master of Science Thesis (Portland State University, 2011); http://www.nanocrystallography.research.pdx.edu/media/thesis14acorr.pdf
Hovmöller, S.: In: Ragan, C.I., Cherry, R.J. (eds.) Techniques for the Analysis of Membrane Proteins, pp. 315–344. Chapman and Hall, London (1986)
Zou, X., Hovmöller, S., Oleynikov, P.: Electron Crystallography: Electron Microscopy and Electron Diffraction. Oxford University Press (2011)
Morgan, D.G., Ramasse, Q.M., Browning, N.D.: Application of two-dimensional crystallography and image processing to atomic resolution Z-contrast images. J. Electron Microsc. 58(3), 223–244 (2009)
Hahn, Th. (ed.): Brief Teaching Edition of Volume A, Space-group Symmetry, International Tables for Crystallography, 5th revised edition, International Union of Crystallography (IUCr), Chester (2005)
Park, S., Nogami, J., Quate, C.F.: Effect of tip morphology on images obtained by scanning tunneling microscopy. Phys. Rev. B 36(5), 2863–2866 (1987)
Mazur, U., Leonetti, M., English, W., Hipps, K.W.: Spontaneous solution-phase redox deposition of a dense cobalt(ii) phthalocyanine monolayer on gold. J. Phys. Chem. B 108(44), 17003–17006 (2004)
Iski, E.V., Jewell, A.D., Tierney, H.L., Kyriakou, G., Sykes, C.H.: Organic thin film induced substrate restructuring: an STM study of the interaction of naphtho[2,3-a]pyrene Au(111) herringbone reconstruction. J. Vac. Sci. Techn. A. 29(4), 041510 (2011)
Tables of Integral Transforms, Volume 1, Based in part on notes left by Harry Bateman, and compiled by the staff of the Bateman Manuscript Project. Erdelyi, A (ed.) (McGraw-Hill, 1954), p. 117, Eq. 3.1.5
Chen, C.J.: Introduction to Scanning Tunneling Microscopy, pp. 149–63. Oxford University Press, New York, Oxford (1993) (Oxford Series in Optical and Imaging Science 4, Eds. Lapp, M, Nishizawa, J-I, Snavely, BB, Stark, H, Tam, AC, Wilson, T, ISBN 0-19-507150-6)
Ibid, pp. 122
Slater, J.C.: Atomic shielding constants. Phys. Rev. 36, 57–65 (1930); Zener, C.: Analytic atomic wave functions. Phys. Rev. 36, 51–56 (1930)
Goodman, FO.: Summation of the Morse pairwise potential in gas-surface interaction calculations. J. Chem. Phys. 65(4), 1561–1564 (1976). This may also be proved using Gradshteyn, I. S, Ryzhik, I. M., Table of Integrals, Series, and Products 5ed (Academic Press, New York, 1980), p. 382 No. 3.462.3, p. 1095, No. 9.253, p. 1057, No. 8.950.3, and p. 385, No. 3.471.12
Chen, C.J.: Unified perturbation theory for STM and SFM. In: Wiesendanger, R., Güntherodt, H.J. (eds.) Scanning Tunneling Microscopy III, 2nd edn, pp. 161–162. Springer, Berlin (1996)
Chen, C.J.: Theory of scanning tunneling spectroscopy. J. Vac. Sci. Technol A 6(2), 319–322 (1988)
Supra note 16, p. 154
Bilyeu, TT.: Crystallographic image processing with unambiguous 2D Bravais lattice identification on the basis of a geometric Akaike information criterion, Master of Science Thesis (Portland State University, May 2013). http://www.nanocrystallography.research.pdx.edu/media/cms_page_media/6/Taylor_thesis_final.pdf
Straton, J.C., Bilyeu, T.T., Moon, B., Moeck, P.: Double-tip effects on Scanning Tunneling Microscopy imaging of 2D periodic objects: unambiguous detection and limits of their removal by crystallographic averaging in the spatial frequency domain, special issue "Advances in Structural and Chemical Imaging". Cryst. Res. Technol. 49, 663–680 (2014). doi:10.1002/crat.201300240
Hovmöller, S.: CRISP: crystallographic image processing on a personal computer. Ultramicroscopy 41(1), 121–135 (1992). (This Windows™ based software is the quasi-standard for electron crystallography of inorganics in the weak phase object approximation. Just as "2dx", its quasi-standard counterpart for electron crystallography of 2D membrane protein crystals (Gipson, B, Zeng, X, Zhang, ZY, Stahlberg, H: 2dx—user-friendly image processing for 2D crystals. J. Struct. Biol. 157(1), 64–72 (2007)), this program is based on ideas of Nobel Laureate Sir Aaron Klug and coworkers that resulted in the creation of the MRC image processing software suite over more than a quarter of a century (e.g. Crowther, R.A., Henderson, R., Smith, JM.: MRC image processing programs. J. Struct. Biol. 116(1), 9–16 (1996))
Tsukada, M., Kobayashi, K., Ohnishi, S.: First-principles theory of the scanning tunneling microscopy simulation. J. Vac. Sci. Technol. A 8(1), 160–165 (1990)
Aroyo, MI.: Book Review Foundations of Crystallography: with Computer Applications by M. M. Julian, Acta Cryst. A 65, 543–545 (2009)
Julian, MM.: Foundations of Crystallography: with Computer Applications, CRC Press (2008)
Hahn, Th (ed.) International Tables for Crystallography, volume A, Space group symmetry, 5th Edition, International Union of Crystallography (IUCr), Chester (2005)
Förster, S., Meinel, K., Hammer, R., Trautmann, M., Widdra, W.: Quasicrystalline structure formation in a classical crystalline thin-film system. Nature 502, 215–218 (2013). doi:10.1038/nature12514
Wasio, N.A., Quardokus, R.C., Forrest, R.P., Lent, C.S., Corcelli, S.A., Christie, J.A., Henderson, K.W., Kandel, S.A.: Self-assembly of hydrogen-bonded two-dimensional quasicrystals. Nature 507, 86–89 (2014). doi:10.1038/nature12993
Kanatani, K.: Geometric information criterion for model selection. Int. J. Computer Vision 26(3), 171–189 (1998)
Triono, I., Otha, N., Kanatani, K.: Automatic recognition of regular figures by geometric AIC. IEICE Trans. Inf. Syst. E81–D(2), 224–226 (1998)
JS crafted the mathematical structure of the paper, generated the "hypothetical images" that model the obscurations CIP is capable of removing, performed some of the image processing, and drafted the whole paper. BM performed some of the image processing and helped edit the paper. TB contributed to the development of our method for the unambiguous detection of the underlying Bravais lattice of a 2D periodic SPM image, provided Fig. A1, and helped edit the paper. PM helped write and edit the paper, drafted the appendices, performed some of the image processing, and provided overall guidance to the whole project since applying CIP to SPM images was his basic idea (for which he also secured a patent for his employer). BM and TB each prepared Master of Science theses on the application of CIP to SPM images of 2D periodic arrays. All authors read and approved the final manuscript.
This research was supported by awards from Portland State University's Venture Development Fund and the Faculty Enhancement program. A grant from Portland State University's Internationalization Council is also acknowledged. JS would like to thank C. Julian Chen for helpful comments on charge density distributions.
PM secured for Portland State University a patent for applying CIP to SPM images. He is also a Deputy Editor-in-Chief of Advanced Structural and Chemical Imaging.
Nano-Crystallography Group, Department of Physics, Portland State University, Portland, OR, 97207-0751, USA
Jack C. Straton, Bill Moon, Taylor T. Bilyeu & Peter Moeck
Intel, Hillsboro, OR, USA
Bill Moon
Correspondence to Jack C. Straton.
40679_2015_14_MOESM1_ESM.doc
Plane symmetry groups and their type I inclusion relations. The hierarchy of the 17 plane symmetry groups and their type I inclusion relations. A group lower on the diagram is a subgroup of (included in) a group to which it is connected higher on the diagram. Color (grayscale) indicates the Bravais lattice type, and the multiplicity of the general position per lattice point is indicated by height in the diagram. (The graph is of the contracted type, where some of the nodes refer to conjugate subgroups [27]).
Appendix A: Brief introduction to 2D space and point symmetries
A lattice is the array of all points (lattice points) in a pattern with identical surroundings. That is to say, a pattern will look the same from one lattice point as it does from any other lattice point (if the pattern extends to infinity). The lattice is therefore not a physical entity, but an abstract mathematical construct that is useful for dealing with translation symmetry. In a two-dimensional (2D) periodic pattern, translation symmetry is conveniently represented by a lattice vector t(s 1, s 2) = s 1 a 1 + s 2 a 2 with components of two linearly independent unit translation vectors a 1 and a 2 (basis vectors of the primitive unit cell of the lattice) and s 1 and s 2 integers. That is to say, shifting a 2D periodic pattern along any lattice vector that possess these unit vectors as (integer) components leaves the pattern invariant when translation symmetry is present. Mathematically exact 2D translation symmetry (and the 2D crystallography that builds on it) requires patterns that are infinite in extent and perfect, but the concepts are also useful as approximations for periodic patterns of finite size and patterns where a few individual array members are missing or misplaced, i.e. typical SPM images. A lattice can, therefore, be assigned to finite periodic structures that consist of atoms or molecules.
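As a concrete illustration of the lattice-vector notation t(s 1, s 2) = s 1 a 1 + s 2 a 2, the short sketch below generates a patch of lattice points from a given pair of unit translation vectors; the particular (square) basis is only an example.

```python
import numpy as np

def lattice_points(a1, a2, s_range=range(-3, 4)):
    """All lattice vectors t(s1, s2) = s1*a1 + s2*a2 for integer s1, s2 in s_range."""
    a1, a2 = np.asarray(a1, dtype=float), np.asarray(a2, dtype=float)
    return np.array([s1 * a1 + s2 * a2 for s1 in s_range for s2 in s_range])

# Example: a square Bravais lattice with orthogonal unit translations of equal length
points = lattice_points(a1=(1.0, 0.0), a2=(0.0, 1.0))
print(len(points))   # 49 lattice points in the generated patch
```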
Only five types of lattices are compatible with the 10 crystallographic point symmetry types in 2D. The former are known as the 2D Bravais lattices, typically referred to as oblique, rectangular primitive (rectangular), rectangular centered (centered), square, and hexagonal lattices. A 2D crystallographic point group is a group of symmetry operations (e.g. a combination of the identity, rotations, and reflections) that leaves at least one point of a plane object invariant, and contains only those rotations that are considered crystallographic rotation operations because of their compatibility with the five 2D Bravais lattices. There are only 10 such symmetry groups (1, 2, m, 2mm, 4, 4mm, 3, 3m, 6, and 6mm) in 2D.
The leading number in the point group symbol denotes the highest-order rotational symmetry operation about a point in the plane. The one or two m's in the symbol denote the presence of one or more mirror (reflection) symmetry operations. The normals of these mirror lines are within the plane of the figure. The 2D crystallographic point groups possess subgroup-supergroup relations (inclusion relations), where a supergroup contains all of the symmetry operations of a corresponding subgroup, plus some additional symmetry operation(s). (In mathematical terms, one often speaks of non-disjoint entities and inclusion relations when there are subgroups-supergroup relations in a general sense.)
By combinations of the five translation symmetry types (Bravais lattices) with the ten crystallographic point symmetry types, a finite set of 2D space symmetry types is obtained. Each of the 17 plane symmetry groups in this set tiles 2D space in a long-range ordered manner with no gaps. Any 2D periodic pattern that tiles 2D space must have the symmetry of one of these 17 groups. The leading letters p (for primitive) and c (for centered) in all plane symmetry group symbols, i.e. p1, p2, pm, pg, cm, p2mm, p2mg, p2gg, c2mm, p4, p4mm, p4gm, p3, p3m1, p31m, p6, and p6mm, refer to the lattice type. There are, thus, 15 plane symmetry groups based on primitive lattices and two based on centered lattices.
The 2D crystallographic space groups possess subgroup-supergroup relations as well. A distinction is made between so called translationengleiche (type I) and klassengleiche (type II) subgroup-supergroup relations. In this paper, we are only concerned with the maximal and minimal type I subgroup-supergroup relations, which are based on unit cells of the same size (area). Maximal and minimal mean in this context that there is no other group between a subgroup and its supergroup and vice versa. The hierarchy of the 17 plane symmetry groups, along with their (maximal and minimal type I) inclusion relations and Bravais lattices is illustrated in Additional file 1: Figure S1.
The nomenclature of the plane symmetry groups might seem dauntingly complex to the novice, while it relies in fact on only a few rules. In this paper, we use the Hermann–Mauguin symbols [11] as they provide deeper insight into the orientation and mutual arrangement of symmetry operations.
As mentioned above, the leading letters in the symbols of plane symmetry groups refers to the type of lattice: p for primitive (i.e. containing one lattice point) and c for centered (i.e. containing two lattice points). If the p or c is followed by a number, it refers to the highest rotation symmetry about a point in the plane. When one views a 2D plane symmetry group as an orthogonal projection of a 3D space group, these rotation points are projections of rotation axes that are oriented perpendicular to the plane.
If the second entry in the plane symmetry group symbol is an m or g and there is no third and fourth symbol, these letters refer to a mirror or glide line perpendicular to one of the coordinate axes. This is typically the x-axis (parallel to unit translation a 1 ), but there can be different settings. The full Hermann–Mauguin symbols for these three plane symmetry groups are: p1m1, p1g1, and c1m1, whereby the first and last numbers signify that there are only identity (360º) rotations about the projected z-axis and the y-axis (parallel to unit translation a 2 ), respectively. The underlying projected z (1st), x (2nd), and y (3rd) axis sequence is typical for plane symmetry group names that are based on the two rectangular Bravais lattices. As there are no perpendicular x and y axes in the oblique Bravais lattice, the short Hermann–Mauguin symbol of p1 and p2 is indistinguishable from the full (four-entry) symbol.
For the square and hexagonal Bravais lattices, the first symbol after the leading p designation for the lattice type in a plane symmetry group symbol refers to the projected z-axis direction. For both 2D lattice types, the 3rd and 4th symbols in a full (and short) Hermann–Mauguin plane symmetry group symbol refer to symmetries along the x-axis and the \(\left\langle {1\bar{1}} \right\rangle\) directions. While rotation axes are oriented parallel to these directions, mirror and glide lines are represented by their normals, which are oriented perpendicular to these directions.
Plane symmetry groups that contain a c or g in their symbol are either centered or non-symmorphic. This results in the necessity of certain Fourier coefficients being zero. This is analogous to 3D X-ray crystallography, where centered and non-symmorphic space groups result in "systematically absent" or in other words "extinct" reflections.
The Bravais lattices possess the holohedral (highest) plane symmetries, i.e. p2, p2mm, c2mm, p4mm, p6mm, within each type I subgroup-supergroup tree. Point symmetries within a plane symmetry are referred to as site symmetries. The point positions with the lowest Wyckoff letter and multiplicity possess point symmetries 2, 2mm, 4mm, and 6mm in the four primitive Bravais lattices. These are the positions of the (one) lattice point that defines the primitive unit cell of the 2D periodic pattern over the application of the unit translations. In the unit cell of the rectangular centered Bravais lattice, there are two lattice points and both possess point symmetry 2mm. This lattice possesses a primitive sub-lattice, which contains only one lattice point (just as all of the primitive Bravais lattices do). The size of the unit cell of the primitive sub-lattice is one half of the size of the rectangular centered cell. This sub-unit cell is characterized by unit translations of equal magnitude that can be oriented with respect to each other at any angle other than 60º, 90º, or 120º.
The primitive unit cells possess the shapes of a parallelogram, a rectangle, a square, and a hexagon. The primitive sub-unit of the rectangular centered unit cell possesses the shape of a rhombus. The convention for the Bravais lattice unit cells is that the x-axis is taken downwards from the upper left vertex with direct space coordinates (0,0) and the y-axis is taken to the right, leading to the coordinates (1,0) for the lower left vertex and (0,1) for the upper right vertex. Ref. [28] provides a concise and elementary introduction to crystallography in general, and covers all of the material above in considerably more detail. In addition, there is a plethora of other introductory texts and information online readily available if the reader desires a better understanding of 2D crystallography. Ref. [29] is the definitive crystallographic standard and covers the direct space aspects of all 17 plane (and 230 space) symmetry groups comprehensively. Ref. [11] is the "brief teaching edition" that complements Ref. [29].
Quasicrystallinity in 2D, i.e. non-periodic long-range order coupled with non-crystallographic point symmetries has been observed recently [30, 31], but is beyond the scope of CIP as described in this paper.
Appendix B: Decisions as to which plane group to enforce
In order to determine the plane symmetry to which an image most likely belongs, the traditional approach is to use Fourier coefficient (FC) amplitude (RA % or Ares) and phase angle (φ Res or φ res) residuals [4, 8, 9, 23, 24]. These kinds of residuals are used as figures of merit for determining which plane symmetry group best models the image (or the sample surface structure having been imaged). As a general heuristic, smaller residuals indicate a closer match between the experimental image and an ideal plane symmetry model. Amplitudes are generally less reliable than phases so that a small FC phase angle residual has traditionally been more useful for identifying plane symmetries.
In addition, one traditionally utilizes the so called A o/A e ratio [4, 8, 9] for those six plane symmetry groups that possess systematic absences [11]. This ratio is defined as the amplitude sum of the Fourier coefficients that are forbidden by the plane symmetry but were nevertheless observed (A o) divided by the amplitude sum of all other observed Fourier coefficients that are allowed (A e) by the plane symmetry. For the six plane groups to which this ratio is applicable, a large ratio makes it more unlikely that the respective group is the right plane symmetry group.
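Computed directly from its definition, the A o/A e ratio amounts to two amplitude sums, as in the sketch below; classifying which Fourier coefficients are forbidden for a candidate plane group (its systematic absences) is assumed to be done by the caller and is the non-trivial part in practice.

```python
def ao_ae_ratio(amplitudes, forbidden):
    """amplitudes: dict mapping (h, k) -> observed Fourier amplitude.
    forbidden: set of (h, k) indices that the candidate plane group forbids
    (its systematic absences).  Returns A_o / A_e: the amplitude sum of the
    observed-but-forbidden coefficients over the amplitude sum of the allowed ones."""
    a_o = sum(a for hk, a in amplitudes.items() if hk in forbidden)
    a_e = sum(a for hk, a in amplitudes.items() if hk not in forbidden)
    return a_o / a_e if a_e > 0 else float("inf")
```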
There is, however, currently no fully objective way to use these traditional residuals to assign the correct plane symmetry group. The reason for this is type I subgroup and supergroup relations [11] between many of the 16 higher symmetric plane symmetry groups [23]. Whenever the FC phase and amplitude residuals of an image are not significantly larger for a higher symmetric plane symmetry group than for its respective type I subgroups, and the A o/A e ratio is not too high, one would generally conclude that this particular group is the more likely plane group, in comparison to other groups in its subgroup/supergroup tree. As implicitly mentioned above, there is currently no objective criterion on what "not significantly larger and not too high" may mean in numerical terms.
Given the subjectivity inherent in the use of the three traditional plane symmetry deviation quantifiers, there has been a need for a statistics-rooted measure that quantifies deviations from 2D translation symmetries (in reciprocal and direct space). Such a measure has been recently developed and allows the 2D Bravais lattice to be unambiguously identified because it is based on a geometric Akaike information criterion (AIC). Geometric AICs have been successfully used in a wide range of classification schemes involving non-disjoint models [32, 33].
In brief, the new assessment method involves the position of the (1,0), (0,1) and (1,1) FC peaks in a 2D Fourier transform amplitude map relative to the (0,0) FC peak of an experimental or simulated image. These positions are directly related to the reciprocal and direct lattice parameters of a 2D periodic image, and thereby to the shape of the 2D primitive unit cell (or sub-unit cell in case of the rectangular centered 2D Bravais lattice in direct space).
Residuals J are defined as the sums of squared distances from the vertices of the reciprocal space unit cell of a 2D periodic image to the corresponding vertices of the quadrilaterals that represent the shapes of the unit cells of the 2D Bravais lattices (in reciprocal space). As the conversion to direct space is straightforward, one can obtain from these kinds of residuals as well how much the shape of the direct space unit cell of 2D periodic data differs from the shapes of the quadrilaterals that represent the unit cell of the four primitive 2D Bravais lattices and the unit sub-unit cell of the rectangular centered lattice.
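A minimal version of such a residual is sketched below; how the idealized Bravais-lattice quadrilateral is scaled, oriented and constrained before the comparison is the substantive part of the published procedure [23, 24] and is not reproduced here.

```python
import numpy as np

def residual_J(measured_vertices, model_vertices):
    """Sum of squared distances between corresponding unit-cell vertices, e.g. the
    measured (1,0), (0,1) and (1,1) peak positions (relative to (0,0)) versus the
    vertices of the quadrilateral representing one of the 2D Bravais lattices."""
    measured = np.asarray(measured_vertices, dtype=float)
    model = np.asarray(model_vertices, dtype=float)
    return float(np.sum((measured - model) ** 2))

# Example: peaks of a nearly square reciprocal lattice against an ideal square model
measured = [(1.02, 0.01), (-0.01, 0.99), (1.01, 1.00)]
ideal_square = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
print(residual_J(measured, ideal_square))
```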
Assuming that deviations from the translation symmetry in the 2D periodic image are only due to random errors with a Gaussian distribution of mean zero, a geometric AIC is applicable as described in refs. [23, 24]. This allows for unambiguous identifications of the prevalent translation symmetry (2D Bravais lattice) and restricts the plane symmetry group an image may possess to those that are compatible with this particular translation symmetry.
For example, we obtain for both bu = 0.6, Fig. 2b, and bu = 0.74, Fig. 2c, a square unit cell as underlying translation symmetry from the application of our geometric AIC procedure. This is as expected, because we showed in the main text of this paper that the underlying translation symmetry cannot be affected by double-tips (and, thus, cannot vary with their separation). Our new procedure is, thus, optimal for detecting blunt tip artifacts in SPM images.
Our identification of square lattices justifies the enforcement of plane symmetry group p4mm on the basis of the three traditional figures of merit for plane symmetry group determinations for both bu = 0.6 and 0.74 (Figs. 2b, c, resulting in the plane symmetry enforced reconstructions of Figs. 4b, c). Note that there are only two other plane symmetry groups, i.e. p4 and p4gm, that are compatible with a square lattice, see Fig. A1. (While p4 is a maximal type I subgroup of p4mm, the plane symmetry groups p4gm and p4mm are disjoint.)
For \(bu \, = \, 0.77 \, = \, \frac{\pi }{4} - \varepsilon\), Fig. 2d, we obtain again a square unit cell from the application of our geometric AIC procedure, while CRISP [25] determines a rectangular unit cell and suggests p2mg as the most likely plane symmetry group. Note that even the extreme banding as seen in Fig. 2d can be corrected by CIP because we were able to identify the correct translation symmetry with the help of our geometric AIC procedure (and had prior knowledge of this anyway). Indeed, the enforcement of p4mm symmetry does give a recognizable reconstruction of the sample image in Fig. 4d, although the motif of the unit cell is now somewhat "squarish" rather than "rounded".
Straton, J.C., Moon, B., Bilyeu, T.T. et al. Removal of multiple-tip artifacts from scanning tunneling microscope images by crystallographic averaging. Adv Struct Chem Imag 1, 14 (2015). https://doi.org/10.1186/s40679-015-0014-6
Scanning tunneling microscopy
Crystallographic image processing
Scanning probe microscopy
CAT 2019 Set-2 | Question: 59
go_editor asked in Logical Reasoning Mar 20, 2020 edited Mar 22, 2022 by Lakshman Patel RJIT
Three pouches (each represented by a filled circle) are kept in each of the nine slots in a $3\times3$ grid, as shown in the figure. Every pouch has a certain number of one-rupee coins. The minimum and maximum amounts of money (in rupees) among the three pouches in each of the nine slots are given in the table. For example, we know that among the three pouches kept in the second column of the first row, the minimum amount in a pouch is Rs. $6$ and the maximum amount is Rs. $8$.
There are nine pouches in any of the three columns, as well as in any of the three rows. It is known that the average amount of money (in rupees) kept in the nine pouches in any column or in any row is an integer. It is also known that the total amount of money kept in the three pouches in the first column of the third row is Rs. $4$.
What is the total amount of money (in rupees) in the three pouches kept in the first column of the second row ________
logical-reasoning
numerical-answer
June 2017, 14(3): 581-606. doi: 10.3934/mbe.2017034
Modeling and simulation for toxicity assessment
Cristina Anton 1, Jian Deng 2, Yau Shu Wong 2, Yile Zhang 2, Weiping Zhang 3, Stephan Gabos 4, Dorothy Yu Huang 5 and Can Jin 6
Department of Mathematics and Statistics, Grant MacEwan University, Edmonton, Alberta, T5P2P7, Canada
Department of Mathematical and statistical Sciences, University of Alberta, Edmonton, Alberta, T6G2G1, Canada
Alberta Health, Edmonton, Alberta, T5J1S6, Canada
Department of Laboratory Medicine and Pathology, University of Alberta, Edmonton, Alberta, T6G2B7, Canada
Alberta Centre for Toxicology, University of Calgary, Calgary, Alberta, T2N4N1, Canada
ACEA Biosciences Inc, San Diego, California, 92121, USA
Received February 29, 2016 Accepted October 17, 2016 Published December 2016
The effect of various toxicants on growth/death and morphology of human cells is investigated using the xCELLigence Real-Time Cell Analysis High Throughput in vitro assay. The cell index is measured as a proxy for the number of cells, and for each test substance in each cell line, time-dependent concentration response curves (TCRCs) are generated. In this paper we propose a mathematical model to study the effect of toxicants with various initial concentrations on the cell index. This model is based on the logistic equation and linear kinetics. We consider a three-dimensional system of differential equations with variables corresponding to the cell index, the intracellular concentration of toxicant, and the extracellular concentration of toxicant. To efficiently estimate the model's parameters, we design an Expectation Maximization algorithm. The model is validated by showing that it accurately represents the information provided by the TCRCs recorded after the experiments. Using stability analysis and numerical simulations, we determine the lowest concentration of toxin that can kill the cells. This information can be used to better design experimental studies for cytotoxicity profiling assessment.
Keywords: Mathematical model, cytotoxicity, parameter estimation, persistence.
Mathematics Subject Classification: Primary: 93A30, 37N25; Secondary: 60G35.
Citation: Cristina Anton, Jian Deng, Yau Shu Wong, Yile Zhang, Weiping Zhang, Stephan Gabos, Dorothy Yu Huang, Can Jin. Modeling and simulation for toxicity assessment. Mathematical Biosciences & Engineering, 2017, 14 (3) : 581-606. doi: 10.3934/mbe.2017034
Figure 1. TCRCs for (a) PF431396 and (b) monastrol
Figure 2. Trajectories corresponding to monastrol and initial values $0<n(0)<K$, $C_0(0)=0$, and (a) $CE(0)<\frac{\beta\eta_1^2}{\alpha\lambda_1^2}=6.51$.(b) $CE(0)>\frac{\beta\eta_1^2}{\alpha\lambda_1^2}=6.51$
Figure 3. The separation between persistence and extinction according to the initial values $n(0)$ and $CE(0)$, red $*$: persistence; blue $\circ$: extinction
Figure 4. Negative control data fitted by logistic model, dot: experimental data, line: logistic model
Figure 5. Smooth spline approximation, dot: experimental data, line: smooth spline
Figure 6. Estimation results for PF431396, dot: experimental data, line: filtered or predicted observations; (a) CE(0)=5.00uM, (b) CE(0)= 1.67uM, (c) CE(0)=0.56uM, (d) CE(0)=0.19uM, (e) CE(0)=61.73nM, (f) CE(0)= 20.58nM, (g) CE(0)= 6.86nM, (h) CE(0)=2.29nM
Figure 7. Estimation results for monastrol, dot: experimental data, line: filtered or predicted observations; (a) CE(0)=100.00uM, (b) CE(0)=33.33uM, (c) CE(0)=11.11uM, (d) CE(0)= 3.70uM, (e) CE(0)=1.23uM, (f) CE(0)= 0.41uM, (g) CE(0)=0.14uM, (h) CE(0)=45.72nM
Figure 8. Estimation results for ABT888, dot: experimental data, line: filtered or predicted observations; (a) CE(0)=308.00uM, (b) CE(0)=102.67uM, (c) CE(0)=34.22uM, (d) CE(0)=11.41uM, (e) CE(0)=3.80uM, (f) CE(0)=1.27uM, (g) CE(0)=0.42uM, (h) CE(0)=0.14uM
Figure 10. (a) Experimental TCRCs for PF431396 for CE(0)=5uM, 1.67uM, 0.56uM (b) Expected cell index and probability of extinction for different concentrations for PF431396
Figure 11. (a) Experimental TCRCs for ABT888 for CE(0)=308uM, 103uM, 34uM (b) Expected cell index and probability of extinction for different concentrations for ABT888
Figure 9. Estimation results for HA1100 hydrochloride, dot: experimental data, line: filtered or predicted observations; (a) CE(0)=1.00mM, (b) CE(0)=0.33mM, (c) CE(0)=0.11mM, (d) CE(0)= 37.04uM, (e) CE(0)=12.35uM, (f) CE(0)=4.12uM, (g) CE(0)=1.37uM, (h) CE(0)= 0.46uM
Figure 12. The first order GSA indices ranking for PF431396 (higher rank means more sensitive)
Figure 13. The first order GSA indices ranking for ABT888 (higher rank means more sensitive)
Figure 14. Network graph visualizing the second order GSA indices for (a) PF431396 with CE(0)=10uM (b) ABT888 with CE(0)=400uM
Table 1. List of Variables and Parameters
Symbol Definition
$n(t)$ cell index ≈ cell population
$C_0(t)$ toxicant concentration inside the cell
$CE(t)$ toxicant concentration outside the cell
$\beta$ cell growth rate in the absence of toxicant
$K$ capacity volume
$\alpha$ effect coefficient of toxicant on the cell's growth
$\lambda_1^2$ the uptake rate of the toxicant from environment
$\lambda_2^2$ the toxicant uptake rate from cells
$\eta_1^2$ the toxicant input rate to the environment
$\eta_2^2$ the losses rate of toxicant absorbed by cells
Table 2. The EM algorithm
Initialize the model parameters $\Theta=\{Q, R,\alpha, \lambda_1,\lambda_2, \eta_1, \eta_2\}$
Repeat until the log likelihood has converged
The E step
For k=1 to N
Run the UF filter to compute $\bar{x}_{k+1}$, $\bar{P}_{k+1}$, $\hat{x}_{k+1}$, $\hat{P}_{k+1}$ and $\bar{P}_{x_kx_{k+1}}$
For k=N to 1
Calculate the smoothed values $x_{k|N}$, and $P_{k|N}$ using (13), (14)
The M step
Update the values of the parameters $\Theta$ to maximize $\hat{E}$
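Translated into code, the iteration of Table 2 has roughly the structure sketched below; the helper functions unscented_filter, backward_smoother and maximize_expected_loglik are hypothetical placeholders for the forward unscented filtering pass, the smoothing recursion of Eqs. (13)–(14), and the M-step update, whose detailed equations are given in the paper.

```python
import numpy as np

def em_estimate(observations, theta, max_iter=100, tol=1e-6):
    """Skeleton of the EM iteration summarized in Table 2.  `theta` collects
    Q, R, alpha, lambda1, lambda2, eta1 and eta2.  The helpers called below are
    hypothetical placeholders, not routines defined in the paper or a library."""
    prev_loglik = -np.inf
    for _ in range(max_iter):
        # E step: forward filtering pass over k = 1, ..., N ...
        filtered, loglik = unscented_filter(observations, theta)
        # ... then the backward smoothing pass over k = N, ..., 1
        smoothed = backward_smoother(filtered, theta)
        # M step: update theta to maximize the expected complete-data log likelihood
        theta = maximize_expected_loglik(smoothed, observations, theta)
        if abs(loglik - prev_loglik) < tol:   # repeat until the log likelihood has converged
            break
        prev_loglik = loglik
    return theta
```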
Table 3. Estimated Values of Parameters
Toxicant Cluster β K $\eta_1$ $\lambda_1$ $\lambda_2$ $\eta_2$ $\alpha$
PF431396 Ⅹ 0.077 21.912 0.273 0.058 0 0.008 0.238
monastrol Ⅹ 0.074 18.17 0.209 0.177 0.204 0.5 0.016
ABT888 Ⅰ 0.083 17.543 0.079 0.177 0.205 0.5 0.005
HA1100 hydrochloride Ⅰ 0.077 21.913 0.143 0.0098 0.0786 0.147 0.351
BMC Pregnancy and Childbirth
Effect of four or more antenatal care visits on facility delivery and early postnatal care services utilization in Uganda: a propensity score matched analysis
Edson Mwebesa1,
Joseph Kagaayi1,
Anthony Ssebagereka1,
Mary Nakafeero1,
John M. Ssenkusu1,
David Guwatudde1 &
Nazarius Mbona Tumwesigye1
BMC Pregnancy and Childbirth volume 22, Article number: 7 (2022)
Maternal mortality remains a global public health issue, predominantly in developing countries, and is associated with poor maternal health services utilization. Antenatal care (ANC) visits are positively associated with facility delivery and postnatal care (PNC) utilization. However, such associations may arise not from ANC in itself but from differences that exist among the women who use it. The purpose of this study, therefore, was to examine the effect of four or more ANC visits on facility delivery and early PNC, and also the effect of facility-based delivery on early PNC, using propensity score matched analysis (PSMA).
The present study utilized the 2016 Uganda Demographic and Health Survey (UDHS) dataset. Women aged 15 – 49 years who had given birth three years preceding the survey were considered for this study. Propensity score-matched analysis was used to analyze the effect of four or more ANC visits on facility delivery and early PNC and also the effect of facility-based delivery on early PNC.
The results revealed a significant and positive effect of four or more ANC visits on facility delivery [ATT (Average Treatment Effect of the Treated) = 0.118, 95% CI: 0.063 – 0.173] and early PNC [ATT = 0.099, 95% CI: 0.076 – 0.121]. It also found a positive and significant effect of facility-based delivery on early PNC [ATT = 0.518, 95% CI: 0.489 – 0.547].
Policies geared towards the provision of four or more ANC visits are an effective intervention towards improved facility-based delivery and early PNC utilisation in Uganda.
Maternal mortality, defined as the death of a mother due to complications of pregnancy or childbirth, is still an important global public health problem [1], mainly in low-income countries such as those in Sub-Saharan Africa [2], largely owing to poor maternal health services utilization (MHSU) [3]. Globally, approximately 0.3 million women and adolescent girls died in 2015 from pregnancy- and childbirth-related complications, and 2.6 million stillbirths occurred [4, 5], with 60% of stillbirths occurring during the antepartum period due to untreated infections, poor fetal growth, and hypertension [6]. Sustainable Development Goal (SDG) 3.1 aims at reducing the maternal mortality ratio to less than 70 per 100,000 live births globally by 2030. However, based on recent trends, maternal mortality remains a huge challenge [7].
In 2015, Uganda was ranked among the top ten countries with the highest maternal mortality in the world, with a maternal mortality rate of 343 per 100,000 and 57,000 maternal deaths [4]. Stillbirth, maternal mortality and morbidity, and other poor maternal health outcomes are high in Uganda and are associated with inadequate utilization of maternal health services, including inadequate utilization of antenatal care (no, incomplete, or late ANC attendance), failure to deliver in health facilities, and untimely postnatal checkups or no checkups at all [8,9,10,11,12,13,14].
The promise of early and full attendance of ANC visits is that it would improve facility-based deliveries and postnatal care utilization and, consequently, maternal and child health [15, 16]. During pregnancy, ANC attendance plays an important role in positive pregnancy outcomes because it is through these visits that screening for and treatment of pregnancy complications such as preeclampsia, anemia, sexually transmitted infections, and non-communicable diseases such as diabetes are carried out. Other services provided during this time include weight and height measurement, tetanus immunization, provision of supplements such as folic acid, provision of information on behavioral modification, and intermittent preventive treatment of malaria [17,18,19]. Without proper management of pregnancy, adverse pregnancy outcomes such as low birth weight, preterm delivery, spontaneous abortion, and maternal and perinatal mortality and morbidity may result [18, 20].
Most studies have investigated factors affecting ANC, facility delivery, skilled birth attendance, and postnatal care, while some have investigated how ANC affects neonatal and infant mortality, its association with low birth weight, stunting, and underweight [21], and its relationship with facility-based delivery and perinatal survival [22]. Using conventional logistic regression, positive associations of ANC attendance with facility-based delivery [23,24,25,26,27,28,29,30] and PNC utilization [31,32,33,34,35,36,37] have been observed. In addition, facility-based delivery has been associated with PNC utilization [38,39,40,41,42].
However, ANC in and of itself may not directly result in facility delivery and early PNC utilization; rather, the association may be due to individual differences in unknown factors that enable facility-based delivery and early PNC among mothers who utilize ANC [43]. For example, these mothers may be from wealthy households, educated, and exposed to media. Propensity score matched analysis offers a better option than conventional logistic regression for controlling for such confounding when analyzing associations between ANC and facility delivery and early PNC utilization. The propensity score (PS) matches women who attended 4+ ANC visits (exposed) with women who attended fewer than 4 ANC visits (unexposed) but have similar conditional probabilities of attending 4+ ANC visits, hence reducing the bias that may persist when conventional logistic regression is used. This study applied PS matched analysis to examine whether having had four or more ANC visits increases a mother's probability of facility-based delivery and early PNC utilization, and also whether facility-based delivery increases the probability of early PNC utilization in Uganda. Four or more ANC visits were considered because having 4+ visits is believed to increase the likelihood of a pregnant woman receiving the full range of required maternal health interventions during pregnancy [44, 45], and by the time of data collection, Uganda's Ministry of Health likewise recommended at least four ANC visits for pregnant mothers. Studies that have examined the effect of ANC visits on health outcomes, specifically health facility delivery, have used logistic regression models. Analyses using PSM to answer the same research question not only check the consistency of previous results obtained with another method but also reduce the bias in the estimated intervention effect.
Propensity score analysis (PSA) comprises statistical methods for estimating treatment effects from observational data [46]. It offers an alternative approach for program evaluation in cases where randomized controlled trials are infeasible or unethical, or when researchers need to evaluate treatment effects from survey data. In survey research, associations between an outcome and a given set of exposures may be biased by unobservable individual characteristics. The use of propensity score matching (PSM) reduces such bias by matching women who attended 4 or more ANC visits (exposed) with women who attended fewer than 4 ANC visits (unexposed) but have similar conditional probabilities of receiving the treatment, and is thus preferred over traditional regression adjustments such as logistic regression [43]. The PS is a balancing score that balances baseline characteristics between the exposed and unexposed groups in survey data, thereby mimicking characteristics of randomized trials [47,48,49]. It also helps create comparable, balanced groups of respondents with respect to observed covariates and helps minimize the influence of confounders such as age, education level, and wealth index [50,51,52,53]. Propensity score matched analysis is used to estimate the average treatment effect of the treated (ATT) of a given covariate on the outcome of interest [43, 54]. In this study, we assessed the effect of four or more ANC visits on facility-based delivery, the effect of four or more ANC visits on the timing of PNC, and the effect of facility-based delivery on the timing of PNC using data drawn from the Uganda Demographic and Health Survey of 2016.
Data Source and Study Population
This study used secondary data from the Uganda DHS of 2016, the most recent DHS survey, conducted in over 20,000 households in all regions of Uganda; the survey is carried out every five years by the Uganda Bureau of Statistics (UBOS). The study population comprised women of reproductive age (15–49 years) who had given birth in the three years preceding the survey. The Uganda DHS of 2016 used a two-stage, stratified cluster sampling technique to generate a nationally representative sample of women aged 15–49 years and men aged 15–59 years in the sampled households. Details about the conduct of the survey can be found in the Uganda Demographic and Health Survey key indicators report [55].
Propensity Score Analysis
In this study, we estimated the ATT of having 4+ ANC visits on facility delivery and on early PNC check-up as the outcomes. We also estimated the ATT of facility delivery on early PNC check-up (a PNC check-up within 48 h of delivery) as another outcome. A 1:1 ratio was used for propensity score matching [52, 56], and the propensity scores were constructed using individual-, household-, and community-specific variables such as age, education level of the mother, wealth index, and type of place of residence. Logistic regression was used as the estimation algorithm, and radius and kernel matching were used as the matching algorithms, with tolerance (caliper) levels from 0.05 to 0.08 [47]. According to Rosenbaum and Rubin [57], a propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates. It is generally expressed as follows (Eq. 1):
$$p(\mathbf{X}) = \Pr(D = 1 \mid \mathbf{X})$$
where $p(\mathbf{X})$ is the conditional probability of receiving a given exposure (4+ ANC visits, facility delivery), $D = (0, 1)$ indicates the exposure of interest, and $\mathbf{X}$ is a vector of covariates associated with ANC, facility delivery, and early PNC check-up. The estimation of the ATT follows a counterfactual framework and is expressed as follows (Eq. 2):
$${\rm ATT} = E(Y_{1i} \mid D_i = 1) - E(Y_{0i} \mid D_i = 1)$$
where $E(Y_{1i} \mid D_i = 1)$ is the expected outcome for facility delivery and early PNC check-up ($Y_{1i}$) if all exposed mothers received 4+ ANC visits ($D_i = 1$), and is also the expected outcome for early PNC check-up if all exposed mothers had a facility delivery. $E(Y_{0i} \mid D_i = 1)$ is the expected outcome for facility delivery and early PNC check-up ($Y_{0i}$) among mothers who received 4+ ANC visits had they not received 4+ ANC visits (unobserved). It likewise denotes the expected outcome for early PNC check-up among mothers who had a facility delivery had none of these mothers had a facility delivery (unobserved) [43, 58,59,60].
The ATT is interpreted as the average difference in facility delivery and early PNC check-up that would be found if all treated women who had given birth preceding the survey received $\ge 4$ ANC visits, compared with the same women had they not received 4+ ANC visits. It is also interpreted as the average difference in early PNC check-up that would be found among all treated women who had given birth in a health facility, compared with the same women had they not given birth in a health facility [59]. The steps followed in estimating treatment effects are those proposed by [59]: 1) estimating the propensity score, 2) stratifying and balancing the propensity score, and 3) estimating the causal effect. Since it is impossible to observe the effects of treatment among women if they had simultaneously received and not received 4+ ANC visits, or if they had simultaneously had and not had a facility-based delivery [59], in this study the counterfactual was constructed by matching women who received 4+ ANC visits with those who did not, and women who had a facility-based delivery with those who did not, on a set of observable characteristics. Thus, a woman who did not receive 4+ ANC visits, given that she had given birth three years preceding the survey, served as a counterfactual case for those who received 4+ ANC visits. Also, women who had not given birth at a health facility preceding the survey served as counterfactual cases for those who did [43].
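To make the matching procedure concrete, the following minimal sketch (written in Python purely for illustration; it is not the authors' code, and only the radius-matching variant is shown) follows the three steps above: the propensity score is estimated by a logistic regression of the exposure on the observed covariates, and the ATT is obtained by averaging, over exposed women, the difference between each exposed woman's outcome and the mean outcome of comparison women whose propensity scores fall within a caliper. All variable names and the caliper value of 0.06 are illustrative assumptions, and the survey's sampling weights are ignored here.

import numpy as np
from sklearn.linear_model import LogisticRegression

def att_radius_matching(X, treated, outcome, caliper=0.06):
    # Step 1: estimate propensity scores by logistic regression.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    # Steps 2-3: match within a caliper (radius matching) and average
    # the outcome differences over the exposed women to obtain the ATT.
    effects = []
    for i in t_idx:
        matches = c_idx[np.abs(ps[c_idx] - ps[i]) <= caliper]
        if matches.size == 0:      # exposed woman off common support
            continue
        effects.append(outcome[i] - outcome[matches].mean())
    return float(np.mean(effects)), ps

# Hypothetical usage with a prepared UDHS extract (illustrative names):
# att, ps = att_radius_matching(X, anc4, facility_delivery)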
In assessing the balance of the propensity score across women who had four or more ANC visits (treatment) and those who had fewer than four ANC visits (comparison), and also across women who gave birth in a health facility (treatment) and those who did not (comparison), graphs of the propensity scores across these groups were used. These graphs helped determine whether there was an overlap in the range of propensity scores across the treatment and comparison groups, also called common support [61]. The graphs show that there is an overlap between the propensity scores and that the distributions of the treated and untreated groups are similar (balanced). The graphs are shown in Figs. 1 to 3.
Fig. 1. Propensity Scores across Four or more ANCs (1 = Exposed, 0 = Unexposed) by Facility Delivery. Source: UDHS Data 2016
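The common-support check shown in Fig. 1 (and in Figs. 2 and 3 below) can be sketched as follows, assuming the propensity scores ps and the 0/1 exposure indicator treated are available as numpy arrays (e.g., from the matching sketch above); this is an illustrative plotting snippet, not the code used to produce the figures.

import matplotlib.pyplot as plt

def plot_common_support(ps, treated, fname="ps_overlap.png"):
    # Overlay the propensity-score distributions of the two groups; a large
    # overlap indicates common support and comparable (balanced) groups.
    plt.hist(ps[treated == 1], bins=30, alpha=0.5, density=True, label="Exposed")
    plt.hist(ps[treated == 0], bins=30, alpha=0.5, density=True, label="Unexposed")
    plt.xlabel("Propensity score")
    plt.ylabel("Density")
    plt.legend()
    plt.savefig(fname, dpi=150)
    plt.close()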
After matching, a weighted sample of 7,903 women aged 15 – 49 years formed the analysis sample in this study. This section presents the characteristics of the women in this study, the effect of 4+ ANC visits on facility-based delivery, the effect of 4+ ANC visits on early PNC check-up, and the effect of facility-based delivery on early PNC check-up.
Characteristics of Women Aged 15 – 49 Years
Most of the women who had given birth three years preceding the survey were young, aged 15 – 24 years, 3,140 (39.4%); had a primary level of education, 4,784 (60.5%); and had some kind of work, 6,494 (82.2%). Of these women, 4,723 (almost 60%) perceived the distance to the health center as not a big problem, and 2,686 (34%) used modern contraceptives. Most of the women had access to some form of media (newspapers, radio, or television), 6,079 (77%), and had not had a caesarean birth, 7,315 (93%). Most women had a preceding birth interval of 2 to 5 years, 4,069 (66%), and were from households with a low wealth index (poor and poorer), 3,409 (43%). Most of the women were from rural areas, 6,167 (78%), and from Eastern Uganda, 2,175 (27.5%). The rest of the results are presented in Table 1.
Table 1 Distribution of Women Aged 15–49 Years by Selected Background Characteristics, Using Data Derived from Uganda DHS 2016
The propensity score graph evaluating the quality of matching among women who had 4+ ANC visits (exposed) versus those who had not had 4+ ANC visits, by facility-based delivery, shows that the groups are balanced and have, to a large extent, similar distributions. The off-support region is negligible. This implies that the two groups can be compared; see Fig. 1 above. The propensity score graph evaluating the quality of matching among women who had 4+ ANC visits (exposed) versus those who had not had 4+ ANC visits, by early PNC check-up, shows that the groups are balanced and have similar distributions. There was no off support. This implies that the two groups are matched; see Fig. 2 below. The propensity score graph evaluating the quality of matching among women who had a facility-based delivery (exposed) versus those who had not had a facility-based delivery (unexposed), by early PNC check-up, shows that the groups are balanced and have similar distributions. There was no off support. This implies that the two groups are matched; see Fig. 3 below.
Fig. 2. Propensity Scores across Four or more ANCs (1 = Exposed, 0 = Unexposed) by Early PNC Use. Source: UDHS Data 2016
Fig. 3. Propensity Scores across Facility Delivery (1 = Exposed, 0 = Unexposed) by Early PNC Use. Source: UDHS Data 2016
The descriptive statistics obtained after matching reveal that the ATT of four or more ANC visits (the difference between women who received 4 or more ANC visits and the same women had they not received 4 or more ANC visits, the counterfactual) is 0.118 ± 0.030 (almost 12%) on facility-based delivery and 0.099 ± 0.013 (almost 10%) on early PNC (EPNC) check-up. The effect of facility-based delivery on early PNC was also investigated: the ATT of facility-based delivery on early PNC was 0.518 ± 0.013 (52%). The results are presented in Table 2. The inferences about these results are shown in Table 3, where the standard errors were bootstrapped with 150 repetitions.
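The bootstrapped standard errors reported in Table 3 (150 repetitions) can be sketched as follows, reusing the illustrative att_radius_matching helper defined earlier; the simple unweighted resampling shown here is an assumption for illustration, since a faithful reanalysis would resample within the survey design and carry the sampling weights through.

import numpy as np

def bootstrap_att_se(X, treated, outcome, n_boot=150, seed=1):
    # Resample women with replacement, re-estimate the ATT each time, and
    # take the standard deviation of the replicates as the standard error.
    rng = np.random.default_rng(seed)
    n = len(treated)
    atts = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        att, _ = att_radius_matching(X[idx], treated[idx], outcome[idx])
        atts.append(att)
    return float(np.std(atts, ddof=1))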
Table 2 Estimation of ATT of Four or More ANCs on Facility Delivery and Early PNC and of Facility Delivery on Early PNC
Table 3 Quality of Matching and Average Treatment Effects on the Treated (ATT) of Four or More ANC Visits on Facility Delivery and Early PNC Use and of Facility Delivery on Early PNC Use
After matching, the probability of facility-based delivery was 12% [ATT = 0.118; 95% CI = 0.063 – 0.173] higher among women who had 4+ ANC visits compared with the same women had they not received 4+ ANC visits. This indicates that having 4+ ANC visits increases the chances of a facility-based delivery by 12% compared with not having had 4+ ANC visits. Regarding the effect of 4+ ANC visits on early PNC check-up, the probability of an early PNC check-up was 10% [ATT = 0.099; 95% CI = 0.076 – 0.121] higher among women who had 4+ ANC visits compared with the same women had they not received 4+ ANC visits. This implies that having had 4+ ANC visits increases the chances of an early PNC check-up by 10%. On the effect of facility-based delivery on early PNC check-up, the results revealed that the probability of early PNC was 52% [ATT = 0.518; 95% CI = 0.489 – 0.547] higher among women who had a facility-based delivery compared with the same women had they not had a facility-based delivery. This implies that having had a facility delivery increases the chances of an early PNC check-up by 52%. Overall, these results reveal that four or more ANC visits significantly affect the probability of facility-based delivery and early PNC, and that facility-based delivery significantly affects the probability of early PNC. Results are presented in Table 3.
We found that ANC attendance of 4+ visits was associated with a 12% higher probability of health facility-based delivery compared with the same women had they not attended 4+ ANC visits. We also found that ANC attendance of 4+ visits was associated with a 10% higher probability of an early PNC check-up among women compared with the same women had they not attended 4+ ANC visits. The study also revealed that having a health facility-based delivery was associated with a 52% higher probability of an early PNC check-up compared with the same women had they not had a facility-based delivery.
The literature shows that ANC attendance during pregnancy is positively associated with facility-based delivery [24, 26, 28, 29] and PNC utilization [31, 32, 37], and that facility-based delivery is positively associated with PNC utilization [41, 42], based on conventional regression models. The present study revealed a significant and positive effect of 4+ ANC visits on facility-based delivery and EPNC utilisation, and of facility-based delivery on early PNC utilisation, after matching exposed and unexposed women on observable and significant characteristics within the 2016 UDHS dataset.
The results align with previous studies which highlighted a positive association between appropriate ANC attendance and facility delivery and PNC utilisation [24, 26, 28, 29, 31, 32, 34, 37] and with those carried out in Uganda [11, 13, 62, 63]. Regarding the effect of 4 + ANC visits on facility-based delivery, our results agree with similar studies linking ANC with facility-based delivery in Bangladesh and India that used propensity score matched analysis [43, 58]. This is likely due to the fact that women who attend ANC receive maternal education and are often referred to health facilities for delivery [43].
The study further observed that ANC attendance affects early PNC utilisation and that facility-based delivery also affects early PNC utilisation. Studies using propensity score matched analysis to investigate these effects could not be found in the literature. Overall, the findings of this study confirm the belief that ANC attendance improves the likelihood of facility-based delivery and PNC utilisation, and also that facility-based delivery improves the probability of early PNC use.
However, the results from this study rely on observational data to infer causal relationships between 4+ ANC visits and facility-based delivery, between 4+ ANC visits and early PNC utilisation, and between facility-based delivery and early PNC utilisation. Even though propensity score matching removes bias based on observable characteristics of the women, bias due to unobservable confounders is not accounted for, which may lead to overestimated effects of ANC visits on facility-based delivery and early PNC utilisation and of facility-based delivery on early PNC utilisation [43]. Nevertheless, the use of propensity scores provides a better method for assessing interventions where a randomized controlled trial is impossible or inappropriate. It matches the treated with controls based on observable confounders, which leads to better estimates of the treatment effect, and it ensures covariate balance across groups, leading to less biased estimates from observational data.
Conclusions and Recommendations
The results from the propensity score matched analysis illustrate a significant and positive relationship between 4+ ANC visits and facility-based delivery, between 4+ ANC visits and early postnatal care utilisation, and between facility-based delivery and early postnatal care utilisation among mothers in Uganda. The implementation of policies towards the provision of ANC services (at least four ANC visits) serves as an effective intervention to increase facility-based delivery and, ultimately, early postnatal care utilisation in Uganda.
The datasets generated and/or analyzed during the current study are publicly available in the Demographic Health Survey repository, https://dhsprogram.com/data/available-datasets.cfm.
WHO. WHO recommendations on antenatal care for a positive pregnancy experience. 2016.
Garenne M. Maternal mortality in Africa: Investigating more, acting more. Lancet Glob Heal. 2015;3(7):e346–7. https://doi.org/10.1016/S2214-109X(15)00027-3.
Zhao P, Han X, You L, Zhao Y, Yang L, Liu Y. Maternal health services utilization and maternal mortality in China: A longitudinal study from 2009 to 2016. BMC Pregnancy Childbirth. 2020;20(1):1–10. https://doi.org/10.1186/s12884-020-02900-4.
World Health Organization (WHO), UNICEF, UNFPA, World Bank. Trends in maternal mortality 2010–2015. World Health Organization; 2015. p. 92. [Online]. Available: http://www.who.int/reproductivehealth/publications/monitoring/maternal-mortality2015.
Alkema L, et al. National, regional and global levels and trend in MMR between 1990 and 2015. Lancet. 2016;387(10017):462–74. https://doi.org/10.1016/S0140-6736(15)00838-7.National.
Blencowe H, et al. National, regional, and worldwide estimates of stillbirth rates in 2015, with trends from 2000: A systematic analysis. Lancet Glob Heal. 2016;4(2):e98–108. https://doi.org/10.1016/S2214-109X(15)00275-2.
WHO, UNICEF, UNFPA, World Bank Group, United Nations Population Division. Trends in maternal mortality 2000–2017: estimates by WHO, UNICEF, UNFPA, World Bank Group and the United Nations Population Division. Geneva; 2017. p. 17. [Online]. Available: https://apps.who.int/iris/bitstream/handle/10665/327596/WHO-RHR-19.23-eng.pdf
Rutaremwa G, Wandera SO, Jhamba T, Akiror E, Kiconco A. Determinants of maternal health services utilization in Uganda. BMC Health Serv Res. 2015;15(1):1–8. https://doi.org/10.1186/s12913-015-0943-8.
Ediau M, et al. Trends in antenatal care attendance and health facility delivery following community and health facility systems strengthening interventions in Northern Uganda. BMC Pregnancy Childbirth. 2013;13:189. https://doi.org/10.1186/1471-2393-13-189.
Benova L, et al. Two decades of antenatal and delivery care in Uganda: A cross-sectional study using Demographic and Health Surveys. BMC Health Serv Res. 2018;18(1):1–15. https://doi.org/10.1186/s12913-018-3546-3.
Atusiimire LB, Waiswa P, Atuyambe L, Nankabirwa V, Okuga M. Determinants of facility based–deliveries among urban slum dwellers of Kampala, Uganda. PLoS ONE. 2019;14(4):1–11. https://doi.org/10.1371/journal.pone.0214995.
Namazzi G, et al. Stakeholder analysis for a maternal and newborn health project in Eastern Uganda. BMC Pregnancy Childbirth. 2013;13:58. https://doi.org/10.1186/1471-2393-13-58.
Ndugga P, Namiyonga NK, Sebuwufu D. Determinants of early postnatal care attendance: analysis of the 2016 Uganda demographic and health survey. BMC Pregnancy Childbirth. 2020;20(1):1–14. https://doi.org/10.1186/s12884-020-02866-3.
Kawungezi PC, et al. Attendance and Utilization of Antenatal Care (ANC) Services: Multi-Center Study in Upcountry Areas of Uganda. Open J Prev Med. 2015;05(03):132–42. https://doi.org/10.4236/ojpm.2015.53016.
I. Kisuule et al., "Timing and reasons for coming late for the first antenatal care visit by pregnant women at Mulago hospital , Kampala Uganda," pp. 1–7, 2013.
Poote A, McKenzie-McHarg K. "Antenatal care," in Cambridge Handbook of Psychology, Health and Medicine: Third Edition. 2019. p. 622–3.
Laganà AS, Favilli A, Triolo O, Granese R, Gerli S. Early serum markers of pre-eclampsia: are we stepping forward? J Matern Neonatal Med. 2016;29(18):3019–23. https://doi.org/10.3109/14767058.2015.1113522.
Macedo TCC, et al. Prevalence of preeclampsia and eclampsia in adolescent pregnancy: A systematic review and meta-analysis of 291,247 adolescents worldwide since 1969. Eur J Obstet Gynecol Reprod Biol. 2020;248(March):177–86. https://doi.org/10.1016/j.ejogrb.2020.03.043.
E. Mwebesa, "Multilevel Models for Determinants of Maternal Health Services Utilization in Uganda Using 2016 DHS Data," 2021. [Online]. Available: http://makir.mak.ac.ug/handle/10570/8455.
Ciancimino L, Laganà AS, Chiofalo B, Granese R, Grasso R, Triolo O. Would it be too late? A retrospective case–control analysis to evaluate maternal–fetal outcomes in advanced maternal age. Arch Gynecol Obstet. 2014;290(6):1109–14. https://doi.org/10.1007/s00404-014-3367-5.
J. Kuhnt and S. Vollmer, "Antenatal care services and its implications for vital and health outcomes of children : evidence from 193 surveys in 69 low-income and middle- income countries," pp. 1–7, 2017, doi: https://doi.org/10.1136/bmjopen-2017-017122.
Pervin J, et al. Association of antenatal care with facility delivery and perinatal survival - a population-based study in Bangladesh. BMC Pregnancy Childbirth. 2012;12:1–12. https://doi.org/10.1186/1471-2393-12-111.
Teferra AS, Alemu FM, Woldeyohannes SM. Institutional delivery service utilization and associated factors among mothers who gave birth in the last 12 months in Sekela District, North West of Ethiopia: A community - based cross sectional study. BMC Pregnancy Childbirth. 2012;12:1–11. https://doi.org/10.1186/1471-2393-12-74.
Mochache V, Lakhani A, El-Busaidy H, Temmerman M, Gichangi P. Correlates of facility-based delivery among women of reproductive age from the Digo community residing in Kwale, Kenya. BMC Res Notes. 2018;11(1):4–9. https://doi.org/10.1186/s13104-018-3818-3.
Kamal SMM, Hassan CH, Alam GM. Determinants of institutional delivery among women in Bangladesh. Asia-Pacific J Public Heal. 2015;27(2):NP1372–88. https://doi.org/10.1177/1010539513486178.
Weldemariam S, Kiros A, Welday M. Utilization of institutional delivery service and associated factors among mothers in North West Ethiopian. BMC Res Notes. 2018;11(1):1–6. https://doi.org/10.1186/s13104-018-3295-8.
Feyissa TR, Genemo GA. Determinants of institutional delivery among childbearing age women in Western Ethiopia, 2013: Unmatched case control study. PLoS ONE. 2014;9(5):1–7. https://doi.org/10.1371/journal.pone.0097194.
Eshete T, Legesse M, Ayana M. Utilization of institutional delivery and associated factors among mothers in rural community of Pawe Woreda northwest Ethiopia, 2018. BMC Res Notes. 2019;12(1):1–6. https://doi.org/10.1186/s13104-019-4450-6.
Sadik W, Bayray A, Debie A, Gebremedhin T. Factors associated with institutional delivery practice among women in pastoral community of Dubti district, Afar region, Northeast Ethiopia: A community-based cross-sectional study. Reprod Health. 2019;16(1):1–8. https://doi.org/10.1186/s12978-019-0782-x.
Shahabuddin ASM, De Brouwere V, Adhikari R, Delamou A, Bardaj A, Delvaux T. Determinants of institutional delivery among young married women in Nepal: Evidence from the Nepal Demographic and Health Survey, 2011. BMJ Open. 2017;7:4. https://doi.org/10.1136/bmjopen-2016-012446.
Fekadu GA, Kassa GM, Berhe AK, Muche AA, Katiso NA. The effect of antenatal care on use of institutional delivery service and postnatal care in Ethiopia: A systematic review and meta-analysis. BMC Health Serv Res. 2018;18(1):1–11. https://doi.org/10.1186/s12913-018-3370-9.
Sakeah E, et al. The role of community-based health services in influencing postnatal care visits in the Builsa and the West Mamprusi districts in rural Ghana. BMC Pregnancy Childbirth. 2018;18(1):1–9. https://doi.org/10.1186/s12884-018-1926-7.
Akunga D, Menya D, Kabue M. Determinants of Postnatal Care Use in Kenya. African Popul Stud. 2014;28:3. https://doi.org/10.11564/28-3-638.
Fekadu GA, Ambaw F, Kidanie SA. Facility delivery and postnatal care services use among mothers who attended four or more antenatal care visits in Ethiopia: further analysis of the 2016 demographic and health survey. BMC Pregnancy Childbirth. 2019;19(1):64.
Bwalya BB, Mulenga MC, Mulenga JN. Factors associated with postnatal care for newborns in Zambia: Analysis of the 2013–14 Zambia demographic and health survey. BMC Pregnancy Childbirth. 2017;17(1):1–13. https://doi.org/10.1186/s12884-017-1612-1.
Khanal V, Adhikari M, Karkee R, Gavidia T. Factors associated with the utilisation of postnatal care services among the mothers of Nepal: Analysis of Nepal Demographic and Health Survey 2011. BMC Womens Health. 2014;14(1):1–13. https://doi.org/10.1186/1472-6874-14-19.
Berhe A, et al. Determinants of postnatal care utilization in Tigray, Northern Ethiopia: A community based cross-sectional study. PLoS ONE. 2019;14(8):1–13. https://doi.org/10.1371/journal.pone.0221161.
Singh PK, Rai RK, Alagarajan M, Singh L. Determinants of maternity care services utilization among married adolescents in rural India. PLoS ONE. 2012;7(2):e31666. https://doi.org/10.1371/journal.pone.0031666.
Darega B, Dida N, Tafese F, Ololo S. Institutional delivery and postnatal care services utilizations in Abuna Gindeberet District, West Shewa, Oromiya Region, Central Ethiopia: A Community-based cross sectional study. BMC Pregnancy Childbirth. 2016;16(1):1–7. https://doi.org/10.1186/s12884-016-0940-x.
Paudel D, Nilgar B, Bhandankar M. Determinants of postnatal maternity care service utilization in rural Belgaum of Karnataka, India: A community based cross-sectional study. Int J Med Public Heal. 2014;4(1):96. https://doi.org/10.4103/2230-8598.127167.
Abuka Abebo T, Jember Tesfaye D. Postnatal care utilization and associated factors among women of reproductive age Group in Halaba Kulito Town, Southern Ethiopia. Arch Public Heal. 2018;76(1):1–10. https://doi.org/10.1186/s13690-018-0256-6.
Angore BN, Tufa EG, Bisetegen FS. Determinants of postnatal care utilization in urban community among women in Debre Birhan Town, Northern Shewa, Ethiopia. J Heal Popul Nutr. 2018;37(1):1–9. https://doi.org/10.1186/s41043-018-0140-6.
Ryan BL, Krishnan RJ, Terry A, Thind A. Do four or more antenatal care visits increase skilled birth attendant use and institutional delivery in Bangladesh? A propensity-score matched analysis. BMC Public Health. 2019;19(1):1–6. https://doi.org/10.1186/s12889-019-6945-4.
WHO, "Antenatal care coverage - at least four visits (%)," Glob. Heal. Obs., pp. 21–24, 2021, [Online]. Available: https://www.who.int/data/gho/indicator-metadata-registry/imr-details/80.
Guo S, Fraser MW. Propensity score analysis : statistical methods and applications. 2015.
Yaya S, Gunawardena N, Bishwajit G. Association between intimate partner violence and utilization of facility delivery services in Nigeria: A propensity score matching analysis. BMC Public Health. 2019;19(1):1–8. https://doi.org/10.1186/s12889-019-7470-1.
Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivariate Behav Res. 2011;46(3):399–424. https://doi.org/10.1080/00273171.2011.568786.
Park K, Ewing R, Scheer BC, Ara Khan SS. Travel Behavior in TODs vs. Non-TODs: Using Cluster Analysis and Propensity Score Matching. Transp Res Rec. 2018;2672(6):31–9. https://doi.org/10.1177/0361198118774159.
Qiu H. Complete cytoreductive surgery plus hyperthermic intraperitoneal chemotherapy for gastric cancer with peritoneal metastases: Results of a propensity score matching analysis from France. Cancer Commun. 2019;39(1):7–9. https://doi.org/10.1186/s40880-019-0391-7.
Chan GJ, Stuart EA, Zaman M, Mahmud AA, Baqui AH, Black RE. The effect of intrapartum antibiotics on early-onset neonatal sepsis in Dhaka, Bangladesh: A propensity score matched analysis. BMC Pediatr. 2014;14(1):1–8. https://doi.org/10.1186/1471-2431-14-104.
Chen L, Liu F, Wang B, Wang K. Subxiphoid vs transthoracic approach thoracoscopic surgery for spontaneous pneumothorax: A propensity score-matched analysis. BMC Surg. 2019;19(1):1–4. https://doi.org/10.1186/s12893-019-0503-y.
Rui Q, Titler MG, Shever LL, Kim T. Estimating Effects of Nursing Intervention via Propensity Score Analysis. Nurs Res. 2008;57(6):444–52. https://doi.org/10.1097/NNR.0b013e31818c66f6.
Emaway Altaye D, Karim AM, Betemariam W, Fesseha Zemichael N, Shigute T, Scheelbeek P. Effects of family conversation on health care practices in Ethiopia: A propensity score matched analysis. BMC Pregnancy Childbirth. 2018;18(Suppl 1):372. https://doi.org/10.1186/s12884-018-1978-8.
Uganda Bureau of Statistics (UBOS) and ICF, Uganda Demographic and Health Survey 2016: Key Indicators Report. Kampala, Uganda. 2017.
Lin KF, Wu HF, Huang WC, Tang PL, Wu MT, Wu FZ. Propensity score analysis of lung cancer risk in a population with high prevalence of non-smoking related lung cancer. BMC Pulm Med. 2017;17(1):1–8. https://doi.org/10.1186/s12890-017-0465-8.
Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Matched Sampl Causal Eff. 2006;1083:170–84. https://doi.org/10.1017/CBO9780511810725.016.
Dixit P, Dwivedi LK, Ram F. Estimating the impact of antenatal care visits on institutional delivery in India: A propensity score matching analysis. Health (Irvine Calif). 2013;5(5):862–78. https://doi.org/10.4236/health.2013.55114.
Li M. Using the Propensity Score Method to Estimate Causal Effects: A Review and Practical Guide. Organ Res Methods. 2013;16(2):188–226. https://doi.org/10.1177/1094428112447816.
Wang W, Temsah G, Mallick L. The impact of health insurance on maternal health care utilization: Evidence from Ghana, Indonesia and Rwanda. Health Policy Plan. 2017;32(3):366–75. https://doi.org/10.1093/heapol/czw135.
Garrido MM, et al. Methods for constructing and assessing propensity scores. Health Serv Res. 2014;49(5):1701–20. https://doi.org/10.1111/1475-6773.12182.
Atuhaire R, Atuhaire LK, Wamala R, Nansubuga E. Interrelationships between early antenatal care, health facility delivery and early postnatal care among women in Uganda: a structural equation analysis. Glob Health Action. 2020;13(1):1830463. https://doi.org/10.1080/16549716.2020.1830463.
Bariagaber H, Towongo MF, Ayiga N. Determinants of the disparities in antenatal care and delivery care services in Uganda. Stud Ethno-Medicine. 2016;10(4):411–24. https://doi.org/10.1080/09735070.2016.11905514.
The authors would like to thank the MEASURE Demographic Health Survey (DHS) Program for providing us access to their dataset.
This work was supported through the DELTAS Africa Initiative Grant No. 107754/Z/15/Z-DELTAS Africa SSACAB. The DELTAS Africa Initiative is an independent funding scheme of the African Academy of Sciences (AAS)'s Alliance for Accelerating Excellence in Science in Africa (AESA) and supported by the New Partnership for Africa's Development Planning and Coordinating Agency (NEPAD Agency) with funding from the Wellcome Trust (Grant No. 107754/Z/15/Z) and the UK government. The views expressed in this publication are those of the author(s) and not necessarily those of AAS, NEPAD Agency, Wellcome Trust or the UK government.
Makerere University School of Public Health, Kampala, Uganda
Edson Mwebesa, Joseph Kagaayi, Anthony Ssebagereka, Mary Nakafeero, John M. Ssenkusu, David Guwatudde & Nazarius Mbona Tumwesigye
Edson Mwebesa
Joseph Kagaayi
Anthony Ssebagereka
Mary Nakafeero
John M. Ssenkusu
David Guwatudde
Nazarius Mbona Tumwesigye
EM – Conceptualized the manuscript, wrote initial draft. JK, and AS – Conceptualized the manuscript, methods and analysis. MN, JMS, DG, and NMT – Provided extensive inputs and edits. All authors read and approved the final draft.
Correspondence to Edson Mwebesa.
Ethics approval was not required because this study is a secondary analysis of non-identifiable, publicly available data. Also, owing to the retrospective nature of the data used, no consent for this study was required. The researchers treated the DHS data as confidential, and no effort was made to identify any individual woman interviewed in the survey. We confirm that all methods were carried out in accordance with relevant guidelines and regulations.
Mwebesa, E., Kagaayi, J., Ssebagereka, A. et al. Effect of four or more antenatal care visits on facility delivery and early postnatal care services utilization in Uganda: a propensity score matched analysis. BMC Pregnancy Childbirth 22, 7 (2022). https://doi.org/10.1186/s12884-021-04354-8
Facility-based delivery
Propensity Score Matched Analysis
Submission enquiries: [email protected] | CommonCrawl |
Higgs decay to light (pseudo)scalars in the semi-constrained NMSSM
Shiquan Ma 1,
Kun Wang 1,
Jingya Zhu 1,2,
Center for Theoretical Physics, School of Physics and Technology, Wuhan University, Wuhan 430072, China
School of Physics and Electronics, Henan University, Kaifeng 475004, China
The next-to minimal supersymmetric standard model (NMSSM) with non-universal Higgs masses, i.e., the semi-constrained NMSSM (scNMSSM), extends the minimal supersymmetric standard model (MSSM) by a singlet superfield and assumes universal conditions, except for the Higgs sector. It can not only maintain the simplicity and grace of the fully constrained MSSM and NMSSM and relieve the tension they have been facing since the discovery of the 125-GeV Higgs boson but also allow for an exotic phenomenon wherein the Higgs decays into a pair of light ($10\sim 60\;{\rm{GeV}}$ ) singlet-dominated (pseudo)scalars (hereafter, in this paper, we use "scalar" for both scalars and pseudoscalars, considering pseudoscalars can also be called CP-odd scalars). This condition can be classified into three scenarios according to the identities of the SM-like Higgs and the light scalar: (i) the light scalar is CP-odd, and the SM-like Higgs is $h_2$ ; (ii) the light scalar is CP-odd, and the SM-like Higgs is $h_1$ ; and (iii) the light scalar is CP-even, and the SM-like Higgs is $h_2$ . In this work, we compare the three scenarios, checking the interesting parameter regions that lead to the scenarios, the mixing levels of the doublets and singlets, the tri-scalar coupling between the SM-like Higgs and a pair of light scalars, the branching ratio of Higgs decay to the light scalars, and sensitivities in the detection of the exotic decay at the HL-LHC and future lepton colliders such as CEPC, FCC-ee, and ILC. Finally, several interesting conclusions are drawn, which are useful for understanding the different delicate mechanisms of the exotic decay and designing colliders in the future.
Keywords: Higgs, supersymmetry phenomenology, NMSSM
[1] G. Aad et al. (ATLAS), Phys. Lett. B 716, 1-29 (2012), arXiv:1207.7214[hep-ex doi: 10.1016/j.physletb.2012.08.020
[2] S. Chatrchyan et al. (CMS), Phys. Lett. B 716, 30-61 (2012), arXiv:1207.7235[hep-ex doi: 10.1016/j.physletb.2012.08.021
[3] G. Aad et al. (ATLAS and CMS), JHEP 08, 045 (2016), arXiv:1606.02266[hep-ex
[4] A. M. Sirunyan et al. (CMS), Eur. Phys. J. C 79(5), 421 (2019), arXiv:1809.10733[hep-ex doi: 10.1140/epjc/s10052-019-6909-y
[5] G. Aad et al. (ATLAS), Phys. Rev. D 101(1), 012002 (2020), arXiv:1909.02845[hep-ex doi: 10.1103/PhysRevD.101.012002
[6] [CMS], CMS-PAS-HIG-19-005
[7] A. Sopczak (ATLAS and CMS), PoS FFK2019, 006 (2020), arXiv:2001.05927[hep-ex
[8] R. Barate et al. (LEP Working Group for Higgs boson searches, ALEPH, DELPHI, L3 and OPAL), Phys. Lett. B 565, 61-75 (2003), arXiv:hep-ex/0306033[hep-ex doi: 10.1016/S0370-2693(03)00614-2
[9] A. M. Sirunyan et al. (CMS), JHEP 11, 161 (2018), arXiv:1808.01890[hep-ex
[10] [ATLAS], ATLAS-CONF-2019-036
[11] A. M. Sirunyan et al. (CMS), Phys. Lett. B 785, 462 (2018), arXiv:1805.10191[hep-ex doi: 10.1016/j.physletb.2018.08.057
[12] M. Aaboud et al. (ATLAS), Phys. Lett. B 790, 1-21 (2019), arXiv:1807.00539[hep-ex doi: 10.1016/j.physletb.2018.10.073
[13] A. M. Sirunyan et al. (CMS), Phys. Lett. B 795, 398-423 (2019), arXiv:1812.06359[hep-ex doi: 10.1016/j.physletb.2019.06.021
[14] A. M. Sirunyan et al. (CMS), JHEP 08, 139 (2020), arXiv:2005.08694[hep-ex
[16] A. M. Sirunyan et al. (CMS), Phys. Lett. B 800, 135087 (2020), arXiv:1907.07235[hep-ex doi: 10.1016/j.physletb.2019.135087
[17] V. Khachatryan et al. (CMS), JHEP 10, 076 (2017), arXiv:1701.02032[hep-ex
[CMS], CMS-PAS-HIG-16-035.
[20] M. Aaboud et al. (ATLAS), JHEP 06, 166 (2018), arXiv:1802.03388[hep-ex
[22] M. Aaboud et al. (ATLAS), Phys. Lett. B 782, 750-767 (2018), arXiv:1803.11145[hep-ex doi: 10.1016/j.physletb.2018.06.011
[CMS], CMS-PAS-FTR-18-035
[25] F. An, Y. Bai, C. Chen et al., Chin. Phys. C 43(4), 043002 (2019), arXiv:1810.09037[hep-ex doi: 10.1088/1674-1137/43/4/043002
[26] Z. Liu, L. T. Wang, and H. Zhang, Chin. Phys. C 41(6), 063102 (2017), arXiv:1612.09284[hep-ph doi: 10.1088/1674-1137/41/6/063102
[27] D. Curtin, R. Essig, S. Gori et al., Phys. Rev. D 90(7), 075004 (2014), arXiv:1312.4992[hep-ph doi: 10.1103/PhysRevD.90.075004
[28] R. Dermisek and J. F. Gunion, Phys. Rev. D 73, 111701 (2006), arXiv:hep-ph/0510322[hep-ph doi: 10.1103/PhysRevD.73.111701
[30] R. Dermisek, J. F. Gunion, and B. McElrath, Phys. Rev. D 76, 051105 (2007), arXiv:hep-ph/0612031[hep-ph doi: 10.1103/PhysRevD.76.051105
[31] M. Carena, T. Han, G. Y. Huang et al., JHEP 04, 092 (2008), arXiv:0712.2466[hep-ph
[32] K. Cheung, J. Song, and Q. S. Yan, Phys. Rev. Lett. 99, 031801 (2007), arXiv:hep-ph/0703149[hep-ph doi: 10.1103/PhysRevLett.99.031801
[33] J. Cao, F. Ding, and C. Han , JHEP 11, 018 (2013), arXiv:1309.4939[hep-ph
[34] X. F. Han, L. Wang, J. M. Yang et al., Phys. Rev. D 87(5), 055004 (2013), arXiv:1301.0090[hep-ph doi: 10.1103/PhysRevD.87.055004
[35] J. Cao, Y. He, P. Wu et al., JHEP 01, 150 (2014), arXiv:1311.6661[hep-ph
[36] L. Liu, H. Qiao, K. Wang et al., Chin. Phys. C 43(2), 023104 (2019), arXiv:1812.00107[hep-ph doi: 10.1088/1674-1137/43/2/023104
[37] L. Wang, X. F. Han, and B. Zhu, Phys. Rev. D 98(3), 035024 (2018), arXiv:1801.08317[hep-ph doi: 10.1103/PhysRevD.98.035024
[38] E. J. Chun, S. Dwivedi, T. Mondal et al., Phys. Lett. B 774, 20-25 (2017), arXiv:1707.07928[hep-ph doi: 10.1016/j.physletb.2017.09.037
[39] J. Bernon, J. F. Gunion, Y. Jiang et al., Phys. Rev. D 91(7), 075019 (2015), arXiv:1412.3385[hep-ph doi: 10.1103/PhysRevD.91.075019
[40] I. Engeln, M. Mühlleitner, and J. Wittbrodt, Comput. Phys. Commun. 234, 256-262 (2019), arXiv:1805.00966[hep-ph doi: 10.1016/j.cpc.2018.07.020
[41] U. Haisch, J. F. Kamenik, A. Malinauskas et al., JHEP 03, 178 (2018), arXiv:1802.02156[hep-ph
[42] S. Liu, Y. L. Tang, C. Zhang et al., Eur. Phys. J. C 77(7), 457 (2017), arXiv:1608.08458[hep-ph doi: 10.1140/epjc/s10052-017-5012-5
[43] S. F. King, M. Mühlleitner, R. Nevzorov et al., Nucl. Phys. B 870, 323-352 (2013), arXiv:1211.5074[hep-ph doi: 10.1016/j.nuclphysb.2013.01.020
[44] R. Benbrik, M. Gomez Bock, S. Heinemeyer et al., Eur. Phys. J. C 72, 2171 (2012), arXiv:1207.1096[hep-ph doi: 10.1140/epjc/s10052-012-2171-2
[45] J. J. Cao, Z. X. Heng, J. M. Yang et al., JHEP 03, 086 (2012), arXiv:1202.5821[hep-ph
[46] J. Cao, Z. Heng, J. M. Yang et al., JHEP 10, 079 (2012), arXiv:1207.3698[hep-ph
[47] Z. Kang, J. Li, and T. Li, JHEP 11, 024 (2012), arXiv:1201.5305[hep-ph
[48] S. F. King, M. Muhlleitner, and R. Nevzorov, Nucl. Phys. B 860, 207-244 (2012), arXiv:1201.2671[hep-ph doi: 10.1016/j.nuclphysb.2012.02.010
[49] U. Ellwanger, JHEP 03, 044 (2012), arXiv:1112.3548[hep-ph
[50] U. Ellwanger, A. Florent, and D. Zerwas, JHEP 01, 103 (2011), arXiv:1011.0931[hep-ph
[51] D. E. Lopez-Fogliani, L. Roszkowski, R. Ruiz de Austri et al., Phys. Rev. D 80, 095013 (2009), arXiv:0906.4911[hep-ph doi: 10.1103/PhysRevD.80.095013
[52] G. Belanger, C. Hugonie, and A. Pukhov, JCAP 01, 023 (2009), arXiv:0811.3224[hep-ph
[53] A. Djouadi, U. Ellwanger, and A. M. Teixeira, JHEP 04, 031 (2009), arXiv:0811.2699[hep-ph
[54] U. Ellwanger, AIP Conf. Proc. 1078(1), 73-78 (2009), arXiv:0809.0779[hep-ph
[55] C. Hugonie, G. Belanger, and A. Pukhov, JCAP 11, 009 (2007), arXiv:0707.0628[hep-ph
[56] K. Kowalska, S. Munir, L. Roszkowski et al., Phys. Rev. D 87, 115010 (2013), arXiv:1211.1693[hep-ph doi: 10.1103/PhysRevD.87.115010
[57] J. F. Gunion, Y. Jiang, and S. Kraml, Phys. Lett. B 710, 454-459 (2012), arXiv:1201.0982[hep-ph doi: 10.1016/j.physletb.2012.03.027
[58] J. Cao, Z. Heng, D. Li et al., Phys. Lett. B 710, 665-670 (2012), arXiv:1112.4391[hep-ph doi: 10.1016/j.physletb.2012.03.052
[59] J. Ellis and K. A. Olive, Eur. Phys. J. C 72, 2005 (2012), arXiv:1202.3262[hep-ph doi: 10.1140/epjc/s10052-012-2005-2
[60] P. Bechtle, J. E. Camargo-Molina, K. Desch et al., Eur. Phys. J. C 76(2), 96 (2016), arXiv:1508.05951[hep-ph doi: 10.1140/epjc/s10052-015-3864-0
[61] P. Athron et al. (GAMBIT), Eur. Phys. J. C 77(12), 824 (2017), arXiv:1705.07935[hep-ph doi: 10.1140/epjc/s10052-017-5167-0
[62] F. Wang, K. Wang, J. M. Yang et al., JHEP 12, 041 (2018), arXiv:1808.10851[hep-ph
[63] D. Das, U. Ellwanger and A. M. Teixeira, JHEP 04, 117 (2013), arXiv:1301.7584[hep-ph
[64] U. Ellwanger and C. Hugonie, JHEP 08, 046 (2014), arXiv:1405.6647[hep-ph
[65] K. Wang, F. Wang, J. Zhu et al., Chin. Phys. C 42(10), 103109-103109 (2018), arXiv:1811.04435[hep-ph doi: 10.1088/1674-1137/42/10/103109
[66] K. Nakamura and D. Nomura, Phys. Lett. B 746, 396-405 (2015), arXiv:1501.05058[hep-ph doi: 10.1016/j.physletb.2015.05.028
[67] K. Wang and J. Zhu, JHEP 06, 078 (2020), arXiv:2002.05554[hep-ph
[68] K. Wang and J. Zhu, Phys. Rev. D 101(9), 095028 (2020), arXiv:2003.01662[hep-ph doi: 10.1103/PhysRevD.101.095028
[69] K. Wang and J. Zhu, Chin. Phys. C 44(6), 061001 (2020), arXiv:1911.08319[hep-ph doi: 10.1088/1674-1137/44/6/061001
[70] U. Ellwanger and C. Hugonie, Eur. Phys. J. C 78(9), 735 (2018), arXiv:1806.09478[hep-ph doi: 10.1140/epjc/s10052-018-6204-3
[71] U. Ellwanger, JHEP 02, 051 (2017), arXiv:1612.06574[hep-ph
[72] M. Maniatis, Int. J. Mod. Phys. A 25, 3505-3602 (2010), arXiv:0906.0777[hep-ph doi: 10.1142/S0217751X10049827
[73] M. Carena, H. E. Haber, I. Low et al., Phys. Rev. D 93(3), 035013 (2016), arXiv:1510.09137[hep-ph doi: 10.1103/PhysRevD.93.035013
[74] U. Ellwanger, J. F. Gunion, and C. Hugonie, JHEP 02, 066 (2005), arXiv:hep-ph/0406215[hep-ph
[75] U. Ellwanger and C. Hugonie, Comput. Phys. Commun. 175, 290-303 (2006), arXiv:hep-ph/0508022[hep-ph doi: 10.1016/j.cpc.2006.04.004
[76] P. Bechtle, O. Brein, S. Heinemeyer et al., Comput. Phys. Commun. 181, 138-167 (2010), arXiv:0811.4169[hep-ph doi: 10.1016/j.cpc.2009.09.003
[77] P. Bechtle, O. Brein, S. Heinemeyer et al., Comput. Phys. Commun. 182, 2605-2631 (2011), arXiv:1102.1898[hep-ph doi: 10.1016/j.cpc.2011.07.015
[78] P. Bechtle, O. Brein, S. Heinemeyer et al., Eur. Phys. J. C 74(3), 2693 (2014), arXiv:1311.0055[hep-ph doi: 10.1140/epjc/s10052-013-2693-2
[81] E. Aprile et al. (XENON), Phys. Rev. Lett. 121(11), 111302 (2018), arXiv:1805.12562[astro-ph.CO doi: 10.1103/PhysRevLett.121.111302
Corresponding author: Jingya Zhu, [email protected]
I. INTRODUCTION
In 2012, a new boson of approximately $ 125 \; {{\rm{GeV}}} $ was discovered at the LHC [1,2], which in later years was consistently verified to be the SM-like Higgs boson with an increasing amount of data [3-7]. However, some other questions remain, e.g., whether another scalar survives in the low-mass region, and whether there is exotic Higgs decay into light scalars. Before the LHC, owing to its low integrated luminosity (IL), the LEP did not exclude a light scalar with a smaller production rate than the SM-like Higgs [8]. The CMS (ATLAS) collaboration searched for resonances directly in the $ bj\mu\mu $ channel in the $ 10\!\sim\!60 $ ($ 20\!\sim\!70 $) GeV range [9,10]. The two collaborations also searched for the exotic Higgs decay to light resonances in final states with $ b\bar{b}\tau^+\tau^- $ [11], $ b\bar{b}\mu^+\mu^- $ [12,13], $ \mu^+\mu^-\tau^+\tau^- $ [14-16], $ 4\tau $ [16,17], $ 4\mu $ [18-20], $ 4b $ [21], $ \gamma\gamma gg $ [22], and $ 4\gamma $ [23]. However, there is still ample room left for new physics in this exotic decay. For example, in the $ b\bar{b}\tau^+\tau^- $ channel reported by the CMS collaboration [11], the 95% exclusion limit is at least 3% in the $ 20\sim60 \; {{\rm{GeV}}} $ region. According to simulations, however, the future limits could reach 0.3% at the High-Luminosity program of the Large Hadron Collider (HL-LHC) [24], 0.04% at the Circular Electron Positron Collider (CEPC), and 0.02% at the Future Circular Collider in $ e^+e^- $ collisions (FCC-ee) [25, 26].
This exotic Higgs decay to light scalars can arise in many theories beyond the Standard Model (BSM) [27], e.g., the next-to minimal supersymmetric standard model (NMSSM), the simplest little Higgs model, the minimal dilaton model, the two-Higgs-doublet model, the next-to two-Higgs-doublet model, the singlet extension of the SM, etc. Several phenomenological studies of the exotic decay in these models exist [28-42].
The NMSSM extends the MSSM by a singlet superfield $ \hat{S} $, thereby solving the $ \mu $-problem and relaxing the fine-tuning tension resulting from the discovery of the Higgs in 2012 [43-49]. However, as supersymmetric (SUSY) models, the MSSM and NMSSM both suffer from a huge parameter space of over 100 dimensions. In most studies, some parameters are manually assumed equal at low-energy scales, leaving only about 10 free parameters, without considering the Renormalization Group Equations (RGEs) running from high scales [43-49]. In Ref. [33], decay of a Higgs boson of $ 125 \; {{\rm{GeV}}} $ into light scalars was studied in the NMSSM with parameters set in this way. In contrast, in constrained models, congeneric parameters are assumed universal at the grand unified theory (GUT) scale, leaving only four free parameters in the fully-constrained MSSM (CMSSM) and four or five in the fully-constrained NMSSM (CNMSSM) [50-57]. However, it was found that the CMSSM and CNMSSM were nearly excluded when considering the $ 125 \; {{\rm{GeV}}} $ Higgs data, the high mass bounds on the gluino and the squarks of the first two generations, the muon g-2, and the dark matter relic density and detections [56-62].
The semi-constrained NMSSM (scNMSSM) relaxes the unified conditions of the Higgs sector at the GUT scale; thus, it is also called the NMSSM with non-universal Higgs masses (NUHM) [63-66]. It not only keeps the simplicity and grace of the CMSSM and CNMSSM but also relaxes the tension that they have faced since the discovery of the SM-like Higgs [67]. Moreover, it makes predictions about interesting light particles such as a singlino-like neutralino [68] and light Higgsino-dominated NLSPs [69-71]. In this work, we study the scenarios in the scNMSSM with a light scalar of $ 10\sim60 \; {{\rm{GeV}}} $ and the prospects for detecting the exotic Higgs decay into a pair of such scalars.
The remainder of this paper is organized as follows. In Sec. II, we briefly introduce the model and provide some related analytic formulas. In Sec. III, we present the numerical calculations and discussions in detail. Finally, we draw our conclusions in Sec. IV.
II. THE MODEL AND ANALYTIC CALCULATIONS
The superpotential of NMSSM, with $ \mathbb{Z}_3 $ symmetry, is written as [72]
$W = W_{\rm{Yuk}}+\lambda \hat{S} \hat{H}_{u}\cdot\hat{H}_{d}+\frac{1}{3}\kappa \hat{S}^3\,, $
from which the so-called F-terms of the Higgs potential can be derived as
$ V_{\rm{F}} = |\lambda S|^2(|H_u|^2+|H_d|^2)+|\lambda H_u\cdot H_d+\kappa S^2|^2 \,. $
The D-terms are the same as in the MSSM
$ V_{\rm{D}} = \frac{1}{8}\left(g_1^2+g_2^2\right)\left(|H_d|^2-|H_u|^2\right)^2 +\frac{1}{2}g_2^2\left|H^{\dagger}_u H_d\right|^2 \,, $
where $ g_1 $ and $ g_2 $ are the gauge couplings of $ U(1)_Y $ and $ SU(2)_L $, respectively. Without specifying the SUSY-breaking mechanism, the soft-breaking terms can be added to the Lagrangian by hand at a low-energy scale. In the Higgs sector, the terms corresponding to the superpotential above are
$ \begin{aligned}[b] V_{\rm{soft}} =& M^2_{H_u}|H_u|^2+M^2_{H_d}|H_d|^2+M^2_S|S|^2 \\ &+\left(\lambda A_{\lambda}SH_u\cdot H_d+\frac{1}{3}\kappa A_{\kappa}S^3+{\rm h.c.}\right) \,, \end{aligned} $
where $ M^2_{H_u},\, M^2_{H_d},\, M^2_{S} $ are the soft masses of the Higgs fields $ H_u,\, H_d,\,S $, respectively, and $ A_\lambda,\, A_\kappa $ are the trilinear couplings at the $ M_{\rm{SUSY}} $ scale. In the scNMSSM, however, the SUSY breaking is mediated by gravity; thus, the soft parameters at the $ M_{\rm{SUSY}} $ scale are obtained by running from the GUT scale according to the RGEs.
At electroweak symmetry breaking, $ H_u $, $ H_d $, and $ S $ get their vacuum expectation values (VEVs) $ v_u $, $ v_d $, and $ v_s $, respectively, with $ \tan\beta\equiv v_u/v_d $, $ v \equiv \sqrt{v_u^2+v_d^2}\approx 174 \; {{\rm{GeV}}} $, and $ \mu_{\rm{eff}}\equiv \lambda v_s $. Then, they can be written as
$ \begin{aligned}[b] &H_u = \left( \begin{array}{c} H_u^+ \\ v_u+\dfrac{\phi_1+{\rm i}\varphi_1}{\sqrt{2}} \\ \end{array} \right), \\& H_d = \left( \begin{array}{c} v_d+\dfrac{\phi_2+{\rm i}\varphi_2}{\sqrt{2}} \\ H_d^- \\ \end{array} \right), \quad \\ &S = v_s+\frac{\phi_3+{\rm i}\varphi_3}{\sqrt{2}}. \end{aligned} $
The Lagrangian consists of the F-terms, D-terms, and soft-breaking terms; therefore, with the above equations, one can obtain the tree-level squared-mass matrices of the CP-even Higgses in the basis $ \{\phi_1, \phi_2, \phi_3\} $ and of the CP-odd Higgses in the basis $ \{\varphi_1, \varphi_2, \varphi_3\} $ [72]. After diagonalizing the mass-squared matrices, including loop corrections [73], one obtains the mass-eigenstate Higgses (three CP-even ones $ h_{1,2,3} $ and two CP-odd ones $ a_{1,2} $, in mass order) from the gauge eigenstates ($ \phi_{1,2,3} $ and $ \varphi_{1,2,3} $ in Eq. (5), with $ 1,2,3 $ corresponding to the up-type, down-type, and singlet states, respectively):
$ \quad h_i = S_{ik}\, \phi_k, \quad a_j = P_{jk}\, \varphi_k \,, $
where $ S_{ik}, P_{jk} $ are the corresponding components of $ \phi_k $ in $ h_i $ and $ \varphi_k $ in $ a_j $, respectively, with $ i,k = 1,2,3 $ and $ j = 1,2 $.
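As a rough numerical illustration of this diagonalization step (this is not how it is done in practice, where NMSSMTools computes the full loop-corrected matrices), the rotation matrix $ S_{ik} $ and the mass eigenvalues can be extracted from a symmetric mass-squared matrix with a standard eigenvalue routine; the matrix entries below are placeholder numbers, not values derived from the superpotential:

```python
import numpy as np

# Placeholder CP-even mass-squared matrix in the basis {phi_1, phi_2, phi_3} (GeV^2);
# in the model it follows from V_F + V_D + V_soft plus loop corrections.
M2 = np.array([[ 16000.,   1200.,    900.],
               [  1200., 640000.,   2500.],
               [   900.,   2500.,   2000.]])

# eigh returns eigenvalues in ascending order; the rows of S = vecs.T then play
# the role of S_{ik} in h_i = S_{ik} phi_k (the CP-odd P_{jk} is obtained analogously).
m2, vecs = np.linalg.eigh(M2)
S = vecs.T

for i, (m2_i, row) in enumerate(zip(m2, S), start=1):
    print(f"h_{i}: m = {np.sqrt(m2_i):6.1f} GeV, (S_i1, S_i2, S_i3) = {np.round(row, 3)}")
```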
In the scNMSSM, the SM-like Higgs (hereafter, uniformly denoted as $ h $) can be CP-even $ h_1 $ or $ h_2 $, and the light scalar (hereafter uniformly denoted as $ s $) can be CP-odd $ a_1 $ or CP-even $ h_1 $. Then, the couplings between the SM-like Higgs and a pair of light scalars $ C_{hss} $ can be written at tree level as [74]
$ \begin{aligned}[b] C_{h_2h_1h_1}^{\rm{tree}} \! = \!& \frac{\lambda^2}{\sqrt{2}} \left[v_u\left(\Pi^{122}_{211}+\Pi^{133}_{211}\right) \right.\\ & \left. +v_d\left(\Pi^{211}_{211}+\Pi^{233}_{211}\right) +v_s\left(\Pi^{311}_{211}+\Pi^{322}_{211}\right) \right] \\ &-\frac{\lambda\kappa}{\sqrt{2}} \left(v_u\Pi^{323}_{211}+v_d\Pi^{313}_{211}+2v_s\Pi^{123}_{211}\right) \\ &+\sqrt{2}\kappa^2v_s \Pi^{333}_{211}-\frac{\lambda A_{\lambda}}{\sqrt{2}}\Pi^{123}_{211}+\frac{\kappa A_{\kappa}}{3\sqrt{2}}\Pi^{333}_{211} \\ &+\frac{g^2}{2\sqrt{2}} \left[v_u \left(\Pi^{111}_{211}-\Pi^{122}_{211}\right)-v_d \left(\Pi^{211}_{211}-\Pi^{222}_{211}\right) \right] \,, \end{aligned} $
$ \Pi^{ijk}_{211} = 2S_{2i}S_{1j}S_{1k}+2S_{1i}S_{2j}S_{1k}+2S_{1i}S_{1j}S_{2k} \,; $
$ \begin{aligned}[b] C_{h_a a_1a_1}^{\rm{tree}} = &\frac{\lambda^2}{\sqrt{2}} \left[v_u \left(\Pi^{122}_{a11}+\Pi^{133}_{a11}\right) \right. \\ & \left. +v_d\left(\Pi^{211}_{a11}+\Pi^{233}_{a11}\right) +v_s\left(\Pi^{311}_{a11}+\Pi^{322}_{a11}\right) \right] \\ &+\frac{\lambda\kappa}{\sqrt{2}} \left[v_u\left(\Pi^{233}_{a11}-2\Pi^{323}_{a11}\right)+v_d \left(\Pi^{133}_{a11}-2\Pi^{313}_{a11}\right) \right. \\ &\left.+2v_s \left(\Pi^{312}_{a11}-\Pi^{123}_{a11}-\Pi^{213}_{a11}\right) \right] +\sqrt{2}\kappa^2 v_s\Pi^{333}_{a11} \\ &+\frac{\lambda A_{\lambda}}{\sqrt{2}}\left(\Pi^{123}_{a11}+\Pi^{213}_{a11}+\Pi^{312}_{a11}\right)-\frac{\kappa A_{\kappa}}{3\sqrt{2}}\Pi^{333}_{a11} \\ &+\frac{g^2}{2\sqrt{2}} \left[v_u\left(\Pi^{111}_{a11}-\Pi^{122}_{a11}\right)-v_d \left(\Pi^{211}_{a11}-\Pi^{222}_{a11}\right) \right] \,, \end{aligned} $
where $ \Pi^{ijk}_{a11} = 2S_{ai}P_{1j}P_{1k} $, and $ a = 1,2 $. Thus, the width of Higgs decay to a pair of light scalars can be given by
$ \Gamma(h\to s s) = \frac{1}{32\pi m_{h}}C^2_{hss}\left({1-\frac{4m^2_{s}}{m^2_h}}\right)^{1/2} \,. $
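For orientation, the width formula above can be evaluated directly; the coupling and masses used below are illustrative inputs only, not scan results:

```python
import math

def gamma_h_to_ss(C_hss, m_h=125.0, m_s=30.0):
    """Tree-level width Gamma(h -> ss) in GeV, with C_hss and the masses in GeV."""
    if 2.0 * m_s >= m_h:
        return 0.0
    beta = math.sqrt(1.0 - 4.0 * m_s**2 / m_h**2)
    return C_hss**2 / (32.0 * math.pi * m_h) * beta

# A tri-scalar coupling of a few GeV gives a width of order an MeV,
# i.e., comparable to the SM-like Higgs total width of about 4 MeV.
print(f"Gamma(h -> ss) = {gamma_h_to_ss(3.0) * 1e3:.2f} MeV")
```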
Then, the light scalars decay to light SM particles, such as a pair of light quarks or leptons, gluons, or photons. The widths of light scalar decay to quarks and charged leptons at tree level are given by
$ \Gamma(s\to l^+l^-) = \frac{\sqrt{2}G_F}{8\pi}m_s m^2_l \left({1-\frac{4m^2_l}{m^2_s}}\right)^{p/2} \,,$
$ \Gamma(s\to q \bar{q}) = \frac{N_c G_F}{4\sqrt{2}\pi}C^2_{s q q}m_s m^2_q \left({1-\frac{4m^2_q}{m^2_s}}\right)^{p/2} \,, $
where $ p = 1 $ for CP-odd $ s $ and $ p = 3 $ for CP-even $ s $. The couplings between the light scalar and the up-type or down-type quarks are given by
$ C_{h_1t_L t^c_R} = \frac{m_t}{\sqrt{2}v \sin\beta}S_{11} \,, $
$ C_{h_1b_L b^c_R} = \frac{m_b}{\sqrt{2}v \cos\beta}S_{12} \,, $
$ C_{a_1t_L t^c_R} = {\rm i}\frac{m_t}{\sqrt{2}v \sin\beta}P_{11} \,, $
$ C_{a_1b_L b^c_R} = {\rm i}\frac{m_b}{\sqrt{2}v \cos\beta}P_{12} \,. $
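The width formulas above reproduce the familiar pattern in which a light scalar above the $ b\bar{b} $ threshold decays mostly to $ b\bar{b} $, with $ \tau^+\tau^- $ subleading. A minimal sketch is given below; the reduced couplings `C_sqq`/`C_sll` and the fermion masses are illustrative assumptions, and loop-induced modes (gluons, photons) as well as QCD and running-mass corrections are ignored:

```python
import math

G_F = 1.1663787e-5   # Fermi constant in GeV^-2
N_C = 3              # color factor for quarks

def gamma_s_ll(m_s, m_l, cp_even, C_sll=1.0):
    """Width of s -> l+ l-; p = 3 (CP-even) or 1 (CP-odd). C_sll is an assumed reduced coupling."""
    if 2.0 * m_l >= m_s:
        return 0.0
    p = 3 if cp_even else 1
    return (math.sqrt(2.0) * G_F / (8.0 * math.pi) * C_sll**2 * m_s * m_l**2
            * (1.0 - 4.0 * m_l**2 / m_s**2) ** (p / 2.0))

def gamma_s_qq(m_s, m_q, cp_even, C_sqq=1.0):
    """Width of s -> q qbar with reduced coupling C_sqq."""
    if 2.0 * m_q >= m_s:
        return 0.0
    p = 3 if cp_even else 1
    return (N_C * G_F / (4.0 * math.sqrt(2.0) * math.pi) * C_sqq**2 * m_s * m_q**2
            * (1.0 - 4.0 * m_q**2 / m_s**2) ** (p / 2.0))

# Illustrative CP-odd scalar of 40 GeV with equal reduced couplings to b, tau, mu:
widths = {"bb":     gamma_s_qq(40.0, 4.18,   cp_even=False),
          "tautau": gamma_s_ll(40.0, 1.777,  cp_even=False),
          "mumu":   gamma_s_ll(40.0, 0.1057, cp_even=False)}
total = sum(widths.values())
print({k: round(w / total, 3) for k, w in widths.items()})
```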
III. NUMERICAL CALCULATIONS AND DISCUSSIONS
In this work, we first scan the following parameter space with NMSSMTOOLS-5.5.2 [74,75]:
$ \begin{aligned}[b] &0\!<\!\lambda\!<\!0.7, \qquad 0\!<\!\kappa\!<\!0.7, \qquad 1\!<\!\tan\!\beta\!<\!30, \\ &100\!<\!\mu_{\rm{eff}}\!<\!200 \; {{\rm{GeV}}}, \qquad 0\!<\!M_0\!<\!500 \; {{\rm{GeV}}}, \\ &0.5\!<\!M_{1/2}\!<\!2 \; {\rm{TeV}}, \qquad |A_0|,\, |A_{\lambda}|,\, |A_{\kappa}|\!<\!10 \; {\rm{TeV}} \,, \qquad \end{aligned} $
where we choose small $ \mu_{\rm{eff}} $ to obtain low fine-tuning, small $ M_0 $ to obtain a large muon g-2, and moderate $ M_{1/2} $ to satisfy both the large muon g-2 and the high gluino-mass bounds. The ranges of the other parameters are chosen to be wide in order to cover all scenarios with a low-mass scalar and the exotic Higgs decay.
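The spectrum calculation, the RGE running, and the application of the constraints listed below are all handled by NMSSMTools; the snippet below only sketches how flat random points could be drawn from the ranges above, and `passes_constraints` is a purely hypothetical placeholder for that external step:

```python
import random

# Input ranges of the scan (GeV unless noted); lambda, kappa, tan(beta) are dimensionless.
RANGES = {
    "lambda": (0.0, 0.7),      "kappa":  (0.0, 0.7),      "tan_beta": (1.0, 30.0),
    "mu_eff": (100.0, 200.0),  "M0":     (0.0, 500.0),    "M12":      (500.0, 2000.0),
    "A0":     (-1.0e4, 1.0e4), "A_lam":  (-1.0e4, 1.0e4), "A_kap":    (-1.0e4, 1.0e4),
}

def draw_point(rng=random):
    """Draw one flat random point from the nine-dimensional parameter space."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

def passes_constraints(point):
    """Hypothetical placeholder: write the point to an NMSSMTools input file,
    run the spectrum generator, and check the Higgs/sparticle/DM constraints."""
    raise NotImplementedError

print(draw_point())
```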
The constraints we imposed in our scan include the following: (i) an SM-like Higgs of $ 123\!\sim\!127 \; {{\rm{GeV}}} $, with signal strengths and couplings satisfying the current Higgs data [3-7]; (ii) search results for exotic and invisible decays of the SM-like Higgs, and for Higgs-like resonances in other mass regions, checked with HIGGSBOUNDS-5.7.1 [76-78]; (iii) the muon g-2 constraint, applied as in Ref. [67]; (iv) mass bounds on the gluino and the first-two-generation squarks above $ 2 \; {\rm{TeV}} $; (v) search results for electroweakinos in multilepton channels [79]; (vi) the dark matter relic density $ \Omega h^2 $ below 0.131 [80], and the dark matter-nucleon scattering cross sections below the upper limits from direct searches [81,82]; and (vii) the theoretical constraints of vacuum stability and the Landau pole.
After imposing these constraints, the surviving samples can be categorized into three scenarios:
• Scenario I: $ h_2 $ is the SM-like Higgs, and the light scalar $ a_1 $ is CP-odd;
• Scenario II: $ h_1 $ is the SM-like Higgs, and the light scalar $ a_1 $ is CP-odd;
• Scenario III: $ h_2 $ is the SM-like Higgs, and the light scalar $ h_1 $ is CP-even.
In Table 1, we list the ranges of the parameters and light-particle masses in the three scenarios. From the table, one can see that the parameter ranges are nearly the same except for $ \lambda $, $ \kappa $, and $ A_\kappa $, but the mass spectra of the light particles are very different.
| | Scenario I | Scenario II | Scenario III |
| --- | --- | --- | --- |
| $ \lambda $ | $ 0\sim0.58 $ | $ 0\sim0.24 $ | $ 0\sim0.57 $ |
| $ \kappa $ | $ 0\sim0.21 $ | $ 0\sim0.67 $ | $ 0\sim0.36 $ |
| $ \tan\beta $ | $ 14\sim27 $ | $ 10\sim28 $ | $ 13\sim28 $ |
| $ \mu_{\rm{eff}}/{\rm{GeV}} $ | $ 103\sim200 $ | $ 102\sim200 $ | $ 102\sim200 $ |
| $ M_0/{\rm{GeV}} $ | $ 0\sim500 $ | $ 0\sim500 $ | $ 0\sim500 $ |
| $ M_{1/2}/{\rm{TeV}} $ | $ 1.06\sim1.47 $ | $ 1.04\sim1.44 $ | $ 1.05\sim1.47 $ |
| $ A_0/{\rm{TeV}} $ | $ -2.8\sim0.2 $ | $ -3.2\sim-1.0 $ | $ -2.8\sim0.6 $ |
| $ A_{\lambda}(M_{\rm{GUT}})/{\rm{TeV}} $ | $ 1.3\sim9.4 $ | $ 0.1\sim10 $ | $ 1.1\sim9.8 $ |
| $ A_{\kappa}(M_{\rm{GUT}})/{\rm{TeV}} $ | $ -0.02\sim5.4 $ | $ -0.02\sim0.9 $ | $ -0.7\sim5.7 $ |
| $ A_{\lambda}(M_{\rm{SUSY}})/{\rm{TeV}} $ | $ 2.0\sim10.1 $ | $ 0.8\sim10.9 $ | $ 1.6\sim10.2 $ |
| $ A_{\kappa}(M_{\rm{SUSY}})/{\rm{GeV}} $ | $ -51\sim42 $ | $ -17\sim7 $ | $ -803\sim11 $ |
| $ m_{\tilde{\chi}^0_1}/{\rm{GeV}} $ | $ 3\sim129 $ | $ 98\sim198 $ | $ 3\sim190 $ |
| $ m_{h_1}/{\rm{GeV}} $ | $ 4\sim123 $ | $ 123\sim127 $ | $ 4\sim60 $ |
| $ m_{h_2}/{\rm{GeV}} $ | $ 123\sim127 $ | $ 127\sim5058 $ | $ 123\sim127 $ |
| $ m_{a_1}/{\rm{GeV}} $ | $ 4\sim60 $ | $ 0.5\sim60 $ | $ 3\sim697 $ |
Table 1. The ranges of parameters and light-particle masses in Scenarios I, II, and III.
To study the different mechanisms of Higgs decay to light scalars in the different scenarios, we combine the relevant parameters and show them in Fig. 1. From this figure, one can observe the following:
Figure 1. (color online) Surviving samples for the three scenarios in the $ \lambda A_\lambda S_{i2} $ versus $ \lambda^2 v_s $ planes (upper) and the $ \kappa A_\kappa $ versus $ \kappa^2 v_s $ planes (lower), where $ S_{22} $ (left and right) and $ S_{12} $ (middle) are the down-type-doublet component coefficients in the SM-like Higgs. Colors indicate $ \lambda^2 v_u $ (upper) and $ \lambda\kappa v_s $ (lower), respectively.
• For Scenarios I and III, $ \lambda A_{\lambda}S_{22} \!\approx\! \lambda^2v_s $, where $ 0.03\!\lesssim\! S_{22}\!\lesssim\!0.07 $ is of the same order as $ 1/\tan\!\beta $, because the mass scale of the CP-odd doublet scalar $ M_A \!\thicksim\! 2\mu_{\rm{eff}}/ \sin\!2\beta \!\thicksim\! A_{\lambda} \!\gg\! \kappa v_s $ and $ \tan\!\beta\!\gg\!1 $ [33]. Thus, the SM-like Higgs is up-type-doublet dominated.
• For Scenario I, $ \kappa A_{\kappa} $, $ \kappa^2v_s $, and $ \lambda\kappa v_s $ are all at the level of a few GeV; however, for Scenario II, $ \kappa^2 v_s $ can be as large as a few TeV for small $ \lambda $ and large $ \kappa $.
• In particular, for Scenario III, $ \kappa A_{\kappa} \!\approx\! -4\kappa^2 v_s $, i.e., $ A_\kappa \!\approx\! -4\kappa v_s $.
Given the large amount of data on the $ 125 \; {{\rm{GeV}}} $ Higgs and the current null results of searches for non-SM Higgs bosons, the $ 125 \; {{\rm{GeV}}} $ Higgs should be doublet dominated, and the light scalar should be singlet dominated. In our case, we found that, in the CP-even sector, the mixing between the singlet and the up-type doublet $ \eta_{us} $, the mixing between the down-type doublet and the up-type doublet $ \eta_{ud} $, and the mixing between the singlet and the down-type doublet $ \eta_{ds} $ are roughly given by
$ \begin{aligned}[b] \eta_{us} \approx &\frac{2\lambda v \mu_{\rm{eff}} \left[ 1-\left(\dfrac{M_A}{2\mu/\sin2\beta}\right)^2 -\dfrac{\kappa}{2\lambda}\sin2\beta\right]}{m_h^2-m_s^2} \,, \\ \eta_{ud} \approx & \frac{1}{\tan\beta} \,, \\ \eta_{ds} \approx & -\frac{\eta_{us}}{\tan\beta} \,, \end{aligned} $
where $ m_h $ and $ m_s $ are masses of the SM-like Higgs and the singlet-dominated CP-even scalar, respectively, and
$ |\eta_{ds}| \ll |\eta_{us}|, |\eta_{ud}| \ll 1 \,. $
In the CP-odd sector, the mixing between the singlet and the down-type doublet $ \eta'_{ds} $, the mixing between the down-type doublet and the up-type doublet $ \eta'_{ud} $, and the mixing between the singlet and the up-type doublet $ \eta'_{us} $ are roughly given by
$ \begin{aligned}[b] \eta'_{ds} \approx &\dfrac{\lambda v \dfrac{M_A^2}{2\mu_{\rm{eff}}/\sin2\beta} -3 \kappa v \mu_{\rm{eff}}}{m^2_{a_2}-m^2_{a_1}} \approx \dfrac{\lambda v}{\mu_{\rm{eff}} \tan\beta} \,, \\ \eta'_{ud} \approx & \frac{1}{\tan\beta} \,, \\ \eta'_{us} \approx & -\frac{\eta'_{ds}}{\tan\beta} \,, \end{aligned} $
$ |\eta'_{us}| \ll |\eta'_{ds}|, |\eta'_{ud}| \ll 1 \,. $
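Plugging representative numbers from the scan ranges into the CP-odd relations above gives a quick feel for the expected sizes; the inputs below are illustrative choices, not a specific surviving sample:

```python
lam, tan_beta, mu_eff, v = 0.4, 20.0, 150.0, 174.0   # illustrative values (GeV where applicable)

eta_p_ds = lam * v / (mu_eff * tan_beta)   # singlet/down-type mixing in the CP-odd sector
eta_p_us = -eta_p_ds / tan_beta            # singlet/up-type mixing, further tan(beta)-suppressed

print(f"eta'_ds ~ {eta_p_ds:.4f}, eta'_us ~ {eta_p_us:.5f}")
# For lambda ~ 0.4-0.6 this gives |eta'_ds| of a few times 1e-2 and |eta'_us| ~ 1e-3,
# the same orders as the P_12 and P_11 components discussed below.
```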
Specifically, in Scenario I,
$ S_{23} = \eta_{us}\,,\; \; \; S_{22} = \eta_{ud}\,,\; \; \; P_{11} = \eta'_{us}\,,\; \; \; P_{12} = \eta'_{ds}\,; \; \; \; $
in Scenario II,
in Scenario III,
$S_{23} = \eta_{us}\,,\; \; \; S_{22} = \eta_{ud}\,,\; \; \; S_{11} = -\eta_{us}\,,\; \; \; S_{12} = \eta_{ds}\,. \; \; \; $
In Fig. 2, we show how small these mixing components can be and their relative scales. From this figure, we can see the following for the three scenarios.
Figure 2. (color online) Surviving samples for the three scenarios in the $ P_{11} $ versus $ S_{23} $ (left), $ P_{11} $ versus $ S_{13} $ (middle), and $ S_{11} $ versus $ S_{23} $ (right) planes, respectively, where $ S_{23} $ (left and right) and $ S_{13} $ (middle) are the singlet component in the SM-like Higgs, and $ P_{11} $ (left and middle) and $ S_{11} $ (right) are the up-type-doublet components of the light scalar, respectively. Colors indicate the parameter $ \lambda $.
• Scenario I: The up-type-doublet component of the light scalar, $ -\!0.0015 \!\lesssim\! P_{11} \!<\!0 $, is proportional to the parameter $ \lambda $; thus, the total doublet component of the light scalar is $ P_{1D}\!\equiv\! \sqrt{P_{11}^2+P_{12}^2}\!\thickapprox\! P_{11}\tan\beta \!\lesssim\!0.04 $, while the singlet component of the SM-like Higgs is $ |S_{23}|\!\lesssim\!0.3 $.
• Scenario II: The up-type-doublet component of the light scalar, $ -\!0.0006 \!\lesssim\!P_{11}\!<\!0 $, is proportional to the parameter $ \lambda $; thus, the total doublet component of the light scalar is $ 0<P_{1D}\!\lesssim\!0.013 $, while the singlet component in the SM-like Higgs is $ |S_{13}|\!\lesssim\!0.3 $.
• Scenario III: The up-type-doublet component of the light scalar and the singlet component of the SM-like Higgs are anticorrelated, i.e., $ S_{11}\!\thickapprox\!-S_{23} $, and their range is $ -0.15\!\lesssim\! S_{11}\!\lesssim\! 0.2 $, with the sign related to the parameter $ \lambda $. This also means that the mixing in the CP-even scalar sector is mainly between the singlet and the up-type doublet, and we found that $ 0.03\!\lesssim\!S_{22}\!\lesssim\!0.07 $ and $ S_{12}\!\lesssim\!0.03 $. Thus, the SM-like Higgs is up-type-doublet dominated, which holds in all three scenarios, with $ S_{21}\!\approx\! 1 $ in Scenarios I and III and $ S_{11}\!\approx\!1 $ in Scenario II.
Considering the values of and correlations among parameters and component coefficients, the couplings between the SM-like Higgs and a pair of light scalars can be simplified as
$ C_{h_2a_1a_1} \!\simeq \! \sqrt{2}\lambda^2v_u+\sqrt{2}\lambda A_{\lambda}P_{11}\tan\!\beta\,, $
$ C_{h_1a_1a_1}\! \simeq\! \sqrt{2}\lambda^2v_u\!+\!\!\sqrt{2}\lambda A_{\lambda}P_{11}\tan\!\beta \!+\!2\!\sqrt{2}\kappa^2v_s S_{13} \,, \!\!\!\!\! $
$ \begin{aligned}[b] C_{h_2h_1h_1} \simeq & \sqrt{2}\lambda^2v_u-\sqrt{2}\lambda A_{\lambda}S_{12} +\sqrt{2}\lambda^2v_s S_{11} \\ &+2\sqrt{2}\kappa^2 v_sS_{23} +\frac{3g^2}{\sqrt{2}}v_u S_{11}S_{11} \\ &-2\sqrt{2}\lambda\kappa v_s S_{12} \,. \end{aligned} $
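To illustrate the near cancellation that keeps these couplings small, the Scenario-I expression for $ C_{h_2a_1a_1} $ above can be evaluated for representative inputs; all numbers below are assumptions chosen within the scan ranges, not an actual sample:

```python
import math

def c_h2a1a1_tree(lam, A_lam, P11, tan_beta, v=174.0):
    """Simplified tree-level coupling sqrt(2)*lam^2*v_u + sqrt(2)*lam*A_lam*P11*tan(beta), in GeV."""
    v_u = v * tan_beta / math.sqrt(1.0 + tan_beta**2)
    return math.sqrt(2.0) * lam**2 * v_u + math.sqrt(2.0) * lam * A_lam * P11 * tan_beta

# With A_lam of a few TeV and P11 ~ -1e-3 the two terms are each ~40 GeV but of
# opposite sign, so the coupling itself comes out at the GeV level or below.
print(f"C_h2a1a1 ~ {c_h2a1a1_tree(lam=0.4, A_lam=5000.0, P11=-7e-4, tan_beta=20.0):.2f} GeV")
```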
In Fig. 3, we show the exotic branching ratio $ Br(h\!\to\!ss) $, including the one-loop correction, correlated with the mass of the light scalar and with the tree-level coupling between the SM-like Higgs and a pair of light scalars. Since the 125 GeV Higgs is constrained to be very SM-like, its decay widths and branching ratios to SM particles cannot vary much, which indirectly leads to strong upper limits on the exotic branching ratios of the SM-like Higgs [3-5]. Thus, combined with Eq. (9), it is natural that the branching ratios to light scalars are proportional to the square of the tri-scalar couplings. The significant deviations for the negative-coupling samples in Scenario III are caused by the one-loop correction from stop loops,
Figure 3. (color online) Surviving samples for the three scenarios in the planes of the exotic branching ratio $ Br(h\!\to\!ss) $ versus the tree-level tri-scalar coupling $ C_{hss}^{\rm{tree}} $, with colors indicating the mass of the light Higgs $ m_s $, where $ h $ denotes the SM-like Higgs $ h_2 $ (left and right) and $ h_1 $ (middle), and $ s $ denotes the light scalar $ a_1 $ (left and middle) and $ h_1 $ (right).
$ \Delta C_{h_2h_1h_1} \simeq S_{21} S_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right), $
which can be as large as $ 5 \; {{\rm{GeV}}} $, whereas for Scenarios I and II, the corrections are
$ \Delta C_{h_2a_1a_1} \simeq S_{21} P_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right), $
$ \Delta C_{h_1a_1a_1} \simeq S_{11} P_{11}^2 \frac{3\sqrt{2}m_t^4}{16\pi^2 v_u^3} \ln \left( \frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right). $
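The size of this stop-loop correction can be estimated directly from the expressions above; the stop masses and the mixing entries used below are assumed values inside the scan ranges:

```python
import math

def delta_C(S_h, S_s, m_st1, m_st2, m_t=173.0, v_u=173.8):
    """Stop-loop correction S_h * S_s^2 * 3*sqrt(2)*m_t^4 / (16*pi^2*v_u^3) * ln(m_st1*m_st2/m_t^2), in GeV."""
    pref = 3.0 * math.sqrt(2.0) * m_t**4 / (16.0 * math.pi**2 * v_u**3)
    return S_h * S_s**2 * pref * math.log(m_st1 * m_st2 / m_t**2)

# Scenario III: S_21 ~ 1 and |S_11| up to ~0.2 give a correction of order 1 GeV;
# Scenarios I/II: |P_11| < 1.5e-3 suppresses it by more than four orders of magnitude.
print(f"Scenario III : {delta_C(1.0, 0.20, 2000.0, 2500.0):.2f} GeV")
print(f"Scenario I/II: {delta_C(1.0, 1.5e-3, 2000.0, 2500.0):.1e} GeV")
```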
Since $ P_{11}\!\ll\! S_{11} $, as seen from Fig. 2, the loop correction in Scenarios I and II is much smaller than that in Scenario III. In the following figures and discussions, we consider the coupling $ C_{hss} $ to include the one-loop correction $ \Delta C_{hss} $, unless otherwise specified.
A. Detections at the HL-LHC
At the LHC, the SM-like Higgs is first produced via gluon fusion (ggF), vector boson fusion (VBF), associated production with a vector boson (Wh, Zh), or associated production with $ t\bar{t} $, where the cross section of the ggF process is much larger than those of the others. The SM-like Higgs can then decay to a pair of light scalars, and each scalar can subsequently decay to a pair of fermions, gluons, or photons. The ATLAS and CMS collaborations have searched for these exotic decay modes in the final states $ b\bar{b}\tau^+\tau^- $ [11], $ b\bar{b}\mu^+\mu^- $ [12,13], $ \mu^+\mu^-\tau^+\tau^- $ [14-16], $ 4\tau $ [16,17], $ 4\mu $ [18-20], $ 4b $ [21], $ \gamma\gamma gg $ [22], $ 4\gamma $ [23], etc. These results are included in the constraints we considered.
As we checked, the main decay mode of the light scalar is usually $ b\bar{b} $ when $ m_s\gtrsim 2m_b $. However, the QCD backgrounds at the LHC are very large; thus, the subleading Zh production process is used for detecting $ h\!\to\! 2s \!\to\! 4b $, and VBF is used for $ h\!\to\! 2s \!\to\! \gamma\gamma gg $. For the other decay modes, the dominant production process, ggF, can be used. Considering the production cross sections and decay branching ratios, as well as the detection precisions, we found that the $ 4b $, $ 2b2\tau $, and $ 2\tau 2\mu $ channels are the most important for the scNMSSM. The corresponding signal rates are $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss \!\to\! 4b) $, $ \mu_{\rm{ggF}} \!\times\! Br(h\!\to\! ss \!\to\! 2b2\tau) $, and $ \mu_{\rm{ggF}} \!\times\! Br(h\!\to\! ss \!\to\! 2\tau2\mu) $, respectively, where $ \mu_{\rm{ggF}} $ and $ \mu_{\rm{Zh}} $ are the ggF and Zh production rates normalized to their SM values [3-5].
For detection of the exotic decay at the HL-LHC, we use the simulated 95% exclusion limits in Refs. [24,33]. Suppose that, with an integrated luminosity of $ L_0 $, the simulated 95% exclusion limit on the branching ratio in some channel is $ Br_0 $; then, for a sample in the model with signal rate $ \mu_i\!\times\! Br $ (where $ i $ denotes the production channel), the signal significance with an integrated luminosity of $ L $ will be
$ ss = 2 \;\frac{\mu_i\!\times\! Br}{Br_0} \sqrt{\frac{L}{L_0}}, $
and the integrated luminosity needed to exclude the sample in the channel at 95% confidence level (with $ ss = 2 $) will be
$ L_{\rm{e}} = L_0 \left(\frac{Br_0}{\mu_i\!\times\! Br}\right)^2, $
and the integrated luminosity needed to discover the sample in the channel (with $ ss = 5 $) will be
$ L_{\rm{d}} = L_0 \left(\frac{5}{2}\right)^2 \left(\frac{Br_0}{\mu_i\!\times\! Br}\right)^2. $
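These scaling relations are simple to apply in practice; the limit $ Br_0 $, the reference luminosity $ L_0 $, and the signal rate used below are placeholder numbers for illustration only (`mu_br` denotes the product $ \mu_i\times Br $):

```python
import math

def significance(mu_br, br0, L, L0):
    """Signal significance ss = 2 * (mu*Br / Br0) * sqrt(L / L0)."""
    return 2.0 * (mu_br / br0) * math.sqrt(L / L0)

def lumi_exclude(mu_br, br0, L0):
    """Integrated luminosity for 95% CL exclusion (ss = 2)."""
    return L0 * (br0 / mu_br) ** 2

def lumi_discover(mu_br, br0, L0):
    """Integrated luminosity for 5-sigma discovery (ss = 5)."""
    return L0 * (5.0 / 2.0) ** 2 * (br0 / mu_br) ** 2

# Placeholder: a simulated 95% limit Br0 = 1% at L0 = 300/fb and a model signal rate of 0.5%.
print(f"ss at 3000/fb        : {significance(0.005, 0.01, 3000.0, 300.0):.2f}")
print(f"L (95% exclusion)    : {lumi_exclude(0.005, 0.01, 300.0):.0f} /fb")
print(f"L (5-sigma discovery): {lumi_discover(0.005, 0.01, 300.0):.0f} /fb")
```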
In Figs. 4, 5, and 6, we show the signal rates for the surviving samples in the three scenarios and the 95% exclusion bounds [24,33] in the $ 4b $, $ 2b2\tau $, and $ 2\tau2\mu $ channels, respectively. From these figures, one can see the following:
Figure 4. (color online) Surviving samples for the three scenarios in the signal rate $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss\!\to\! 4b) $ versus the mass of light Higgs $ m_s $ planes, respectively, with colors indicating the tri-scalar coupling $ C_{hss} $ including one-loop correction, where $ h $ denotes the SM-like Higgs $ h_2 $ (left and right) and $ h_1 $ (middle), and $ s $ denotes the light scalar $ a_1 $ (left and middle) and $ h_1 $ (right). The solid curves indicate the simulation results of the 95% exclusion limit in the corresponding channel at the HL-LHC with $ 300 {\; {\rm{fb}}}^{-1} $ [33].
Figure 5. (color online) Same as in Fig. 4, but shows the signal rate $ \mu_{\rm{ggF}} \!\times\! Br(h\!\to\! ss\!\to\! 2\tau 2b) $ and 95% exclusion bounds in the corresponding channel at the HL-LHC with $ 3000 {\; {\rm{fb}}}^{-1} $ [24].
Figure 6. (color online) Same as in Fig. 4, but shows the signal rate $ \mu_{\rm{ggF}} \!\times\! Br(h\!\to\! ss\!\to\! 2\tau 2\mu) $ and 95% exclusion bounds in the corresponding channel at the HL-LHC with $ 3000 {\; {\rm{fb}}}^{-1} $ [24].
• With a light scalar heavier than $ 30 \; {{\rm{GeV}}} $, the easiest way to discover the exotic decay is via the $ 4b $ channel, and the minimal integrated luminosity needed to discover the decay in this channel can be $ 650 {\; {\rm{fb}}}^{-1} $ for Scenario II.
• With a light scalar lighter than $ 20 \; {{\rm{GeV}}} $, the $ 2\tau2\mu $ channel can be important, especially for samples in Scenario II, and the minimal integrated luminosity needed to discover the decay in this channel can be $ 1000 {\; {\rm{fb}}}^{-1} $.
• With a light scalar heavier than $ 2m_b $, it is possible to discover the decay in the $ 2b2\tau $ channel, and the minimal integrated luminosity needed to discover the decay in this channel can be $ 1500 {\; {\rm{fb}}}^{-1} $ for Scenario II.
B. Detections at the future lepton colliders
At future lepton colliders, such as the CEPC, FCC-ee, and International Linear Collider (ILC), the main production process of the SM-like Higgs is Zh, and the QCD backgrounds are minimal; thus, these lepton colliders are powerful for detecting the exotic decay. Simulation results exist for many channels, such as $ 4b $, $ 4j $, $ 2b2\tau $, and $ 4\tau $ [26]. With the same method as in the last subsection, one can perform similar analyses.
In Figs. 7, 8, 9, and 10, we show the signal rates for the surviving samples in the three scenarios and the 95% exclusion bounds (following the simulation results in Ref. [26]) at the CEPC, FCC-ee, and ILC in the $ 4b $, $ 4j $, $ 2b2\tau $, and $ 4\tau $ channels, respectively. In these processes, the backgrounds mainly come from SM Higgs decays to four light particles through SM gauge bosons. From these figures, one can see the following:
Figure 7. (color online) Surviving samples for the three scenarios in the signal rate $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss\!\to\! 4b) $ versus the mass of light Higgs $ m_s $ planes, respectively, with colors indicating the tri-scalar coupling $ C_{hss} $ including one-loop correction, where $ h $ denotes the SM-like Higgs $ h_2 $ (left and right) and $ h_1 $ (middle), and $ s $ denotes the light scalar $ a_1 $ (left and middle) and $ h_1 $ (right). The solid, dashed, and dotted lines are the 95% exclusion bounds from simulations in the corresponding channel at the CEPC with $ 5\,{\rm{ab}}^{-1} $, FCC-ee with $ 30\,{\rm{ab}}^{-1} $, and ILC with $ 2\,{\rm{ab}}^{-1} $, respectively [26].
Figure 8. (color online) Same as in Fig. 7, but shown are the signal rates $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss\!\to\! 4j) $ and 95% exclusion bounds in the corresponding channel [26]. Here "$ 4j $" denotes four jets from gluons or light quarks, excluding $ b $ quarks.
Figure 9. (color online) Same as in Fig. 7, but shown are the signal rates $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss\!\to\! 2b 2\tau) $ and 95% exclusion bounds in the corresponding channel [26].
Figure 10. (color online) Same as in Fig. 7, but shown are the signal rates $ \mu_{\rm{Zh}} \!\times\! Br(h\!\to\! ss\!\to\! 4\tau) $ and 95% exclusion bounds in the corresponding channel [26].
• As in Fig. 7, when the light scalar is heavier than approximately $ 15 \; {{\rm{GeV}}} $ and the tri-scalar coupling is large enough, the branching ratio for the $ 4b $ channel is significant. The minimal integrated luminosity needed to discover the decay in this channel can be $ 0.31 {\; {\rm{fb}}}^{-1} $ for Scenarios II and III at the ILC.
• As in Fig. 8, for Scenarios I and II, the exotic Higgs decay can be expected to be observed in the $ 4j $ channel when the light scalar is lighter than $ 11 \; {{\rm{GeV}}} $, whereas for Scenario III, the light scalar accessible at the CEPC can be as heavy as $ 40 \; {{\rm{GeV}}} $. The minimal integrated luminosity needed to discover the exotic decay in this channel can be $ 18 {\; {\rm{fb}}}^{-1} $ for Scenario II at the ILC.
• As in Figs. 9 and 10, the signal rates in the $ 2b2\tau $ and $ 4\tau $ channels show similar trends. The branching ratios are small before the light scalar reaches the mass threshold, and the maximum values of the branching ratios occur around $ m_s = 12 \; {{\rm{GeV}}} $; the minimal integrated luminosity needed to discover the decay in the $ 2b2\tau $ channel is $ 3.6 {\; {\rm{fb}}}^{-1} $ for Scenario II at the ILC, and that in the $ 4\tau $ channel is $ 0.22 {\; {\rm{fb}}}^{-1} $ for Scenario III at the ILC.
IV. CONCLUSIONS
In this work, we have discussed the exotic Higgs decay to a pair of light scalars in the scNMSSM, or the NMSSM with NUHM. First, we performed a general scan over the nine-dimensional parameter space of the scNMSSM, considering the theoretical constraints of vacuum stability and the Landau pole as well as the experimental constraints of Higgs data, non-SM Higgs searches, muon g-2, sparticle searches, and the relic density and direct searches for dark matter. We then found three scenarios with a light scalar of $ 10\sim 60 \; {{\rm{GeV}}} $: (i) the light scalar is CP-odd, and the SM-like Higgs is $ h_2 $; (ii) the light scalar is CP-odd, and the SM-like Higgs is $ h_1 $; and (iii) the light scalar is CP-even, and the SM-like Higgs is $ h_2 $. For the three scenarios, we compared the parameter regions that lead to them, the mixing levels of the doublets and singlets, the tri-scalar coupling between the SM-like Higgs and a pair of light scalars, the branching ratio of Higgs decay to the light scalars, and the detection prospects at the hadron colliders and future lepton colliders.
Finally, we draw the following conclusions regarding a light scalar and the exotic Higgs decay to a pair of it in the scNMSSM:
• The three scenarios feature different interesting mechanisms for tuning the parameters to obtain the small tri-scalar couplings.
• The singlet components of the SM-like Higgs in the three scenarios are at the same level of $ \lesssim0.3 $ and are roughly one order of magnitude larger than the doublet component of the light scalar in Scenarios I and II.
• The couplings between the SM-like Higgs and a pair of light scalars at tree level are $ -3\sim 5 $, $ -1\sim 6 $, and $ -10\sim 5 $ GeV for Scenario I, II, and III, respectively.
• The stop-loop correction to the tri-scalar coupling in Scenario III can be a few GeV, much larger than those in Scenarios I and II.
• The most effective way to discover the exotic decay at the future lepton colliders is via the $ 4\tau $ channel, while at the HL-LHC it is via the $ 4b $ channel for a light scalar heavier than 30 GeV and via the $ 2b2\tau $ or $ 2\tau2\mu $ channels for a lighter scalar.
The details of the minimal integrated luminosity needed to discover the exotic Higgs decay at the HL-LHC, CEPC, FCC-ee, and ILC are summarized in Table 2, and the tuning mechanisms in the three scenarios to obtain the small tri-scalar coupling can be seen from Figs. 1 and 2 and Eqs. (17)-(26).
| Decay mode | HL-LHC | CEPC | FCC-ee | ILC |
| --- | --- | --- | --- | --- |
| ($ b\bar{b} $)($ b\bar{b} $) | $ 650 {\; {\rm{fb}}}^{-1} $ (@II) | $ 0.42 {\; {\rm{fb}}}^{-1} $ (@III) | $ 0.41 {\; {\rm{fb}}}^{-1} $ (@III) | $ 0.31 {\; {\rm{fb}}}^{-1} $ (@II) |
| ($ jj $)($ jj $) | − | $ 21 {\; {\rm{fb}}}^{-1} $ (@II) | $ 18 {\; {\rm{fb}}}^{-1} $ (@II) | $ 25 {\; {\rm{fb}}}^{-1} $ (@II) |
| ($ \tau^+\tau^- $)($ \tau^+\tau^- $) | − | $ 0.26 {\; {\rm{fb}}}^{-1} $ (@III) | $ 0.22 {\; {\rm{fb}}}^{-1} $ (@III) | $ 0.31 {\; {\rm{fb}}}^{-1} $ (@III) |
| ($ b\bar{b} $)($ \tau^+\tau^- $) | $ 1500 {\; {\rm{fb}}}^{-1} $ (@II) | $ 4.6 {\; {\rm{fb}}}^{-1} $ (@II) | $ 3.6 {\; {\rm{fb}}}^{-1} $ (@II) | $ 4.4 {\; {\rm{fb}}}^{-1} $ (@II) |
| ($ \mu^+\mu^- $)($ \tau^+\tau^- $) | $ 1000 {\; {\rm{fb}}}^{-1} $ (@II) | − | − | − |
Table 2. The minimum integrated luminosity for discovering (at $ 5\sigma $ level) the exotic Higgs decay at the future colliders, where "@I, II, III" indicates the three different scenarios.
Sex-specific patterns of senescence in artificial insect populations varying in sex-ratio to manipulate reproductive effort
Charly Jehan1,
Manon Chogne1,
Thierry Rigaud1 &
Yannick Moret ORCID: orcid.org/0000-0002-2435-84161
BMC Evolutionary Biology volume 20, Article number: 18 (2020)
The disposable soma theory of ageing assumes that organisms optimally trade-off limited resources between reproduction and longevity to maximize fitness. Early reproduction should especially trade-off against late reproduction and longevity because of reduced investment into somatic protection, including immunity. Moreover, as optimal reproductive strategies of males and females differ, sexually dimorphic patterns of senescence may evolve. In particular, as males gain fitness through mating success, sexual competition should be a major factor accelerating male senescence. In a single experiment, we examined these possibilities by establishing artificial populations of the mealworm beetle, Tenebrio molitor, in which we manipulated the sex-ratio to generate variable levels of investment into reproductive effort and sexual competition in males and females.
As predicted, variation in sex-ratio affected male and female reproductive effort, with contrasting sex-specific trade-offs between lifetime reproduction, survival and immunity. A high reproductive effort accelerated mortality in females without affecting immunity, but high early reproductive success was observed only under the balanced sex-ratio condition. Male reproduction was costly in terms of longevity and immunity, mainly because of investment in copulation rather than in sexual competition.
Our results suggest that T. molitor males, like females, maximize fitness through enhanced longevity, partly explaining their comparable longevity.
Life history theory assumes that organisms are constrained to optimally trade-off limited energetic and time resources between reproduction and life span, to maximize fitness [1, 2]. This principle is at the core of the theory of ageing, which predicts that, as reproduction is resource demanding, current reproduction is traded-off against future reproduction and survival, caused by a reduced investment into somatic protection and maintenance [2,3,4]. However, recent studies have sometimes revealed patterns of actuarial (decline in survival rate with age) and reproductive (decline in reproductive success with age) senescence rather contrasted with this prediction [5,6,7]. Since individuals may differ in both resource acquisition and resource allocation between traits, depending on individual and environmental quality, the cost of reproduction can remain undetected at the population level [8, 9].
Studies that investigated the cost of reproduction in terms of senescence have mainly focused on females [10, 11]. Those on males often referred to sexual selection theory and therefore to the cost of producing and maintaining sexual traits [12]. In males, the cost of reproduction may result from resource demands for courtship, mating, struggling with female resistance, mate guarding, and the production of sperm and accessory gland proteins [13,14,15,16]. Males may also engage in costly intra-sexual competition for females through pre- and post-copulatory contests with other males [17]. In females, the cost of reproduction may result from gamete production, offspring care, harassment by males, mating injuries, sexually transmitted diseases and damaging seminal substances [18,19,20,21]. These differential costs may have contributed to the evolution of sexually dimorphic life-history strategies in many species, through which males and females achieve maximal fitness. For instance, while males may maximize fitness by increasing mating success at the expense of longevity, females may maximize fitness through longevity because offspring production, although resource intensive, requires time too. The different reproductive costs may also contribute to different patterns of senescence between males and females, which may vary within and among populations, depending on their relative intensity. Strong investment into reproduction early in life seems to contribute to accelerating reproductive and actuarial senescence [22]. However, our understanding of the impacts of the costs of reproduction on senescence mainly relies on theoretical and correlative studies, whereas experimental investigations are still scarce.
Somatic protection partly depends on the immune system, whose competence may diminish with age. Such an immunosenescence causes enhanced sensitivity to infection and inflammatory diseases, increasing risk of morbidity and mortality with age [23, 24]. Increased reproductive effort was found associated with enhanced susceptibility to parasitism and disease [25] or decreased immune activity [17, 26,27,28,29]. Trade-offs between reproductive and immune functions for limited resources, or negative pleiotropic effects of reproductive hormones on immune defence, have been proposed as proximate causes of the cost of reproduction [30]. However, contradictory results are common as studies also failed to demonstrate such a cost [31,32,33]. If investment into reproduction can induce a progressive decline in somatic functions, strong investment into early reproductive effort may generate accelerated immunosenescence and contribute to actuarial senescence.
Recent correlative evidence suggests that population structure, such as sex-ratio, affects individual reproductive effort with potential sex-specific consequences on senescence [34, 35]. In particular, variation in sex-ratio is predicted to modulate the cost of mating, through the strength of sexual selection in males [36], influencing the putative trade-off between reproductive effort and somatic maintenance [11]. Furthermore, the cost of reproduction in females is also predicted to depend on population sex-ratio, as it is expected to influence male competition for fertilization [16]. Hence, experimentally varying population sex-ratio appears to be a valuable tool to manipulate male and female reproductive effort and to test its impacts on senescence at the population level.
Here, in a single experiment, we investigated the consequences of variable levels of investment in breeding effort on lifetime reproduction, survival and immunity of males and females of the mealworm beetle, Tenebrio molitor, by manipulating the sex-ratio of artificial populations. In this highly promiscuous insect, manipulating the sex-ratio of populations is expected to affect both the average intensity of intra-sexual competition or sexual selection, and individual mating rate. In male-biased sex-ratio conditions, males should face fewer mating opportunities, whereas females should show high individual reproductive effort. By contrast, in female-biased sex-ratio conditions, males should copulate more frequently, whereas females should have fewer opportunities to mate. This experimental design allowed us to test the cost of different key features of male and female reproduction in terms of senescence at the population level by examining their lifetime changes in survival, fertility, reproductive effort, body condition and immunity. Note, however, that manipulating sex-ratio may only affect the opportunity for sexual selection and not the actual sexual selection [37], and the life-history particularities of biological models should be taken into account. For example, common wisdom is that male-biased sex-ratio conditions should accelerate the decline of male survival, reproduction and immunity because of intense pre- and post-copulatory intra-sexual competition. However, in T. molitor, mating might be where the largest costs of reproduction arise for males (see below), so accelerated senescence is expected in populations with a female-biased sex-ratio, where males should produce a higher reproductive effort. Indeed, direct observations of the mating behaviour of T. molitor suggest that males do not engage in costly physical contests to access females [38, 39]. Courtship and mating are relatively brief, during which males transfer a spermatophore that does not release the sperm before 7–10 min post-copulation [40]. Males may then perform a rather passive, short post-copulatory mate guarding, consisting of staying within 1 cm of the female for more than one minute in the presence of competitors [38, 39]. However, males do not appear to have evolved specific post-copulatory mate-guarding behaviour like that observed in other insects [41]. The spermatophore transferred during copulation contains nutrient-rich substances that constitute a nuptial gift [42], whose cost may prevent males from copulating again for 20 min after the last copulation [41]. Hence, as mating is more costly than pre- and post-copulatory sexual competition, T. molitor males may best achieve fitness through longevity, just like females, which would ultimately prevent the evolution of divergent patterns of actuarial senescence between males and females. Females, for their part, may exhibit a strong early reproductive effort in populations with a male-biased sex-ratio; they should then also exhibit an accelerated decline in reproduction and immunity, or earlier immune dysregulation, correlating with reduced survival with age. In populations with a female-biased sex-ratio, females should survive, reproduce and maintain immunity at older ages, as they might exhibit a lower early-life reproductive effort.
Mealworm beetles
Mealworm beetles are stored grain product pests that live several months in populations of variable density and at sex-ratio of about 50% (± 20%). T. molitor males and females may initiate reproduction from the fifth day post emergence, although they reach their full sexual maturity from the eighth day post emergence. They can mate many times with several partners within their 2 to 5 months of adult life. Females are continuously receptive to mating during adulthood and may produce up to 30 eggs per day although egg production may decline after 3 weeks [43]. Although able to store sperm in their spermatheca, females need to mate frequently to maintain high egg production [44].
The immune system of T. molitor relies on both constitutive cellular (e.g. haemocytes) and enzymatic (e.g. prophenoloxidase system) components at the core of the inflammatory response [45]. Their activity is cytotoxic [46], causing self-damage [47] and lifespan reduction [48,49,50,51]. They were found to decrease after mating [52] and either decline [53] or increase [54] with age. In addition, the inducible production of antibacterial peptides in the haemolymph [45] is an energetically costly process that may reduce survival [55]. As selection on immune expression and immune regulation might be weaker after reproductive senescence, age-related decline of baseline levels of immunity might be observed and immune activation may occur at old age due to dysregulation [54, 56].
Artificial populations and experimental design
Virgin adult beetles of controlled age (10 ± 2 days post-emergence) were obtained from pupae haphazardly sampled from a stock culture maintained in laboratory conditions (24 ± 2 °C, 70% RH in permanent darkness) at Dijon, France. Prior to the experiments, all these experimental insects were maintained separately in laboratory conditions, and supplied ad libitum with bran flour and water, supplemented by apple.
Fifteen artificial populations of 100 adult beetles were made according to three sex-ratio conditions. Five populations had a balanced sex-ratio, each comprising 50 males and 50 females (hereafter named the 50%_males condition), and were considered the reference populations. Five populations had a male-biased sex-ratio, each comprising 75 males and 25 females (75%_males). Finally, five populations had a female-biased sex-ratio, each comprising 25 males and 75 females (25%_males). Each population was maintained in a plastic tank (length × width × height: 27 × 16.5 × 11.5 cm) containing bran flour, supplied once a week with apple and water. Every 2 weeks, each population was transferred into a clean tank supplied with fresh bran flour, thus avoiding the development of the progeny with the experimental adults.
Age specific reproductive assay
Reproductive capacity of females and males in each population was estimated weekly. To this purpose, 4 females haphazardly picked in each population were individually transferred into a plastic Petri dish (9 cm in diameter), containing bleached flour, a 2 mL centrifuge tube of water and a piece of apple. Each female was allowed to lay eggs in the Petri dish for 3 days, and was then returned to their initial population box. Two weeks later, the number of larvae was counted in each Petri dish to quantify female fertility, which is the number of viable larvae produced per female [57].
Concomitantly, four males haphazardly picked in each population were also individually transferred into Petri dishes, as above. Reproductive success of males was estimated through direct measures of their fertility (number of viable offspring per male [57]) instead of measuring spermatophores or counting the sperm, which are rough surrogates of male reproductive success. Each male was provided with a virgin female aged from 8 to 15 days for 24 h and was then returned in its initial population. Each female was then allowed to lay eggs in the Petri dish for three additional days to estimate, as described above, male fertility. In T. molitor, males may affect female fecundity (number of potential eggs produced by the female) and therefore their fertility, according to the respective quality of spermatophores and sperm transferred during mating. Consequently, male's success was a measure of the potential reproductive effort, not the one realized within its experimental population.
While assayed for their reproduction, focal females and males were replaced by marked individuals of the same age and sex in all the populations, to keep sex-ratio and density constant. Substitutes were from the same cohort as the experimental insects, kept in a separate tank of mixed-sex population. They were marked by clipping a piece of one elytra. When focal insects assayed for their reproduction were returned into their initial population box, substitutes were removed and returned into their tank.
Estimation of male and female reproductive effort at the population level
Survival of the insects was checked weekly, and dead insects were replaced by marked substitutes of the same sex and about the same age to keep the population sex-ratio and density of individuals constant. No measurement was performed on these marked individuals.
As the experimental design does not allow gathering measurements of longevity and fertility for each individual of the population, we estimated male and female reproductive effort (RE) at the population level, from the above age-specific measures of fertility, for each of the five population replicates within sex-ratio conditions. This estimate was calculated as the total number of viable larvae produced per female or male in each replicate (i.e. the cumulative number of larvae produced during the whole experiment in a given replicate divided by the number of females or males tested for this replicate), divided by their respective average lifespan in the population replicate (i.e. the average lifetime of females or males in each replicate). The equation is given as follows:
$$ {RE}_r=\frac{l}{ML} $$
where l is the total number of offspring (here viable larvae) produced per assayed female or male in population replicate r, and ML is the recorded mean lifespan (in weeks) of females or males in replicate r. RE values (offspring per individual and per mean week of survival in the population) for each sex and population replicate within each sex-ratio condition were used as data points for comparisons between sex-ratio conditions.
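The calculation can be made concrete with a short sketch. This is an illustrative reconstruction only: the data frame name (replicates) and its columns (total_larvae, n_assayed, mean_lifespan_weeks) are hypothetical, not the authors' actual variable names.

```r
# Illustrative sketch of RE_r = l / ML, assuming one row per population replicate
# with hypothetical columns: total_larvae (cumulative viable larvae over the whole
# experiment), n_assayed (number of females or males tested in that replicate) and
# mean_lifespan_weeks (mean lifespan of that sex in the replicate, in weeks).
replicates$RE <- (replicates$total_larvae / replicates$n_assayed) /
  replicates$mean_lifespan_weeks

# Each replicate then contributes one RE value (viable larvae per individual and
# per week of mean survival); the five values per sex-ratio condition are the
# data points compared between conditions.
```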
Note that female RE values are likely representative of both female and male conditions resulting from the experiment, because the female reproductive performance resulted from mating with males from their respective population. By contrast, male RE values are representative of the male condition only, because male reproductive performance was standardized by pairing it with a virgin and age-controlled female that did not experience the experimental conditions. Therefore, male RE must be seen as a surrogate of male reproduction potential.
Body condition and haemolymph collection
At weeks 2, 4, 6 and 12 after the start of the experiment, 4 females and 4 males were picked at random in each population to estimate their body condition and immunity. The first three time points were chosen as representative of the time period during which most of the beetle reproduction is achieved and survival is still relatively high [44]. It is also within this period of time that a potential decline in somatic protection, including immunity, is predicted. The last time point corresponds to a period of time when reproduction should have almost ceased and when few beetles remain alive. As immunity measurement required destructive sampling, sampled insects were replaced by marked substitutes as above, to keep sex-ratio and density constant. However, this substitution was definitive, as sampled individuals were not returned to their initial population box after being assayed. Body condition and immunity were estimated as described in [58]. Beetles were first sized by measuring the length of the right elytra with Mitutoyo digital callipers (precision 0.1 mm) and weighed to the nearest mg with an OHAUS balance (discovery series, DU114C). Body condition was then estimated by the residuals of the regression between body size and body mass. Then, beetles were chilled on ice for 10 min before 5 μL of haemolymph was sampled from a wound made in the beetle's neck and flushed into a microcentrifuge tube containing 25 μL of phosphate-buffered saline (PBS 10 mM, pH 7.4). A 10-μL aliquot was immediately used to measure haemocyte count. Another 5-μL aliquot was kept in an N-phenylthiourea-coated microcentrifuge tube (P7629, Sigma-Aldrich, St Louis, MO, USA) and stored at − 80 °C for later examination of its antibacterial activity. The remaining haemolymph solution (15 μL) was further diluted in 15 μL of PBS and stored at − 80 °C for later measurement of its phenoloxidase activity.
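As a minimal sketch of the body-condition index described above (assuming a data frame beetles with hypothetical columns elytra_mm for body size and mass_mg for body mass, and assuming mass is regressed on size):

```r
# Body condition as the residuals of a mass-on-size regression: individuals heavier
# than expected for their size get positive values, lighter ones negative values.
beetles$body_condition <- resid(lm(mass_mg ~ elytra_mm, data = beetles))
```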
Immune parameters
Haemocyte count was measured using a Neubauer improved haemocytometer under a phase-contrast microscope (magnification × 400).
Antimicrobial activity of the haemolymph was measured using the inhibition zone assay described in [58]. Briefly, an overnight culture of the bacterium Arthrobacter globiformis from the Pasteur Institute (CIP105365) was added to a broth medium containing 1% agar to achieve a final concentration of 10⁵ cells.mL⁻¹. Six millilitres of the medium was subsequently poured per Petri dish and, after solidification, 12 wells were made inside the agar plate in which 2 μL of each haemolymph sample was deposited. Plates were then incubated overnight at 28 °C and the diameter of each zone of inhibition was measured.
For each haemolymph sample, both (i) the activity of naturally activated phenoloxidase (PO) enzyme only (hereafter PO activity) and (ii) the activity of PO plus that of proenzymes (proPO) (hereafter Total-PO activity) were measured using the spectrophotometric assay described in [59]. Total-PO activity quantification required the activation of proPO into PO with chymotrypsin, whereas PO activity was measured directly from the sample. Frozen haemolymph samples were thawed on ice and centrifuged (3500 g, 5 min, 4 °C). In a 96-well plate, 5 μL of supernatant was diluted in 20 μL of PBS, to which either 140 μL of distilled water (to measure PO activity) or 140 μL of a 0.07 mg.mL⁻¹ chymotrypsin solution (Sigma-Aldrich, St Louis, MO, USA, C-7762; to measure Total-PO activity) was added. Subsequently, 20 μL of a 4 mg.mL⁻¹ L-Dopa solution (Sigma-Aldrich, St Louis, MO, USA, D-9628) were added to each well. The reaction proceeded for 40 min at 30 °C in a microplate reader (Versamax, Molecular Devices, Sunnyvale, CA, USA). Reads were taken every 15 s at 490 nm and analysed using the software SOFT-Max®Pro 4.0 (Molecular Devices, Sunnyvale, CA, USA). Enzymatic activity was measured as the slope (Vmax value: change in absorbance units per min) of the reaction curve during the linear phase of the reaction and expressed as the activity of 1 μL of pure haemolymph.
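As a hedged, worked example of the final conversion step: assuming the PO aliquot follows the dilution series described above (5 μL of haemolymph in 25 μL of PBS, 15 μL of that solution in 15 μL of PBS, and 5 μL of the resulting supernatant loaded per well), each well would contain roughly 0.42 μL of pure haemolymph, and the measured Vmax would be rescaled accordingly. The factors below are reconstructed from the protocol, not values reported by the authors.

```r
# Fraction of pure haemolymph in the final solution: (5/30) * (15/30) = 1/12
haemolymph_fraction <- (5 / 30) * (15 / 30)
pure_haemolymph_per_well <- 5 * haemolymph_fraction   # ~0.417 uL per well

# Rescale a raw Vmax (absorbance change per min) to the activity of 1 uL of
# pure haemolymph
vmax_per_uL <- function(vmax) vmax / pure_haemolymph_per_well
vmax_per_uL(0.010)   # a raw Vmax of 0.010 corresponds to ~0.024 per uL of haemolymph
```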
Cox regressions with a time-dependent covariate were used to analyse differences in survival rates with respect to sex-ratio over the time (in weeks) between the start of the experiment and the death of all individuals. Sex-ratio was coded as a categorical variable. The effect of sex-ratio in the statistical model was assessed against the reference survival function generated from the data of the females or the males of the 50%-male sex-ratio condition. Time (in weeks) was included as a covariate in interaction with sex-ratio in the model, expressed as hazard ratios, when the survival functions were not constant over time (for more details, see [60]). The analyses of fertility (i.e. the number of larvae produced per female or male each week) and of immune parameters were performed using mixed models, either linear or generalized linear depending on the nature of the data (see table legends). Starting models included sex-ratio condition, week (continuous variable for fertility, ordinal variable for immunity), their interaction, body condition and replicate treated as a random factor (REML estimates of variance components). The models presented here are those minimizing the AICc, where ΔAICc > 2 is usually considered to be good support [61], after comparisons of all models including predictors and their interactions, in a stepwise fashion (see Additional file 1: Table S1). The analyses of reproductive effort were made using ANOVA testing the effect of sex-ratio condition. Analyses were made using IBM® SPSS® Statistics 19, JMP v. 10.0 and R version 3.3.2 (The R Foundation for Statistical Computing, Vienna, Austria, http://www.r-project.org). All the data files are available from the Dryad database [62].
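The following sketch illustrates, in R, the kind of models described above. It is an assumed reconstruction, not the authors' scripts: the data frame and column names (surv_dat, fert_dat, replicates, week, dead, sexratio, larvae, body_condition, replicate, RE) are hypothetical, and the survival and lme4 packages are used as stand-ins for the mix of SPSS, JMP and R actually employed.

```r
library(survival)  # Cox regression with time-dependent terms
library(lme4)      # Poisson GLMM

# Time-dependent Cox model: sex-ratio effect plus its interaction with time,
# with the 50%-male condition as the reference level (cf. Table 1).
surv_dat$sexratio <- relevel(factor(surv_dat$sexratio), ref = "50")
cox_fit <- coxph(
  Surv(week, dead) ~ sexratio + tt(sexratio),
  data = surv_dat,
  # tt() builds sexratio-by-time terms, i.e. hazard ratios that change with week
  tt = function(x, t, ...) model.matrix(~ x)[, -1, drop = FALSE] * t
)
summary(cox_fit)   # exp(coef) gives the hazard ratios

# Poisson GLMM (log link) for weekly fertility, with replicate as a random intercept.
fert_fit <- glmer(
  larvae ~ sexratio * week + body_condition + (1 | replicate),
  family = poisson(link = "log"), data = fert_dat
)

# One-way ANOVA on the replicate-level reproductive effort, followed by a Tukey test.
re_fit <- aov(RE ~ sexratio, data = replicates)
TukeyHSD(re_fit)
```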
Demography: survival, fertility and reproductive effort
A first survival analysis comparing males and females of the 50%-male sex-ratio condition, which presumably corresponds to the sex-ratio condition in natural populations of T. molitor, showed no difference in longevity between males and females (Cox regression: Wald statistic = 0.004, d.f. = 1, p = 0.947, see Additional file 1: Figure S1). Survival of females and males was significantly affected by the sex-ratio condition (Table 1, Fig. 1). In the 75%-male sex-ratio condition, female mortality accelerated with time by 13% per week compared to females of the 25 and 50%-male sex-ratio conditions (see the odds ratio of Sex-ratio*Time-Cov in Table 1a). There was no significant difference in survival between females in the 25 and 50%-male sex-ratio conditions (Table 1a, Fig. 1a). Contrasting with females, males in the 75%-male sex-ratio condition survived significantly longer than those in the 25 and 50%-male sex-ratio conditions (by 53 and 50%, respectively; see odds ratios in Table 1b, Fig. 1b).
Table 1 Survival of adult females (a) and males (b) of Tenebrio molitor according to sex-ratio condition (Sex-ratio). The "simple" contrast was used for Sex-ratio (survival in the 50%-male condition was used as the baseline). For females (a), a time-dependent procedure was used to account for the time-dependent effect of Sex-ratio on the risk of mortality (T × Sex-ratio). This procedure was not necessary for males as the effect of Sex-ratio on the risk of mortality was constant over time (b)
Age-specific survival according to sex-ratio condition. a females; b males
Although female fertility decreased with time, the pattern depended on the sex-ratio condition (Table 2a, Fig. 2a, see Additional file 1: Figure S2). Indeed, female fertility in the 75 and 25%-male sex-ratio conditions was lower than that in the 50%-male sex-ratio condition during the first 2 weeks, and subsequently became higher (Fig. 2a). Male fertility decreased with time in all sex-ratio conditions, with no significant effect of sex-ratio condition (Table 2b, Fig. 2b, see Additional file 1: Figure S2). As expected, in both sexes heavier individuals produced more larvae than lighter ones (Table 2).
Table 2 Fertility: generalized linear mixed models (GLMM, Poisson distribution, log link function) analysing the factors influencing the number of larvae produced by females (a) (n = 638) and males (b) (n = 737)
Age-specific fertility of females (a) and males (b) according to sex-ratio condition. a the tested females were those coming from the experimental tanks. b the tested females were virgin females mated with males coming from the experimental tanks. Dots are the means (for variation around the means see Additional file 1: Figure S2) and lines are the predictions of the models
Female RE differed among sex-ratio conditions (F2,12 = 8.06, p = 0.006) and was significantly higher in the 75%-male than in the 25%-male sex-ratio condition (Fig. 3a). Female RE in the 50%-male sex-ratio condition showed an intermediate value (Fig. 3a). Male RE also differed significantly among sex-ratio conditions (F2,12 = 4.63, p = 0.032). Male RE in the 75%-male sex-ratio condition was significantly lower than in the 25%-male sex-ratio condition (Fig. 3b). As for females, male RE in the balanced sex-ratio condition showed an intermediate value, which was not significantly different from the two other sex-ratio conditions (Fig. 3b).
Estimated mean reproductive effort. Reproductive effort (RE - mean number of viable offspring produced per individual and per week of survival in the population) of females (left panel) and males (right panel) according to sex-ratio condition. Lines are means, dots are values of single replicates. Values surrounded by different letters were significantly different after Tukey-Kramer HSD post-hoc test (α = 0.05)
Body condition and immunity
Male and female body condition, estimated by the residuals of the regression between body size and body mass, exhibited a similar decline with age, which was not affected by the sex-ratio condition (Table 3a, see Additional file 1: Figure S3). In females, immunological parameters were never affected by the sex-ratio condition (Table 3b-f). Both PO activity and Total-PO activity changed with age, with lowest values at week 6 (Fig. 4a), and were positively influenced by body condition (Table 3b, c). By contrast, antibacterial activity of the haemolymph increased with age (Table 3e, Fig. 4b). Haemocyte counts of females only differed among population replicates (Table 3d, see Additional file 1: Figure S3). In contrast to females, some of the immunological parameters of males were affected by the sex-ratio condition (Table 3). Male PO activity was influenced by the interaction between time and sex-ratio (Table 3b). While PO activity of males in the 50%-male sex-ratio condition decreased during the first 6 weeks, PO activity of males in the other two sex-ratio conditions increased between weeks 2 and 4. In all sex-ratio conditions, PO activity increased again at week 12 (Fig. 4c). In addition, PO activity of males in the 25%-male sex-ratio condition was overall lower than PO activity of males in the other sex-ratio conditions (Fig. 4c). Total-PO activity only differed between population replicates (Table 3c). As in females, antibacterial activity in the haemolymph of males increased with age (Table 3e, Fig. 4d). However, the size of the zones of inhibition of males exhibiting positive antibacterial activity (reflecting the intensity of this activity) was higher for males in the 50%-male sex-ratio condition than for males in the other sex-ratio conditions (Table 3f, Fig. 4e). Finally, haemocyte counts of males in all sex-ratio conditions varied with time, mainly because of their high value at week 6 (Table 3d, Fig. 4f).
Table 3 Body condition and immune parameters. Mixed linear models or generalized linear model analysing the factors influencing body condition (a), PO activity (b), Total PO activity (c), haemocyte count (d), the proportion of beetles exhibiting antibacterial activity in their haemolymph (e) and the intensity of this antibacterial activity as the size of the zone of inhibition (f) in both females (left) and males (right). Models included sex-ratio condition, Age in weeks (ordinal variable), their interaction, body condition, and replicates as a random factor
Immune parameters in females and males according to individual age and/or sex-ratio condition. a female PO activity; b proportion of females exhibiting antibacterial activity; c male PO activity; d proportion of males exhibiting antibacterial activity; e male intensity of antibacterial activity as the size (in mm) of zones of inhibition; f male haemocyte count. Values are means among replicates ± s.e.m. Numbers in the bars are sample sizes
By manipulating the sex-ratio of artificial populations of mealworm beetles, Tenebrio molitor, we successfully affected the reproductive effort of both males and females. Note that for males, our estimates rather reflect their potential, or maximal, reproductive effort because males were tested using young virgin females. As predicted, a female-biased sex-ratio led females to exhibit a relatively low reproductive effort, whereas male reproductive potential was the highest. By contrast, a male-biased sex-ratio increased the reproductive effort of females while that of males dropped. Males and females from populations with a balanced sex-ratio exhibited intermediate reproductive effort. Interestingly, males and females of the 50% sex-ratio condition had similar longevity. This absence of divergent patterns of actuarial senescence between males and females may result from a relatively strong investment of males into mating activity rather than into sexual competition. Therefore, as in females, longevity appears to be an important criterion for maximizing fitness in males of T. molitor.
While varying in their reproductive effort according to the sex-ratio condition, females exhibited different patterns of actuarial and reproductive senescence. Females in the 75%-male sex-ratio condition (with the highest reproductive effort) suffered from increased mortality compared to females in the other conditions. While we did not directly observe the mating behaviour in our experiments, frequent mating events and harassment by males may explain this accelerated mortality, in line with previous observations in other insect species [34, 63,64,65].
The change in female fertility with time in the 50%-male sex-ratio condition contrasted with that of females in the other two sex-ratio conditions. They produced many offspring during the first 2 weeks of their adult life, then fewer, with production becoming almost null from 8 weeks onward. As previously reported in both vertebrates and invertebrates [22, 66], intense early reproductive activity is associated with earlier reproductive decline. Females in the other sex-ratio conditions (25 and 75% of males) produced relatively fewer offspring as young adults but maintained this reproductive effort as they became older. In the 25%-male sex-ratio condition, the low proportion of males may have constrained female access to mating, preventing them from reaching their maximal early reproductive potential. Such a low early reproductive investment, possibly accompanied by low male harassment, may have preserved female late reproduction. In the 75%-male sex-ratio condition, the high proportion of males was expected to increase interactions between males as well as to enhance mate guarding [38]. This may also have prevented young females from having optimal access to mating, while compensating with a higher probability of mating as they aged. However, female reproduction in this male-biased sex-ratio condition stopped earlier than in the others, because their survival had rapidly declined. Altogether, the data suggest that the reproductive effort of females in the 75%-male sex-ratio condition was more costly than that of females in the other sex-ratio conditions, constraining them to trade off their longevity against their reproduction.
Despite a more costly reproductive effort, females in the 75%-male sex-ratio condition did not exhibit any further functional decline compared to females in the other sex-ratio conditions. While body condition declined with female age, such a decline was similar in all sex-ratio conditions. Similarly, changes in female immune activity were never influenced by the sex-ratio. While haemocyte counts remained constant with female age, antibacterial activity increased. Similar results were reported in the bumblebee, Bombus terrestris [53]. With age, the probability of having been exposed to microbes increases. This may explain the higher proportion of older individuals exhibiting induced antibacterial activity in their haemolymph, as insects can produce prophylactic, long-lasting antibacterial responses after a single bacterial challenge [67, 68]. PO activity declined at week 6 in females, which is consistent with the beginning of senescence, when reproduction started to end but survival was still relatively high (Fig. 1, see Additional file 1: Figure S2). PO activity increased again at week 12, among the rare surviving individuals (Fig. 1). This higher late PO activity may have two non-exclusive explanations. On the one hand, it may result from selection whereby individuals with the best somatic protection, involving high PO activity, survived longer than those with poorer protection. On the other hand, high levels of PO activity at week 12 could also result from a deregulation of the host inflammatory response [69, 70]. The impact of female reproductive effort on immunity seems limited, at least for the constitutive baseline levels of the immune parameters we measured. We cannot exclude that female ability to produce an immune response upon challenge, or other non-measured immune parameters, could be affected. Nonetheless, our results show that increasing the reproductive effort of T. molitor females affected demographic senescence, mainly through longevity reduction, but with apparently limited effect on immune senescence.
Changes in the reproductive effort (reproductive potential) of males through the manipulation of the sex-ratio also affected their survival. It is often assumed that most of the cost of reproduction in males involves pre-copulatory sexual competition. Thus males in the 75%-male sex-ratio condition could have been expected to engage in strong and costly intra-sexual competition for females, resulting in low reproductive success and shorter longevity compared to the other sex-ratio conditions, as previously shown in vertebrates [11, 71, 72] and invertebrates [34]. However, although male fertility slightly declined with age, it was not affected by the experimental sex-ratio condition, suggesting that males exhibited similar patterns of reproductive senescence, independently of their reproductive effort. Male reproductive senescence might also be revealed by the production of lower quality offspring with age [73], which was not tested in this study. In addition, males from the 75%-male sex-ratio condition showed longer longevity. This phenomenon may have two main explanations, consistent with predictions linked to the peculiar mating behaviour of T. molitor. On the one hand, competition for females in that sex-ratio condition may not have been very strong or costly. Under a high risk of sexual competition, male reproductive success may depend on investment into pre-copulatory (e.g., courtship and aggressive behaviours with other males) and/or post-copulatory (e.g., mate guarding) behaviours to limit sperm competition [74]. So far, T. molitor males have never been reported to engage in physical contests either before or after copulation [38], and current evidence suggests that male-male competition is unlikely to bear strong costs [38, 41, 75, 76]. Our experimental design nevertheless did not allow us to verify these assumptions. On the other hand, each mating event is costly for T. molitor males, because nutrient-rich spermatophores are transferred to females in addition to the sperm [42]. Since, on average, males in the 75%-male sex-ratio condition may have copulated less frequently than males in the other sex-ratio conditions, they may have saved resources that contributed to their longer survival. By contrast, males from the 25%-male sex-ratio condition likely performed more mating events than males in the other sex-ratio conditions, but this was apparently not costly enough to significantly impair their longevity compared to the 50%-male sex-ratio condition.
Our results suggest that males in this 25%-male sex-ratio condition paid a functional cost for their higher reproductive effort, especially in terms of immunity. Indeed, males in the 25%-male sex-ratio condition showed a reduced immune activity possibly resulting from their higher reproductive effort. First, males in the 25%-male sex-ratio condition had reduced PO activity despite having a concentration of total phenoloxidase enzymes in their haemolymph similar to that of males of the other sex-ratio conditions. This lower PO activity was constant over time and contrasted with that of males of the two other sex-ratio conditions, for which the temporal pattern of PO activity resembled that of females (high levels in early weeks, decline at week 6, and re-increase at week 12). Since mating activity is known to transiently reduce PO activity in T. molitor [52], such a down-regulation of PO activity in males of the 25%-male sex-ratio condition might reflect their higher mating activity. Higher secretion of juvenile hormone might be involved in mediating mating-induced PO activity depression [52], which could contribute to reducing longevity [77]. Juvenile hormone also prevents the release of cytotoxic substances by active PO enzymes that could reduce insect longevity by self-damaging host tissues and organs [47, 49,50,51, 78]. These combined effects may have contributed to the observed absence of difference between the survival of males in the 25% and the 50%-male sex-ratio conditions. Second, as observed for females, the proportion of males exhibiting positive antibacterial activity in their haemolymph increased with age in all the sex-ratio conditions. As stated earlier, this was expected as the probability of having been challenged by opportunistic microbes increases with age. However, the size of the zones of inhibition observed from the haemolymph of males in the 25%-male sex-ratio condition was significantly smaller than that of males in the other sex-ratio conditions, suggesting that males in the 25%-male sex-ratio condition produced fewer antibacterial peptides than the other males. As mating activity, through the production and transfer of spermatophores, is particularly resource-demanding for males in terms of protein content [42], the higher mating activity of males in the 25%-male sex-ratio condition could have depleted the protein resources necessary to produce as many antibacterial peptides as in the other sex-ratio conditions. Mating may mediate such a trade-off through juvenile hormone secretion, which functions to switch on physiological processes associated with gametogenesis and spermatophore production [79].
Manipulating the sex-ratio of artificial populations of T. molitor had important impacts on the reproductive effort of females and males, but resulted in contrasting sex-specific trade-offs on demographic and immune traits. Increasing female reproductive effort did not affect immunity but strongly reduced longevity. Not surprisingly, females may then maximize fitness by a moderate early investment into reproduction and by longevity. While decreasing male reproductive effort enhanced longevity, increasing it impaired immunity. Males may therefore favour reproduction at the expense of their immunity when given the opportunity to increase their reproductive effort. This is in line with Bateman's principle applied to immunity, where males gain fitness by increasing reproductive effort at the expense of immunity [80]. It is also consistent with the disposable soma theory of ageing, as reproduction compromises somatic protection [3, 4]. Nevertheless, our results also suggest that sexual competition in T. molitor is not a strong modulator of the male reproductive strategy towards early mating opportunities [81]. Basically, as in females, most of the cost of reproduction in males results from multiple copulations. This thus contrasts with the hypothesis that males should gain fitness by increasing mating success through investment in sexual competition at the expense of longevity [82, 83]. Since longevity is a key life-history trait for both males and females of T. molitor, sex-specific patterns of actuarial senescence are not expected to evolve in this species. Accordingly, males and females showed similar patterns of survival with age in populations with a balanced sex-ratio. Our results may further indirectly suggest that divergent actuarial senescence between males and females should evolve in species in which males invest strongly in sexual competition [11, 22].
All data files will be available from the Dryad Digital database at https://doi.org/10.5061/dryad.cvdncjt11.
AICc:
Corrected Akaike Information Criterion
l:
Number of larvae
ML:
Mean Lifespan
PO:
Phenoloxidase
ProPO:
ProPhenoloxidase
RE:
Reproductive Effort
REML:
REstricted Maximum Likelihood
Stearns SC. The evolution of life histories. Oxford: Oxford University Press; 1992.
Hughes KA, Reynolds RM. Evolutionary and mechanistic theories of aging. Annu Rev Entomol. 2005;50:421–45.
Williams GC. Natural selection, the costs of reproduction, and a refinement of Lack's principle. Am Nat. 1966;100:687–90.
Kirkwood TB, Rose MR. Evolution of senescence: late survival sacrificed for reproduction. Philos Trans R Soc Lond Ser B Biol Sci. 1991;332:15–24.
Jones OR, Scheuerlein A, Salguero-Gómez R, Camarda CG, Schaible R, Casper BB, et al. Diversity of ageing across the tree of life. Nature. 2014;505:169–73.
Jones OR, Gaillard J-M, Tuljapurkar S, Alho JS, Armitage KB, Becker PH, et al. Senescence rates are determined by ranking on the fast-slow life-history continuum. Ecol Lett. 2008;11:664–73.
Rodríguez-Muñoz R, Boonekamp JJ, Liu XP, Skicko I, Fisher DN, Hopwood P, et al. Testing the effect of early-life reproductive effort on age-related decline in a wild insect. Evolution. 2019;73:317–28.
van Noordwijk AJ, de Jong G. Acquisition and allocation of resources: their influence on variation in life history tactics. Am Nat. 1986;128:137–42.
Bleu J, Gamelon M, Sæther B-E. Reproductive costs in terrestrial male vertebrates: insights from bird studies. Proc R Soc B Biol Sci. 2016;283:20152600.
Hamel S, Gaillard J-M, Yoccoz NG, Loison A, Bonenfant C, Descamps S. Fitness costs of reproduction depend on life speed: empirical evidence from mammalian populations. Ecol Lett. 2010;13:915–35.
Lemaître J-F, Gaillard J-M, Pemberton JM, Clutton-Brock TH, Nussey DH. Early life expenditure in sexual competition is associated with increased reproductive senescence in male red deer. Proc R Soc B Biol Sci. 2014;281:20140792.
Tidière M, Gaillard J-M, Müller DWH, Lackey LB, Gimenez O, Clauss M, et al. Does sexual selection shape sex differences in longevity and senescence patterns across vertebrates? A review and new insights from captive ruminants. Evolution. 2015;69:3123–40.
Walker WF. Sperm utilization strategies in nonsocial insects. Am Nat. 1980;115:780–99.
Dewsbury DA. Ejaculate cost and male choice. Am Nat. 1982;119:601–10.
Andersson MB. Sexual selection. Princeton: Princeton University Press; 1994.
Arnqvist G, Rowe L. Sexual conflict. Princeton: Princeton University Press; 2005.
Hosken DJ. Sex and death: microevolutionary trade-offs between reproductive and immune investment in dung flies. Curr Biol. 2001;11:R379–80.
Stockley P. Sexual conflict resulting from adaptations to sperm competition. Trends Ecol Evol. 1997;12:154–9.
Jennions MD, Petrie M. Why do females mate multiply? A review of the genetic benefits. Biol Rev Camb Philos Soc. 2000;75:21–64.
Harshman LG, Zera AJ. The cost of reproduction: the devil in the details. Trends Ecol Evol. 2007;22:80–6.
Fortin M, Meunier J, Laverré T, Souty-Grosset C, Richard F-J. Joint effects of group sex-ratio and Wolbachia infection on female reproductive success in the terrestrial isopod Armadillidium vulgare. BMC Evol Biol. 2019;19:65.
Lemaître J-F, Berger V, Bonenfant C, Douhard M, Gamelon M, Plard F, et al. Early-late life trade-offs and the evolution of ageing in the wild. Proc R Soc B Biol Sci. 2015;282:20150209.
DeVeale B, Brummel T, Seroude L. Immunity and aging: the enemy within? Aging Cell. 2004;3:195–208.
Shanley DP, Aw D, Manley NR, Palmer DB. An evolutionary perspective on the mechanisms of immunosenescence. Trends Immunol. 2009;30:374–81.
Norris K, Evans MR. Ecological immunology: life history trade-offs and immune defense in birds. Behav Ecol. 2000;11:19–26.
Owens IPF, Wilson K, Owens IPF, Wilson K. Immunocompetence: a neglected life history trait or conspicuous red herring? Trends Ecol Evol. 1999;14:170–2.
Zera AJ, Harshman LG. The physiology of life history trade-offs in animals. Annu Rev Ecol Syst. 2001;32:95–126.
Fedorka KM, Zuk M, Mousseau TA. Immune suppression and the cost of reproduction in the ground cricket, Allonemobius socius. Evolution. 2004;58:2478–85.
Simmons LW, Roberts B. Bacterial immunity traded for sperm viability in male crickets. Science. 2005;309:2031.
French SS, DeNardo DF, Moore MC. Trade-offs between the reproductive and immune systems: facultative responses to resources or obligate responses to reproduction? Am Nat. 2007;170:79–89.
Drummond-Barbosa D, Spradling AC. Stem cells and their progeny respond to nutritional changes during Drosophila oogenesis. Dev Biol. 2001;231:265–78.
Greenman CG, Martin LB, Hau M. Reproductive state, but not testosterone, reduces immune function in male house sparrows (Passer domesticus). Physiol Biochem Zool. 2005;78:60–8.
Schwenke RA, Lazzaro BP, Wolfner MF. Reproduction-immunity trade-offs in insects. Annu Rev Entomol. 2016;61:239–56.
Rodríguez-Muñoz R, Boonekamp JJ, Fisher D, Hopwood P, Tregenza T. Slower senescence in a wild insect population in years with a more female-biased sex ratio. Proc Biol Sci. 2019;286:20190286.
Tompkins EM, Anderson DJ. Sex-specific patterns of senescence in Nazca boobies linked to mating system. J Anim Ecol. 2019;88:986–1000.
Clutton-Brock TH, Parker GA. Potential reproductive rates and the operation of sexual selection. Q Rev Biol. 1992;67:437–56.
Klug H, Heuschele J, Jennions MD, Kokko H. The mismeasurement of sexual selection. J Evol Biol. 2010;23:447–62.
Carazo P, Font E, Alfthan B. Chemosensory assessment of sperm competition levels and the evolution of internal spermatophore guarding. Proc Biol Sci. 2007;274:261–7.
Gage MJG, Baker RR. Ejaculate size varies with socio-sexual situation in an insect. Ecol Entomol. 1991;16:331–7.
Gadzama NM, Happ GM. The structure and evacuation of the spermatophore of Tenebrio molitor L. (Coleoptera: Tenebrionidae). Tissue Cell. 1974;6:95–108.
Drnevich JM. Number of mating males and mating interval affect last-male sperm precedence in Tenebrio molitor L. Anim Behav. 2003;66:349–57.
Carver FJ, Gilman JL, Hurd H. Spermatophore production and spermatheca content in Tenebrio molitor infected with Hymenolepis diminuta. J Insect Physiol. 1999;45:565–9.
Dick J. Oviposition in certain Coleoptera. Ann Appl Biol. 1937;24:762–96.
Drnevich JM, Papke RS, Rauser CL, Rutowski RL. Material benefits from multiple mating in female mealworm beetles (Tenebrio molitor L.). J Insect Behav. 2001;14:215–30.
Vigneron A, Jehan C, Rigaud T, Moret Y. Immune defenses of a beneficial pest: the mealworm beetle, Tenebrio molitor. Front Physiol. 2019;10. https://doi.org/10.3389/fphys.2019.00138.
Nappi AJ, Ottaviani E. Cytotoxicity and cytotoxic molecules in invertebrates. Bioessays. 2000;22:469–80.
Sadd BM, Siva-Jothy MT. Self-harm caused by an insect's innate immunity. Proc Biol Sci. 2006;273:2571–4.
Pursall ER, Rolff J. Immune responses accelerate ageing: proof-of-principle in an insect model. PLoS One. 2011;6:e19972.
Daukšte J, Kivleniece I, Krama T, Rantala MRJ, Krams IA. Senescence in immune priming and attractiveness in a beetle. J Evol Biol. 2012;25:1298–304.
Krams I, Daukšte J, Kivleniece I, Kaasik A, Krama T, Freeberg TM, et al. Trade-off between cellular immunity and life span in mealworm beetles Tenebrio molitor. Curr Zool. 2013;59:340–6.
Khan I, Agashe D, Rolff J. Early-life inflammation, immune response and ageing. Proc R Soc B Biol Sci. 2017;284:20170125.
Rolff J, Siva-Jothy MT. Copulation corrupts immunity: a mechanism for a cost of mating in insects. PNAS. 2002;99:9916–8.
We thank G. Sorci for critical comments on the manuscript, and C. Sabarly, A. Salis and M. Teixeira Brandao for technical assistance.
This study was funded by the CNRS and by a grant from the ANR (ANR-14-CE02-0009). The funding agencies contributed strictly financially to the research.
UMR CNRS 6282 BioGéoSciences, Équipe Écologie Évolutive, Université Bourgogne-Franche Comté, Dijon, France
Charly Jehan, Manon Chogne, Thierry Rigaud & Yannick Moret
CJ, TR and YM conceived the ideas and designed the methodology; CJ, MC and YM collected the data; CJ, TR and YM analysed the data; and CJ, TR and YM led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication.
Correspondence to Charly Jehan or Yannick Moret.
Table S1. AICc values for the models presented in Tables 2 and 3. The models in bold are those presented in the tables. Figure S1. Male and female age-specific mortality rates over time for each sex-ratio condition. Arrows indicate the time at which a 50% mortality rate is first reached, and dashed lines indicate when 50% of the population is dead. Values are means among replicates ± s.e.m. Figure S2. Details of variation in fertility in females (A) and males (B). Values are means among replicates ± s.e.m. Figure S3. Physiological parameters: females in light grey and males in black. Body condition, PO activity over time, total-PO activity over time and by sex-ratio condition, haemocyte count, proportion of individuals producing antibacterial activity, and diameter of the inhibition zone over time and by sex-ratio condition. Values are means among replicates ± s.e.m.
Jehan, C., Chogne, M., Rigaud, T. et al. Sex-specific patterns of senescence in artificial insect populations varying in sex-ratio to manipulate reproductive effort. BMC Evol Biol 20, 18 (2020). https://doi.org/10.1186/s12862-020-1586-x
Disposable soma theory
Immuno-senescence
Tenebrio molitor
Evolutionary ecology and behaviour | CommonCrawl |
CoeViz: a web-based tool for coevolution analysis of protein residues
Frazier N. Baker1,2 & Aleksey Porollo2,3 (ORCID: orcid.org/0000-0002-3202-5099)
BMC Bioinformatics volume 17, Article number: 119 (2016)
Proteins generally perform their function in a folded state. Residues forming an active site, whether it is a catalytic center or interaction interface, are frequently distant in a protein sequence. Hence, traditional sequence-based prediction methods focusing on a single residue (or a short window of residues) at a time may have difficulties in identifying and clustering the residues constituting a functional site, especially when a protein has multiple functions. Evolutionary information encoded in multiple sequence alignments is known to greatly improve sequence-based predictions. Identification of coevolving residues further advances the protein structure and function annotation by revealing cooperative pairs and higher order groupings of residues.
We present a new web-based tool (CoeViz) that provides a versatile analysis and visualization of pairwise coevolution of amino acid residues. The tool computes three covariance metrics (mutual information, the chi-square statistic, and Pearson correlation) and one conservation metric (joint Shannon entropy). Implemented adjustments of covariance scores include phylogeny correction, corrections for sequence dissimilarity and alignment gaps, and the average product correction. Visualization of residue relationships is enhanced by hierarchical cluster trees, heat maps, circular diagrams, and residue highlighting in the protein sequence and 3D structure. Unlike other existing tools, CoeViz is not limited to analyzing conserved domains or protein families and can process long, unstructured, and multi-domain proteins thousands of residues long. Two examples are provided to illustrate the use of the tool for identification of residues (1) involved in enzymatic function, (2) forming short linear functional motifs, and (3) constituting a structural domain.
CoeViz represents a practical resource for a quick sequence-based protein annotation for molecular biologists, e.g., for identifying putative functional clusters of residues and structural domains. CoeViz also can serve computational biologists as a resource of coevolution matrices, e.g., for developing machine learning-based prediction models. The presented tool is integrated in the POLYVIEW-2D server (http://polyview.cchmc.org/) and available from resulting pages of POLYVIEW-2D.
Protein folding and function are determined by groups of amino acid residues, which are usually located distantly in the sequence but tend to appear in spatial proximity. Sequence-based identification of residues critical in protein structure or function is a long standing problem in structural bioinformatics. On the other hand, demand for sequence-based annotations has been increasing in the age of modern high-throughput genome and transcriptome sequencing.
Both protein structure and functional site prediction methods utilize evolutionary information derived from a multiple sequence alignment (MSA) usually with the focus on individual residues. At the same time, cooperative nature of protein folding and function determined by groups of residues distant in sequence prompted many studies for identification of coevolving residues from the MSA. Earlier methods identified correlated mutations using mutual information [1, 2], Pearson correlation coefficient (also known as McBASC) [3–6], χ2 statistic (also known as OMES) [7], and two-state maximum likelihood [8]. An alternative approach was to express amino acid covariance using a statistical coupling energy (∆∆G) defined as the difference in "free energy" between the full sequence alignment and subalignment (also known as statistical coupling analysis, SCA) [9], which was later updated to simplify the definition of ∆∆G [10, 11]. The more recent advanced methods utilize approaches from statistical physics to discriminate direct and indirect correlations (direct-coupling analysis, DCA) [12, 13], with further improvements by introducing the inverse Potts model algorithm and a pseudolikelihood maximization procedure (plmDCA) [14]. Another recent method, PSICOV, employs sparse inverse covariance estimation to identify true covariation signal in the MSA [15].
Sequence databases that are used to generate MSA may present considerable overrepresentation of some species compared to others, a human-introduced bias driven by research interests. Therefore, many sequences may be derived from closely related species that did not have time to diverge to represent truly independent sequences from the same protein family. This effect is called phylogenetic noise or bias. One of the major challenges in coevolution analysis is to reduce this noise from the MSA. Earlier approaches were to weigh contribution of each aligned sequence by its sequence identity to a query protein or by the number of gaps in the alignment. Modern methods introduce a separate procedure to account for phylogenetic bias in the MSA mitigating the influence of the multiple closely related sequences (see, e.g., MirrorTree [16], CAPS [17], DCA [13], PSICOV [15]). These procedures are estimated to take most of the computational time in the overall coevolution analysis [18]. An alternative fast approach for improving mutual information without considering explicitly the phylogeny in the MSA was suggested by adjusting the covariance metric with the average product correction (APC) [19].
Recent successful examples of utilizing the coevolving residues include predictions of inter- and intra-protein residue-residue contacts [20–22], and prediction of mutation effects [23]. Further reading on the methods for identification of coevolving residues in proteins and their various applications can be found in recent reviews [18, 24]. Collectively, despite the apparent advantages of coevolution analysis methods, which greatly facilitate protein modeling and functional annotation, certain limitations keep biologists from using these methods widely, including the need for considerable computational resources and restrictions to relatively short proteins or conserved domains.
CoeViz was developed to provide molecular biologists with a web-based tool that can deal with proteins thousands of residues long enabling a fast, automated, and interactive analysis of coevolution data derived using a variety of covariance metrics and different corrections. The tool provides versatile means to identify and visualize inter-residue contacts and groups of residues involved in the same function. Two examples are presented to illustrate identification of the residues constituting (1) a catalytic site in Cys-Gly metallodipeptidase (SwissProt: DUG1_YEAST), and (2) functional linear motifs and repeats in the APC/C activator protein Cdc20 (SwissProt: CDC20_YEAST).
Coevolution and conservation metrics
Unless the MSA for a given protein is provided by the user, alignments are generated on the server side using three iterations of PSI-BLAST [25] with a profile-inclusion threshold of expect (e)-value = 0.001 and the number of aligned sequences set to 2000. The sequence homology search can be done against the Pfam [26] or NCBI NR databases. The latter is offered in three variants: the full database, or versions reduced to 90 % or 70 % sequence identity by CD-HIT [27]. While PSI-BLAST generates local alignments, coevolution metrics are still computed from them because (1) refinement by global alignments can be very computationally intensive for thousands of sequences; (2) global alignment algorithms may fail for multi-domain proteins (especially for homologs with an alternative order of the domains); and (3) local alignments are sufficient for coevolution analysis, as illustrated in [13].
Coevolution scores are computed from the MSA using three different covariance metrics: mutual information (MI, Eq. 1) [2], chi-square statistic (χ2, Eq. 2) [7], and Pearson correlation (r, Eq. 3). Conservation is defined by the joint Shannon entropy (S, Eq. 4). Each metric, in turn, is computed using four weighting schemes: weighted by sequence dissimilarity or sequence gapping in the alignment (Eqs. 5 and 6), by phylogeny background as defined in [13] (Eq. 7), and non-weighted. MI scores have an additional adjustment using the average product correction (APC, Eq. 8) to produce MIp scores (Eq. 9) [19]. All metrics based on frequencies are computed using four states as possible combinations of amino acids at two positions (i and j), where each amino acid is either equal (X) or not equal (!X) to the one in the query sequence.
$$MI(i,j)=\sum_x\sum_y p_{ij}(x,y)\,\log\frac{p_{ij}(x,y)}{p_i(x)\,p_j(y)} \qquad (1)$$
$$\chi^2(i,j)=\sum_x\sum_y\frac{\left(p_{ij}(x,y)-p_i(x)\,p_j(y)\right)^2}{p_i(x)\,p_j(y)} \qquad (2)$$
$$r(i,j)=\frac{1}{N_{eff}}\sum_l\frac{w_{sl}\left(s_{il}-\overline{s}_i\right)\left(s_{jl}-\overline{s}_j\right)}{\sigma_i\sigma_j} \qquad (3)$$
$$S(i,j)=-\sum_x\sum_y p_{ij}(x,y)\,\log p_{ij}(x,y) \qquad (4)$$
$$p(s)=\frac{w_{sl}}{N_{eff}+\lambda} \qquad (5)$$
$$N_{eff}=\sum_l w_{sl} \qquad (6)$$
$$w_a^{ph}=1\,/\,\left|\left\{\,b\in\{1,\dots,N\}\;\middle|\;seqid(A^a,A^b)>80\%\,\right\}\right| \qquad (7)$$
where x = {X; !X} and y = {Y; !Y}; p(s) is the observed frequency of state s = {x; y; x,y}; N_eff is the effective sum of weights over the aligned sequences in which neither position is a gap. w_sl is a weighted count of state s in alignment l, equal to 1 for non-weighted scores, to 1 − (percent sequence identity) or 1 − (percent of gaps) of alignment l for weighting by sequence dissimilarity or alignment gapping, respectively, and to w_a^ph for weighting by phylogeny. w_a^ph is the weight of sequence A^a in an MSA of N sequences and equals one over the number of sequences A^b in the MSA that have at least 80 % sequence identity to A^a; 80 % was chosen as the midpoint of the range 70–90 %, where no strong dependence on the precise threshold value is observed [13]. s_il is a similarity score that quantifies the change of the amino acid at position i to the one in the aligned sequence l. s̄_i and σ_i are the mean and standard deviation, respectively, of the similarity scores of changes at a given position across all sequences aligned to the query. Similarity scores are taken from the position-specific similarity matrix (PSSM) generated by PSI-BLAST. λ is a pseudo count, equal to 1 for all metrics here.
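As a concrete illustration of the phylogeny weighting of Eq. 7, the short sketch below computes w_a^ph for every sequence of an MSA. This is a minimal illustration rather than the actual CoeViz code: the name msa and its layout (a character matrix with rows = aligned sequences and columns = positions) are assumptions, and sequence identity is computed naively over all columns.

```r
# Minimal sketch of the phylogeny weights of Eq. 7 (assumed input: 'msa', a character
# matrix with rows = aligned sequences and columns = alignment positions).
phylo_weights <- function(msa) {
  N <- nrow(msa)
  sapply(seq_len(N), function(a) {
    # fraction of identical columns between sequence a and every sequence b
    ident <- sapply(seq_len(N), function(b) mean(msa[a, ] == msa[b, ]))
    1 / sum(ident > 0.80)   # one over the number of sequences >80% identical to A^a
  })
}
```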
$$APC(a,b)=\frac{MI(a,\overline{x})\,MI(b,\overline{x})}{\overline{MI}} \qquad (8)$$
$$MIp(a,b)=MI(a,b)-APC(a,b) \qquad (9)$$
where MI(a, x̄) is the mean MI of column a, and the overall mean MI appears in the denominator of Eq. 8.
Negative values of MIp scores are assigned to 0, and then all MI scores are min-max normalized to range [0, 1]. S is normalized to the same range by factor 1/log (4). χ2 values are converted to the corresponding cumulative probabilities at degree of freedom (df) = 1.
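To make Eqs. 1, 8 and 9 concrete, the following sketch computes the non-weighted two-state MI for every column pair of an alignment and then applies the APC correction and the min-max normalization described above. It is an illustration under simplifying assumptions (the matrix msa, the gap character '-', and the query taken as the first row are placeholders), not the CoeViz implementation itself.

```r
# Non-weighted two-state MI (Eq. 1) for one column pair, with the pseudo count of Eq. 5.
mi_pair <- function(msa, i, j, lambda = 1) {
  keep <- msa[, i] != "-" & msa[, j] != "-"   # sequences with no gap at either position
  x <- msa[keep, i] == msa[1, i]              # state X / !X relative to the query residue
  y <- msa[keep, j] == msa[1, j]
  n <- sum(keep) + lambda
  mi <- 0
  for (xs in c(TRUE, FALSE)) for (ys in c(TRUE, FALSE)) {
    pxy <- sum(x == xs & y == ys) / n
    px  <- sum(x == xs) / n
    py  <- sum(y == ys) / n
    if (pxy > 0) mi <- mi + pxy * log(pxy / (px * py))
  }
  mi
}

# APC (Eq. 8), MIp (Eq. 9), negatives set to 0, then min-max normalization to [0, 1].
mip_matrix <- function(msa) {
  L <- ncol(msa)
  MI <- matrix(0, L, L)                       # diagonal left at 0 for simplicity
  for (i in seq_len(L - 1)) for (j in (i + 1):L)
    MI[i, j] <- MI[j, i] <- mi_pair(msa, i, j)
  apc <- outer(colMeans(MI), colMeans(MI)) / mean(MI)
  MIp <- pmax(MI - apc, 0)
  MIp / max(MIp)
}
```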
Scores for each metric are organized in symmetrical matrices with the main diagonal presenting plain or weighted frequencies, as defined above, of each individual residue for the MI- and χ2-based metrics, and the individual Shannon entropies over 20 states (20 amino acids) for the S-based metric. Individual entropies are computed using the probability part of the PSSM files from the PSI-BLAST output and normalized to the range [0, 1] by a factor of 1/log(20). Residues of the query protein are clustered using hierarchical clustering with the complete linkage method. Prior to clustering, negative r scores are assigned to 0; MI, r, and χ2 scores are converted to distances by the 1 − score transformation. Both the clustering and the conversion of χ2 to cumulative probabilities are performed using the R statistical package (functions hclust and pchisq, respectively).
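The distance conversion and clustering just described can be reproduced with the same R functions mentioned above (hclust and pchisq); in the sketch below, scores (a covariance matrix already scaled to [0, 1]) and chisq_values (raw χ2 statistics) are placeholder names:

```r
# Convert covariance scores to distances and cluster with complete linkage.
scores[scores < 0] <- 0                    # negative r scores are assigned to 0
d    <- as.dist(1 - scores)                # the 1 - score transformation
tree <- hclust(d, method = "complete")
plot(tree)                                 # review the hierarchical cluster tree

# chi-square statistics are converted to cumulative probabilities at df = 1
p <- pchisq(chisq_values, df = 1)
```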
The web interface for coevolution analysis (CoeViz) is implemented as part of the protein visualization server POLYVIEW-2D [28] that shows CoeViz as an option for the further sequence-based analysis from its resulting pages (Fig. 1). CoeViz accounts for a custom residue numeration (e.g., non-consecutive or with insertion codes), which is common for proteins deposited in Protein Databank (PDB, [29]). A request for analysis initiates MSA and coevolution calculations on the server side that may take from minutes to hours depending on the query sequence length, size of the generated MSA, and load of the computing cluster. Once all scores for a requested metric with different weighting schemes are computed, the subsequent analysis, visualization, and switching between the adjustments for the given metric are conducted in real time.
A flowchart of CoeViz. Protein data are submitted as defined in the POLYVIEW-2D server ([28], http://polyview.cchmc.org/polyview_doc.html), which includes PDB-formatted coordinate files, output from the sequence-based prediction servers, or custom sequence profiles. At the protein visualization page, there is an option provided to request analysis of covariance of amino acids (CoeViz). The user can choose a covariance metric and a database to generate the MSA or provide a file with the constructed MSA. CoeViz computes a requested covariance or conservation metric with all implemented adjustments separately and performs hierarchical clustering. Once calculations are completed, CoeViz provides an interactive web-interface to review covariance data using heatmaps, circular diagrams, and clustering trees. From the circular diagrams, the user has options to map identified correlated amino acids to a protein 3D structure or sequence depending on the input data. All generated results can be exported in text or graphics formats
The computed data can be interactively explored using heat maps at different zoom levels. The color gradient is from blue (0 = no covariation) through white (0.5 = moderate covariance) to red (1 = complete covariance) for MI-, r-, and χ2-based metrics, whereas for joint entropy it is blue (1 = no joint conservation) through white to red (0 = complete joint conservation). Cluster trees are static; however, the cluster tree image is automatically updated when a different adjusted metric is chosen. In addition to residue labeling, the cluster tree leaves are colored according to hydropathic properties of amino acids, which may facilitate identification of clusters of hydrophobic or charged residues. The color convention follows the previous definition in POLYVIEW-2D and can be found on its documentation web-page. Residue groupings can also be reviewed through interactive circular diagrams. These diagrams allow for navigation based on residue relationships, rather than on position within the sequence. Once a set of related residues is defined on the diagram, they can be automatically mapped to the protein 3D structure using the Jmol applet [30] or POLYVIEW-3D server [31] if the input to POLYVIEW-2D was a protein coordinate file (e.g., from PDB). Otherwise, they can only be mapped to a protein sequence using POLYVIEW-2D [28].
The interactive web interface utilizes D3 [32] and Aight (https://github.com/shawnbot/aight) JavaScript libraries. Data export options include images of cluster trees (in the PNG format), a current view of the heat map (PNG), and relational circular diagrams (SVG). All generated matrices with coevolution scores, as well as the underlying MSA, can be exported in tab-separated text format.
Figure 2 illustrates how CoeViz can help identify functionally important residues using a peptidase from baker's yeast (SwissProt: DUG1_YEAST) as an example. Dug1p is a Cys-Gly dipeptidase and belongs to the M20A family of metallopeptidases [33]. The enzyme requires two Zinc ions in the active site to cleave the substrate. Based on χ2 scores weighted by sequence dissimilarity, residues binding Zn (H102, D137, E172, H450) and a catalytic residue (E171) are clustered together (Fig. 2b). Interestingly, R348 is in the same cluster (Fig. 2c). When the residues are mapped to 3D structure available in Protein Databank (PDB:4G1P), where the enzyme is co-crystallized with the substrate and Zn ions in the active site, R348 appears to be on the opposite side of the active site cavity and in contact with the substrate (Fig. 2e) suggesting its role in substrate recognition and positioning the dipeptide into the catalytic center. On the other hand, when the closest relationships are reviewed for residue E171, all the functional residues, Zn binding and catalytic, appear on the diagram (Fig. 2d).
Amino acid coevolution profile reveals residues constituting the active site of the Cys-Gly metallodipeptidase (SwissProt: DUG1_YEAST). a A fragment of the heat map displaying amino acid coevolution computed using χ2 weighted by sequence dissimilarity derived from sequence alignments to the protein sequence defined in PDB ID 4G1P against NR database with 90 % identity reduction. b A fragment of the cluster tree derived from the chi-square data converted to a distance matrix. c The zoomed in cluster of amino acids that contains known Zn binding residues (H102, D137, E172, H450) and a catalytic site (E171). d From the heat map, one can retrieve a circular diagram representing the closest relationships to a given residue; here is to the one of catalytic residues (E171) after applying a ≥0.3 cutoff to χ2-based cumulative probabilities. e From the circular diagram, one can map the clustered residues to the submitted protein 3D structure; here is to DUG1 (PDB:4G1P). Residues highlighted red (H102, D137, E172, D200, H450) are amino acids binding Zn (grey spheres); magenta – catalytic residues (D104, E171); blue is a residue involved in substrate recognition (R348). The substrate (Cys-Gly) is rendered as sticks colored by an atom type
The same protein structure was submitted to the ConSurf server [34] to see whether it could identify the catalytic site. Out of 480 residues, 150 were found to be highly conserved (score 9), the majority of which are in the protein core and most likely involved in protein folding rather than function. These results illustrate the limits of single-residue conservation-based methods in identifying functional sites: they cannot distinguish functionally important residues from structural determinants.
Figure 3 demonstrates how CoeViz can facilitate identification of functional linear motifs and structural domains, using the anaphase-promoting complex/cyclosome (APC/C) activator protein Cdc20 from baker's yeast (SwissProt: CDC20_YEAST) as an example. It regulates the ubiquitin ligase activity and substrate specificity of APC/C (see UniProt:P26309 for references). According to the UniProt annotation, Cdc20 comprises 7 WD structural repeats and the following linear motifs: D-box (17-RSVLSIASP-25), bipartite nuclear localization signal (NLS, 85-RRDSSFFKDEFDAKKDK-101), C-box (144-DRYIPIL-150), and KEN-box (586-KENRSKN-592). As can be seen from the secondary structure (SS) prediction by SABLE [35], Cdc20 contains only one structural domain, formed by the WD repeats (Fig. 3a). The functional motifs are located in disordered (coil) regions of the protein and would therefore be missed by other, domain/family-profile-oriented coevolution approaches, since the MSA would not cover those regions.
Amino acid coevolution profile reveals residues constituting a structural domain and locations of the functional linear motifs in Cdc20 (SwissProt: CDC20_YEAST). a SS prediction by SABLE visualized by POLYVIEW-2D with residues highlighted in functional motifs and a structural domain: red – residues constituting D- and KEN-boxes; green–residues in the bipartite NLS; blue–C-box; residues with bold face are in the WD-repeats domain. Keys for graphical SS elements can be found in the POLYVIEW-2D documentation. b A full heat map displaying amino acid coevolution computed using MI weighted by phylogeny and derived from sequence alignments to the protein sequence defined in UniProt:P26309 against the whole NR database. Boundaries of the WD domain and functional motifs, as defined in UniProt, are highlighted with green lines. c A zoom-in view of the heat map fragment centered on D-box. d A zoom-in view of the heat map fragment centered on C-box. e A zoom-in view of the heat map fragment showing the upper-left corner of the WD-repeats domain
ProSite [36], one of the prominent resources for protein sequence annotation, finds only 4 WD repeats in the sequence and none of the motifs mentioned above. CoeViz, on the other hand, with the MI metric adjusted for phylogenetic noise, reveals the boundaries of the WD-repeats domain and the locations of the D- and C-boxes (Fig. 3b-e). It has been observed that short linear functional motifs are more conserved than their flanking (or adjacent) residues or than the same motifs in non-functional instances (see review [37]). We suggest that coevolutionary information may amplify this signal because of the cooperative nature of these motifs, where more than one residue needs to be conserved to perform the function. However, this analysis is beyond the scope of this work.
Coevolution analysis can facilitate finding groups of residues involved in the same function or in domain folding. CoeViz both computes a number of coevolution and conservation metrics and provides an interactive interface to analyze the data and identify relevant clusters of residues. The problem of potential phylogenetic bias in the MSA is addressed in a number of ways, including the use of sequence databases with reduced redundancy, explicit phylogeny correction for similar sequences, and the average product correction for mutual information. The tool represents a practical resource for a quick sequence-based protein annotation for molecular biologists, e.g., for identifying putative functional regions and structural domains. CoeViz also can serve computational biologists as a resource of coevolution matrices, e.g., for developing machine learning-based prediction models.
Project name: CoeViz
Project home page: http://polyview.cchmc.org/
Operating system: Platform independent
Programming languages: Perl, JavaScript, R
Other requirements: A web-browser supporting the HTML5 standard
License: Free for all users
Any restrictions to use by non-academics: None
APC: average product correction
DCA: direct coupling analysis
MSA: multiple sequence alignment
NCBI: National Center for Biotechnology Information
NLS: nuclear localization signal
PSSM: position specific scoring matrix
SCA: statistical coupling analysis
SS: secondary structure
Korber BT, Farber RM, Wolpert DH, Lapedes AS. Covariation of mutations in the V3 loop of human immunodeficiency virus type 1 envelope protein: an information theoretic analysis. Proc Natl Acad Sci U S A. 1993;90(15):7176–80.
Clarke ND. Covariation of residues in the homeodomain sequence family. Protein Sci. 1995;4(11):2269–78. doi:10.1002/pro.5560041104.
Gobel U, Sander C, Schneider R, Valencia A. Correlated mutations and residue contacts in proteins. Proteins. 1994;18(4):309–17. doi:10.1002/prot.340180402.
Neher E. How frequent are correlated changes in families of protein sequences? Proc Natl Acad Sci U S A. 1994;91(1):98–102.
Pazos F, Helmer-Citterich M, Ausiello G, Valencia A. Correlated mutations contain information about protein-protein interaction. J Mol Biol. 1997;271(4):511–23.
Yip KY, Patel P, Kim PM, Engelman DM, McDermott D, Gerstein M. An integrated system for studying residue coevolution in proteins. Bioinformatics. 2008;24(2):290–2.
Larson SM, Di Nardo AA, Davidson AR. Analysis of covariation in an SH3 domain sequence alignment: applications in tertiary contact prediction and the design of compensating hydrophobic core substitutions. J Mol Biol. 2000;303(3):433–46.
Pollock DD, Taylor WR, Goldman N. Coevolving protein residues: maximum likelihood identification and relationship to structure. J Mol Biol. 1999;287(1):187–98.
Lockless SW, Ranganathan R. Evolutionarily conserved pathways of energetic connectivity in protein families. Science. 1999;286(5438):295–9. doi:7890 [pii].
Dekker JP, Fodor A, Aldrich RW, Yellen G. A perturbation-based method for calculating explicit likelihood of evolutionary co-variance in multiple sequence alignments. Bioinformatics. 2004;20(10):1565–72.
Fodor AA, Aldrich RW. On evolutionary conservation of thermodynamic coupling in proteins. J Biol Chem. 2004;279(18):19046–50.
Weigt M, White RA, Szurmant H, Hoch JA, Hwa T. Identification of direct residue contacts in protein-protein interaction by message passing. Proc Natl Acad Sci U S A. 2009;106(1):67–72.
Morcos F, Pagnani A, Lunt B, Bertolino A, Marks DS, Sander C, et al. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proc Natl Acad Sci U S A. 2011;108(49):E1293–301.
Ekeberg M, Lovkvist C, Lan Y, Weigt M, Aurell E. Improved contact prediction in proteins: using pseudolikelihoods to infer Potts models. Phys Rev E Stat Nonlin Soft Matter Phys. 2013;87(1):012707. doi:10.1103/PhysRevE.87.012707.
Jones DT, Buchan DW, Cozzetto D, Pontil M. PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. Bioinformatics. 2012;28(2):184–90. doi:10.1093/bioinformatics/btr638.
Pazos F, Valencia A. Similarity of phylogenetic trees as indicator of protein-protein interaction. Protein Eng. 2001;14(9):609–14.
Fares MA, Travers SA. A novel method for detecting intramolecular coevolution: adding a further dimension to selective constraints analyses. Genetics. 2006;173(1):9–23.
De Juan D, Pazos F, Valencia A. Emerging methods in protein co-evolution. Nat Rev Genet. 2013;14(4):249–61.
Dunn SD, Wahl LM, Gloor GB. Mutual information without the influence of phylogeny or entropy dramatically improves residue contact prediction. Bioinformatics. 2008;24(3):333–40. doi:10.1093/bioinformatics/btm604.
Lovell SC, Robertson DL. An integrated view of molecular coevolution in protein-protein interactions. Mol Biol Evol. 2010;27(11):2567–75.
Kamisetty H, Ovchinnikov S, Baker D. Assessing the utility of coevolution-based residue-residue contact predictions in a sequence-and structure-rich era. Proc Natl Acad Sci U S A. 2013;110(39):15674–9.
Dago AE, Schug A, Procaccini A, Hoch JA, Weigt M, Szurmant H. Structural basis of histidine kinase autophosphorylation deduced by integrating genomics, molecular dynamics, and mutagenesis. Proc Natl Acad Sci U S A. 2012;109(26):E1733–42.
Figliuzzi M, Jacquier H, Schug A, Tenaillon O, Weigt M. Coevolutionary Landscape Inference and the Context-Dependence of Mutations in Beta-Lactamase TEM-1. Mol Biol Evol. 2016;33(1):268–80.
Marks DS, Hopf TA, Sander C. Protein structure prediction from sequence variation. Nat Biotechnol. 2012;30(11):1072–80.
Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, et al. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997;25(17):3389–402.
Finn RD, Bateman A, Clements J, Coggill P, Eberhardt RY, Eddy SR, et al. Pfam: the protein families database. Nucleic Acids Res. 2014;42(Database issue):D222–30.
Li W, Godzik A. Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences. Bioinformatics. 2006;22(13):1658–9.
Porollo AA, Adamczak R, Meller J. POLYVIEW: a flexible visualization tool for structural and functional annotations of proteins. Bioinformatics. 2004;20(15):2460–2.
Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, et al. The Protein Data Bank. Nucleic Acids Res. 2000;28(1):235–42.
Hanson RM. Jmol-a paradigm shift in crystallographic visualization. J Appl Crystallogr. 2010;43:1250–60. doi:10.1107/S0021889810030256.
Porollo A, Meller J. Versatile annotation and publication quality visualization of protein complexes using POLYVIEW-3D. BMC Bioinformatics. 2007;8:316.
Bostock M, Ogievetsky V, Heer J. D-3: Data-Driven Documents. Ieee T Vis Comput Gr. 2011;17(12):2301–9.
Kaur H, Kumar C, Junot C, Toledano MB, Bachhawat AK. Dug1p Is a Cys-Gly peptidase of the gamma-glutamyl cycle of Saccharomyces cerevisiae and represents a novel family of Cys-Gly peptidases. J Biol Chem. 2009;284(21):14493–502.
Glaser F, Pupko T, Paz I, Bell RE, Bechor-Shental D, Martz E, et al. ConSurf: identification of functional regions in proteins by surface-mapping of phylogenetic information. Bioinformatics. 2003;19(1):163–4.
Adamczak R, Porollo A, Meller J. Combining prediction of secondary structure and solvent accessibility in proteins. Proteins. 2005;59(3):467–75. doi:10.1002/prot.20441.
Sigrist CJ, Cerutti L, Hulo N, Gattiker A, Falquet L, Pagni M, et al. PROSITE: a documented database using patterns and profiles as motif descriptors. Brief Bioinform. 2002;3(3):265–74.
Davey NE, Van Roey K, Weatheritt RJ, Toedt G, Uyar B, Altenberg B, et al. Attributes of short linear motifs. Mol Biosyst. 2012;8(1):268–81. doi:10.1039/c1mb05231d.
This work was supported in part by the National Institutes of Health (NIH 8UL1TR000077-05) award.
Department of Electrical Engineering and Computing Systems, University of Cincinnati, 2901 Woodside Drive, Cincinnati, OH, 45221, USA
Frazier N. Baker
Center for Autoimmune Genomics and Etiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, Cincinnati, OH, 45229, USA
Frazier N. Baker & Aleksey Porollo
Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, 3333 Burnet Avenue, Cincinnati, OH, 45229, USA
Aleksey Porollo
Correspondence to Aleksey Porollo.
AP conceived of the study. FNB designed and implemented the web-server functionality. AP performed the protein analysis for illustration in the paper. Both authors drafted the manuscript, read and approved the final version.
Baker, F.N., Porollo, A. CoeViz: a web-based tool for coevolution analysis of protein residues. BMC Bioinformatics 17, 119 (2016). https://doi.org/10.1186/s12859-016-0975-z
Received: 23 October 2015
Accepted: 01 March 2016
Coevolution analysis
Coevolving residues
Co-occurring residues
Covariation of residues
Protein annotation
Web-server
Sequence analysis (applications) | CommonCrawl |
The Psychology and Improbability of Shuffle Play
By drorzel on January 7, 2013.
Kate and I went down to New York City (sans kids, as my parents were good enough to take SteelyKid and The Pip for the weekend) this weekend, because Kate had a case to argue this morning, and I needed a getaway before the start of classes today. We hit the Rubin Museum of Art, which is just about the right size for the few hours we had, got some excellent Caribbean food at Negril Village, then saw The Old Man and the Old Moon in a church basement at NYU (the show was charming, the space was stiflingly hot by the end). All in all, a good weekend.
I drove back Sunday afternoon, and was reminded how, psychologically, the way I relate to music is totally shaped by past technology. I took so many long trips in the cassette tape era, I automatically assume that I'll either get specific songs in a specific order (I always expect "Beast of Burden" right after "Ziggy Stardust," and "Ob-La-Di Ob-La-Da" after "American Pie," because I was so fond of the mix tapes that had those combos on them), or completely unpredictable songs via the radio. So, as I neared Sloatsburg on the NY Thruway, I caught myself thinking, "Wow. This is a really good run of a bunch of good driving songs." Of course, as I realized a second later, that was because I had the iPod running shuffle play on the "really good driving songs" playlist.
In an effort to get a little low-impact science content out of this "I'm a dumbass" anecdote, another thing happened with the playlist that's worth a little math. As I said, I have a "really good driving songs" playlist on my iPod, which runs to 335 songs (I own a lot of music...). 18 of those are by The Hold Steady, one of my favorite bands working today. My iPod seems weirdly averse to playing them, though, so whenever I go on a trip using that playlist, I play a game of "How long will we go before hearing a Hold Steady song?"
Yesterday's trip home, about three hours door-to-door, went through 51 songs, not one of them a Hold Steady song. This is pretty typical, at least in my memory, but how unlikely is it, really?
Well, the Hold Steady constitute 18/335 = 5.37% of the "really good driving songs" playlist, which means that in a random draw there's a 5.37% chance of a Hold Steady song coming up first. And thus, a 94.63% chance of the first song not being a Hold Steady song.
A really simplistic way of estimating the likelihood of making it through yesterday's drive without hearing a single Hold Steady tune would be to raise that fraction to the power of the number of songs:
$latex P(51)=0.9463^{51} = 0.0599 $
So, the odds of yesterday's Hold Steady free ride should be a bit under 6%.
But, of course, that's a really naive, physicist-y estimate. In order for that to be accurate, you would need to put each played song back into the draw. But that's not what happens in reality-- my iPod will shuffle its way through the whole 335 songs once before replaying a song. So, in fact, the odds of hearing a Hold Steady song go up as the shuffle goes on. There's an 18/334 = 5.39% chance of hearing them on the second song, an 18/333 = 5.41% chance on the third song, and an 18/285 = 6.32% chance at the 51st song. To properly handle this, we need to count how many ways there are to play 51 songs out of a set of 335 songs, without ever playing one of those 18 Hold Steady tunes.
So, how would you go about that? Well, the easiest way to estimate it would be to count the total number of possible combinations of 51 songs that aren't by the Hold Steady (317 of them):
$latex 317 \times 316 \times 315 \times ... \times 267 = 5.09 \times 10^{125} $
and then divide by the total number of combinations of 51 songs out of the full set of 335 songs:

$latex 335 \times 334 \times 333 \times ... \times 285 = 1.08 \times 10^{127} $
Take the ratio of those two numbers, and you get... 4.698%. Which really isn't all that different from the ultra-naive estimate, is it? That's due to the fact that 51 isn't all that big a fraction of 335. The agreement gets much worse as the number increases-- at 51 songs, the correct value is 78% of the naive value, close enough for back-of-the-envelope. At 61 songs, it's only 70%, at 71 songs it's 60%, and by 101 songs, the correct value is a third of the naive value (which is down to 0.4%).
(You might argue about whether the order of the songs matters-- the above counting is for the case where order matters, so abcdef and abcdfe would be counted as two different strings even though they contain the same six characters. If you think you only care about which songs get played, not the order, the relevant numbers are $latex 3.278 \times 10^{59} $ and $latex 6.978 \times 10^{60}$ for a ratio of... 4.698%. So it doesn't actually make a difference. But if you're in the car, the order matters, thus the above....)
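For anyone who wants to double-check the arithmetic without grinding through the factorials by hand, here's a quick sketch in R (assuming, as above, a 335-song playlist with 18 Hold Steady tracks) that computes the naive with-replacement estimate, the exact without-replacement probability, and their ratio for a few trip lengths:

```r
total <- 335; hs <- 18
for (played in c(51, 61, 71, 101)) {
  naive <- (1 - hs / total)^played                                          # with replacement
  exact <- prod((total - hs - 0:(played - 1)) / (total - 0:(played - 1)))   # without replacement
  # the same 'exact' value is the hypergeometric probability dhyper(0, hs, total - hs, played)
  cat(played, "songs: naive", signif(naive, 3), "exact", signif(exact, 3),
      "ratio", signif(exact / naive, 2), "\n")
}
```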
So, is this incontrovertible mathematical evidence that my iPod hates my favorite band? Not necessarily, because Sunday's drive was a single event. And while the probability of things coming out that way is low (a Hold Steady free drive should happen roughly one out of every 20 trips to Manhattan), you can't really say anything about the probability of a single event. Anecdotally, though, this seems to be pretty common-- on the way down, we made it almost to the New Jersey border before getting a Hold Steady song, and on at least one previous occasion I made it into The City without one. So I'm suspicious... maybe Craig Finn stole Steve Jobs's girlfriend, or something.
Or, maybe I was just unlucky. Tough to say, really, without more data. I guess I'll need to make more regular trips to New York and back, keeping track of what gets played. And as long as I'm doing this-- for SCIENCE!, you know-- I might as well get some good food and arts while I'm there. Now I just need to convince the NSF to reimburse my mileage...
For extra credit, seize on the fact that the Many-Worlds Interpretation predicts that there will always be some branch of the multiverse in which I drive between New York and Schenectady without hearing a Hold Steady song, and attempt to draw sweeping conclusions from this about the validity of the MWI. (Extra credit offer void in California, Utah, and for people named Neil Bates.)
But that's not what happens in reality– my iPod will shuffle its way through the whole 335 songs once before replaying a song.
I haven't tried this with an iPod, but a few years ago I observed iTunes in shuffle play to pull up the same song twice in a row (I don't mean a duplicate version, I mean the same artist and album), and it wasn't so unusual to hear a song and see it (again, the exact same track, not a duplicate version) appear a second time in the list of the next 15 songs. However, that was several versions of iTunes ago, and I haven't re-run the experiment recently.
By Eric Lund (not verified) on 07 Jan 2013 #permalink
I'm sure you know this, but isn't the answer that the iPod's shuffle feature is not perfectly random?
By Ori Vandewalle (not verified) on 07 Jan 2013 #permalink
The question is how non-random is it, and in what way? That is, is it giving priority to some songs/artists over others? It's not a higher rating thing, because all the songs on that playlist are either 4 or 5 stars.
By drorzel on 07 Jan 2013 #permalink
Eric Lund: There's a setting for that, actually. There's "choose a song at random after each song" and then there's "shuffle into a list then play the list in order".
Ori Vandewalle: There's actually a slider in iTunes to determine HOW RANDOM the random shuffler should be. "Truly random", the default, feels nonrandom to many users because it will produce trends and patterns - you might get 3 AC/DC songs in a row, or something, So there's a slider to make it *less* random by forcing it to pick songs farther from the one it just played - after it plays AC/DC, other AC/DC songs are downweighted to be less likely.
By John (not verified) on 07 Jan 2013 #permalink